This paper proposes to linearize GRU and LSTM cells (as error terms should be negligible when inputs are small in magnitude). Putting these linearized, or, really, affine, RNN cells together into a single-layer sequence processor, thanks to the affine-ness, we can decompose the score that is obtained by taking dot products with a query at each timestep into contribution by immediate unigram features and all subsequences leading to this unigram. The authors evaluate these scores, showing that they do and don't capture phenomena in a synthetic dataset and proceeding to show that when both training and evaluating with this simplified network on SST yields strong results. #### Strengths/what I loved: - Motivation and Related Work seemed nicely done, set this paper up nicely, and made me excited to read on! - I like the idea of testing double negation and omitting it during training and it was interesting to see the networks then fail to pick up on it (assuming the synthetic dataset is reasonable). - The visualizations of sequence-level scores (Figures 2 and 4) are very cleverly chosen and powerful, even if they take a bit of getting used to. - Figures 3 and 5 are striking: even on SST unigrams and perhaps bigrams seem to get you most of the way if you trust the approximate interpretation. #### Criticism/weaknesses: - It is unclear how the simplified/affine-ized architecture relates to vanilla RNNs---those, too, can be decomposed like that and one could just as well look at these features. To paraphrase: I don't see why any of the analysis and results in this paper is only true for gated cells and I wouldn't be surprised to see a vanilla RNN yield equally good approximate cells, since after all the task is very simplistic (Section 5.2 concludes as much, as even without sequence information the task is easy to solve, but the conclusion drawn from that result, namely that there is something about GRUs and LSTMs to read from this doesn't follow in my opinion). - The synthetic dataset is a mystery: yes, it contains sentences that contain the words shown in Table 6 (Appendix A.2), but... what are these sentences? Are they actual text from some dataset? Text sampled from some model or grammar? Random words without any sequential coherence? What is the vocabulary size? What does appearing "mostly" in positive or negative instances really mean? Just giving one or two examples would have made me a lot less worried and confused about what is going on here, but to base many if not most of the results on it, this dataset is woefully underdescribed. #### Questions: - Notation/definitions in section 5.1.1 are either very unclear or flat-out wrong: Sun & Lu (2020) do indeed define a notion of a "token-level polarity score" for each output class, but the notion of output classes is not mentioned at all here, in fact instead of the output embedding matrix W that Sun & Lu (2020) posit, this paper speaks only of a single vector w. I assume that that is the vector associated with the positive class and the prediction thus is strictly binary---very much unlike Sun & Lu (2020). In addition to that, the notion of a "sequence-level polarity score" that here complements the token-level score is not at all mentioned in Sun & Lu (2020) as far as I can see, so to say that their methodology is used is misleading in more than one way. Finally, the model described in Sun & Lu (2020) is one that uses attention, i.e., that does mean-pooling after the linear layer that is transforming the individual states. 
This paper mentions the linear layer, but not the pooling/attention, so it's unclear if that is a poor paraphrase of Sun & Lu or whether this paper here too diverges from the paper it claims to build on. - Are Figures 2 and 4 cherry-picked? Since nothing else is stated, I would assume so. In light of that, the longest subsequence on the right of Figure 2 being negative is rather strange. Do you have an explanation? Is "those" a negative word? - I think it would've been very interesting to see whether or not this simplification is legitimate, that is whether the assumptions made in the derivation are justified: train with the original cell formulation, but then evaluate *quantitatively* using the approximate cell to essentially create a table like Table 2 or perhaps even a scatter plot for individual logits to see how much things change or don't change. That would go a long way to convince me that the approximation is at least reasonable. - A.3: "there's an apparent difference" What is that difference? I don't see any. - A.4: What is the change here, can you highlight it or motivate it? The results certainly aren't particularly impressive, so I'm tempted to say this section hurts more than it helps... #### Typos and other small things: - 5.1: As you have multiple runs, which one was selected? The best according to some metric? The median or mean somehow? Or randomly chosen? Either way, I strongly feel that empirical results should always come with a sense of stability: what was the variance between runs? Between hyperparameters? How sensitive are results and what can we say about statistical significance? - The overlaid histograms definitely need to have some transparency or be shown another way---right now it is impossible to see what is "happening" in the blue bars as they are hidden by the orange bars. - The last sentence of Section 5 should link to A.4, I guess? --- I read and appreciated the response, but my overall rating is still leaning negative.<doc-sep>This paper examines n-gram level features encoded within the hidden states of recurrent neural models. The proposed method approximates the hidden states of LSTMs and GRUs with a first-order Taylor series, which the authors claim is an adequate approximation with small enough inputs. The authors apply their method to models trained on synthetic sentiment analysis and language modeling datasets. The paper is difficult to understand, and many assumptions are not properly justified. The experiments are also not convincing, as a large portion of the analysis focuses on small synthetic data, and some of them do not have clear takeaways or explanations as to why they were conducted in the first place. Overall, I cannot recommend the paper's acceptance in its current form. comments/questions: - is the scenario of extremely low input magnitudes realistic? how generalizable are these findings to standard initializations used in NLP architectures - similarly, on page 3 the authors assume that the higher order terms of h_{t-1} are "insignificant"; in practice, it is unclear how often this is true. i wish the paper would contain more justification behind these assumptions, as they are critical for judging how faithful the approximation is and thus how useful the proposed method is for diagnosing RNNs. - can't the authors actually show quantitatively how good the approximations are? if the higher-order terms indeed do not affect the quality of the approximation, that could be justified by some experiments. 
i'm not really convinced by Table 2: the approximations could be quite different from the original model but still yield good downstream accuracy. - what is a "polarity score" (bottom of page 5)? I didn't quite understand what this is supposed to represent, is it how predictive of a label a particular span is? - why are synthetic datasets used at all here? experiments on a small set of sentences with a tiny vocabulary and artificial "double negatives" are not compelling. Appendix A.2 does not fully specify this dataset (nor motivate why it was created); what is e.g., its average sentence length? - the results on a real sentiment dataset (SST2) are confusing (sec 5.1.3): what does figure 4 show me that I couldn't already learn by simply passing those two phrases into the model as separate inputs and looking at the model's prediction? - what is the point of training models with the approximations instead of the original GRU/LSTM cell equations? i don't understand the significance of Table 3.<doc-sep>This paper attempts to add a contribution on understanding how gated recurrent neural networks like GRUs and LSTMs can learn the representation of n-grams. The authors expand the sigmoid function and the hyperbolic tangent function using Taylor series to obtain approximated closed-form mathematical expression of hidden representation when using the GRU or the LSTM as the update rules. The approximated hidden representation of the GRU and the LSTM update rules can be separated into two terms, (1) the current token-level input feature and (2) the sequence-level feature, which is a weighted sum of all previous tokens. As the hidden representation consists of two feature terms, one can take each feature (either token-level or sequence-level) separately for a downstream task, e.g., evaluate how good when sequence-level feature is used for predicting polarity score in sentiment analysis. The idea of improving theoretical understanding on how n-grams are modelled by gated recurrent activation functions is sound. However, I am not entirely satisfied with what has been investigated after obtaining the approximated closed-form expression of gated recurrent activation functions. The tasks that were used in the experiments are sentiment analysis and language modelling. In sentiment analysis, most of the plots were there to show how token-level features or sequence-level features align with the polarity score, and we can observe some sort of individual implication from each term. However, it is predictable that sequence-level feature should be meaningful. I don't see much of insights by showing that the polarity score from sequence-level features indeed align with this prediction. If we can to apply Taylor expansion to simple recurrent neural networks (RNNs), such that we can expand the hidden representation of a standard RNN into two terms: the current token-level input feature and sequence-level feature, how would the results look like and how can we relate them with what were reported in this paper? Is this paper particularly showing how gated RNNs are modelling n-grams or RNNs in general? A comparison would be nice to show how sequence features get improved in gated RNNs. It is interesting to see that the approximated versions of GRUs and LSTMs can perform on a par with the original models on language modelling tasks, however, these results don't necessarily improve our understanding on how gated RNNs are capable of learning good representations of n-grams. 
They confirm that sequence features are indeed helpful though. In Section 5.1, if there were multiple trials of experiments on the same task, why not report the average and the variance of the results instead of one set out of multiple results? In Section 5.2, Adpative softmax (Joulin et al., 2017) was used for Wikitext-130. -> Adpative softmax (Joulin et al., 2017) was used for Wikitext-103.<doc-sep>This paper provides a reliable interpretation of modern RNN models, through unrolling GRU and LSTM cells. The approximate state representations include a token-level term that only depends on the current input token and a sentence-level term that depends on all inputs until the current token. The deriving process is clear and illuminating. The experiment section shows that the approximation shares similar behavior and performance as the original model. Overall, the paper is well written and easy to follow. Although GRU and LSTM are no longer the default model for SOTA performance in the NLP community. I believe that this study still provides interesting insights for those who want to develop better recurrent or non-recurrent models in the future. My major concern is that the language model experiment didn’t include a stronger baseline method, such as AWD-LSTM, which provides a significantly lower ppl compared to these in the paper. It would be interesting to see a more detailed ablation study, that studies the importance of each term in A(x_t). <doc-sep>DISCLAIMER: this is not my field of research. With strong arguments I could be persuaded to change my score. This paper introduces a method to unroll the Gated Recurrent Unit (GRU) and the Long Short-Term Memory (LSTM) unit using a taylor expansion. Essentially a linearization of the GRU and LSTM. Given the simplifications, the authors argue that their model captures N-gram information for the sequential information. The paper presents results suggesting that this approximation is able to capture much of the same sequential information as the LSTM and GRU on benchmarks such as SST-2, PDB, Wikitext-2, and Wikitext-103. What this paper excels at is a thorough theoretical formulation of the proposed approximation, and a comparison with an approximation without the sequential information. This shows that the approximate version is able to capture sequential information. However, the relevance of this paper is not clear to me, and the introduction and related works does little more to explain that relevance than stating: “understanding the essential features captured by GRU/LSTM”. The tasks that the method is tested on are synthetic data for sentiment and tasks/models that haven't been relevant since 2016. The motivation for these tasks, and the qualitative analysis is hard for me to understand. The paper could use some reformulations and more emphasis on what exactly the purpose of the paper is, in particular I find the lack of consistency in present/past tense disrupts the reading experience. 
Below I have made a few comments/questions: Abstract: “Sequential data in practice” - unclear what this means “sequence-level representations brought in by the gating mechanism” - I dont understand this sentence “essential components” - vaquely defined “Based on such a finding” - rephrase Introduction: “gradient vanishing or explosion issues” - vanishing or exploding gradients “While such models ...” - this whole sentence is a little vague Related work: “With the variants” - its variants General comment (also for introduction): While you mention many interesting findings in recent years, it is difficult for me to assess how exactly your work differs. Please use the related works to emphasize what you are doing differently than previous work in your field. LSTM: “A LSTM cell” - An LSTM cell Experiments: “Figure 2” I don’t get what each bar represents “Subphrase labels” - are subphrase labels the node annotation? Why is negation, and a synthetic variant, important to explain the relationship between N-grams and LSTMs? Why do you choose the datasets you do? Why is SST-2 and benchmarking against older language models of interest? Why is an N-gram comparison interesting? Perhaps the authors should look into contemporary research on formal methods in sequential models for inspiration of tasks and where an interesting hypothesis might be: https://arxiv.org/abs/1906.03648 Update: I have read the rebuttal and the updated paper. I don't see my issue of relevance addressed. My score remains the same. | the authors demonstrated that vanilla RNN, GRU and LSTM compute at each timestep a hidden state which is the sum of the current input and the weighted sum of the previous hidden states (weights can be either unit or complicated functions), when sigmoid and tanh functions are replaced by their second-order taylor series each. they refer to the first term as token-level and the second term as sequence-level, and claim that the latter can be thought of as summing n-gram features in the case of GRU & LSTM due to the complicated weight matrices used for the weighted sum, largely arising from the gating mechanisms. the reviewers are largely unsure about the significance of the findings in this paper due to a couple of reasons with which i agree. first, it is unclear whether the proposed approximation scheme is enough to capture much of what happens within either GRU or LSTM. if we consider a single step, it's likely fine to ignore the O(x^3) term arising from either sigmoid or tanh, but when unrolled over time, it's unclear whether these error terms will accumulate or cancel each other. without either empirically or theoretically verifying the sanity of this approximation, it's difficult to judge whether the authors' findings are specific to this approximation scheme or do indeed reflect what happens within GRU/LSTM. second, because the authors have used relative simple benchmarks to demonstrate their points, it is difficult, if not impossible, to tell whether the authors' findings are about the datasets themselves (which are all well known to be easily solvable or solvable very well with n-gram classification models and n-gram language models) or about GRU/LSTM, which is related to the first weakness shared by the reviewer. 
the observations that n-gram models and simplified GRU/LSTM models work as well as the original GRU/LSTM models on these datasets might simply imply that these datasets don't require any complicated interaction among the tokens beyond counting n-grams, which leads the original GRU/LSTM to be trained into simplified n-gram detectors. that said, i still believe this direction is important and is filled with many interesting observations to be made. i suggest the authors (1) verify the efficacy of their approximation scheme (probably empirical validation is enough), and (2) demonstrate their point with more sophisticated problems (carefully designed synthetic datasets are perfectly fine). |
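Several reviewers and the meta-review above ask for a direct, quantitative check of how faithful the Taylor-based simplification is to the exact gated cells. The sketch below shows one way such a sanity check could look; it is only an illustration — it uses a crude first-order simplification of a bias-free GRU around zero (not the paper's actual derivation), random weights, and a relative-error metric chosen here for convenience.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 16, 32
Wz, Wr, Wh = (rng.normal(0, 0.1, (d_h, d_in)) for _ in range(3))
Uz, Ur, Uh = (rng.normal(0, 0.1, (d_h, d_h)) for _ in range(3))

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h):
    """Exact (bias-free) GRU update."""
    z = sigmoid(Wz @ x + Uz @ h)
    r = sigmoid(Wr @ x + Ur @ h)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde

def gru_step_simplified(x, h):
    """Crude simplification: sigmoid(a) ~ 1/2 + a/4, tanh(a) ~ a, r ~ 1/2."""
    z = 0.5 + 0.25 * (Wz @ x + Uz @ h)
    h_tilde = Wh @ x + 0.5 * (Uh @ h)
    return (1 - z) * h + z * h_tilde

for scale in [1.0, 0.1, 0.01]:
    errs = []
    for _ in range(100):
        h_exact = np.zeros(d_h)
        h_simpl = np.zeros(d_h)
        for _ in range(10):  # unroll to see whether per-step errors accumulate
            x = scale * rng.normal(size=d_in)
            h_exact = gru_step(x, h_exact)
            h_simpl = gru_step_simplified(x, h_simpl)
        errs.append(np.linalg.norm(h_exact - h_simpl) / (np.linalg.norm(h_exact) + 1e-8))
    print(f"input scale {scale:5.2f}: mean relative error over 10 steps {np.mean(errs):.4f}")
```

Plotting the reported error against the input scale (and against the unroll length) would speak directly to the concern that per-step approximation errors might accumulate over time.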
This work proposes a two-stage human motion forecasting framework that explicitly models human-scene contact. It proposes to decouple the problem into two stages: past pose conditioned contact forecasting, and contact-conditioned pose forecasting. Specifically, it proposes to use a Discrete Cosine Transform (DCT) based network to predict contacts based on past contact, human pose, and scene point clouds. After the future contact is predicted, a series of networks are used to predict the human’s global translation, rotation, and body joint positions. ## Strength **Explicit Contact Modelling** - The proposed two-stage pipeline is intuitive and performs well in the context of human motion prediction. Contact and physical constraints play an important role in governing human motion and based on contact human motion are a lot less ambiguous. The idea of explicitly predicting the future contact of humans in a known scene to guide motion prediction is interesting. **Performance compared to State-of-the-art** - The proposed method outperforms SOTA methods in the motion prediction task. ## Weakness **Novelty in lieu of motion generation methods** - Given the existence of methods such as SAMP [1], where human motion is generated based on path and scene context, the proposed framework has limited novelty. While the settings are slightly different (motion and interaction generation vs forecasting), the methodology is largely similar. The two-stage modeling has been largely explored (first generate goals or subgoals, then generate local motion), and this work mainly excels at better modality (explicit contact). **Lack of generative modeling** - While human motion is multi-modal, the lack of generative and stochastic modeling means that the estimated human motion could be memorizing past observed interactions (especially in PROX and GTA-IM datasets, where the motion are largely similar). **Lacking Qualitative Results** - Since motion is better seen in videos, it would be better if more qualitative samples are provided. [1] Hassan, Mohamed et al. “Stochastic Scene-Aware Motion Prediction.” 2021 IEEE/CVF International Conference on Computer Vision (ICCV) (2021): 11354-11364. The authors have discussed limitations adequately. <doc-sep>This paper proposes to tackle scene-aware 3D human motion forecasting by explicitly modeling the human-scene interactions, i.e., representing the contact between human body joints and scene points with a distance-based contact map. They also introduce a two-stage pipeline that first predicts the future contact map with the given motion history; then forecasts the future global translation and local poses. The proposed method can predict more physically plausible motions and avoid artifacts such as “ghost motion”. **Main Contributions:** This paper proposes a contact map representation explicit modeling human-scene interactions and propose a two-stage framework to forecast human motions with given motion histories and 3D scenes. - Strengths: 1. This paper explicitly models the human-scene interactions with a contact map, which measures the distance between the human joints and the scene points. The contact map enables more physically plausible and realistic human-scene interaction generation. 2. The proposed two-stage prediction pipeline disentangles contact prediction and human pose forecasting, thus capable of explicitly encouraging consistency between human motions and contact points in given 3D scenes. - Weaknesses: 1. 
The contact map computed between joints and scene points is too coarse. It is more plausible to compute the contact map between body surface vertices and scene points, because it is the human body surface rather than the joints that contacts the environment. 2. The contact map has been widely used in grasp generation tasks [Jiang et al., ICCV 2021; Brahmbhatt et al., CVPR 2019]. I think you should discuss the contact map in the literature review. And the idea of using the contact map to model human-scene interactions is not very appealing. When considering the contact between human and scene, I think a shape-based human body representation, e.g., SMPL or a marker-based representation, is more reasonable for modeling the contact between the body surface and the scene. This representation can produce a more fine-grained contact map, which can thus model more realistic details of human-scene interactions. <doc-sep>This paper promotes explicit contact modeling when handling the challenging scene-aware human motion forecasting problem. To achieve that, dense scene-joint distance maps are utilized to densely model human dynamics when interacting with the static scenes, followed by a novel discrete pre-processing with DCT to get sparse principal features, while also enabling residual motion prediction of the contact points in the frequency domain. A two-stage pipeline is assembled to get better future motion predictions for all 3D body joints, with stage 1 combining GRU-based dynamics modeling and PVCNN-based 3D scene encoding for sequential contact distance map prediction, and stage 2 carrying out contact-guided sequential motion forecasting. **Strengths** [Novelty] As mentioned above, dynamic contact modeling and the way it is used are the shining points of this paper, in which the authors leverage an effective point-joint distance field followed by a novel frequency transformation (DCT) to better capture sparse, smooth motion patterns. The per-joint closest scene point is used to better guide the motion prediction. This design [Completeness] Three baselines and the proposed method are validated on two common datasets, including both real and synthetic ones. The authors also provide both quantitative and qualitative results, including a video demo in the supplementary. [Effectiveness] The authors get consistently better motion forecasting results (global and local) on long-term motion predictions on two benchmarks compared to all the baselines. **Weaknesses** 1. Firstly, even though the whole method seems to be novel, I do not think that lines 49-54 are carefully written to capture the whole work well: in (ii) the two-stage pipeline itself should not be considered a key contribution, as it overlaps with (i); I also did not see any unique technical contribution statements on tackling this conditional motion synthesis task. The authors need to clarify the contributions better. 2. Even though so many efforts (DCT/IDCT transformation, GRU models, PVCNN) have been made to get better contact map forecasting, it seems that only one nearest scene point is selected and used per joint, so I am wondering whether such a heavy pipeline is really necessary. 3. The results in Table 1 seem to show that the proposed method does not perform well enough on short-term predictions, even though it performs consistently better on predictions >= 1s. See above. Minor: Line 179 should be '...are then fed into...'
Reviewers like the overall idea and framework. The AC agrees and recommends acceptance. Please carefully revise the paper based on the reviews. |
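For concreteness, the distance-based human-scene contact representation discussed in the reviews above — a per-joint distance to the nearest scene point, optionally squashed into a soft contact score — can be sketched as follows. The array shapes, the 22-joint skeleton, and the exponential kernel are assumptions made for illustration, not the paper's definition; a body-surface version (as one reviewer suggests) would replace the joint array with mesh vertices.

```python
import numpy as np

def contact_map(joints, scene_points, temperature=0.05):
    """
    joints:       (J, 3) 3D joint positions for one frame
    scene_points: (N, 3) scene point cloud
    returns: per-joint distance to the closest scene point, the index of that
             point, and a soft contact score in (0, 1] that is ~1 when touching.
    """
    # Pairwise distances between J joints and N scene points: shape (J, N)
    diff = joints[:, None, :] - scene_points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)

    nearest_idx = dist.argmin(axis=1)                         # closest scene point per joint
    nearest_dist = dist[np.arange(len(joints)), nearest_idx]  # its distance

    # One possible soft contact score; the kernel and temperature are arbitrary here.
    contact_score = np.exp(-nearest_dist / temperature)
    return nearest_dist, nearest_idx, contact_score

# Toy usage with random geometry
rng = np.random.default_rng(0)
joints = rng.uniform(0, 2, size=(22, 3))    # e.g. a 22-joint skeleton
scene = rng.uniform(0, 2, size=(5000, 3))   # scene point cloud
d, idx, c = contact_map(joints, scene)
print(d.shape, c.min(), c.max())
```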
The paper focuses on Byzantine defense through malicious node detection in a Federated Learning setting. Namely, by ranking the gradients and then computing the mean/SD, the paper shows that the malicious and benign clients will cluster separately. Assuming that the number of malicious clients is fewer than the number of benign clients, and the clusters correctly separate the malicious from benign, the smaller cluster is removed, and training is done on the gradients in the larger cluster. Appropriate experiments are done to show the ability of the model in malicious node detection along with analysis of performance and computational requirements. (+) The main strength of the paper lie in the novel introduction of using the rank domain in order to detect malicious nodes. All of the claims are properly supported and experiments and results are clear. (-) The weaknesses in the paper are in the number of datasets/attacks. Having more experiments on a larger range of datasets and attacks will support claims more. Additionally, the inclusion of a strong, recent Byzantine defense algorithm in the robust learning area, such as FLTrust, will also help show the performance against state-of-the-art robust learning defense algorithms. This paper introduces a new perspective in Byzantine defense in comparison to typical robust learning defense or other detection defenses. While the algorithm and experiments done are relatively simple, the paper serves more as a starting point for research and application in the new domain. Claims are properly supported and the method's competitive performance compared to other methods is promising. <doc-sep>The paper presents a novel technique for defending against Byzantine types of attacks on federated learning systems. The paper presents both theoretical justifications for why the using moments of ranks of gradient updates works as well as empirical justification showing that it works in practice. __Strong points__: The paper’s strengths are its novelty (both technical and theoretical), the nature of the problem it is attacking and its clarity. - The paper presents a novel technique and way to address the Byzantine attack problem in federated learning (which is a significant problem) by identifying malicious nodes. The actual technique of using the moments of gradient ranks is especially intriguing because of its simple elegance and great theoretical guarantees. The paper does a great job showing how using the first and second moment of ranks can distinguish certain nodes under attack from a theoretical standpoint. The paper also does an excellent job going beyond theoretical guarantees, which hold with large numbers of samples, to show that the proposed MANDERA technique also works in practice and as good as other state-of-the-art methods - As more of a question, really, than a comment, but the assumption that the gradient ranks, $R_{:,j}$ , are statistically independent probably deserves some more questioning. I can appreciate that the paper shows that this is empirically so, with a few, limited examples, at the bottom of section 2.3, but is this always so at least for neural network models? And, if so, why? - The paper is well written with only a few typos in it (.i.e “week” instead of “weak” in the last paragraph of section 2.3). The figures are clear and it is very easy for a reader to understand both why the technique works and how it works. 
- One area where the clarity could be improved is to provide a brief 1-2 sentence explanation of the label flipping attack in the first paragraph of section 3. All of the other techniques receive a much more robust explanation in the previous section with the theoretical guarantees, so it would be good for the reader to better understand why that technique was used and what it is (at a high level). __Weak points__: The paper is, overall, a strong paper with very few weak points. Mostly, the paper leaves one wondering about future work that could build on what is established within the paper, most of which the paper does comment on. - For example, what about using more moments or combining rank statistics with other statistics of the gradient updates for more robust malicious node identification? - What about the use of a better clustering technique? Looking at figure 3 and then looking at the recall problems with SF, it seems like a better clustering technique could probably solve this problem. - As with other points, the paper does explicitly mention this in its ethics statement, but how would one design an attack to counter this defense? I recommend accepting this paper, as it provides a novel technique with sound theoretical and convincing empirical justifications for attacking an important problem in the application of federated learning. <doc-sep>The paper proposes to transform the update matrix into a ranking matrix before clustering to detect malicious nodes in federated learning. The paper proposes to use ranking instead of numeric values to cluster the users' updates in federated learning to find malicious nodes. The authors list several attacks and provide analysis of the different behaviors between benign nodes and malicious nodes. The work is rather complete. The results, AFAIK, are correct and the experiments are solid. The writing is clear and easy to follow. I really enjoy the illustrations in the paper, especially Figs. 3 and 4. On the other hand, I have several concerns about the methodology itself. Most importantly, the paper does not give any formal robustness guarantee. The method is designed based on several known attacks. Because security should not be preserved via obscurity, if this method is applied in real-world applications, the attackers can construct targeted attacks against this approach. For example, in the high-dimensional case, the attackers can insert contaminated records close to the benign ones but still steer the model away from convergence. To achieve real robustness, theoretically strong robustness [1] should be proved so as to prevent most available attacks (even unknown ones). Second, the method is also not compatible with secure aggregation, which prevents cumulative protection. Overall, I do not think the paper is ready for publication until a formal robustness guarantee is added. [1] Wang, Lun, Qi Pang, Shuai Wang, and Dawn Song. "F2ED-Learning: Good Fences Make Good Neighbors." I do not recommend acceptance because a formal robustness guarantee is missing.
There are three reviewers, all of whom agree that the method addresses an interesting and timely issue -- giving the growing interest in both Byzantine-robust learning and federated learning in the community. However, reviewers are mixed on the paper score -- with a strong accept a weak accept, and a strong reject. Common issues raised include the generality of the approach beyond the outlined attacks, Other issues brought up, but addressed in the rebuttal include some weaknesses in the evaluation and comparison to additional baselines. There is also an interesting discussion of using higher-order statistics, which does not seem to help the methods when evaluated by the authors. Nevertheless, after reviews and discussion, the reviewers are mixed at the end of the discussion. The area chair finds, first, that the paper is much improved, and much more applicable in the updated form than in the original version. However, the area chair agrees with the reviewer who notes that the moniker "Byzantine-robust" implies the methods should be provably robust to worst-case adversaries, not only to a selected set of adversaries with pre-selected attacks. The specified setting may be too narrow for interest by the community. To this end, the area chair suspects that the method may be robust to a more general set of attacks than noted -- working to outline sufficient conditions for robustness would significantly strengthen this work. The asymptotic nature of the robustness guarantees is also of concern. An additional concern of the area chair is that the system setting investigated assumes gradient communication and IID data across devices. While this is not an issue on its own, the setting is closer to distributed learning than federated learning, where one generally communicates model updates, or model differences after multiple local updates, and not gradients. This difference can have a significant effect on robustness methods that depend on identifying benign vs. adversarial statistics of parameters. Non-IID data is also common in the federated setting, though this is less concerning, as robust methods for non-IID settings are only now emerging. A simple fix for this issue would be to rename the setting from "Federated" to "Distributed." Authors are encouraged to address the highlighted technical concerns in any future submission of this work. The primary concern may simply be a naming issue (i.e., removing "Byzantine" might fix this concern. Nevertheless, taken together, the opinion of the area chair is that the manuscript is not ready for publication. Again, the area chair believes that many of the issues noted can be fixed, the paper can be strengthened, and this paper may be publishable with limited additional work. |
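As a concrete reading of the detection pipeline the reviews above describe (rank each gradient coordinate across clients, summarize each client by the mean and standard deviation of its ranks, cluster, and drop the smaller cluster), here is a toy sketch. The clustering choice, the synthetic attack, and all names are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_malicious(gradients):
    """
    gradients: (n_clients, n_params) matrix of client gradient updates.
    returns: boolean mask of clients kept as (presumed) benign.
    """
    # Rank of each client within every coordinate (0 .. n_clients-1), column-wise.
    ranks = np.argsort(np.argsort(gradients, axis=0), axis=0)

    # Per-client moments of its ranks across all coordinates.
    feats = np.stack([ranks.mean(axis=1), ranks.std(axis=1)], axis=1)

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    # Assume the attackers are the minority: keep the larger cluster.
    keep_label = np.argmax(np.bincount(labels))
    return labels == keep_label

# Toy example: 8 benign clients with similar gradients, 2 scaled-up attackers.
rng = np.random.default_rng(0)
benign = rng.normal(0, 1, size=(8, 1000))
malicious = 10.0 * rng.normal(0, 1, size=(2, 1000))
mask = detect_malicious(np.vstack([benign, malicious]))
print(mask)   # ideally True for the first 8 clients, False for the last 2
```

In this toy setting it is mainly the standard deviation of the ranks that separates the scaled-up attackers, matching the intuition that malicious updates tend to occupy extreme ranks in many coordinates.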
The authors study the best-of-both-worlds guarantees for model selection in linear bandits. The paper proposes a novel algorithm called Arbe, which achieves the first high probability regret bound for adversarial model selection. In addition, the paper proposes a two-stage algorithm called Arbe-Gap, which achieves best-of-both-worlds high probability regret bounds. Strength: 1. The writing is mostly clear. 2. The best-of-both-worlds guarantees for model selection is an interesting and important problem. 3. The proposed algorithms are novel. It is the first algorithm that achieves high probability best-of-both-worlds regret bounds for linear bandits with nested model classes. Weakness: 1. Both stochastic and adversarial regret bounds are suboptimal with respect to the dimension of the reward vectors. 2. The paper does not contain numerical experiments for the proposed algorithms. Since the paper does not provide any empirical comparisons, it is still unknown whether the proposed algorithms are practical. <doc-sep>This work studies the problem of model selection with bandit feedback in the presence of a sequence of policy classes, in other words, a monotonically increasing sequence of sets of polices. The goal is to achieve the "best of both worlds" high-probability guarantee between the stochastic world and the adversarial world, in other words, ensuring $polylog(T)$ regret for stochastic setting and $\\sqrt{T}$ regret for the adversarial setting. It follows similar ideas of Lee et al. (2021) and Wei et al. (2021) and applies the idea of regret balancing technique discussed in Cutkosky et al. (2021) to design the algorithm Arbe for model selection. To achieve this, the authors extend the techniques to adversarial rewards: they have a few necessary constraints on the algorithms, and enhance the test for mis-specification. This work provides the first algorithm which achieves the best of both worlds high-probability guarantees for model selection within (linear) bandit scenarios. The problem is well-defined and very general (for h-stability and extendability). The techniques which are further improved from the previous work of Lee et al. (2021) and Wei et al. (2021) following the idea of regret balancing are very interesting, and may be applied in solving other problems. Overall, this work does not have any other specific weakness. The paper is clearly written and easy to understand. None. <doc-sep>This work considers model selection from a set of contextual bandit algorithms with nested policy classes. Each of these base learners has a known regret guarantee, that may or may not hold. The environment may be either stochastic with a reward gap or adversarial, and a best-of-both-worlds high-probability bound is provided for the proposed meta-algorithm. On each round this meta-algorithm chooses a base learner with probability that depends on their known bounds. It then performs mis-specification tests and eliminates the learners that violate their regret bounds. Additionally, it performs a test to reliably detect the stochastic environment and switch to exploitation mode in this case. An important building block is extension of the policy class of each of the base learners to include special actions that delegate the decision to another base learner (with a larger policy class). This allows the elimination test to work in the adversarial environment. 
In the case that the environment is known to be adversarial, the algorithm can run without the stochastic detection test, in which case it achieves a tighter bound. This bound is a high-probability version of the one known from prior art. **Originality** To the best of my knowledge, this is the first work to provide best-of-both-worlds guarantees for bandit model selection. It extends the idea of regret balancing to the adversarial environment by modification of the base learners and carefully designed tests. **Quality** The proofs are provided in the appendix, but I did not verify their soundness. The proof sketches in the main paper seem reasonable. The dependence of the stochastic case bound on the complexity of the largest policy class is justified by proving a lower bound. **Clarity** This work is highly theoretical, and therefore it is not surprising that it is not easy to understand. However, some improvements to the clarity and the organization would be helpful. For example, the detailed example in lines 46-65 is too technical to be part of the introduction, and is better suited for section 2. Subsection 2.1 feels out of scope for section 2. The extension of the policy space and linking of the base learner performances need more explanation. For example, before the definition of extendability, linking of the learners can be mentioned to explain how the extendability will be used by the meta-algorithm. **Significance** Demonstration of best-of-both-worlds bounds is important for the understanding of the underlying relations between different settings and eventually designing practical algorithms that can optimally exploit the properties of the environment. On the other hand, switching between operation modes based on tests is intuitively less generic than a framework that tunes its parameters continuously. Some more discussion on future directions would help to realize the significance of this work. Can it be potentially generalized to other partial information settings? Can some additional assumptions improve the stochastic regret bound? The assumptions are clearly stated. A lower bound for stochastic case is provided. | This work advances the direction on model selection for bandit problems with nested model classes. Reviewers all agree that the results are significant, the contribution is solid, and the paper is well written. Clear accept. |
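For readers unfamiliar with the regret-balancing elimination idea referenced in these reviews, a schematic of the misspecification test might look as follows. The bound form R_i(n) = c_i * sqrt(n), the confidence radius, and the elimination rule are all illustrative placeholders and are not Arbe's actual test.

```python
import math

def surviving_learners(active, avg_reward, n_plays, bound, delta=0.01):
    """
    active:     iterable of base-learner indices still in play
    avg_reward: dict i -> empirical average reward of learner i so far
    n_plays:    dict i -> number of rounds learner i has been played
    bound:      dict i -> coefficient c_i of a claimed regret bound R_i(n) = c_i * sqrt(n)
    returns the set of learners whose claimed bound is not contradicted by the data.
    """
    active = set(active)
    conf = {i: math.sqrt(2.0 * math.log(1.0 / delta) / max(n_plays[i], 1)) for i in active}
    # Best data-backed lower estimate of the achievable average reward.
    best_lower = max(avg_reward[j] - conf[j] for j in active)
    keep = set()
    for i in active:
        # If learner i's regret bound held, its average reward could lag the optimum
        # by at most bound[i] / sqrt(n_i); add a confidence radius for estimation noise.
        upper_i = avg_reward[i] + bound[i] / math.sqrt(max(n_plays[i], 1)) + conf[i]
        if upper_i >= best_lower:
            keep.add(i)
    return keep

# Toy usage: learner 2 claims the same bound but performs poorly, so it gets eliminated.
print(surviving_learners(
    active={0, 1, 2},
    avg_reward={0: 0.60, 1: 0.55, 2: 0.20},
    n_plays={0: 4000, 1: 4000, 2: 4000},
    bound={0: 5.0, 1: 5.0, 2: 5.0},
))
```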
This paper proposes recursive reasoning to model the opponents in a multi-agent environment, especially when the opponents are capable to learn and reason. The proposed method MBOM models the environment and the joint opponents. Particularly, MBOM simulates the recursive reasoning process and fine-tune the opponent models on multiple levels. The multiple-level models are combined by a Bayesian mixing strategy. The experiments are performed extensively and show improved performance on several benchmarks. Strengths: 1. The paper aims to solve an interesting and important problem that how to model the reasoning and learning opponents in multi-agent tasks. The study can inspire the community potentially; 2. The paper is well presented and the experiments are extensive; Weakness: 1. The proof of MBOM is not convincing and may present some problematic deductions; 2. The experiments cannot fully prove the effectiveness of the proposed method, especially it is hard to claim the MBOM has learned a more accurate model of the opponents even though the improved performance. Some more specific ablation study needs to be performed. It would be more convincing if the ablation study on the model accuracy is directly performed. In addition, personally, I would like to see a baseline with the same number of ensemble models. <doc-sep>This paper introduces an approach model-based opponent modeling (MBOM) for deep multiagent reinforcement learning (MARL) that combines (1) recursive imagination to estimate the reasoning of other agents at different hierarchical levels of reasoning (common in Theory of Mind reasoning like planning-based I-POMDPs, but novel in RL) and (2) Bayesian mixing that estimates the collective behaviors of all other agents in the system as a mix of the lower level recursive reasoning. Such an approach enables the agent to learn how to behave when its oppponents follow static or adaptive policies without requiring explicit models of the neighbor's reasoning (including learning algorithm or learning gradients). Theoretical analysis establishes error bounds on the value estimates. Empirical evaluation on zero-sum games, competitive, and cooperative tasks from MPE demonstrate the advantages of the approach over appropriate baselines. Overall, the research will be of interest to the popular MARL community at NeurIPS. The approach is novel and well evaluated. The paper is relatively easy to follow and has a thorough literature review. I appreciated the inclusion of both theoretical and empirical analysis to evaluate the approach and derive key properties. The supplement provided important information for reproducibility. The limitations of the approach are adequately described. <doc-sep>This paper proposes a model-based opponent modeling method to handle the interaction with other agents in multiagent systems. The main contribution of the proposed method is that it can adapt to different kinds of opponent policy, e.g., fixed-type, adaptive-type, and reasoning-type. Concretely, the proposed method first lets the agent interact with diverse opponents and collects the interaction experience. Then it uses the collected experience to train an environment model. Using this environment model, the agent imagines the adaption of the opponent policy and uses the imagined best response of the opponent to refine the opponent model. The agent repeats this process to get a set of opponent models reflecting different recursive reasoning levels. 
The authors further use Baysian mixing to get the final opponent model based on the opponent model set. As the Bayesian mixing is non-parametric, the final opponent model can quickly adapt to the true dynamic opponent policy. Strengths: this paper tackles a challenging opponent modeling problem and the proposed method is a good combination of well-known techniques. The existing works are discussed in detail and this paper situates itself in the literature well. The presentation of this paper is also clear. The experiment results are sound and support the authors' claim. Weaknesses: some technical parts need more description and explanation. Some existing works may also be worthy to compare. 1. The assumption in Lemma 3 seems strong. Although the authors claim that "larger M improves the representation capability of IOPs and thus better satisfies the assumption in Lemma 3", larger M also brings higher approximation error (which hinders the representation capability). Therefore, there is no guarantee that the assumption in lemma 3 can hold. If it does not hold, the Bayesian mixing may not approximate the true probability of the opponent well. Moreover, the recursive imagination will incur a considerable amount of computational cost. 2. There are duplicate texts on page 6: "However, larger M also has advantages. To analyze, we first define the benefit using the mixed IOP as..." <doc-sep>This work tackles the problem of modelling agent behaviour in multiagent systems. It focuses explicitly on modelling agents that, in their decision-making process, either (i) adapt their policy to other agents' exhibited behaviour or (ii) reason about other agents' behaviour. This work proposes MBOM, a model-based recursive reasoning-based approach to model agents that reason about others' behaviour. This approach is combined with a method that mixes inferred agent policies from different levels of recursive reasoning to (a) improve the model accuracy for predicting agents' behaviour while also (b) helping model learning agents that adapt their policies to other agents' behaviour. This work evaluates MBOM in the Triangle Game, One-on-One, and Predator-Prey environment against two types of baseline agent modelling techniques. The first type of baselines consists of approaches designed for modelling adaptive agents that change their policies according to others' behaviour. The second baselines are ablations of MBOM designed to elucidate the importance of MBOM's main components. When a controlled agent uses MBOM and the baseline agent modelling techniques for decision making, the experiments show agents equipped with MBOM achieving higher returns than agents that use the baseline agent modelling techniques. Strengths To my knowledge, MBOM's deep model-based approach to learning an agent's optimal policy at each level of reasoning is novel. Furthermore, its approach of mixing agent policies inferred at various levels of reasoning to improve modelling accuracy is also novel. By contrast, other methods such as PR2 (Wen et al., 2019) or GR2 (Wen et al., 2019) only consider the policy resulting from the deepest level of reasoning for decision making. The proposed ablation study over MBOM's component is designed well. The baseline selection elucidates the importance of MBOM's (i) recursive reasoning process and (ii) its usage of model mixing. In future iterations of this work, this ablation study should remain a part of the paper. 
Except for the lack of descriptions regarding MBOM's model architecture in Figure 1, the model description is clear. The author's effort to explain the role of each of MBOM's components makes it easier to understand the model. Weaknesses This work lacks citations to older works in recursive reasoning for opponent modelling. In particular, works based on the I-POMDP framework (Gmytrasiewicz et al., 2005) should also be relevant to this work. Furthermore, an approach that applies deep neural networks to solve I-POMDPs has been explored by Han et al. (2019). In the case of the I-POMDP-Net proposed by Han et al. (2019), their approach does not rely on the CTDE learning paradigm, which the authors used to characterise prior deep learning-based works on recursive reasoning. Major weaknesses: 1. The uncertain role of policy mixing in modeling adaptive learning agents Despite positioning MBOM as an approach for modelling (i) adaptive learning agents and (ii) agents that also learn models of the controlled agent for decision-making, it is not clear which components of MBOM contribute toward modelling adaptive learning agents. While the authors attributed MBOM's strong performance when dealing with (i) to its policy mixing method in various parts of the manuscript (lines 195 and 279), the intuition behind why policy mixing helps in modelling (i) is not clear. More specifically, the non-parametric design of the policy mixer does not seem to help in learning the changes in modelled agent's policy resulting from the learning agent's actions. My scepticism regarding the policy mixer's role in modelling (i) is further illustrated in the ablation study, which results are displayed in Figures 3c and 3d. Notice that there are ablations of MBOM that deliver similar performance to MBOM even without policy mixing (MBOM-{\\phi_{0}}, MBOM-{\\phi_{1}}, and MBOM-{\\phi_{2}}) as long as the recursive reasoning process is done to an appropriate level. Comparing this to results in Figures 3a and 3b where we have an ablation of MBOM without recursive reasoning, the most significant drop in performance when dealing with adaptive learning agents results from not applying any recursive reasoning. Thus, this indicates that recursive reasoning is why MBOM models (i) well. The manuscript lacks further explanations why recursive reasoning results in improved modelling performance against (i) despite not modelling the changes in agents' policies resulting from learning. 2. Baseline selection and implementation The baseline selection and implementation are also highly questionable. In particular, the authors selected baselines designed for modelling adaptive learning agents. Yet against adaptive learning agents, these baselines performed significantly worse than PPO and the proposed approach, which is not equipped with anything to model adaptive learning agents. This raises the question of whether these baselines are correctly implemented in the first place. To demonstrate the need to model adaptive agents (agents that learn) as opposed to modeling just fixed agents, it would be helpful to include agent modeling baselines which were not specifically designed to model learning agents, such as LIAM: Georgios Papoudakis, Filippos Christianos, Stefano V. Albrecht. Agent Modelling under Partial Observability for Deep Reinforcement Learning. NeurIPS 2021 Although previous works model (i) with recursive reasoning, there is a lack of recursive reasoning baselines. 
While the authors mentioned other recursive reasoning methods' reliance on the CTDE paradigm as justification for not comparing against them (line 89), I highly believe a comparison against these methods must be made. Since the learning agent decides its action without any centralised component during execution, we can see this as utilising privileged information that only exists during training. Even MBOM uses some privileged information that is not ordinarily accessible during execution (e.g. modelled agent's rewards) for training. In the worst case, recursive reasoning methods not based on CTDE training like I-POMDP-Net can be used as a baseline. 3. Robustness to diverse opponents While one of the central claims of this work is that MBOM allows the learning agent to perform well against a diverse set of opponents, MBOM has not been evaluated against agents whose reward function is highly different to those encountered during training. This is particularly important since many applications require learning agents to deal with previously unseen decision-makers whose reward functions are unknown (e.g. humans in autonomous driving scenarios). In this case, an experiment to evaluate MBOM's capability in those scenarios is also necessary. 4. Lack of experiments against agents with deeper levels of recursive reasoning. While the reasoning learner is an example of agents that model the controlled agent, there is a lack of evaluation against agents with deeper levels of recursive reasoning. A potential improvement is to evaluate MBOM against another MBOM agent with various depths of reasoning. 5. A lack of analysis on model accuracy The current work's analysis only reports the returns resulting from applying MBOM and the baselines. Nevertheless, it is important to report the model accuracy in any work in opponent modelling. Future iterations of this work can measure the log-likelihood of modelled agents' predicted actions. 6. Scalability to scenarios with more opponents. It would be interesting to see whether MBOM scales to scenarios with more modelled agents. In particular, the rollout procedure done at every level of recursive reasoning requires more rollouts as the joint action space increases. Yet, the experiments have been limited to scenarios with small number of agents. Clarity 1. Imprecise statements. Overall, there are a few statements that are imprecise in the manuscript. Since these may potentially confuse readers, I recommend fixing these highlighted sentences: (Line 8) "All kinds of opponents" --> Be precise on the type of agents that are evaluated in this work's experiments. (Line 20-21) "interacting with diverse opponents makes the environment nonstationary from the agent's perspective." --> This is only true when other agents' policies are changing. When agents' policies are fixed, one can account for the effect of agents' actions in the transition function. Even when agents' policies are unknown, the problem can be seen as a POMDP as long as these policies remain fixed. 2. Lack of details in Figure 1 If done correctly, Figure 1 can help readers understand the proposed approach. However, the lack of captions in Figure 1 explaining the components of MBOM makes it challenging to understand the Figure. Also, consider adding labels associated to the dashed boxes to indicate which components of MBOM they represent. 3. Justification on baseline and opponent design The experiment section can be improved by highlighting why specific baselines or opponents are designed the way they are. 
Focusing on the insights gained from comparisons against specific baselines or experiments using certain opponents can also help highlight the claims provided in this work. Significance: While this work presents an interesting approach that has potentially major significance for people working in MARL and opponent modelling, further experiments and comparisons are required to fully demonstrate its use in modelling (i) adaptive learning agents and (ii) agents that also learn models of the learning agent. Its limited comparison against prior recursive reasoning methods and the lack of analysis in terms of model accuracy particularly stand out as reasons why this work has limited significance as is. As mentioned in the above points, the work has not provided sufficient analysis on the limitations of MBOM in (i) dealing with agents with previously unseen reward functions and (ii) scaling to environments with a larger number of agents. At the same time, the work is currently fairly constrained to the generic MARL setting. Thus, I do not believe it requires any additional statements on its potential societal impact. | This paper tackles the problem of modeling agents that are simultaneously learning or are able to reason during interaction. The proposed approach employs an environment model to simulate the opponent's reasoning process. The initial reviews are split, and the main concern is that the results do not adequately demonstrate that the proposed approach actually models opponents better. I believe the added ablation study and baselines have adequately addressed the concerns. Two reviewers also support acceptance after discussion (the other two didn't respond). I believe the work tackles an important problem in MARL and would spur valuable discussion at the conference. Thus, I'm leaning towards acceptance. |
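A compact sketch of the Bayesian mixing step discussed in these reviews — reweighting a set of imagined opponent policies (one per recursion level) by how well each explains the opponent's observed actions — is given below. The update rule, the smoothing constant, and the toy policies are assumptions for illustration, not MBOM's exact equations.

```python
import numpy as np

def update_mixing_weights(weights, level_probs, observed_action, eps=1e-3):
    """
    weights:         (M,) current mixing weights over the M opponent models
    level_probs:     (M, A) each model's action distribution at the current state
    observed_action: index of the action the opponent actually took
    """
    likelihood = level_probs[:, observed_action] + eps   # smoothing so no level is zeroed out
    posterior = weights * likelihood
    return posterior / posterior.sum()

def mixed_opponent_policy(weights, level_probs):
    """Weighted mixture of the per-level action distributions: shape (A,)."""
    return weights @ level_probs

# Toy usage: 3 recursion levels over 4 opponent actions.
rng = np.random.default_rng(0)
w = np.ones(3) / 3
for _ in range(20):
    probs = rng.dirichlet(np.ones(4), size=3)   # stand-in for the 3 models' predicted policies
    a_obs = rng.integers(4)                     # stand-in for the observed opponent action
    w = update_mixing_weights(w, probs, a_obs)
print(w, mixed_opponent_policy(w, probs))
```

Because the weights are recomputed from recent likelihoods rather than learned parameters, the mixture can shift quickly between levels, which is the adaptivity argument made in the reviews.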
This paper shows that an auxiliary self-supervised task that enforces temporal consistency of latents improves sample efficiency in continuous control environemnts, and identifies important implementation details that make it work. Data-efficiency of RL algorithms is an important research area, and this paper explores how auxiliary self-supervised learning can improve data-efficiency in continuous control domains. The paper is very rich with experimental details, the implementation choices are carefully ablated and the paper is overall well written and explained. ### Similarities with SPR ------ My main issue with this paper is the fact that it positions the K-step latent objective as a “new representation learning method”, whereas in fact the K-step latent is exactly the representation learning method used in SPR [1]. Throughout the abstract, intro and the methods section, the paper positions KSL as a new representation method: but in practice it’s an adaptation of SPR to continuous control that requires some implementation changes. The entirety of section 3.2 and the Figure 1 is exactly SPR (see Section 2.2 and Figure 2 in the SPR paper), but the paper makes no references to it except in the related work section. The related work section again fails to acknowledge that KSL and SPR share the same representation learning objective, and not only the architecture. The authors could very well have positioned this paper as “We applied SPR to continuous control, here’s some implementation changes we needed to make along the way to make it work”, and it would have been a much more honest and accurate description of the work. The empirical contribution here and the detailed analyses itself would have been valuable on its own. ### Questions on Experiments ------ The description of the “Generalization of Encoders” experiments is very sparse. Can you specify what exactly the training tasks and evaluation tasks are in the generalization experiment? For the invariance experiment, it would have been nicer to see invariance to real-world distractors and not artificial noise. I would recommend the Distracting Control Suite [2] for more convincing experiments around these. For a lot of results, the performance seems to be under-reported than the original results in papers, especially for RAD and DrQ. Here’s a link to the raw performance scores for baseline methods used in [3] https://console.cloud.google.com/storage/browser/rl-benchmark-data/dm_control, and these were reportedly obtained from the corresponding authors. Can you clarify the discrepancy in the performance data? This seems to be a major issue. Additionally, in a lot of performance curves, the standard deviation regions overlap, making it harder to establish stochastic dominance of one method over another. It would be nicer to see a better stochastic analysis using stratified CIs on multiple normalized metrics (see Figure 11 in [3]). You can do this easily via the colab: https://bit.ly/statistical_precipice_colab [1] Schwarzer, M., Anand, A., Goel, R., Hjelm, R. D., Courville, A., & Bachman, P. (2020). Data-efficient reinforcement learning with self-predictive representations. ICLR 2021. https://arxiv.org/abs/2007.05929 [2] Stone, A., Ramirez, O., Konolige, K., & Jonschkowski, R. (2021). The Distracting Control Suite—A Challenging Benchmark for Reinforcement Learning from Pixels. https://arxiv.org/abs/2101.02722 [3] Agarwal, R., Schwarzer, M., Castro, P. S., Courville, A., & Bellemare, M. G. (2021). 
Deep reinforcement learning at the edge of the statistical precipice. NeurIPS 2021 https://arxiv.org/abs/2108.13264 Data-efficiency of RL algorithms is an important research area, and this paper explores how auxiliary self-supervised learning can improve data-efficiency in continuous control domains. The paper is very rich with experimental details, the implementation choices are carefully ablated and the paper is overall well written and explained. <doc-sep>The paper tackles the problem of sample inefficiency in continuous control, by noting that standard RL methods deal with both policy optimisation and representation learning jointly with a single supervisory signal, namely the reward. Consequently, the authors propose to leverage long-term temporal connections between actions in the representation learning, and introduce k-Step Latent (KSL), a representation learning module for learning temporally consistent representations of the state space. The authors show that KSL improves over previous state-of-the-art methods on the PlaNet benchmark suite and provide some analysis of the representations learned by KSL. The paper addresses a very important problem in RL: data efficiency. The authors' perspective of leveraging long-term temporal connections is not exactly novel (see Self-Predictive Representations [Schwarzer et al. 2021], Successor Features [Kulkarni, et al. 2016, Barreto et al. 2017]), but the specific method introduced seems to be novel, to the best of my knowledge. On the method: The motivation is clearly stated and makes sense to me. I would have liked to see a discussion of how this differs from/relates to successor features, when learned jointly or separately. The claim "that learned representations of the state space should relate to reward" is not really justified: there are data-efficient methods that disentangle the reward from the representation. It also seems to contradict another desired property: generalisation from one task to another. On the experiments: The experiments are extensive and the method is compared against sensible baselines. The results with respect to data-efficiency are promising. Minor comments: - the font is very small on most figure axes - Figures 5.2, 5.3, and 6 would make more sense with a different y-axis scale. The main idea behind the paper is not novel, but the implementation is, and the results are promising. Some claims are unfounded and somewhat contradictory, and some related works are missing. But overall an interesting contribution! <doc-sep>The paper applies "Bootstrap your own latent" (BYOL) to the RL setting by introducing an additional learned transition model and shows that this can improve sample efficiency. Strengths: * Simple approach * Clearly written * Representation learning for better generalization or sample efficiency is an important topic in RL * Positive experimental results Weaknesses: * My main worry with this application of BYOL to RL is that the introduction of the transition model T changes the 'support' of z_m away from the support of z_o. In other words, while psi_o expects an output of T, psi_m gets the direct output of phi_m, for which it was not trained and which might be entirely different from the output of T. Based on the experiments, this still seems to do something useful, but I would argue that psi_m should be seen more as a random mapping than as the slow-moving state encoding. In any case, I think this out-of-distribution problem for psi_m should be addressed in the paper.
* For Figure 3: Why not use t-SNE instead of only the first two dimensions of PCA? In particular, while it's not wrong to say that "DrQ's projections show little sign of reward-based organization by 15k steps", that is slightly misleading as it doesn't say anything about the latent representation as we're only looking at 2 principal axes. Additional Questions: * How is translation augmentation applied? * Nit: How would the results change when removing the sg before the policy? An interesting direct application of BYOL to RL. However, the necessity to include action-conditioned transition models in RL raises additional complications compared to BYOL, which have not yet been addressed (or discussed), and I believe these should be included in the paper before publication. <doc-sep>This paper proposes a representation-learning method (k-step latent, or KSL) that uses a self-supervised auxiliary loss between recurrently-predicted action-conditioned representations of the state space and non-recurrently predicted target representations, in the style of BYOL. The method is trained using two separate optimizers on different parts of the models, to avoid interference in the statistics maintained by the optimizers. Results at 100k and 500k steps on 6 tasks from the DM control suite (from pixels) compared to methods using alternative self-supervised auxiliary losses show that the proposed method improves data efficiency. Analysis of the learned latent representations shows that those from KSL produce more robust encoders and are more consistent with the underlying MDP. **Clarity.** For the most part, the writing is clear. One minor concern is that the description of KSL (sec 3.2) is difficult to understand, and could be made much clearer with better choices of notation, including the pseudocode in the main text, and an explanation of the algorithm that follows the pseudocode. That said, the main issue with the clarity and writing of the paper is that the contribution is not made clear either in the paper or in the experimental analysis. Is the paper about using k-step latents plus an auxiliary loss tying these to observations? Or is the paper about the specific form this auxiliary loss takes? Most of the writing and the naming (KSL) imply the former but the evaluation speaks to the latter. **Novelty and significance.** Clarity aside, the paper does not propose a sufficiently novel contribution for acceptance. There is a very large body of work in model-based reinforcement learning that uses k-step model-based predictions and corresponding losses to improve data efficiency and performance, and reduce model rollout errors. Some of this is even cited in this paper, but others that are not cited include Recurrent environment simulators by Chiappa et al. 2017, TreeQN by Farquhar et al. 2018, MuZero by Schrittwieser et al. 2019, and Muesli by Hessel et al. 2021. These methods all use k-step latents with a loss tying them to (encodings of) observations. The specific form of the auxiliary loss here, adapted from BYOL (which is not even mentioned until the related work for some reason), is very similar to that of SPR, which is cited by the paper. There are some minor differences, but these mainly seem like implementation details, and since there are no empirical comparisons I have to assume this is the case. Further, the experiments themselves are extremely limited.
Evaluating only on 6 tasks from the DM control suite is not enough to show that this is a compelling and useful contribution, especially given the high overlap with prior work. The additional analysis of the learned representations is nice, but is not enough without showing the strength of the proposed approach, or else doing a much more thorough analysis. Finally, I’d like to see ablations of the components and choices made for the proposed method. Why have both \\Psi_o and P? Is using the EMA for the momentum pathway the best choice? Is a normalized L2 loss the best choice? Overall, this paper lacks sufficient novelty for acceptance. It recombines existing techniques in a slightly different way than previously and shows improvements on a very small and narrow set of environments without comparing to the most relevant related work. <doc-sep>Summary: The authors introduced k-Step Latent (KSL), a representation learning method for visual-based continuous control tasks. KSL utilises multi-step latent action-dependent predictive supervision for training the representation. The empirical evaluations are based on the dm-control suite benchmarks; KSL demonstrates improved sample efficiency (100k evaluation) and asymptotic performance (500k evaluation) across the six tasks presented, compared to the baseline algorithms (mainly based on image augmentation). The authors further empirically examined the properties of the learned representations, and showed that the trained encoder quickly learns to be representative of the reward structure. The authors also argue that KSL supports more robust representation learning and stronger generalisability. Pros: - The paper is well-written and easy to comprehend; - The empirical evaluation on the dm-control suite indeed shows that KSL yields state-of-the-art results on the six presented tasks; - The choice of using independent optimisers for training the representation learning module given the signals from the predictive latent supervision (Eq. 4) and critic training (Eq. 1) respectively raises a good point for representation learning in RL with auxiliary tasks, from a multi-task learning perspective. This inductive bias is additionally substantiated with empirical comparisons. Concerns: - The main evaluation of the overall performance is rather limited; it might be worth showing more tasks (especially the middle/hard tasks as defined in Yarats, et al. 2021); - The KSL model combines many existing techniques in representation learning for RL, such as image augmentation (Laskin, et al. 2020) and a multi-step latent predictive supervision auxiliary task (Schwarzer, et al. 2021), leading to limited novelty of the proposed KSL model. - The multi-step latent predictive supervision objective for representation learning in KSL is highly similar to that of the SPR model (Schwarzer, et al. 2021); the authors seem to acknowledge the similarity and hence spend a paragraph discussing the difference, but the discussion is mainly based on the architectural/training differences. From this perspective, KSL appears as an adaptation of the SPR model to continuous control tasks (despite the admirable engineering efforts).
The difference argument would be more convincing if the authors could provide further clarification of the differences between KSL and SPR from an algorithmic perspective, and provide further empirical comparisons between the two agents (e.g., the authors state that "KSL’s architecture is general enough to be applied in both discrete- and continuous-action domains", hence it would be interesting to see an adaptation of KSL to discrete control, test it on Atari benchmarks, and compare with the SPR results). - The arguments for the improved representation learning of KSL in terms of "Permutation Invariance" and "Temporal Coherence" seem poorly justified. The reported figures do not show significant or consistent improvement of KSL over the baseline methods; only by zooming in heavily on the y-axis can one observe some differences. I doubt such minuscule differences are consequential for the overall learning. - I think using the momentum encoder to provide input to the target Q-network in SAC critic training is an interesting choice, but it lacks further (theoretical) justification, i.e., why would this be better than simply using the online encoder as the input to the target Q-network? Is it possibly because of the consistent temporal lag? Minor Points: - The main empirical evaluation of the learned representations is based on the walker-walk task, which is a simple task with a dense reward structure (Yarats, et al. 2021). It would be more interesting to see how the learned representations are indicative of the reward structure in sparse-reward tasks, such as Cartpole-Swingup. - KSL is motivated by the inductive bias that "States that are nearby in time are likely to share high levels of mutual information". Another similar work that utilises multi-step action-dependent latent predictions, by Whitney, et al. (2019), is based on the inductive bias that the similarities between the embeddings for the states and/or action sequences should be based on their successor outcomes (e.g., successor representations). It would be nice to see some discussion of the relationship between the two seemingly independent inductive biases. Scores: I suggest marginal rejection (5/10). KSL indeed shows state-of-the-art performance on the presented tasks and I like the way the authors assessed the quality of representation learning. However, KSL seems like a combination of existing methods, but lacks a comprehensive empirical evaluation in that sense. Moreover, the high similarity with SPR is concerning without further clarification and empirical justification. Some arguments about the improved representation learning are over-stated. | This paper presents a reinforcement learning architecture that uses an auxiliary k-step latent loss in the context of continuous control from image-based states. While the topic is relevant and potentially impactful, several reviewers have major concerns about the manuscript. Among these, I highlight: - Reviewers J6YX, 38iT and Qru8 have concerns about the novelty and contribution of the approach compared to existing literature. - Reviewers J6YX, TKuY, 38iT and Qru8 have concerns about the experimental evaluation and the quality of comparisons to baselines. Overall, it seems that the paper would benefit from further polishing.
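For context, a minimal sketch of the SPR-style k-step latent consistency objective that the reviews above compare KSL against: an online encoder and an action-conditioned dynamics model roll a latent forward, and its projected/predicted output is matched to an EMA target encoder's latent of the true future observation. All module sizes, names, and the exact loss form are illustrative assumptions, not either paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ksl_style_loss(online_enc, target_enc, dynamics, online_proj, target_proj,
                   predictor, obs, actions):
    """obs: (B, K+1, obs_dim); actions: (B, K, act_dim).
    Roll the online latent forward K steps with the action-conditioned dynamics
    model, then match the projected/predicted latent to the EMA target encoder's
    latent of the true future observation using a cosine (normalized-L2) loss.
    No gradients flow through the target branch."""
    K = actions.shape[1]
    z = online_enc(obs[:, 0])
    loss = 0.0
    for k in range(K):
        z = dynamics(torch.cat([z, actions[:, k]], dim=-1))
        pred = predictor(online_proj(z))
        with torch.no_grad():
            tgt = target_proj(target_enc(obs[:, k + 1]))
        loss = loss + (1.0 - F.cosine_similarity(pred, tgt, dim=-1)).mean()
    return loss / K

@torch.no_grad()
def ema_update(online: nn.Module, target: nn.Module, tau: float = 0.99):
    """Slowly track the online weights: target <- tau*target + (1-tau)*online."""
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.data.mul_(tau).add_((1.0 - tau) * p_o.data)

# toy usage with vector observations (pixel encoders would replace the MLPs)
B, K, obs_dim, act_dim, d = 8, 3, 24, 6, 50
mlp = lambda i, o: nn.Sequential(nn.Linear(i, 128), nn.ReLU(), nn.Linear(128, o))
online_enc, target_enc = mlp(obs_dim, d), mlp(obs_dim, d)
online_proj, target_proj = mlp(d, d), mlp(d, d)
dynamics, predictor = mlp(d + act_dim, d), mlp(d, d)
target_enc.load_state_dict(online_enc.state_dict())
target_proj.load_state_dict(online_proj.state_dict())
loss = ksl_style_loss(online_enc, target_enc, dynamics, online_proj, target_proj,
                      predictor, torch.randn(B, K + 1, obs_dim), torch.randn(B, K, act_dim))
loss.backward()
ema_update(online_enc, target_enc)
ema_update(online_proj, target_proj)
```

The stop-gradient on the target branch and the slow EMA update are commonly credited with preventing representational collapse in this family of objectives.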
f-divergences can be written as variational objectives, typically parameterised in terms of a density-ratio estimator between 'positive' and 'negative' distributions. This paper identifies a new equivalence between a particular kind of f-divergence and the arc length of the `optimal ROC curve' (i.e. the best possible ROC curve amongst all classification scoring-functions). Convergence guarantees are given when the estimated arctan of the density ratio belongs to an RKHS. Finally, this estimator is used to (approximately) lower bound the optimal AUC, which is an important quantity in unbalanced binary classification settings, as shown in the experiments on CIFAR-10. Strengths - It's an interesting, non-obvious finding that the ROC curve can be connected to variational f-divergence representations. Also, it takes some ingenuity to consider the arc-length rather than the more standard AUC. - As far as I can tell, the theoretical results look correct (although I have concerns about novelty - see below) - The experiment is interesting and shows some promise (but I have concerns - see below). Weaknesses - I have questions about the novelty of the theoretical results. See Questions below. - I think it is *very* important to report the wallclock times for the different methods used in the experiments. I understand that the proposed method has better asymptotic complexity, but I want a sense of how big those hidden constant factors are. - Section 4.1 (line 146 onwards) & Section 4.2 were hard to follow. In equation 5, the variational formula is reparameterised in terms of an angle v \\in [0, \\pi/2]. At this point in the exposition, it is quite unclear why such a reparameterisation is a good idea; it is more standard to parameterise the variational formula in terms of the density-ratio, which belongs to [0, \\infty]. A whole page later, we discover that the density-ratio approach (combined with a linearity assumption) is theoretically less convenient because of nonconvexity. I think that this nonconvexity issue could be mentioned *before* Equation 5, so the reader understands where the argument is going. - Section 5 (lines 232-256) is very math-notation-heavy, making it difficult to read. More plain-English explanations would greatly improve clarity. For instance, many explanations could be stated in terms of the lines in Figure 2, rather than using math. - the abstract is rather vague, and I struggled to understand it (in contrast, the introduction was quite clear). I recommend clarifying what is meant by `score function', since it has multiple meanings (e.g. the gradient of the log-likelihood w.r.t. parameters/data). The authors are very upfront about certain limitations (e.g. the fact that they require differentiability of the score function w.r.t. its input, unlike prior work). One issue is that their `lower bound' on the AUC does not appear to truly be a lower bound (since they have to replace certain terms with approximations). This limitation could be made clearer, since right now, the conclusion implies otherwise. <doc-sep>This paper discusses the connection between f-divergences and the arc length of the ROC curve. The authors note that several papers use ROC curves to compare two distributions, and then try to explain why the arc length of the ROC curve can be a type of f-divergence. The authors build the connection between the arc length of the ROC curve and f-divergence, and then derive an algorithm to estimate the arc length of the ROC curve.
The experiments on CIFAR-10 show that the score function obtained from the 2-step procedure produces promising results. Strength: 1. The paper has a good presentation. 2. The paper starts with the question of using ROC curves to compare two distributions, and then answers this question step by step. The derivations in the main body make sense. Weakness: 1. Although the authors have convinced me that the arc length of the ROC curve is a type of f-divergence similar to the KL divergence and the total variation distance, the relationships with them are not discussed well. I only find Proposition 3 discussing the relationship with the total variation distance. More such relations would be appreciated. In addition, in the experiments, the authors only show the rationality of the arc length of the ROC curve; the discussion and comparison with other f-divergences are missing here. 2. Although the paper has a good presentation, it still contains some minor typos, e.g., in line 30 the f should be in math font, and in line 32 there is an extra space between [28] and ‘.’. none <doc-sep>This paper builds a connection between the ROC curve of testing two distributions $p_{+}(\\boldsymbol{x})$, $p_{-}(\\boldsymbol{x})$ and the "distance" between these two distributions. In particular, the length of the optimal ROC curve corresponds to a type of $f$-divergence between $p_{+}(\\boldsymbol{x})$ and $p_{-}(\\boldsymbol{x})$. This $f$-divergence can be estimated utilizing its variational formulation. The optimal solution of the equivalent variational problem is $\\arctan[\\frac{p_{+}(\\boldsymbol{x})}{p_{-}(\\boldsymbol{x})}]$. Therefore, by solving the empirical version of this variational problem, an estimate of $\\arctan[\\frac{p_{+}(\\boldsymbol{x})}{p_{-}(\\boldsymbol{x})}]$ can be obtained. Strengths: The fact that the length of the optimal ROC curve corresponds to an $f$-divergence is an interesting new result. Also, the presentation of this paper is clear and fun to read. Weakness: I think the condition in Proposition 4, which requires that $\\arctan[\\frac{p_{+}(\\boldsymbol{x})}{p_{-}(\\boldsymbol{x})}] \\in \\mathcal{H}$, is a little stringent. In practice, to reduce computational complexity, the class $\\mathcal{H}$ may not be very rich. It would be better to discuss what will happen when there exists some approximation error of $\\arctan[\\frac{p_{+}(\\boldsymbol{x})}{p_{-}(\\boldsymbol{x})}]$. Also, I think it would be better if more numerical experiments were provided, e.g., comparisons with other methods of choosing the score function $t(x)$ (if any) and experiments on other datasets. NA <doc-sep>This paper proposes a novel method that utilizes the arc length of the ROC curve to approximate the AUROC itself. It then proposes to optimize the arc length of the curve, with some theoretical justification for its relationship with the AUROC. Strengths: The idea is novel and interesting. The paper provides some illustrations of the idea. Weakness: i) Optimizing arc length is not consistent with optimizing AUROC itself. As the examples in Figure 1 show, one could have the same optimized arc length but different optimized AUROC (subfigures a and b), or the same optimized AUROC but different optimized arc length (subfigures b and c). ii) Although the idea is novel as an alternative way of optimizing AUROC, the paper lacks justification for its advantages over the main-stream AUC optimization methods based on surrogate loss functions. (Gao, Wei, et al. "One-pass AUC optimization."
International conference on machine learning. PMLR, 2013.; Ying, Yiming, Longyin Wen, and Siwei Lyu. "Stochastic online AUC maximization." Advances in neural information processing systems 29 (2016).; Liu, Mingrui, et al. "Fast stochastic AUC maximization with $O(1/n)$-convergence rate." International Conference on Machine Learning. PMLR, 2018. etc.) iii) The experiments are insufficient. Only CIFAR-10 is used in the experiments, and only logistic regression and a pairwise squared-loss optimization method are considered as comparisons. There are no deep-learning experiments. Moreover, even based on the current experimental results, the proposed method doesn't outperform the pairwise loss optimization. iv) There are some brief claims about the computational efficiency of the proposed method, but no justification is provided. I don't see any limitations in terms of societal impact. | The paper shows that the arc length of the optimal ROC curve is an $f$-divergence, proposes an estimator for it, and builds on the insights obtained to design a new algorithm for approximately maximizing the area under the ROC curve. The reviewers generally appreciate the theoretical results / insights. The only concern seems to be about the empirical evaluation of the AUC maximization procedure and about the lack of sufficient comparison to the state-of-the-art AUC maximization methods. Given that the main contribution is the connection drawn between the arc length of the ROC curve and $f$-divergence, a majority of the reviewers are in favor of accepting the paper even if the empirical evaluation is not entirely satisfactory. **Authors are inaccurate about offline AUC maximization taking $O(n^2)$ time** One of the selling points in the paper is that the new AUC maximization approach achieves a better run-time complexity than more traditional methods for AUC maximization. Based on the authors' back-and-forth with Reviewer WsBr, it appears that the method of Ying et al. (2016) already achieves an O(n) run-time even for the offline setting. I would also like to point out to the authors that, dating back to as early as 2005, there have been methods for offline AUC maximization with O(n log(n)) computational cost. For example, with the pairwise SVM loss, Joachims (2005, Lemma 2) shows that the loss/gradient computation only requires $O(n \\log(n))$ computation. This computational time applies to both linear and non-linear models, as I elaborate below. Suppose there are $n^+$ positive examples and $n^-$ negative examples, and we would like to minimize the pairwise hinge loss for scoring function $f$: $L(f) = \\sum_{i=1}^{n^+} \\sum_{j=1}^{n^-} \\phi(f(x^+_i) - f(x^-_j) )$ where $\\phi(z) = \\max(0, 1 - z)$. Computing this loss does not require us to explicitly enumerate $O(n^2)$ pairs. Instead it suffices to sort the positives according to $f(x^+_i)$ and the negatives according to $1 + f(x^-_j)$, and then compute the following cumulative stats by taking a single pass (O(n)) over the sorted examples: $N_{i}^{+}= \\sum^{n^-}_{j=1} \\mathbb{I}( 1 + f(x^-_j) \\geq f(x^+_i) )$ $L_{i}^{+}= \\sum_{j:~ 1 + f(x^-_j) \\geq f(x^+_i) } f(x^-_j)$ The pairwise loss can then be computed in O(n) time: $L(f) = \\sum_{i=1}^{n^+} \\left[ L_{i}^{+} + N_{i}^{+} \\cdot (1 - f(x^+_i)) \\right]$ A similar procedure can be used to compute gradients for the pairwise loss, and would again require only $O(n\\log(n))$ computation (for the sorting step). *Ref*: Joachims, A Support Vector Method for Multivariate Performance Measures, ICML 2005.
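For concreteness, here is a small NumPy sketch of the O(n log n) pairwise-hinge computation described above; the prefix-sum bookkeeping and variable names are mine rather than Joachims' notation.

```python
import numpy as np

def pairwise_hinge_naive(s_pos, s_neg):
    """O(n^+ n^-) reference: sum over all pairs of max(0, 1 - (s_pos_i - s_neg_j))."""
    return float(np.sum(np.maximum(0.0, 1.0 - (s_pos[:, None] - s_neg[None, :]))))

def pairwise_hinge_fast(s_pos, s_neg):
    """O(n log n): sort the shifted negative scores once; for each positive score,
    binary search + prefix sums give N_i^+ (number of 'active' negatives) and the
    sum of their shifted scores, so each term equals L_i^+ + N_i^+ * (1 - s_pos_i)."""
    shifted = np.sort(s_neg + 1.0)                       # ascending 1 + f(x^-_j)
    prefix = np.concatenate(([0.0], np.cumsum(shifted)))
    k = np.searchsorted(shifted, s_pos, side="left")     # first index with 1 + f(x^-_j) >= f(x^+_i)
    n_active = len(shifted) - k                          # N_i^+
    sum_active = prefix[-1] - prefix[k]                  # sum over active j of (1 + f(x^-_j))
    return float(np.sum(sum_active - n_active * s_pos))  # sum of (1 + f(x^-_j) - f(x^+_i))

rng = np.random.default_rng(0)
sp, sn = rng.normal(size=500), rng.normal(size=2000)
assert np.isclose(pairwise_hinge_naive(sp, sn), pairwise_hinge_fast(sp, sn))
```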
**Recommended changes/inclusions to camera-ready version** We are accepting this paper under the expectation that the authors will include a more accurate description of offline AUC maximization methods, and accurately describe the exact computational advantages their method has (if any) over prior AUC maximization methods. If there are none, please don't highlight them in the paper. A review of stochastic AUC maximization methods is also highly desirable.
This paper proposes a novel VIO method which estimates 5-DoF poses (with the elevation fixed) from a monocular camera and an IMU. In particular, the proposed method adopts an Unscented Kalman Filter (UKF), which is naturally differentiable, for the state estimation. With the aid of the pitch and roll filtered by the UKF, the front-view images can be converted to bird's-eye-view images, which are then used for pose estimation with Differentiable Phase Correlation. The whole method is differentiable and can be backpropagated through using ground-truth pose supervision to update the covariance matrices. Evaluation results show that the proposed method achieves competitive results on the real-world KITTI dataset for odometry, and on the CARLA and AeroGround datasets for map-based localization. The authors also conducted ablation studies on the learned covariance matrices. ## Strengths - The paper is very well written and reads very smoothly. - The authors make use of the differentiable nature of the UKF and propose to learn the covariance matrices in an end-to-end manner to capture the motion and measurement noise distributions. This way, the method retains the interpretability of the pipeline and also leverages the power of a data-driven approach to learn the otherwise hand-tuned covariance matrices. - The estimated roll and pitch from the UKF are integrated seamlessly into the bird's-eye-view projection, followed by camera pose estimation using Differentiable Phase Correlation. This enables the differentiability of the whole pipeline under reasonable assumptions. - The method shows very competitive results on KITTI for camera pose estimation. Further, the authors also show its effectiveness for map-based localization on the CARLA and AeroGround datasets. - The authors provide the code in the supplement. ## Weaknesses - Although the whole method only learns 4 parameters, it is a bit confusing why it needs two RTX 3090 GPUs to train. - No runtime and memory statistics are provided. - The authors did not show a cross-dataset generalization evaluation. It would be interesting to see if the covariances learned from one dataset/sensor can transfer to other datasets. <doc-sep>This paper proposes a visual inertial odometry system based on Kalman filtering. The main idea of this work is to learn the noise filtering parameters by making the filter differentiable. While the idea of learning filters is not new (see [1]), the application to the problem of VIO from a bird's-eye perspective is new to me. The approach is evaluated against a set of baselines on the KITTI dataset, where it obtains comparable or better results. [1] Krishnan et al., Deep Kalman Filters The main strength of the paper is the nice balance between end-to-end learning and interpretability. Learning only a few parameters end-to-end makes the system easy to debug and investigate. It is to be said, however, that similar ideas have long been investigated in the community (see [1], [2], or many others), but none really took off. This could be because learning so few parameters makes the algorithm strongly reliant on algorithmic priors, which might be inappropriate in several real-world applications. A second interesting takeaway from this work is that traditional VIO methods, like VINS, perform terribly on KITTI. However, I find this somehow difficult to believe since, in the KITTI benchmark, most monocular systems are traditional VIO methods. Is this setup somehow different? What makes it challenging for them in the proposed evaluation?
Finally, I find the idea of passing through the bird's-eye view quite interesting. I would have expected it to be a suboptimal choice given the presence of other vehicles and humans, whose upper part is generally occluded. However, this does not seem to be a problem for the proposed system, which I find quite interesting. The main weakness of the approach is, in my opinion, the evaluation setup and results. As mentioned before, the terrible performance of traditional VIO methods is difficult to interpret. In addition, I feel that the main claim of this paper (relying more on algorithmic priors than on parameters) is not sufficiently justified. It would be interesting to see results on more challenging datasets, possibly in 3D, where the assumption on the motion model would not always hold. If the paper only wishes to specialize in AV scenarios, then such choices must be made clear from the beginning (abstract/intro) and the claims weakened. The limitation section mentions that the approach would be applicable to all ground robots, but there is not enough evidence to support this claim. Overall, the main question I feel should be answered is: when is it desirable to enforce algorithmic priors instead of parameters? Another weakness is the fact that previous work on deep state estimators (like [1], [2], and others) has not been sufficiently covered and discussed. How does the approach differ? Why is it better? Finally, a (minor) limitation is that the approach is said to be lightweight, but computation time is not compared to existing methods. [1] Deep Kalman Filters, Krishnan et al. [2] Differentiable Particle Filters: End-to-End Learning with Algorithmic Priors, Jonschkowski et al. <doc-sep>This paper proposes an interpretable architecture for VIO, which admits differentiable training. Experimental results on two domains demonstrate that the proposed approach is effective and competitive against recent deep learning-based architectures. Strengths: The architecture is novel, as far as I can tell. It also achieves very high performance. Weaknesses: Let me preface my comment by saying that I am not an expert on VIO, but come more from the general model interpretability side. With that said, I am quite disappointed by the lack of empirical evaluation of the model interpretability. While the model achieves comparable performance to deep models, which is refreshing to see as many interpretable models struggle to match the performance of their non-interpretable counterparts, it is not clear in what ways the proposed model is exactly interpretable. Demonstrating interpretability requires validating that people (developers and/or users) can gain a concrete understanding of the model. Thus, at the very least, there should be more analysis of the model itself, besides that of model performance. Ideally, it would also show some specific aspects corresponding to model understanding. For example, maybe the developer can identify some issues of the model so that they can re-train a better one, or a domain expert can understand its weaknesses, so that they can choose when not to trust the model's predictions. Without any of these demonstrations, I am not convinced that this model is any more interpretable than a standard neural network. <doc-sep>This work introduces a novel visual-inertial odometry system, which is learning-based but interpretable without deep neural networks.
Specifically, the IMU data is used by a differentiable UKF to estimate the roll and pitch, which are subsequently used to project images to the bird's-eye view (BEV). The BEV images are transformed to the frequency domain to estimate SIM(2) camera motion. The learning part comes from the UKF covariance; the authors keep the subsequent BEV projection and motion estimation differentiable so that the odometry error can be back-propagated to the UKF covariance, making it learnable and the estimated roll/pitch optimal. The evaluation is mainly based on the standard KITTI dataset. The proposed method achieves great results. Strength - The paper is well-written, and the methodology is clearly stated and technically sound. It is really solid work with significant novelties and contributions. It is refreshing to review a learning-based VIO work that is not based on deep networks. Weakness - The unchanged height assumption for driving scenarios seems fine; however, the reviewer is slightly concerned about the sufficient-distance assumption: What is the minimal distance? Is it also related to the camera mounting height and angle? If there are objects in that yellow box, how will that distort the BEV images and decrease the VIO accuracy? - The evaluation is mainly based on the KITTI dataset, focusing on sequences 09-10 for testing. The validation would be more persuasive if the authors included more VIO tests. | The paper proposes a differentiable approach for monocular VIO estimation based on BEV, without relying on deep neural networks. The reviewers find the paper well written and the idea of using BEV to be interesting. This paper received highly mixed reviews. The major concerns raised by the reviewers include empirical evaluations of the model interpretability, justification for relying on algorithmic priors rather than parameters, results on more challenging datasets, positioning this work with respect to existing work on deep state estimators, and clarifications regarding the claims made, among others. Most of the concerns raised by the reviewers have been thoroughly addressed in the rebuttal. I thank the authors for the engaging discussions during the rebuttal. Some minor concerns still exist. Nevertheless, I agree with the reviewers that the paper is an interesting contribution.
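As background on the frequency-domain step mentioned in these reviews, here is a minimal sketch of classical (non-differentiable) phase correlation for pure translation between two images; the paper's differentiable SIM(2) estimator is assumed to build on this idea, and the code is an illustration rather than the authors' method.

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Estimate the circular shift of img_b relative to img_a from the peak of the
    inverse FFT of the normalized cross-power spectrum (phase-only matching)."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12              # keep phase only
    response = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    h, w = response.shape
    if dy > h // 2:                             # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))      # shift a by (+3, -5)
print(phase_correlation(a, b))                  # recovers (3, -5)
```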
This paper addresses the problem of heterogeneous transfer learning for CATE estimation by using representation learning and a multi-task architecture to transfer information between potential outcome functions across domains, generalizing several existing CATE estimators to the transfer learning perspective. Strengths: - The paper deals with an important problem that is understudied. I appreciate the motivation to study multitask architectures for CATE transfer learning, since CATE has often been examined in isolated examples, whereas real-world medical data provide compelling motivation for multitask and transfer learning settings. - The paper effectively generalizes several CATE frameworks to the transfer learning setting. - The empirical results are sufficient and effectively show individual effects (e.g. the information sharing and the selection bias in Figure 6 and Figure 7), although the experiments are not extensive. Weaknesses: - The framework deals with fixed datasets with source and target domains. How does this framework extend to more complicated settings? For example, what if we have multiple domains, the datasets are streaming, or the domains are unknown at training time? What about personalized feature spaces (e.g. different features are available for different samples)? Of course, I don't expect all of these situations to be addressed in this manuscript, but discussions about extensions would be helpful. Limitations and ethics are addressed appropriately. Some of the limitations and directions for future work discussed in the conclusion could also be mentioned more explicitly throughout the paper to make tradeoff decisions clear when introducing the framework. <doc-sep>This work aims to solve the heterogeneous transfer learning problem for CATE estimation by introducing several building blocks that use representation learning to handle the heterogeneous feature spaces and a flexible multi-task architecture with shared and private layers to transfer information between potential outcome functions across domains. Besides, they propose several building blocks to construct the HTCE-learner, similar to the most common CATE learners. Strengths: 1. The paper is overall well written and it clearly defines the problem. 2. These building blocks involve handling the heterogeneous feature spaces, sharing information between PO functions across domains, and sharing information between PO functions within a single domain. Weaknesses: 1. The model design lacks innovation and the main idea is based on meta-learners. 2. Limited baseline models. To the best of my knowledge, the work is not as novel as the authors claim it is. Please refer to the following works: * Pearl and Bareinboim, 2011, Transportability of Causal and Statistical Relations, https://ftp.cs.ucla.edu/pub/stat_ser/r372-a.pdf * Bareinboim and Pearl, 2016, Causal inference and the data-fusion problem, https://ftp.cs.ucla.edu/pub/stat_ser/r450-reprint.pdf * Magliacane et al., 2017, Causal Transfer Learning, https://www.semanticscholar.org/paper/Causal-Transfer-Learning-Magliacane-Ommen/b650e5d14213a4d467da7245b4ccb520a0da0312 * Mooij et al., 2016, Joint Causal Inference from Multiple Contexts, https://arxiv.org/abs/1611.10351 <doc-sep>This paper proposes a Heterogeneous Transfer Causal Effect (HTCE) framework to improve treatment effect estimation on the target dataset under the heterogeneous transfer learning problem. Originality: 3. This problem has already been noticed by some works.
The basic structure of HTCE is actually the combination of layer sharing + FlexTENet. Quality: 4. I think it is good that the authors combine the HTCE structure with meta-learners and TARNet; these combinations are explicit. Clarity: 5. This paper is well written. Section 4 is especially easy to follow. Good! Significance: 3. I think the argument and setting of estimating TEs for heterogeneous transfer learning is meaningful, but I don't think the experiments show the significance of the proposed method for this problem. The limitations are well discussed. | The paper studies methods for estimating conditional average treatment effects (CATE) under a shift in domain where source and target feature spaces are heterogeneous. It is assumed that the (respective) CATEs in both source and target domains are identifiable through ignorability and overlap. No formal assumptions are made regarding the similarity of potential outcome distributions across domains, but it is implicitly assumed that there exists a shared structure in the outcome functions. A number of heuristics are proposed to modify popular neural network CATE estimators for this setting, including a wide array of meta-learners such as propensity weighting, doubly robust estimators and TARNet. Reviewers appreciated the setting of heterogeneous-feature domain adaptation, which is understudied in the literature and representative of many transfer tasks of interest, such as transfer from a clinical trial to an observational cohort. Typically, the feature set collected in trials is smaller than in, say, a registry. However, as pointed out by one reviewer, the empirical evaluation does not consider such applications. In addition, no details are given in the main paper for how the heterogeneous feature spaces are constructed for experiments (this is only given in the Appendix). The uniform sampling is quite unrealistic and most likely less challenging than real-world cases. The authors make assumptions of ignorability and overlap, referring to previous work showing that this renders the causal effect identifiable. While this is true, the interesting complication in this work is that no assumptions are made regarding similarities of feature sets or outcome functions; these are left implicit. As a result, no claims can be made about the usefulness of source data for this task; see e.g., [1] for a discussion on the hardness of transfer. In other words, the authors rely on empirical evidence to demonstrate this usefulness. In semi-synthetic experiments, the authors find that their proposed approach improves significantly over using only shared features, even when the number of target samples is minimal. Reviewers were concerned with the contextualisation of the work in the literature, given previous work on transportability of causal effects and on domain adaptation. Adding to this list, I would suggest that the authors refer to previous work on heterogeneous-feature transfer learning. Under ignorability and overlap, the settings are not much different from each other, not least demonstrated by the fact that the T-learner solution performs well. The authors propose several "building blocks" but don't evaluate the importance of these in isolation, using, for example, an ablation study. This makes it difficult to assess which components are necessary and which are not. In summary, the considered setting is interesting and the algorithmic contributions appear useful empirically.
The theoretical and methodological contributions are rather small, and the work should be better contextualised in the related topics of domain adaptation and transportability. [1] Ben-David, Shai, and Ruth Urner. "On the hardness of domain adaptation and the utility of unlabeled target samples." International Conference on Algorithmic Learning Theory. Springer, Berlin, Heidelberg, 2012. |
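For reference, the T-learner mentioned in the meta-review is the simplest of the standard CATE meta-learners: fit separate outcome models on treated and control units and take the difference of their predictions. A minimal sketch, with the model class and variable names as illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_cate(X, t, y, X_query):
    """T-learner: mu1 fits treated units, mu0 fits controls, and
    CATE(x) is estimated as mu1(x) - mu0(x)."""
    mu1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
    mu0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
    return mu1.predict(X_query) - mu0.predict(X_query)

# toy usage on synthetic data with a known effect of +2 when x[0] > 0
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
t = rng.integers(0, 2, size=2000)
y = X[:, 0] + t * (2.0 * (X[:, 0] > 0)) + 0.1 * rng.normal(size=2000)
print(t_learner_cate(X, t, y, X[:5]).round(2))
```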
This paper proposes Cold Brew, a new method for learning cold-start node embeddings in graphs. Cold Brew leverages a teacher-student framework (knowledge distillation) to handle nodes without neighbors by transferring knowledge from a teacher network (learned from head nodes) to a student network (for tail or isolated nodes). Experiments are conducted to show that the proposed method outperforms some baseline methods. Strength 1 - Proposes a knowledge-distillation-based technique for learning cold-start node representations. 2 - The problem is relatively new and interesting. 3 - The presentation is overall good. Weakness 1 - The novelty of the proposed model is not significant. 2 - Experiments could be improved. 3 - Lacks a discussion of related work. Detailed Review It is interesting to develop a new method for GNNs to handle nodes without neighbors. The proposed knowledge distillation framework seems reasonable to me. Besides, the FCR metric is proposed to measure the relative importance of node features and graph structure. The proposed method works well for the node classification task on several datasets. Following are some issues. The novelty of the proposed method is not significant, as it follows the general teacher-student framework and combines a neighbor aggregator and a structure embedding to learn node embeddings. These parts are borrowed from existing techniques. It would be better to discuss the model's contribution. In addition, the current manuscript only studies the node classification task, while it is also possible to study the link prediction task over cold-start nodes (e.g., tail nodes). Moreover, there are some existing works studying tail node representation learning that should be discussed or compared, such as: Towards locality-aware meta-learning of tail node embeddings on networks, CIKM'20 Tail-GNN: Tail-Node Graph Neural Networks, KDD'21 Minor issues exist, such as typos. For example, the first sentence of section 3.2 should be: to integrate the knowledge of GNN teacher. -- Update after rebuttal: The authors addressed some of my concerns in the experiments. The novelty is still incremental for me. I changed my score to borderline above. This work studies an interesting problem. The proposed method is reasonable for solving the problem. The novelty is not significant. In addition, experiments could be improved. <doc-sep>Many real-world graphs have power-law distributions of node degrees, and learning the representations of nodes with few or even no connections may only depend on their attributes. This paper studies the problem of learning good representations of such nodes using inductive GNNs. It proposes a new method to generalize GNNs better for tail nodes compared to pointwise and graph-based models using a distillation approach. A metric, the feature contribution ratio, has been proposed to quantify the contribution of nodes' features in predicting labels. Experiments on several graph datasets demonstrate the effectiveness of the proposed method, especially in learning better representations of the tail and isolated nodes. Overall, this paper is well-organized. Paying more attention to tail nodes and/or nodes with fewer neighbors is important and has been neglected in a lot of previous studies on GNNs. It is also of practical value, e.g., the cold-start problem in recommendation systems. Using knowledge distillation to transfer knowledge from a model using both structure and attributes to an attribute-only model is an interesting idea to solve this problem. Experimental results on several graphs demonstrate the effectiveness of the proposed method.
My major concerns are as follows: - There are some previous studies on tail node representation learning, e.g. [1] and [2]. I suggest that the authors discuss these studies and compare the experimental results with these methods. - What's the reason for minimizing the structural embedding E in the loss function (Eq (3))? Since E represents the node-wise representation with label information, it is not intuitively clear why this embedding should be as small as possible. - The experimental studies only validate the performance using GCN. I wonder if the performance will be influenced by a different GNN. For example, GraphSAGE uses a sampling strategy to aggregate embeddings, and intuitively this may mitigate the impact of imbalanced distribution or noisy structural information. - Since label information has been incorporated into the embedding (with the structural embedding E), the ratio of labeled nodes may influence the performance. Could you give some empirical or theoretical analysis of this possible relationship between the performance and the ratio of labeled nodes? I also have some minor comments: - Title: The title contains the terms incomplete or missing, but the main content discusses a more general case including tail nodes, isolated nodes, and nodes with (maybe) incomplete or missing neighborhoods. Please make the title and content consistent. - The notation z is used both for some element under Eq (4) and for the performance of models in Eq (5). - Typo on Page 4: this *motivate* us to strengthen.. - The information in the top of Figure 1 is clear, but the way the graph (the grey nodes) is shown on such a coordinate system is misleading. [1] Tail-GNN: Tail-Node Graph Neural Networks, KDD 2021 [2] Towards Locality-Aware Meta-Learning of Tail Node Embeddings on Networks, CIKM 2020 This paper studies an interesting and practical problem of learning better representations for tail and isolated nodes. Using a distillation approach, the combined information of structure and attributes can be learned so as to generalize to nodes with little or even no structural information. Some weaknesses of the paper include missing baselines, comparisons to other GNNs, and a deeper investigation of the label information. ------------------------------- I appreciate the responses from the authors that addressed most of my concerns. I updated my rating. <doc-sep>The work focuses on a very practical problem of strict cold start (SCS) recommendations, which is a highly prevalent and relevant problem. The authors' main contribution to address SCS is to use a GNN with knowledge distillation – this proposed solution does not have to rely exclusively on the node features. The authors also define an FCR (feature contribution ratio) that can help determine the ideal network architecture – FCR can optimize the model selection, which significantly affects overall system performance. The proposed method of adopting a GNN with knowledge distillation to solve the SCS problem relies on well-known concepts and previous works, but the learnable Structural Embedding and the model selection methodology are novel contributions and can also see practical adoption in industry applications. The proposed solution is technically sound and is well supported by the extensive evaluation presented. Overall, the paper is easy to follow, and the motivations are also well justified.
The experimental settings and the data preparation are well documented; the authors have performed extensive empirical studies on multiple public datasets and against several baseline methods to show their proposed method's efficacy. The studies consistently show that the proposed Cold Brew solution to the SCS problem results in significant improvements – especially for the tail & isolation splits. It should be noted that the focus is primarily on graph-based solutions for solving the SCS problem, which is not the only possible setup. Also, the evaluation and problem formulations are mainly focused on node label prediction (accuracy metric); it would have been interesting to see some metrics like hit-rate (especially for the tail & isolated splits of e-commerce datasets) for ranked recall generation (link injection) using node representations learned under SCS for a cold-start item, which is crucial for generating recommendations. Overall, I think the paper studies an important and interesting problem and presents a good solution which is theoretically sound. However, it could try to cover broader and more crucial tasks, especially w.r.t. cold-start recommendations. <doc-sep>The paper proposes Cold Brew to distill the knowledge of a GNN teacher into an MLP student to handle the tail and cold-start generalization problem, by using the head part of the graph to guide the discovery of the latent neighborhoods of tail and isolated nodes. The paper also proposes a new metric to measure the contribution ratio of node features w.r.t. the adjacency structure. The experiments on several public datasets and a proprietary e-commerce graph show the effectiveness of the proposed method. Strengths: 1. The paper is well-motivated. The problem studied in this paper is important for the graph domain. 2. The proposed knowledge distillation method sounds interesting and novel. 3. The paper proposes the feature-contribution ratio to guide the selection of model architectures. Weaknesses: 1. The authors claim that the related works about cold start do not address the case of noisy or missing neighborhoods. However, the paper also focuses on the general cold-start problem and does not address noisy or missing neighborhoods. I did not find any discussion or designed strategy to handle the noisy or missing neighborhoods explicitly. 2. It is not clear why the structural embedding can encode the label information. 3. The assumption behind the proposed knowledge distillation method is not clear. Can I suppose that the paper makes the implicit assumption that "nodes with similar features should have similar neighborhoods"? This is because the student learns a mapping from the node features to $\\overline{E}$, which encodes the learned graph structure. 4. The authors claim that the MLP student will behave like the GNN teacher but generalize better to tail and cold-start nodes. But as Table 3 shows, GCN+SE outperforms the student MLP in the tail scenario, and the results need more explanation and discussion. The paper provides an interesting and novel solution for the critical cold-start problem. However, several claims are not well-explained or well-supported. The authors addressed most of my concerns; I would like to update my score. | The reviewers agree that the paper studies an important and interesting problem and presents a good solution which is theoretically sound. The paper can be further improved by looking into more applications such as cold-start recommendations.
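For context, a minimal sketch of the generic GNN-to-MLP distillation idea these reviews discuss: a feature-only student matches the embeddings of a graph-based teacher so that isolated or tail nodes can still be embedded at inference time. It omits Cold Brew's structural embedding and latent-neighborhood discovery, and all sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCNLayer(nn.Module):
    """One dense GCN-style layer: aggregate neighbor features, then transform."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj_norm):
        # adj_norm: (n, n) normalized adjacency with self-loops (dense, for brevity)
        return F.relu(self.lin(adj_norm @ x))

def distill_step(teacher, student, x, adj_norm, optimizer):
    """One step: the feature-only MLP student regresses the frozen GNN teacher's
    node embeddings, so nodes without neighbors can be embedded from features alone."""
    with torch.no_grad():
        z_teacher = teacher(x, adj_norm)
    loss = F.mse_loss(student(x), z_teacher)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# toy usage
n, d, h = 100, 32, 64
x = torch.randn(n, d)
adj_norm = torch.eye(n)                          # stand-in graph (self-loops only)
teacher = SimpleGCNLayer(d, h)
student = nn.Sequential(nn.Linear(d, h), nn.ReLU(), nn.Linear(h, h))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
print(distill_step(teacher, student, x, adj_norm, opt))
```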
This paper studies multilingual ASR with a focus on the long-tail problem. A new method using dual adapters is proposed. Although the method has several ingredients, the effectiveness of each is verified in detailed ablation studies. Therefore, I believe the results shown in this paper are valuable for future work. Pro: 1. The structure of dual adapters is novel. 2. To the best of my knowledge, this is the first work to verify the effectiveness of pretrained models in multilingual ASR. 3. The paper contains detailed experiments. Con: 1. The framework combines many techniques together and it is hard to tell if any one of those is the 'silver bullet'. 2. Some design/hyperparameter choices are rather magical. Questions: 1. Why did you choose to use distill-mBERT over other alternatives (mBERT, XLM etc.)? Would you expect more gain if using a larger model such as XLM-R? 2. Recent work [1] shows negative interference can impact low-resource languages in multilingual models. However, it seems like the opposite is true here: multilingual models can improve even high-resource languages (e.g. IT). Do you have any idea why? [1] On negative interference in multilingual models: findings and a meta-learning treatment. Wang et al., EMNLP 2020.<doc-sep>This paper aims to improve multilingual speech recognition on Common Voice, which contains 18 languages, some of which have little data (which the authors here refer to as the long-tail languages, I believe). The problem of multilingual ASR is both a practical one as well as a challenging one from the perspective of multitask learning and fairness, and I'm happy to see work in this area. The paper proposes 3 techniques that together result in a modest improvement over the baseline on Common Voice. The 3 include logit re-balancing based on class priors, fusion of a BERT-based language model, and the use of a common and a language-specific adapter layer in parallel. All of these techniques have been previously explored in slightly different forms for speech problems. They have not been combined in this way before, though. To my knowledge, the logit adjustment has not been applied to the long-tail problem in speech recognition. Pros - Addresses an important problem in ASR - Overall, A2 improves over the baseline of balanced sampling by an average of 1% absolute CER, or a relative improvement of 6%. That is a moderate improvement but worthwhile enough to report. - Introduces class-based logit adjustment to the long-tail problem - Introduces minor tweaks that lead to improvement, and presents an ablation study Cons - In large-scale models such as this, it is important to report the computational requirements of the model in addition to the quality improvements, as quality often grows with model size. There are no comparisons of parameter count here - Besides the ablation studies, there's not much to be learned on how the changes (dual adapter, logit adjustment, or the way mBERT is fused) helped the quality. It would be nice to report a few failed versions that the authors tried, to learn more about what works and what doesn't. - Overall the changes do not improve significantly over the baseline. Also, there should be more competing baselines to consider, other than the adapter layers of Kannan et al. There's the multi-headed decoder approach of Pratap et al. or the language ID injection approach of Li et al. "Multi-Dialect Speech Recognition with a Single Sequence-to-Sequence Model". - It's quite unclear what the long tail refers to in this paper.
Does it refer to the languages that have little data? Or does it refer to words that are rare or often misclassified? Most of the paper leads me to believe in the former, but Figures 5 and 6 in the appendix lead me to believe in the latter since the histograms are so dense. - There's a lack of specific examples that illustrate how the incorporation of the various techniques in this paper shows an improvement in the transcription. Showing specific transcriptions would be convincing in terms of demonstrating the wins from these techniques... Other comments: What is meant by the fourth bullet point in the contributions? Is there a new dataset? I do not understand the contribution. The use of previous tokens as input, i.e. not using teacher forcing, during the later stages of training (Eq. 10) is unconventional. It would be more convincing if the authors discussed this a little more, including why it improves quality. It's unclear how x_{CTC} is defined in Fig. 1. Is it the output of the encoder? Likewise, it's unclear how the function f is defined in Fig. 1. Is it the same function and weights (assuming a linear transformation from the previous layer) for f(x_CTC) and f(y'_ATTN, h_enc)? Fig. 7 and the comments on it should be moved to the main paper. It is essential for understanding how mBERT is integrated into the decoder, as that is a big part of the contribution. The grammar throughout the document is occasionally off, which distracts from the content. Needs polish. <doc-sep>This paper addresses multi-lingual speech recognition, where one ASR model is responsible for recognizing speech in multiple languages. In this example the authors look at 11 languages with between 4 and 80 hours of training data. The "long-tail problem" (which isn't clearly stated) that this work is addressing is that the discrepancy in available training data leads to a discrepancy in performance. The paper sets out two goals: 1) "to improve the overall performance of multilingual ASR tasks" and 2) (implicitly) to flatten the distribution across languages. A major challenge in multilingual (or multidomain or multitask) modeling like this is that improvements to the tail often come with degradation at the head. This work demonstrates this phenomenon clearly. On the largest languages, English performance degrades from 13.3 to 22.0 and French from 11.5 to 17.7, while on the smallest languages, Kyrghyz improves from 30.0 to 12.1 and Swedish improves from 56.1 to 21.3. While the language-average performance improves from 22.3 (monolingual) to 16.0 (proposed multilingual), it is not at all obvious that there is an application setting where this is clearly preferable. One way to mitigate this is to pose the problem not as solving universal, multilingual speech recognition, but rather as improving performance specifically on tail languages through training on higher-resource languages. If the authors were to focus on improving performance on the 8 languages with 20h or less training data, while including English (en), French (fr) and Spanish (es), but not actually caring whether the high-resource languages are improved by multilingual modeling, the results here would be much more compelling. As written the story is somewhat muddled: on average (where the average is taken over languages, rather than, say, expected usage of the system, or population, etc.) performance improves, but the improvement to lower-resource languages comes at the cost of higher-resource languages.
Also A2 the proposed system on average does better than standard multilingual training, but only on the 9 lowest resource languages, on English and French A2 actually exacerbates this problem with these higher resource languages showing even larger regressions from monolingual modeling. Implicit in this approach and task is a desire for the distribution of performance across languages to be more consistent. I would recommend making this explicit and providing some measure of variance as well as average across languages. This could be standard deviation (if there is a belief that the performance is normally distributed) or an entropy measure. But it would provide another dimension over which to optimize when understanding tail performance. I believe there is a typo or error in Equation 6. First, there are mismatched subscripts for \\pi_y and c_i. I believe this should be \\pi_i or c_y. Second consider a distribution with three classes and label counts of c = [1, 0, 0], so C=1, n_0 = 2 and N = 3. Equation 3 would result in \\pi = [1/1 - 1/(2*1), 1/1, 1/1] = [1/2, 1, 1] which is not a valid distribution. Minor comment: Figure 7 is mentioned in Section 2.3 but is only included in the Appendix. It would be clearer to either describe Figure 7 where it is first mentioned, or present this information in Section 2.3 as forward referring to Appendix material. <doc-sep>The paper proposes three additions to improve a monolithic multilingual end-to-end ASR system. The problem of training a monolithic multilingual ASR system is that using data from multiple languages does not necessary improve over individual monolingual systems. The three additions are a large multilingual language model, the use of language adapters, and smoothing on the token probabilities. Mixing the three additions in a specific way helps improve the average word error rates. There are two major problems in the paper. One is the imprecise use of words, and the other is the disconnect between the additions and the problems they try to solve. Details are as follows. The paper contains a lot of imprecise use of words. The term "long tail" is used throughout the paper, but it is never clearly defined. The long tail of a distribution refers to a significant total amount of probability mass spread on a large support. In the context of this paper, when the paper talks about the long-tail problem, what distribution are we talking about? Is it a distribution that captures how likely a phone or a word piece is used in all of the world's languages? While the long-tail problem is not properly defined, the class imbalance problem more or less is. There is still a certain amount of ambiguity. For example, what are the classes? Are the classes languages, phones, or word pieces? Given that the long-tail problem is not defined, it is hard to see why the proposed additions solve the problem. I can understand using a larger language model would help the final performance, but how does this solve the long-tail problem and the class imbalanced problem? The same applies to language adapters. The smoothing technique does have a effect on generalizing to low frequency or even unseen tokens, but the paper does not mention the connection or cite the proper papers. The paper also ignores the relationships among languages. For example, it is obvious that none of the word pieces in Mandarin are shared with the other languages. It is also the only tonal language. 
As another example, Tatar is Turkic but uses the Cyrillic script; Turkish is also Turkic but it uses the Latin alphabet; Russian is not Turkic but uses the Cyrillic script. These relationships are important in interpreting the results when training multiple languages together. Here are a list of detailed comments. > x \\in R^{T,F} T,F is a rather unconventional notation. I would suggest T \\times F. > KL(y_{ATTN} || y) Are the y's labels? This is also an unconventional (if not wrong) notation. It should be the the KL of distributions, not labels. Later on, for example in equation (3), y is used as labels. > equation (3) \\mathcal{Y} is undefined. > Figure 7 depicts ... Figure 7 is in the appendix. The main content without the appendix should be as self-contained as possible. > Let t denote the current time step. This is confusing. It's actually not the time in the actual speech, but the t-th token. > A natural adjustment is to scale the raw logits ... The term logit is misused. Please look it up, stop misusing it, and define the symbols properly. > equation (6) The symbol * should really be \\times. > equation (9) It is confusing to denote the probability as y_t^{adj}. Again, because the bold face y is used as a sequence of labels else where, such as equation (11). > ... and 2 times gradient accumulation in a single GPU ... What does this mean exactly? Please elaborate. > This is due to the human languages share some common sub-phonetic articulatory features (Wang & Sim, 2014) ... 1. This sentence is ungrammatical. 2. This is a well-known fact, and the citation cannot be this recent. 3. No evidence in this paper is shown that this is the actual cause of the improvement. Please state it clearly if this is only a speculation. > ... even MT models improve the performance of the low-resource languages significantly. This is not exactly true. For example, the performance on Mandarin actually degrades quite significantly. > ... compared to the MT, the tail classes ... However, the head classes suffer ... Are the terms tail classes and head classes defined? > ... and possibly model overfitting to the tail classes. This is easy to check. What's the performance on the training set? > The gains ... of the head languages, although tail languages ... Again, what are head and tail languages?<doc-sep>This paper proposes an Adapt-and-Adjust framework to address the long-tail problem in multilingual ASR, which assembles three techniques: 1) leveraged a pre-trained model mBERT to initialize the decoder, 2) language-specific and language-agnostic adaptors, 3) class imbalance adjustments. Experiments on a multilingual ASR with 11 languages demonstrate the proposed method can achieve accuracy improvements. Overall this paper is clearly written and easy to follow. Each technique is presented with details and evaluated with corresponding ablation studies. It is a good paper in terms of application, experiments and systematic engineering efforts. However, I have several concerns on the overall novelty and technical contributions: 1) The three techniques alone are not novel enough, and each is proposed by previous works. E.g., initialized with a pre-train language model, class imbalance adjustment, and language-specific adaptors which are similar to mixture of language experts. 2) The proposed method can hardly be called as a framework since it has not demonstrated its necessity and applicability for each component. 
In another view, it is more like an assembly of different improvement tricks without much centralized logic towards a dedicated and focused problem. 3) The effectiveness of a component (mBERT) needs to depend on other components, otherwise it does not work. This makes the proposed method not generalizable. Why is mBERT only effective when coupled with others? Is it necessary? Is the improvement by chance rather than universal? 4) Initializing from mBERT (trained with MLM) but adapting it to autoregressive generation would harm the model capability of mBERT. Why not initialize from a GPT model or, more appropriately, from sequence-to-sequence pre-trained models with a cross-attention module such as MASS or BART? This would be more effective than simply using mBERT. | As one of the reviewers commented, the paper presents "a mix of tricks" for multilingual speech recognition, which includes 1) the use of a pretrained mBERT, 2) a dual adapter and 3) prior adjusting. First, the relative gains from the pretrained mBERT are marginal (Section 3.3.1). Secondly, using 1) on top of 2) is unnecessary. This confuses the reader about what the conclusion of the paper is. It would be better to choose one aspect of the problem and investigate it more deeply. The decision is mainly due to the lack of novelty and clarity.
Edit: Following response, I have updated my score from 6 to 7. I completed this review as an emergency reviewer - meaning that I had little time to complete the review. I did not have time to cover all of the material in the lengthy appendix but hope that I explored the parts most relevant to my comments below. Paper summary: The paper introduces QHM, a simple variant of classical momentum which takes a weighted average of the momentum and gradient update. The authors comprehensively analyze the relationships between QHM and other momentum based optimization schemes. The authors present an empirical evaluation of QHM and QHAdam showing comparable performance with existing approaches. Detailed comments: I'll use CM to denote classical momentum, referred to as "momentum" in the paper. 1) In the introduction, you reference gradient variance reduction as a motivation for QHM. But in Section 3 you defer readers to the appendix for the motivation of QHM. I think that the main paper should include a brief explanation of this motivation. 2) The proposed QHM looks quite similar to a special case of Aggregated Momentum [1]. It seems that the key difference is with the use of damping but I suspect that this can be largely eliminated by using different learning rates for each velocity (as in Section 4 of [1]) and/or adopting damping in AggMo. In fact, Section 4.1 in your paper recovers Nesterov momentum in a very similar way. More generally, could one think of AggMo as a generalization of QHM? It averages plain SGD and several momentum steps on different time scales. 3) I thought that some of the surprising relations to other momentum based optimizers was the most interesting part of the paper. However, I found the presentation a little difficult. There are many algorithms presented but none are explored fully in the main paper. I had to flick between the main paper and appendix to uncover the information I wanted most from the paper. Moreover, I found some of the arguments in the appendix a little tough to follow. For example, with AccSGD you should specify that epsilon is a constant typically chosen to be 0.7. When the correspondence to QHM is presented it is not obvious that QHM -> AccSGD but not the other way around. I would suggest that you present a few algorithms in greater detail, and list the other algorithms you explore at the end of Section 4 with pointers to the appendix. 4) I am not sure that the QHAdam algorithm adds much to the paper. It is not explored theoretically and I found the empirical analysis fairly limited. 5) In general, the empirical results support QHM as an improvement on SGD/NAG. But I have some (fairly minor) concerns. a) For Figure 1, it looks like QHM beats QHAdam on MLP-EMNIST. Why not show these on the same plot? This goes back to my point 4 - it does not look like QHAdam improves on QHM and so I am not sure why it is included. The idea of averaging gradients and momentum is general - why explore QHAdam in particular? b) For Figure 2, while I certainly appreciate the inclusion of error bars, they suggest that the performance of all methods are very similar. In Table 3, QH and the baselines are often not just within a standard deviation of eachother but also have very close means (relatively). 6) I feel that some of the claims made in the paper are a little strong. E.g. "our algorithms lead to significantly improved training in a variety of settings". I felt that the evidence for this was lacking. 
Overall, I felt that the paper offered many interesting results but clarity could be improved. I have some questions about the empirical results but felt that the overall story was strong. I hope that the issues I presented above can be easily addressed by the authors. Minor comments: - I thought the use of bold text in the introduction was unnecessary - Some summary of the less common tasks in Table 2 should be given in the main paper Clarity: I found the paper quite difficult to follow in places and found myself bouncing around the appendix frequently. While the writing is good I think that some light restructuring would improve the flow. Significance: The paper presents a simple tweak to classical momentum but takes care to identify its relation to existing algorithms. The empirical results are not overwhelming but at least show QHM as competitive with CM on tasks and architecture where SGD is typically dominant. Originality: To my knowledge, the paper presents original findings and places itself well amongst existing work. References: [1] Lucas et al. "Aggregated Momentum: Stability Through Passive Damping" https://arxiv.org/pdf/1804.00325.pdf<doc-sep>Update after the author response: I am changing my rating from 6 to 7. The authors did a good job at clarifying where the gain might be coming from, and even though I maintain that decoupling the two variables is a simple modification, it leads to some valuable insights and good results which would of interest to the larger research community. ------- In this paper the authors propose simple modifications to SGD and Adam, called QH-variants, that can not only recover the “parent” method but a host of other optimization tricks that are widely used in the applied deep learning community. Furthermore, the resulting method achieves better performance on a suit of different tasks making it an appealing choice over the competing methods. Training a DNN can be tricky and substantial efforts have been made to improve on the popular SGD baseline with the goal of making training faster or reaching a better minima of the loss surface. The paper introduces a very simple modification to existing algorithms with surprisingly promising results. For example, on the face of it, QHM which is the modification of SGD, is exactly like momentum except we replace \\beta in eq. 1 to \\nu*\\beta. Without any analysis, I am not sure how such a change leads to dramatic difference in performance like the first subfigure in Fig. 2. The authors say that the performance of SGD was similar to that of momentum, but performance of momentum with \\beta = 0.7*0.999 should be the same as that of QHM. So where is the gain coming from? What am I missing here? Outside of that, the results are impressive and the simplicity of the method quite appealing. The authors put in substantial efforts to run a large number of experiments and providing a lot of extra material in the appendix for those looking to dive into all the details which is appreciated. In summary, there are a few results that I don’t quite follow, but the rest of the paper is well organized and the method shows promise in practice. My only concern is the incremental nature of the method, which is only partly offset by the good presentation. <doc-sep>The authors introduce a class of quasi-hyperbolic algorithms that mix SGD with SGDM (or similar with Adam) and show improved empirical results. They also prove theoretical convergence of the methods and motivate the design well. 
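For readers unfamiliar with the method, the update can be summarized in a few lines; the following is my own sketch of the quasi-hyperbolic step (a weighted average of the plain gradient and the momentum buffer), with variable names and the toy objective chosen by me rather than taken from the paper:

```python
import numpy as np

def qhm_step(theta, buf, grad, lr=0.1, beta=0.999, nu=0.7):
    """One quasi-hyperbolic momentum update (sketch, not the authors' code).

    buf is the exponentially discounted momentum buffer; the parameter update
    mixes the raw gradient and the buffer with weight nu.
    nu = 0 recovers plain SGD, nu = 1 recovers (normalized) momentum.
    """
    buf = beta * buf + (1.0 - beta) * grad
    theta = theta - lr * ((1.0 - nu) * grad + nu * buf)
    return theta, buf

# Toy check on the quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
theta, buf = np.ones(3), np.zeros(3)
for _ in range(5000):
    theta, buf = qhm_step(theta, buf, grad=theta)
print(theta)  # all entries end up very close to the minimizer at 0
```
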
The paper is well-written and contains the necessary references, although I did feel that the authors could have compared their method more thoroughly against the recent AggMom (Aggregated Momentum: Stability Through Passive Damping by Lucas et al.); there seem to be a few similarities there. I enjoyed reading this paper and endorse it for acceptance. The theoretical results are clearly presented, easy to follow, and state the assumptions explicitly. I appreciated the fact that the authors aimed to keep the paper self-contained in its theory. The numerical experiments are thorough and fair. The authors test the algorithms on an extremely wide set of problems, ranging from image recognition (including CIFAR and ImageNet) and natural language processing (including a state-of-the-art machine translation model) to reinforcement learning (including MuJoCo). I have not seen such a wide comparison in any paper proposing training algorithms before. Further, the numerical experiments are well-designed and also fair. The hyperparameters are chosen carefully, and both training and validation errors are presented. I also appreciate that the authors made the code available during the reviewing phase. Out of curiosity, I ran the code on some of my workflows and found that there was some improvement in performance as well. | This paper presents quasi-hyperbolic momentum, a generalization of Nesterov Accelerated Gradient. The method can be seen as adding an additional hyperparameter to NAG corresponding to the weighting of the direct gradient term in the update. The contribution is pretty simple, but the paper has a good discussion of the relationships with other momentum methods, careful theoretical analysis, and fairly strong experimental results. All the reviewers believe this is a strong paper and should be accepted, and I concur.
In this paper, the authors consider the certification and attack of graph convolutional network (GCN)-based classifiers. They propose an orthogonal Gromov-Wasserstein (OGW) discrepancy to quantify the strength of attacks on graph topology. OGW achieves a convex approximation of the original GW distance, which can be computed efficiently. Moreover, OGW yields a tight outer convex approximation for the resistance distance on graph nodes. Experimental results demonstrate that the OGW-based resistance distance can be used for kernel-based graph classification and works better than the shortest-path-based method. Additionally, the authors also verify the rationality of using OGW-based threat models for the attack and certification of GCN classifiers. Weaknesses: (1) I am not an expert on model certification and attack, but I think the organization of the paper is questionable. I don't think introducing OGW in Section 2 is a good idea. The authors should introduce the target problem or task before introducing the technical part, so that readers can obtain a big picture of what the authors did and what key challenges they solved. (2) Many notations are used without definition, which makes the paper hard to follow. (See the questions below.) Although I am not an expert in this field, I believe that this paper should be rewritten completely. In its current state it is not reader-friendly. <doc-sep>The paper considers robustness certificates under the Orthogonal Gromov-Wasserstein discrepancy (OGW). This contrasts with other graph “distances” used in the literature, such as the L1 distance between adjacency matrices. The advantage of OGW is that it considers symmetries/isometries and does not rely on a fixed node ordering. To compute the certificate, the authors use convex relaxation. More specifically, they propose using the resistance distance as the metric space on the graph (as opposed to the more commonly used shortest-path metric) and design a convex relaxation of this computation. Secondly, they give a convex relaxation of the OGW discrepancy measure. The authors demonstrate the certificate on a single-layer GCN graph classifier. The paper addresses the interesting and important problem of certifying a graph classifier. The problem is extended beyond the usual global and local budgets to consider a budget under the Gromov-Wasserstein discrepancy (OGW). To the best of the authors' knowledge, as well as my knowledge, they are the first to consider an extension to a budget that considers isometry. I do wonder, however, whether this can be taken advantage of when combined with local and global budgets. For example, if many edges are flipped to give an isometric graph, the OGW will be zero, but the local and global budgets will still be violated. In general I think the experimental section is well written. The authors do a good job of motivating the use of the effective distance, and give a convincing argument for its use through SVM classifiers and barycenter visualisation. Unfortunately, the authors only experimented with a very simple model. As I understand it, the model is a GCN convolutional layer with 64 hidden units, followed by average pooling and presumably a linear layer afterwards, with no activation functions. This can be written as (1/n) \mathbf{1}^T A X W_1 W_2, where (1/n) \mathbf{1}^T gives the pooling, A X W_1 is the GCN layer and W_2 is the final layer (see the small sketch below).
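To make this concrete, here is a small numerical sketch of that model as I understand it (made-up shapes and random data, not the authors' code); it also checks that, with no non-linearities, the two weight matrices collapse into a single one:

```python
import numpy as np

rng = np.random.default_rng(0)
n, f, h, c = 6, 4, 64, 2          # nodes, input features, hidden units, classes
A = rng.integers(0, 2, (n, n))    # stand-in adjacency (no normalization, for brevity)
A = np.triu(A, 1); A = A + A.T    # symmetric, empty diagonal
X = rng.normal(size=(n, f))       # node features
W1 = rng.normal(size=(f, h))      # GCN layer weights
W2 = rng.normal(size=(h, c))      # output layer weights

ones = np.ones((1, n)) / n        # mean pooling over nodes
logits_two_layer = ones @ A @ X @ W1 @ W2    # (1/n) 1^T A X W1 W2
logits_collapsed = ones @ A @ X @ (W1 @ W2)  # the same map with W = W1 W2

print(np.allclose(logits_two_layer, logits_collapsed))  # True
```
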
Since there are no non-linearities, we can tie the weights W_1 W_2 to get an equivalent model and see its equivalent to a GCN with a single hidden unit followed by an averaging of the node outputs. It's not actually clear to me if this certificate could be used on more complex models without further modification and, if so if it would become too loose to be a useful certificate. It would be appreciated if the authors could comment on this or even better demonstrate their method on a slightly more complex model. The other weakness is the scalability of the certificate. The authors are upfront in the paper that the certificate is O(n^3). As shown in figure 9, the certificate takes a few minutes to compute, even for very small graphs. I wonder if it's feasible to run a certificate like this on something like COLLAB graphs? Would the authors consider extending figure 9 or commenting on the sorts of time required for graphs of sizes 50, 100 or 200 nodes? have some other minor comments. - Might help to point out that the coupling is the set of doubly stochastic matrices (line 104). - Expanding on some of the derivations in an appendix would be useful. For example, I could not follow lines (11) and (12). There was also some parts in Section 3.2 I did not follow, for example why setting Z is equivalent to (20). And why Z1 = -1. - Might be useful to clarify what “Vec. Attr.” and “Disc. Attr.” mean in table 2. In general, I think the paper is original and of high quality. Despite the weaknesses I've mentioned, I think it is a good paper demonstrating how to use OGW in a certificate. I think this paper can inspire future research into certificates that consider graph distances/discrepancies beyond those that assume fixed node orderings and that this is important as most GNNs are invariant to node reorderings (i.e. node permutation) The main limitations of the proposed method are that is does not scale that well, and it is not clear how it would perform with models beyond the very simple model they consider. <doc-sep>The authors propose a robustness certificate for graph classifiers under orthogonal Gromov-Wasserstein threats which are sensitive to whether two graphs are isomorphic. They use a (convex relaxation) of the resistance distance as the underlying metric, and derive a convex lower bound of OGW via Fenchel biconjugation. This gives them a sound but incomplete certificate. They also propose a complementary attack to show non-robustness. They evaluate their certificate on graph classification tasks using a single layer linear GCN. The paper is very well written and easy to follow. The problem that they tackle is very important since it is moving us beyond simple threat models such as global and local edge perturbations that effectively ignore the graph structure. The motivation to use the Gromov-Wasserstein discrepancy is sound, and the chosen relaxations (resistance distance instead of shortest path, orthogonal GW) are necessary to make the certificate tractable. The technical contributions are significant. Since the authors also propose a complementary attack it is possible to experimentally evaluate the tightness of the proposed certificate by looking at the fraction of non-verifiable graphs (neither robust nor non-robust). The main weakness of the paper is the experimental evaluation. The certificates and attacks are evaluated only fora a GCN model using a single linear convolutional layer followed by average pooling (see Q1). 
Moreover, most of the experiments are performed with a fixed local budget which might be over-constraining the set of solutions leading to false sense of security w.r.t. tightness (see Q2). While the final threat model is definitely a move in the right direction it is not as interpretable as simple edge perturbations. It would be helpful if the authors can provide more intuition (see Q3 and Q4). The authors do not discuss whether there is an accuracy vs. robustness tradeoff (see Q5 and Q6). While they do show the graph classification performance in section 5.1. these are for different models, and not the one they are certifying. Relatedly, I think that sections 5.1. and 5.2 distract the reader from the main message. While they provide interesting insights that should perhaps be relegated to the appendix. The additional space can be used to address some of the questions below. The authors do not explicitly discuss the limitations of the limitations and potential negative societal impact of their work. | This paper proposes a robustness certificate for graph classifiers under orthogonal Gromov-Wasserstein (OGW) threat models. OGW considers symmetries/isometries and does not rely on a fixed node ordering. The computation of the certificate is based on convex relations. The certificate is demonstrated on a single-layer GCN graph classifier. The paper addresses the interesting and important problem of certifying a graph classifier. The reviewers found the paper original and of high quality. During the rebuttal period, the authors provided an insightful discussion on their work and addressed most of the questions and concerns raised by the reviewers. |
This paper considers the use of a factored Q-function representation for reinforcement learning that takes advantage of a factored action space in certain "appropriate" MDPs, where "appropriate" is characterized both theoretically and qualitatively/practically in the paper. The authors show empirically that their factored representation can help in low-data healthcare settings, even when it is not a perfect match for the environment. It is argued that this is because it allows for a better bias-variance trade-off and provides a structural basis to generalize outside of the empirical distribution. This is a good paper. At first I was put off by the specificity of the representation---it really did seem to me that the factored action space representation used here was too specific to be of any practical use, but I felt that the authors did well to combat this and demonstrated applicability in healthcare at the very least. The presentation is clear and well done, and the development is logically consistent and flows well. While I found many of the theoretical bits "obvious" (e.g., props 2-4), I also felt that their presence adds to the overall depth of the paper as more or less a complete treatment of this "factored action space" representation of the Q-function. The fact that the abstraction used in Theorem 1 can be implicit is what makes this Theorem, and the paper as a whole, interesting, and sufficiently differentiated from, say, the literature on factored MDPs (but I'm not an expert in FMDPs). The empirical portions are well done---experiments are highly relevant, and the connection between the experiments and the method is plain---and the discussion is insightful. I have seen, but not read in any detail, many works in multi-agent RL (and otherwise), which use factored action spaces (there are 69 results for "factored action spaces" on Scholar). I am trusting the author's lit review on the novelty point, and I hope that at least one of the other reviewers is sufficiently familiar with the literature to give a good opinion on novelty. To me, it seems sufficiently novel, but I lack broad knowledge of other work on factored action spaces. I'm going to give this paper an "Accept" rating, since I don't see easy ways to improve it, and it feels complete. That being said, I do think the description for "Weak Accept" (which is pretty strong?) is more accurate, insofar as this paper is rather specific. I think this paper adequately addressed the limitations and potential negative societal impact of their work. The authors discuss the stringent conditions of their representations at length, and their experiments are performed in environments that *do not* satisfy their sufficient conditions. <doc-sep>This paper proposes the factored action spaces for offline RL in the applications of healthcare AI. The authors aim to leverage the factored action space in the form of linear Q-function decomposition to improve the sample efficiency compared with the baselines with combinational action spaces. The theoretical guarantees show that the linear Q-function decomposition can lead to zero bias under mild conditions. And even if the assumptions are violated, the method can still achieve good performances (nearly policy optimality). Empirical analysis verifies the proposed claims and effectiveness of the approach on simulated and real-world healthcare RL benchmarks. ## *Strengths* #### **1. 
Technical soundness and significance** The paper proposes the factored action space, where the action space is expressed as a Cartesian product of a few sub-actions spaces. Though similar ideas and methods have been proposed in the past few years, this paper gives a detailed and comprehensive analysis in both theoretical and empirical manners. The theory parts are complete and technically sound, and evaluation with practical consideration also verifies the theoretical claims. With the analyzed theorems in this paper, factored approaches (either in state-action space or action space) in RL would be more broadly applicable. #### **2. Presentation** Though the presentation can be further improved for better readability (see the weaknesses points below), the overall logical flow is clear, and the concrete examples (Fig. 2-Fig. 3) are helpful for readers to understand the theorems. ## *Weakness* #### **1. The contributions on the algorithmic aspects are vague** As the authors mentioned in the related work section, several works have been proposed to explore the benefits of factored action space for both model-based and model-free RL. Yes, this work offers the solution for leveraging factored actions with value-based methods in offline RL. However, from the current version, I cannot tell the significant contributions of the algorithmic aspects of learning the factored space. The Cartesian product for action space has been used extensively in previous works (i.e., [1]). It would be better to explicitly list the contributions to the algorithmic aspects in the revised version. A table with comparisons of all related approaches would be helpful. #### **2. About evaluation** More complicated benchmarks (e.g., Mujoco control tasks, DOTA2, StarCraft, etc.) have been tested in other works using factored action spaces. Can this framework also be applicable for other commonly-used RL benchmarks? **Please note that running on these benchmarks during the rebuttal phase is not a must, but any discussion or analysis would be highly appreciated.** #### **3. Possible directions to improve the writing for better readability** #### -> 3.1 Giving algorithmic frameworks into the main paper The authors can consider adding the algorithmic frameworks as algorithm pseudo-code or figures. The framework can explain the pipelines of learning or exploiting the factored spaces. Moving some justification contents in Section 3.3.2 into the appendix can save room for this (since I feel like the propositions in 3.3.2 is already very clear). #### -> 3.2 Adding one background section on factored MDP and factored action space The related work section in the appendix briefly introduces factored MDP and action space in RL. I think it is better to briefly give some formal definitions in the background or preliminary sections in the main paper. ### **References** [1] PIERROT, Thomas, et al. "Factored Action Spaces in Deep Reinforcement Learning." (2020). The limitations are more related to the algorithmic contribution and empirical evaluation (listed in the weakness section). I will increase my score if the authors give justifications during the rebuttal phase. <doc-sep>This study improved and extended the standard reinforcement learning (RL) to factored action space. A form of linear Q-function decomposition was proposed to handle factored action space. 
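To illustrate the general shape of such a decomposition, a Q-function written as a sum of per-sub-action components over a Cartesian-product action space, here is a minimal sketch; the layer sizes and the two-sub-action setup are my own illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class FactoredQ(nn.Module):
    """Q(s, a) = sum_d Q_d(s, a_d) for a factored action a = (a_1, ..., a_D)."""

    def __init__(self, state_dim, sub_action_sizes):
        super().__init__()
        # one head per sub-action space; each head scores all of its choices
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, k))
            for k in sub_action_sizes
        )

    def forward(self, state):
        # returns the full |A_1| x |A_2| table by broadcasting the per-head sums;
        # written out for D = 2 sub-actions to keep the sketch short
        q1, q2 = (head(state) for head in self.heads)
        return q1.unsqueeze(-1) + q2.unsqueeze(-2)  # shape: (batch, |A_1|, |A_2|)

q_net = FactoredQ(state_dim=8, sub_action_sizes=[3, 4])
q_table = q_net(torch.randn(5, 8))   # Q-values for all 12 joint actions
print(q_table.shape)                 # torch.Size([5, 3, 4])
```
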
The novel method has been analyzed from a theoretical perspective, where the authors discussed the sufficient and necessary conditions for unbiases and studied its effect on variance reduction. The authors also demonstrated the proposed method through empirical experiments based on a simulation study and real-world MIMIC data analysis. The novel method provides both theoretical insights and empirical evidence for RL practitioners to consider this simple linear decomposition approach to factored action space. Strength: The research question is novel, and the authors found the unmet need of factored action spaces in the RL approach. Solving this problem will facilitate the use of offline RL in a broader application, especially in healthcare. The methodology is well-explained with details. This paper is complete with theoretical insights and empirical experiments. Weakness: The proposed method showed marginal improvement compared to the existing method. I also have some conservations on the applicability of the method (where many preconditions are required). The improvement has not addressed the key limitations of offline RL (i.e., how to improve exploration, discover high-reward regions and solve distributional shift problems). 1. The performance improvement brought by the proposed method seems marginal. I suggest the authors add some other performance indicators and provide additional discussions. 2. There is a lack of interpretability for the proposed method. The authors are suggested to elaborate on how to interpret the results at the individual level and how this can be applied to clinical practice. 3. There are some preconditions required for using this proposed method. I would suggest the author provide some discussions and give more guidelines for future researchers who would like to use your method. 4. Please also add more on the importance of your method and how your method can address the unmet need of current literature. <doc-sep>This paper proposes to decompose Q-function according to factorized action space. It conducts discussions on the bias-variance trade-off of the decomposed Q-function. Specifically, this paper provides sufficient conditions for zero bias and proves these conditions are not necessary. Furthermore, it shows that this form of Q-function decomposition leads to low variance because of the smaller lower bound on the empirical Rademacher complexity. Finally, it discusses what kinds of tasks can fulfill the sufficient conditions. The experiments on offline healthcare tasks demonstrate that the decomposed Q-function has the potential to help models improve their performance. Strengths: The paper is well-structured and easy to follow. This paper studies the guarantee of the unbiasedness of Q-function decomposition, which may inspire the researchers to focus on safe decomposition and improve performance in healthcare tasks. Besides, the proofs of bias-variance trade-off are provided in detail. On the sufficiency and necessity of these conditions, it validates them either by theoretical analysis or examples. Weakness: 1. According to Figure 1, Q-function decomposition can help reduce some calculation steps in the model. However, this paper does not demonstrate the optimization of computational efficiency, especially when it comes to experiments. 2. According to Section 3.4, it seems that the sufficient conditions are very hard to satisfy which may lead the model rather impractical. 3. 
The lack of comparison to more algorithms makes the effectiveness of the decomposition somewhat less convincing. The authors have discussed the limitations of their proposed approach in the manuscript, which may cause risky results in healthcare. | Reviewers agree that the problem of factored action spaces in RL is important and that this paper makes novel contributions to this setting. The reviewers were satisfied with the post-rebuttal discusion and have converged on an accept recommendation. On revision, the reviewers request that the authors revise the paper according to the clarifications that occurred during post-rebuttal discussion. Also, for context, it's important to note that the concept of factored action spaces goes back a long way in the factored MDP literature and I would request the authors to acknowledge this in their related work discussion as they prepare their final revision. To the best of my knowledge, the first mention of factored action spaces is in a 1996 multiagent MDP paper: Craig Boutilier. Planning, Learning and Coordination in Multiagent Decision Processes. (1996) https://www.cs.toronto.edu/~cebly/Papers/tark96.pdf Somewhat more recently, the following paper presented a sequential hindsight method for compositional MDPs that is an upper bound approximation for (weakly) coupled MDPs. I mention this specific paper since it discusses theoretical results relating to factored action MDP approximations and also presents a simple approximate decomposition methodology that I have found hard to beat empirically: Aswin Raghavan, Saket Joshi, Alan Fern, Prasad Tadepallia, Roni Khardon. Planning in Factored Action Spaces with Symbolic Dynamic Programming. (2012) https://ojs.aaai.org/index.php/AAAI/article/view/8364 |
The paper proposes a new approach to explain the effective behavior of SGD in training deep neural networks by introducing the notion of star-convexity. A function h is star-convex if its global minimum lies on or above any plane tangent to the function, namely h* >= h(x) + <h'(x), x*-x> for any x. Under this condition, the paper shows that the empirical loss goes to zero and the iterates generated by SGD converge to a global minimum. Extensive experiments have been conducted to empirically validate the assumption. The paper is very well organized and easy to follow. The star-convexity assumption is very interesting and provides new insights about the landscape of the loss function and the trajectory of SGD. It is in general difficult to check this condition theoretically, so several empirical verifications have been proposed. My main concern is about these empirical verifications.
1) The minimum of the cross entropy loss lies at infinity. The experiments are performed with respect to the cross entropy loss. However, the cross entropy loss violates Fact 1, since for any finite weight the cross entropy loss is always strictly positive. Thus zero is never attained and the global minimum always lies at infinity. As a result, the star-convexity inequality h* >= h(x) + <h'(x), x*-x> hardly makes sense since x* is at infinity, and neither does the theorem that follows. In this case, a plot of the norm of x_k is highly suggested as a sanity check to see whether the iterates go to infinity.
2) The phenomenon may depend on the reference point, i.e. the last iterate. Since the minimum is never attained, the empirical check of star-convexity may be biased. More precisely, it might be possible that the behavior of the observed phenomenon depends on the reference point, i.e. the last iterate. Therefore, it will be interesting to see if the observed phenomenon still holds when varying the stopping time, for instance by plotting the star-convexity check using the iterates at 60, 80, 100, 120 epochs as reference points.
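Concretely, the check I am suggesting could be computed roughly as follows (a sketch only: `model`, `loss_fn`, the data batch, and the list of stored iterates are placeholders, not the authors' code); it evaluates the star-convexity gap h(x*) - h(x_k) - <h'(x_k), x*-x_k> for a chosen reference point:

```python
import torch

def star_convexity_gap(model, loss_fn, data, x_k, x_ref):
    """Gap h(x_ref) - h(x_k) - <grad h(x_k), x_ref - x_k>; a non-negative value
    means the star-convexity inequality holds at iterate x_k w.r.t. x_ref."""
    params = list(model.parameters())

    def load(flat):
        torch.nn.utils.vector_to_parameters(flat, params)

    inputs, targets = data
    load(x_k)
    loss_k = loss_fn(model(inputs), targets)
    grad_k = torch.autograd.grad(loss_k, params)
    grad_k = torch.cat([g.reshape(-1) for g in grad_k])

    load(x_ref)
    with torch.no_grad():
        loss_ref = loss_fn(model(inputs), targets)

    return (loss_ref - loss_k.detach() - grad_k @ (x_ref - x_k)).item()

# checkpoints: flattened parameter vectors saved during training; one could use
# the iterates at, e.g., epochs 60, 80, 100, 120 as alternative reference points:
# for x_ref in reference_points:
#     gaps = [star_convexity_gap(model, loss_fn, batch, x_k, x_ref)
#             for x_k in checkpoints]
```
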
- On page 3, section 3.1: the x^* here is the last iteration produced by SGD. Then how can it be called the “global minima”? The caption of Figure 1 on page 4 is simply misleading. - On page 4, the statement in definition 1 is more like a theorem than a definition. It is giving readers the impression that any path generated by SGD satisfies the star-convex condition, which is not the case here. A definition should look like “we call a path generated by SGD a star-convex path if it satisfies …”. Definition 2 on page 6 has the similar issue. In terms of quality, while I believe the paper is technically correct, I have one minor question here: Page 3, Fact 1: How can you conclude that the set of common global minimizers are bounded? In fact I don’t believe this is true at all in general. If you have a ReLu network, you can scale the parameters as described in [1], then the model is invariant. Therefore, the set of common minimizer is definitely NOT bounded. In terms of significance, I think this paper is very interesting as it attempts to draw the connection between the aforementioned observations and the convergence properties of SGD. Unfortunately I think that this paper is less significant than it has appeared to be, although the analysis appears to be correct. First of all, all the analysis of this paper is based on one very important and very strong assumption, namely, all individual functions $l_i$ share at least one common global minimizer. The authors have attempted to justify this assumption by empirical evidences (figure 1). However, achieving near-zero loss is completely different from achieving exact zero because only when the model achieves exact zero can you argue that a common global minimizer exists. Secondly, the claim that the iterate converges to the global minima is based on the assumption that the path follows an “epoch-wise star-convex” property. From this property, it only takes simple convex analysis to reach the conclusion of theorem 1 and 2. Meanwhile, the assumption that the path does follow the “epoch-wise start-convex” properties is not at all informative. It is not clear why or when the path would follow such a path. Therefore theorem 1 and 2 are not more informative than simply assuming the sequence converges to a global minimizer. In fact, it is well-known that SGD with constant stepsize converges to the unique minimizer if one assumes the loss function F is strongly convex and the variance of the stochastic gradient g_k is bounded by a multiple of the norm-square of the true gradient: Var(g_k) <= M ||∇F(x_k)||^2 Which is naturally satisfied if all individual functions share a common minimizer. Therefore, I don’t think the results shown in the paper is that surprising or novel. With respect to the empirical evidence, the loss function l_i is assumed to be continuously differentiable with Lipschitz continuous gradients, which is not true for networks using ReLU-like activations. Then how can the paper use models like Alexnet to justify the theory? Also, if what the authors claim is true, then the stochastic gradient would have vanishing variance as it approaches x^*. Can the authors show this empirically? In summary, I think this paper is definitely interesting, but the significance is not as much as it would appear. Ref: [1] Dinh, L., Pascanu, R., Bengio, S., & Bengio, Y. (2017). Sharp minima can generalize for deep nets. 
arXiv preprint arXiv:1703.04933.<doc-sep>This paper analyzed the global convergence property of SGD in deep learning based on the star-convexity assumption. The claims seem correct and validated empirically with some observations in deep learning. The writing is good and easy to follow. My understanding of the analysis is that all the claims seem to be valid when the solution is in a wide valley of the loss surface where the star-convexity holds, in general. This has been observed empirically in previous work, and the experiments on cifar10 in Fig. 2 support my hypothesis. My questions are: 1. How to guarantee the star-convexity will be valid in deep learning? 2. What network or data properties can lead to such assumption? Also, this is a missing related work from the algorithmic perspective to explore the global optimization in deep learning: Zhang et. al. CVPR'18. "BPGrad: Towards Global Optimality in Deep Learning via Branch and Pruning". | The proposed notion of star convexity is interesting and the empirical work done to provide evidence that it is indeed present in real-world neural network training is appreciated. The reviewers raise a number of concerns. The authors were able to convince some of the reviewers with new experiments under MSE loss and experiments showing how robust the method was to the reference point. The most serious concerns relate to novelty and the assumptions that individual functions share a global minima with respect to which the path of iterates generated by SGD satisfies the star convexity property. I'm inclined to accept the authors rebuttal, although it would have been nicer had the reviewer re-engaged. Overall, the paper is on the borderline. |
This paper studies the correlation between the ranking of networks sampled from a SuperNet and that of stand-alone networks under various settings. The authors also study how masking some operations in the search space and different ways of training affect the ranking correlation.
Pros: The paper has a lot of experiments to substantiate the claims. Figure 3, where every operation is systematically masked, provides more insight into which operations are effective and how NAS behaves if one of the operations is masked.
Cons: Several other papers have already published similar findings. Overall the paper is very incremental. More specifics are given in the questions below.
Questions
1. How is the SuperNet trained?
2. Figure 2: Yu et al [1] have already explored the correlation of ranks of networks sampled from a SuperNet and that of stand-alone networks. How is Figure 2 different from that?
3. RobustDarts [2] has explored how subsets of the NASBENCH search spaces behave. FAIRDarts [3] also explored the influence of skip connections by running DARTS without skip connections, running random search while limiting skip connections to 2, etc. Figure 4 seems to be inspired by that. While it is interesting, this might be a slight extension of the work done by Yu et al [1].
4. Bender et al [4] postulate that the operations of a SuperNet are subject to co-adaptation and recommend techniques such as regularization, drop path, etc. to alleviate this. RobustDarts also suggests recommendations such as L2 regularization and drop path, although in the context of DARTS. So while Figure 6 demonstrates this empirically, it is not a new finding.
Overall, the empirical results in the paper are very useful for the NAS community. But the work is still very incremental. This might be better received as a workshop paper instead.<doc-sep>This paper introduces an empirical study on the ranking correlation in the single-path setup. Following the paradigm of NAS-Bench-201, the authors test the Kendall Tau correlation between networks from NAS training and networks from standalone training. In general, I appreciate the authors' effort in bringing more infrastructure to the NAS community. As a recently emerged community, we do need works like this one, as well as previous ones such as NAS-Bench-101 and NAS-Bench-201, to make the evaluation protocol more scientific. NAS problems are non-trivial as the search space is notoriously large. Colleagues who would like to invest their time and resources in exploring and manifesting this search space to uncover more phenomena are thus worthy of respect. However, this respectable responsibility also comes with a higher standard for evaluating works attempting to fulfill it. My major concern with this work is that the manuscript is not organized well. Although the authors provide substantial details on their empirical study, they did not form a coherent logical flow to present these empirical findings, which makes this work more like a technical report than an academic paper. Readers may find these phenomena interesting but may not get interesting insights after reading this paper. Hence the technical contribution, especially in terms of novelty, seems quite limited, even if there may be some intriguing points in the authors' discovery. I would recommend that the authors pick some phenomena, e.g., masking Zero, masking Skip, etc., as examples to provide more analysis, so as to demonstrate to colleagues in our community that these findings can indeed lead to interesting research topics.
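For clarity on the metric itself: the ranking-correlation evaluation in question boils down to comparing two accuracy lists for the same set of architectures (one predicted via the super-network, one from stand-alone training), for example with scipy; the numbers below are made up for illustration only:

```python
from scipy import stats

# accuracies of the same five architectures (made-up values)
supernet_pred = [61.2, 58.7, 63.1, 55.0, 60.4]   # predicted via weight sharing
stand_alone   = [91.5, 89.9, 92.3, 88.1, 90.7]   # trained from scratch

tau, _ = stats.kendalltau(supernet_pred, stand_alone)
rho, _ = stats.spearmanr(supernet_pred, stand_alone)
r, _ = stats.pearsonr(supernet_pred, stand_alone)
print(tau, rho, r)  # 1.0, 1.0, and a high Pearson r, since these lists agree on ranking
```
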
Some minors: There are some works missed in the literature review. For example, the authors did not give adequate credits to colleagues who pioneered in using Kendall Tau to evaluate NAS training. As far as I know, Sciuto et al., 2019 was one of the earliest works. In the third paragraph, when reviewing recent progress, the authors did not distinguish the ranking correlation between NAS searching and retraining from the correlation between NAS searching results and stand-alone training. The former one was discussed and addressed in Hu et al., 2020. As the community has not fully realized the subtle but crucial difference between these two correlations, I believe a better framing of this work can be more helpful to other colleagues, especially those new comers. Sciuto et al. 2019, Evaluating the search phase of neural architecture search. Hu et al. 2020, DSNAS: Direct Neural Architecture Search without Parameter Retraining<doc-sep> + This paper studies the single-path one-shot super-network predictions and ranking correlation throughout an entire search space, as all stand-alone model results are known in advance. This is a crucial step in NAS. As we know, inaccurate architecture rating is the cause of ineffective NAS in almost all existing NAS methods. It makes nearly all previous NAS methods not better the random architecture selection (suggested by two ICLR 2020 papers and many ICLR 2021 submissions). Therefore, analyzing the architecture rating problem is of most importance in NAS. This paper takes a deep insight into the architecture rating problem, which provides a timely metric for evaluating NAS's effectiveness. (+) - In the following text, another paper entitled "Block-wisely Supervised Neural Architecture Search with Knowledge Distillation" should be discussed: "Recent efforts have shown improvements by strictly fair operation sampling in the super-network training phase (Chu et al. (2019b)), by adding a linear 1×1 convolution to skip connections, improving training stability (Chu et al. (2019a)), or by dividing the search space (Zhao et al. (2020))," (-) - Kendall's Tau is a good metric. As shown in EagleEye, Spearman Correlation Coefficient(SCC) and Pearson Correlation Coefficient (PCC) are also good metrics. Could the authors also provide a comparison using these two metrics? (-) EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning - I think NAS-Bench-201 is not enough. As we know, CIFAR-10 is sometimes considered a toy benchmark, and the sole result on CIFAR-10 is not convincing. Could the authors provide more results in addition to CIFAR-10? (-) - As we know, there may be a gap between the small-channel supernet and the large-channel finally-adopted architecture. We are quite interested in the ranking correlations between a subnet obtained from the small-channel supernet and a channel-expanded version of the subnet trained from scratch. Could the authors provide such a ranking correlation analysis? (-) - Could the authors provide more details in Figure 3. Figure 3 shows that the lines on the top mean the operation is used more frequently. But I am not sure what the value of the y-axis means. (-) - Could the authors present some comments on "Perhaps the most surprising is the low importance of Average Pooling, even lower than Zero, an operation that does absolutely nothing"? 
(-) + The following observation is believed to be crucial in NAS: "The baseline for small networks (top left, red) has the same averaged prediction accuracy for the top 10 as for the top 500 networks". This validates the inefficiency of SPOS in architecture search. (+) + The following observation is also important in NAS: "Masking Skip (blue, left) is the most harmful to τa (=1). As seen in Figure 4, the top-N networks have a worse average predicted accuracy than the top-M (for N < M) networks, and sometimes even below the random sample, which is terrible. Interestingly, \\tau may improve within the predictions for the top-N architectures." Especially, the phenomenon that masking skip connection reduces the ranking correlations is interesting. As is shown in SCARLET-NAS, the supernet training with skip connection is not fair. But in this paper, we can see that skip connection benefits the ranking correlation. We are interested in this opposite opinion. Specifically, it is fascinating to see that "Although the additional transformers seem to stabilize training, as seen by the lower standard deviation, they also worsen the τa problem." Besides, the phenomenon of "\\tao may improve within the predictions for the top-N architectures" indicates that the metric for ranking correlations maybe not perfect. A more reasonable metric may be desirable. (+-) + The following observation is important: "medium-sized super-networks require additional care." As shown by Figure 4, the averaged predicted accuracy of top-N networks in several subsets is lower than that of a random subset of networks. This is consistent with previous work like DNA, which shows a large search space may be harmful to the architecture rating. Even if a medium-sized supernet has a bad architecture rating, the ranking correlation should be worse in a large-sized supernet. (+) DNA: Block-wisely Supervised Neural Architecture Search with Knowledge Distillation - The following description is questionable: "After the architecture search, all Linear Transformers can safely be removed, as they do not impact the network capacity". Actually, stacking many fully connected layers without non-linear activations could lead to only one fully connected layer. It is an open question of whether optimizing loss(ABCx, y) is as difficult as optimizing loss(Dx, y) using stochastic gradient descent. (-) + The results providing evidence against disabling cell topology sharing during the training phase are exciting and new to the public. (+) + The following observation is fascinating: "The absolute validation accuracy value is increased by uniform sampling. However, this is not relevant, as only the correct ranking matters". This is against FairNAS. (+) + It is interesting and convincing that many tricks such as learning rate warm-up, gradient clipping, and regularization do not work to improve the ranking correlation. We are pleased that the authors provide so many experiments to point out some misleading approaches in NAS. I think this paper is very important in the context of AutoML. (+) - The analysis is based on medium-sized and small-sized search space. It would be good to see some analysis of large-sized search space. (-) Overall, this paper provides a timely analysis of the current NAS's ineffectiveness caused by the inaccurate architecture rating problem. 
As there are many NAS papers published every year and their ineffectiveness may still be not widely recognized by the reviewers and the public, I recommend a strong acceptance for this paper to promote the analysis of the NAS's architecture rating problem. ------------------------------post rebuttal------------------------------------------ -------------------Response to the authors' response---------------------- Thank you for the hard work in responding. I have read other reviewers' reviews and the response from the authors. The authors have addressed most of my concerns. I believe this paper deserves acceptance. As we know, variants of efforts have been made to improve NAS's effectiveness since 2016, and a great process has been reached. Despite the high expectation and solemn devotion, NAS's effectiveness is believed to be still low. This is inconsistent with many pioneer researchers' expectations four years ago, in which NAS is expected to be another revolutionary technique similar to 2012's deep learning. Currently, there are many NAS papers published every year. But their effectivenesses are unclear due to the lack of ranking correlation analysis. Differently, this paper comprehensively analyzes the architecture rating problem, which provides a timely analysis of the current NAS's ineffectiveness caused by inaccurate architecture rating. I think this paper can attract the community's attention, encouraging the community to pay attention to the architecture rating in NAS, especially when reviewing a NAS paper. Therefore, I recommend an acceptance for this paper to promote the analysis of the NAS's architecture rating problem. I agree with R2 that Yu et al. have proposed a similar idea (I assume R2 refers to "Kaicheng Yu, Christian Sciuto, Martin Jaggi, Claudiu Musat, Mathieu Salzmann, Evaluating the Search Phase of Neural Architecture Search"). But the analysis in this paper is more comprehensive than Yu et al.'s article. Many findings are new (at least they are not in published papers). I agree with R4 that the authors did not form a coherent logic flow to present these empirical findings, and the paper was similar to a technique report. However, many important articles, e.g., "Designing Network Design Spaces," "Exploring Simple Siamese Representation Learning," "Is Faster R-CNN Doing Well for Pedestrian Detection?" are also technique-report-like. I appreciate R1 for his devotion to finding similar observations in his experiments. I believe these observations are important and deserve publication. I agree with R1 that removing any operation leads to a smaller search space and a higher ranking. In summary, I will keep my rating as an acceptance. Undoubtedly, I also believe the comments from other reviewers can benefit the improvement of your paper. <doc-sep>In this paper, the authors proposed several findings on the single-path training strategy. The ranking correlation is the main issue. The experiments are conducted on NASBench201. Introduction. 'However, the aforementioned weight sharing xxx', there are a number of efficient multi-objective (Pareto front) NAS methods. 'However, since the single-path', please cite some literature related to the ranking issue. Summarize the main findings in the introduction section. Method. Define $\\tau_{\\alpha}$ in math. 'describing the ranking correlation of the average prediction accuracy depending on N', what is the average prediction accuracy for two lists? Experiments. 
4.1 'masking the Zero operation (bottom row) significantly reduces this portion and thus improves the ranking correlation $\tau$ (KT)'. If operations other than Zero are removed, will $\tau$ be lower? In my opinion, removing any operation leads to a smaller search space, and all such reduced spaces should have a higher ranking correlation. Sections 4.2 and 4.3 examine some of the training strategies proposed by previous works. References. Please make the references clear. Add the venues for all the papers. This paper tries to explore the single-path training strategy by studying the search space, the supernet, the linear transformer, the strict uniform sampling, the topology sharing, the LR warmup, the regularization, and the clipping. The authors have done lots of experiments to clarify the important factors behind the ranking. However, most of the findings are not new to me. They have been discussed more or less by previous works and discovered in my own experiments. So the contribution is not significant. The paper is mostly clear; some paragraphs and references need to be polished, and more related work should be added. | This submission received reviews with a very wide range of scores (initially 3,5,5,9; then 5,5,5,9). In the discussion, all reviewers maintained their general position (although a private message by the reviewer giving a score of 9 said he/she would consider going down to an 8). Because of the high variance, I read the paper in detail myself. I agree with all reviewers that NAS is a very important field of study, that the experiments are interesting, and that purely empirical papers studying what works and what doesn't work (rather than introducing a new method) are definitely needed in the NAS community. But overall, for this particular paper, I agree with the 3 rejecting reviewers. The paper presents a lot of experiments, but I am missing novel deep insights or lasting overarching take-aways. The paper reads a bit like a log book of all the experiments the authors did, before having gone through the next iteration in the process to consolidate findings and gain lasting insight. In a bit more detail, half the results in Section 4 use medium-sized super networks, which seem broken to me, yielding much worse performance than small super networks. I did not find any motivation for studying these medium-sized networks, no reason given for them to perform poorly, and none stating why the results are still interesting when the networks perform so poorly (apologies if I overlooked these). The poor performance may be due to using a training pipeline that works poorly for these larger networks, but this is hard to know exactly without further experiments. I would either try to fix these networks' performance or drop them from the paper entirely, as I do not see any insights that can be reliably gained from the current results. As is, I believe these results (accounting for half the plots in the paper) only muddy the water and prevent a crisp presentation of insightful results. Another factor that I find unfortunate about the paper is that it only uses NAS-Bench-201 for its empirical study, and even for that dataset, mostly only the CIFAR-10 part. After getting rid of isomorphic graphs from the original 15625 architectures, NAS-Bench-201 only has 6466 unique architectures (see Appendix A of NAS-Bench-201), while, e.g., NAS-Bench-101 has 423k unique architectures. 
As the authors indicate themselves in their section "Grains of Salt", it is unclear whether insights gained on the very small NAS-Bench-201 space generalize to larger spaces. I therefore believe that there should also be some experiments on another, larger space, to study how well some of the findings generalize. An additional benchmark that the authors could have directly used without performing additional experiments themselves is the NAS benchmark NAS-Bench-1shot1 (ICLR 2020: https://openreview.net/forum?id=SJx9ngStPH), which studies 3 different subsets of NAS-Bench-101, and which was created to allow one-shot methods to use the larger space of evaluated architectures in NAS-Bench-101. Minor comments: - It reads as if the authors performed 5 runs, computed averages of the outcomes, and then computed correlation coefficients. That would be a suboptimal experimental setup, though; in practical applications, only one run of the super network would be performed, and therefore, in order to assess performance reliably, one should compute correlation coefficients for one run at a time, and then obtain a measurement of reliability of these correlation coefficients across the 5 runs (a short sketch of this per-run protocol is given below). - The y axis in Figure 2 appears to be broken: for example, in the left column it goes from 99.978 to 99.994, and the caption says these should be accuracy predictions of NAS-Bench-201. However, even the best architectures in NAS-Bench-201 only achieve around 95% accuracy. Overall, I recommend rejection for the current version of the paper. Going forward, I encourage the authors to continue this line of work and recommend that they iterate on their experiments and extract crisp insights from them. I also recommend performing experiments with a much larger search space than that of NAS-Bench-201 to assess whether the findings generalize. |
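For concreteness, a minimal sketch of the per-run ranking-correlation protocol suggested in the minor comments above; the array names, sizes, and numbers here are placeholder assumptions, not details from the paper under review.

```python
# Minimal sketch of the per-run Kendall-tau protocol suggested above (names,
# shapes and numbers are placeholders, not taken from the paper under review).
import numpy as np
from scipy.stats import kendalltau

def per_run_kendall_tau(pred_acc_per_run, true_acc):
    """pred_acc_per_run: (num_runs, num_archs) supernet-predicted accuracies;
    true_acc: (num_archs,) ground-truth benchmark accuracies."""
    taus = []
    for run_pred in pred_acc_per_run:
        tau, _ = kendalltau(run_pred, true_acc)   # one coefficient per run ...
        taus.append(tau)
    return float(np.mean(taus)), float(np.std(taus))  # ... then report the spread

rng = np.random.default_rng(0)
true_acc = rng.uniform(0.80, 0.95, size=500)                # stand-in for benchmark accuracies
pred_acc = true_acc + rng.normal(0.0, 0.02, size=(5, 500))  # 5 independent supernet runs
mean_tau, std_tau = per_run_kendall_tau(pred_acc, true_acc)
print(f"Kendall tau across runs: {mean_tau:.3f} +/- {std_tau:.3f}")
```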
#### Summary This paper works on unsupervised discovery of keypoints in Atari game frames to help improve Atari game performance. The keypoint discovery is based on predicting "predictable" local structure, i.e., the authors consider points that cannot be predicted from their neighbors as good keypoints. Experiments show the learned keypoints perform better on 3 Atari games (Table 1) than a counterpart keypoint discovery method, Transporter. #### Strengths - The key idea of finding non-locally-predictable points as a representation of the game state is interesting, and particularly suitable for games, where the backgrounds are mostly static and predictable. - The technical implementation of the framework (Fig. 2) is clear and makes sense to me. - The experimental results in Table 1 are healthy. They show the proposed method decently outperforms the Transporter counterparts. #### Weaknesses - The ablation studies are not exciting. These (number of keypoints / spatial resolution of the embedding) are mostly design-choice experiments and would be better placed in the supplementary material. A more interesting ablation would be to quantitatively evaluate the quality of the points. Currently the paper only qualitatively shows the keypoint discovery results in Fig. 2 and claims an advantage over Transporter. This is not clear to the reviewer. The reviewer understands that there is no existing metric to evaluate keypoint discovery quality. However, some proxy evaluation would also be helpful. For example, are the learned points temporally stable? - As the key contribution is the keypoint discovery, it would be more convincing to compare with other unsupervised keypoint discovery methods besides Transporter if applicable, e.g., PointNet (Jakab et al. 2018), which considers keypoints as a pixel-level reconstruction bottleneck. #### Summary The paper proposes an interesting idea with reasonable results (better than a recent counterpart, Transporter). However, the reviewer does not have a background in the specific experimental setting (Atari games) and cannot assess the significance of the improvements. Comparisons with more keypoint discovery methods would make the results more convincing. My current rating is 6, but might change based on other reviews. #### Post rebuttal Thank you for providing the rebuttal. The rebuttal addressed my concern about comparing to other baselines. And it's fine to keep the design-choice experiments in the main paper. However, a proxy evaluation of keypoint quality is still missing, and adding one would further strengthen the paper (I don't have a clear idea for the evaluation either). I keep my original rating of 6. <doc-sep>The authors propose a novel approach to unsupervised key point detection based on predictability. They demonstrate their model on Atari tasks, comparing to other key point detectors. Quality: The authors compare their work both qualitatively and quantitatively to Transporter. The authors show that their model picks out important key points that Transporter does not. Figure 5 is great! It would also be good to show the distribution of predicted key points over multiple runs for other levels. The authors train agents on Atari and compare their model to suitable baselines. It's interesting that the GNN does not always outperform the CNN. This paper could be improved by also comparing to Transporter + GNN. When constructing the error map, is this approach very sensitive to the receptive field and the number/location of the neighbours? 
How would this approach handle larger / more complex objects? Section 5.2 is interesting. What if you have new objects that are not predictable, but are distractors? Would the model not create key points here? For example, adding in some smaller distractor shapes? Some randomly coloured dots? Or some missing pixels? The ablation study is great! The results section of this paper is very thorough and addressed a lot of the questions that came to mind when reading the introduction and methods section of this paper. Clarity: The authors argue that local predictability is an intrinsic property of an object without giving more evidence for this. Perhaps the authors are hypothesising that this interpretation of objects will be more useful for downstream tasks? It's not clear to me otherwise why this is an intrinsic property of an object. The authors could improve their paper by being very clear about distinguishing "focus on image regions that are unpredictable" from "local regions in input space that have high internal predictive structure" when describing objects and key points. With the exception of the above, the introduction is well written and the methods section is easy to follow. The model is designed to pick out points that are harder to predict, which is useful for ignoring background and finding the agent, as nicely demonstrated for Montezuma's Revenge in Figure 2, but it's not clear that this is a good definition of an object. For example, the platforms in Frostbite may be very easy to predict, but you would need to know where they are in order to successfully navigate the environment. Also, it seems that the predictability of a feature may depend heavily on the environment, and any new object in an environment would be immediately picked out even if it is irrelevant to the task (i.e. new colours etc). Could you explain better why you think that the platforms in Frostbite are assigned key points as they are? Figure 3 is a really clear and nice example. Originality and Significance: This approach is novel and interesting and offers a new perspective on what an object can be and what definitions of an object may be useful for training agents. Pros: - Well written (with minor exceptions) - Thorough results section. - A novel approach to thinking about what an object is. - Improvement over Transporter on Atari tasks. Cons: - Some confusion in the introduction about their definition of an object. - There may be some limitations on the types (size and shape) of objects that this model can assign key points to. - There are some examples of objects being detected where it is not clear why those points are being detected according to the definition given in the paper. Explaining this more clearly would improve the paper. <doc-sep>The authors tackle the problem of self-supervised representation learning, and validate their approach on downstream Reinforcement Learning tasks. Building on the insight that predictability in local patches is a good inductive bias for salient regions that characterize objects, the authors propose a well-reasoned, well-engineered and thoroughly validated pipeline for learning object keypoints without supervision. The authors present a wide range of ablative studies to validate their design choices, and demonstrate the superiority of their method both illustratively as well as quantitatively on a number of standard Atari benchmarks. 
The paper is very well written and clearly explained, with the assumptions clearly stated, and design choices thoroughly validated through ablative studies. Overall the authors make a very compelling argument for using local predictability as an intrinsic property of objects, leading me to recommend accepting this paper for publication. Pros: + The intro motivates the problem well, contrasting the proposed method with a number of key recent methods. The implementation details are well recorded in the supplementary, with the added mention of releasing the source code + The keypoint detection pipeline is well reasoned and well explained: using the error map obtained through the spatial prediction task to recover keypoints via a bottleneck with limited capacity is a neat idea. The authors ablate a number of design choices (number of keypoints, which encoder layers to use); Figure 1 and 2 are great at showing the high-level components of the method as well as (intermediate) outputs + The comparison against Transporter is thorough and well analyzed. Fig2.b. provides a very clear insight into the limitations of Transporter, showing that the method proposed by the authors is able to achieve some robustness to visual distractors. PKey-CNN uses a similar method as Transporter for encoding keypoint features for downstream tasks, and thus serves to show that the keypoints identified are indeed superior. PKey-GNN further increases performance on a number of Atari games. + Very good ablative analysis and qualitative examples. Some questions: + Do the authors have any further insights regarding why PKey-GNN would perform worse than PKey-CNN? While the authors’ reasoning makes sense, in my understanding a GNN based approach should be able to model any kind of interaction. + The authors demonstrate impressive results on a number of Atari games. I am wondering how this method would perform on a slightly more complex environment, i.e. CarRacing in OpenAI’s gym environment, or maybe even going as far as CARLA? + As I understand, PermaKey is first trained on Atari game rollouts, with the policy trained afterwards. Would it be possible to optimize both the keypoints and the policy together, end-to-end? Post Rebuttal: I thank the authors for their detailed and thorough response. All my questions and concerns were addressed and I appreciate the discussion on end-to-end learning as well as the “Transporter + GNN experiment”. I am happy to maintain my original rating and recommend acceptance. | Reviewers all agreed that this submission has an interesting new idea for learning object/keypoint representations: parts of a visual scene that are not easily predictable from their neighborhoods are good object candidates. Experimental gains on various Atari games are convincing. The main drawback at this point is that the evaluation is limited to visually rather simple settings, and it is unclear how the approach will scale to more realistic scenes. |
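As a rough illustration of the "local predictability" idea summarized above: a small network predicts each feature-map location from its masked neighbourhood, and locations with high prediction error become keypoint candidates. This is one possible reading based on the reviews, not the authors' actual implementation, and the layer names and shapes are assumptions.

```python
# Rough illustration only: my reading of the local-predictability error map
# described in the reviews above, not the authors' actual implementation.
import torch
import torch.nn as nn

class LocalPredictor(nn.Module):
    """Predicts each feature-map cell from its neighbours (centre tap masked)."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2, bias=False)
        self.k = k
    def forward(self, feats):
        # Re-apply the centre mask so a cell can never simply copy itself;
        # in real training this masking would be repeated after every update.
        with torch.no_grad():
            self.conv.weight[:, :, self.k // 2, self.k // 2] = 0.0
        return self.conv(feats)

def predictability_error_map(feats, predictor):
    pred = predictor(feats)
    return ((pred - feats) ** 2).mean(dim=1, keepdim=True)   # (B, 1, H, W)

feats = torch.randn(2, 32, 21, 21)             # e.g. CNN features of game frames
err = predictability_error_map(feats, LocalPredictor(32))
print(err.shape)                               # peaks of this map = keypoint candidates
```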
The paper pursues the research question of how to evaluate representation learning for echocardiographic data (ultrasound images of the heart). This is tough because available open-access datasets, while valuable, each cover only a subset of the supervised tasks of interest and may not offer too many labeled examples for their chosen tasks. The paper shows how to build representation learning benchmarks for many tasks of interest by remixing 3 open-access datasets: EchoNet from Stanford in California; CAMUS from University Hospital of St Etienne in France; and TMED from Tufts Medical Center in Massachusetts. The first contribution is a suite of 25 specific "visual task adaption" benchmarks, listed in Table 2. Each involves a source and a target choice of dataset-view-label. The second contribution is an evaluation protocol that helps grade overall representations, via an average across tasks and task-categories (Eq 1). The idea is to be able to assess the "overall usefulness" of a representation across tasks, via a single metric. Experiments show how their contributions enable insights about whether pretraining helps (Fig 3), which deep architecture might be best (Tab 3), and whether pretraining on medical images is better than generic images (Fig 4). Strengths are: * Tackles an important problem for an exciting application area (echocardiography); could spur significant ML methods development that is *useful* for improving the efficiency and quality of patient care * Broad coverage of tasks of interest within echocardiography, ranging from segmentation for structure, function estimation, view recognition, and diagnosis prediction * Specific experiments that show benefits of pretraining on medical tasks (3DSeg8) vs generic images (ImageNet) are valuable * Experiments that assess performance versus target set data size (as in Fig 3a) are quite valuable, esp. for thinking about combining public data with in-house private labeled sets that are small **Update after author response on 2022-09-02**: I have raised my score to an "accept", because * W0, W2, and W3 below have been completely addressed. * W1 has been mostly addressed (see my detailed comments in response to the authors for a remaining minor issue related to how CAMUS test data is used) **Original review submitted in July 2022** The key issues I see with the present paper are: * W0: Confusing task definitions for a few tasks in Table 2 * W1: Missing target-only baseline in Fig 4 and Tab 4; makes hard to assess quality * W2: Reproducibility is poor: key experimental design details / hyperparameters not available * W3: Documentation not ready; seems to have many completely blank sections See detailed subsections labeled "W__" below for elaboration on each of these weaknesses. I think these are very addressable in the response period, and I look forward to engaging with the authors via discussion. This work has lots of promise; I just want to emphasize that at present, the reproducibility and documentation really need to improve for this to be accepted. <doc-sep>The paper proposes a benchmark approach for representation learning of visual data in cardiac ultrasound. It provides 25 adaptation benchmarks in which a source task is performed on a source dataset, and then learned representation is employed for a target task on a target dataset. ETAB involves three datasets: EchoNet, CAMUS and TMED. The proposed benchmark suite captures a degree of adaptation on multiple datasets; nevertheless, the impact of this suite remains questionable. 
- The paper involves three well-known public datasets: EchoNet, CAMUS and TMED. - Adequate classification of adaptation tasks, including clinical prediction, view recognition, cardiac function estimation and cardiac structure identification. - Comprehensive experimental setup with the use of popular methods such as ResNet-50, U-Net, etc. - Provided some basic insights on transfer and representation learning in echocardiography. - The paper presents a benchmarking suite to investigate the task adaptation of representation learning methods which is simplistic, with applications in a very limited context. - The adaptation benchmarks were proposed based on limited views/knowledge of the existing datasets, and might not be exhaustive and "unified" for tasks in echocardiography. - While enabling adaptation of pre-trained models is good to have, how does ETAB impact existing/new workflows for developing representation learning methods? - Limited discussion on the theoretical contributions and practical implications of the paper in real-world scenarios. <doc-sep>This paper provides an echocardiographic task adaptation benchmark for various clinically relevant tasks using publicly available datasets. Their benchmark evaluates several model architectures and pretraining techniques to standardize the evaluation methodology for clinical usage. The paper shows powerful usage in specific clinical tasks like cardiac structure identification, cardiac function estimation, and clinical prediction. Translating clinical evaluation protocols into ML-friendly tasks will help bridge the gap for deep learning applications in the medical domain. - **Novelty**: as stated in the paper, it is "the domain-specific analogue of the general benchmark for vision tasks developed in [19,20]". Though the ETAB definition involving the formula is slightly different, this does not amount to a clear methodological advance. - **Methods are not comprehensive**: benchmarking a dataset should consider SOTA methodology in model architecture and pretraining. Though the paper mentions ViT and Swin-ViT in line 173, they are not included in the main text or the supplementary material. For the cardiac function estimation baseline, only an LSTM is mentioned for handling the video frames, without justification. Other methods for video data, such as ViViT, should at least be mentioned. Other approaches like semi-supervised learning and contrastive learning, which have achieved good performance recently, should also be considered as pretraining strategies in addition to transfer learning. - **Pipeline not user-friendly**: the GitHub repo does not provide a user-friendly interface to reproduce the results. All the links at the bottom are invalid. <doc-sep>This is a very well written paper and I believe that this benchmarking setup can later be used for other medical specialties. The use of AI in medicine requires a higher threshold of security and testing than other use cases because the difference between right and wrong output can mean life or death. The authors of this paper are suggesting a novel standardized approach to solving this problem. Novelty. Well written. Strong main experiment and sub-experiment. The GitHub website is not as complete as I would like it to be. However, I can understand that it is constantly being updated. <doc-sep>The work presents a benchmark suite of tasks for evaluating the performance of learned visual representations for use in echocardiography. 
This benchmark, the echocardiographic task adaptation benchmark (ETAB), provides a meta-dataset constructed from 3 existing echocardiogram research datasets. ETAB formalizes several categories of source and target tasks, and defines an aggregate ETAB score for estimating performance across these benchmark categories (one plausible reading of such an aggregate score is sketched after this set of reviews). This benchmark enables evaluating a number of backbones and other representation learning approaches across a heterogeneous mix of echocardiographic tasks and datasets, which is critical for assessing robustness and other properties of representation learning methods. - The benchmark is very promising. Moving beyond traditional benchmark datasets, which are often narrowly constrained to iid assumptions by task and dataset, is critical for practical development of applied medical ML. - Incorporating small training sizes (16+ training examples) is a nice focus area, as few-shot medical benchmarking is a critical research area. - The downstream tasks cover a nice range of realistic use cases (segmentation, cardiac function estimation, view classification, clinical attributes). - A meta-dataset with a formal evaluation protocol across datasets would be a nice contribution. The most significant weaknesses of this work are - A lack of clarity outlining the structure of the overall experimental pipeline in terms of measurable sub-components. - Reported results are terse or in some cases missing (i.e. all non-ResNet50 performance numbers). Currently the manuscript lacks a full breakdown of results even for the ResNet50 backbone use case that forms most of the paper's results. This incomplete reporting makes it difficult to follow the overall merits of the benchmark. Clarity of results could be substantially improved. The paper proposes an evaluation strategy that covers a large number of model configurations, spanning architectures, datasets, pretraining strategies, and downstream task performance inclusive of some adaptation/transfer approaches. This is a very large cross product. While the gestalt ETAB score provides some useful assessment of representation learning quality (to the points raised in lines 146-147 around how individual scores assess specific transfer learning scenarios), a complete reporting of the individual source/target scores would make this paper much easier to follow. Figures 3 and 4 include subsets of tasks. Why not report performance for all task categories? In general, I feel the paper is missing reporting for many intermediate results (e.g., performance metrics for the source models) as well as not providing a consistent report for all downstream tasks. The paper only includes results for ResNet50 architectures. Lines 173-176 state that other backbone architectures were evaluated, including vision transformers (ViT-L/16, Swin-ViT) and multi-layer perceptrons (MLP-mixer); however, these performed worse than the ResNet50. While these results are stated as included in the supplementary material (line 176), they are actually not included. The supplement (which is quite short) includes pointers to the ETAB GitHub, but many links are broken and no results or leaderboard are available. I feel this is a substantial limitation and the paper would be substantially improved if we could observe ETAB scores across more than one family of backbone architectures. 
I also find that omitting the performance of the transformer and MLP architectures due to their inferior performance relative to the ResNet50 is counter to the utility of a leaderboard benchmark, where these backbones are very reasonable baselines and increasing in popularity. Some of the datasets may have availability issues. While EchoNet has a complete DUA process and provides archival information, the other two datasets seem problematic. The "Tufts Medical Echocardiogram Dataset" is currently a dead link. CAMUS is from a challenge dataset and it's unclear from the website if the data is available to researchers who are not participating in the challenge (which ends on 31 Dec 2024). I was unable to find an explicit discussion of licensing terms for CAMUS. Sometimes medical challenge datasets are only available for a given challenge. What guarantees do you have that these datasets will be available to researchers moving forward? | The consensus among the reviewers was that this work covers an important topic and a broad number of tasks. There were concerns about the documentation, reproducibility, and accessibility of the dataset. But the authors have done a good job in addressing most of these concerns with documentation, an API, and example notebooks. 
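Since the aggregate score (Eq. 1) is referenced several times in the reviews above but never reproduced, here is one plausible reading of such a score: average within each task category first, then across categories. The task names and numbers below are invented for illustration, and the paper's exact weighting may differ.

```python
# Hedged sketch: one plausible reading of an aggregate benchmark score.
# Task names and numbers are invented; the paper's Eq. 1 may weight differently.
from statistics import mean

task_scores = {
    "cardiac_structure_identification": {"EchoNet->CAMUS seg": 0.91, "CAMUS->EchoNet seg": 0.88},
    "cardiac_function_estimation":      {"EF estimation": 0.74},
    "view_recognition":                 {"view classification": 0.95},
    "clinical_prediction":              {"diagnosis prediction": 0.81},
}

def aggregate_score(scores_by_category):
    # Average per category first so categories with many tasks do not dominate.
    return mean(mean(tasks.values()) for tasks in scores_by_category.values())

print(f"aggregate score: {aggregate_score(task_scores):.3f}")
```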
This paper designs a multilayer connection structure for neural networks, such that the connection architecture supports implementation of a hierarchical classification scheme within these layers. It applies this design to the task of hierarchical classification on ImageNet. Experiments compare results with those of Deng et al. (2012), as well as baseline flat classification models. The paper motivates the proposed approach via broad claims about what networks understand, but does not provide sufficient analysis or experimental evidence to justify these claims. For example: "In particular, when an existing CNN correctly identifies an image of an English Setter, the network itself does not learn that it is an instance of a dog, or more precisely, a hunting dog which is also a domestic animal and above all, a living thing". Assuming it is trained on example images of all of these categories, how do we know that the CNN does not learn shared representations that implicitly reflect such organization of the concept space? The paper does not employ any techniques to probe the learned representations of CNNs; without such analysis, the sweeping statements about what CNNs do or do not learn are mere speculation. On a technical level, the design of the proposed dense classification layers appears to be quite ad-hoc. It is not clear why a special design intermixing concept prediction nodes with hidden nodes is desirable or necessary. How does this compare, both conceptually and experimentally, to a branching hierarchy of subnetworks? The scheme of HD-CNN (Yan et al., 2015) is similar to the latter, but is not represented in experimental comparisons. In fact, experiments appear to lack comparison to any recent published methods on hierarchical classification. Deng et al. (2012) is the only prior publication that serves as a reference point. This is far from a sufficient baseline as surely there has been other work on hierarchical classification in the past 8 years. For example, a highly relevant work that this paper fails to even cite or discuss is: M. Nickel and D. Kiela. Poincare Embeddings for Learning Hierarchical Representations. NeurIPS, 2017. Together, the unsubstantiated motivating claims, ad-hoc design of questionable merit, limited experimental comparison, and missing citations to highly relevant recent work suggest that this paper is not of sufficient quality for publication. <doc-sep>The authors consider how to capture the semantic relationships among the categories of a classifier. It is an important problem and has many potential applications. For example, the predicted concept chain can help people understand the performance of the classifier, and the coarse-grained concepts are beneficial for few-shot learning of new categories, etc. The authors incorporate WordNet as their ontology and build their neural classifier based on it. Such a tree-structured, densely connected neural architecture is not very common in the current deep learning domain. The network is bound to an external ontology, so when the ontology updates, the network has to be re-built. In my opinion, the design seems not very general. Maybe the authors could consider representing restrictions among concepts in the vector space. In the experiments, the authors used only two baselines: one is a flat single-layer classifier, and the other is a method from 2012. The baselines seem too weak to demonstrate the superiority of the proposed model. 
The results of more recent works are necessary, even though these works "use a separate technique/tool for modeling the conceptual relations", as the authors claimed. To sum up, the pros of this paper include: - a valuable research topic - a fancy model - clear experimental details The cons include: - the model is not very general - baselines are too weak - the aspect ratio and resolution of figures seem improper <doc-sep>Summary: This paper proposes a novel module on top of ConvNet, multi-layer dense connectivity, for learning hierarchical concepts in image classification. Pros: This paper proposes to use the label hierarchy (with ancestor concepts from a label) instead of the label itself to learn the image recognition system. To achieve this, it has made two major contributions: 1. Building a label hierarchy with a simplified set of categories to remove redundant and meaningless categories. 2. With the constructed label hierarchy, this paper proposes a dense connectivity module to leverage the label hierarchy to model category abstractions over high-level visual embeddings, on top of commonly used convolutional neural networks. With the proposed techniques, this paper builds up its recognition system using two standard deep ConvNets and achieves strong results on large-scale image recognition benchmarks. Cons: 1. In general, the paper is not very well written for a few reasons: A) The motivation of the proposed method over previous methods is not clear (intro paragraph #2). B) Section 3.1 is very hard to follow. C) Some notation in Section 3.2 seems unnecessary, and some things are used before they are formally defined. 2. The design of this dense connectivity module in Section 3.2 seems quite arbitrary; there is no good explanation of why we need to use z to multiply the outputs x and h. 3. The experiments on natural adversarial examples are not motivated earlier in the introduction. It's quite hard for me to understand why using a label hierarchy would improve this task. Detailed Comments: 1. Paragraph #2 in Intro: why can a neural network trained as multinomial/softmax logistic regression from images to labels not acquire comprehensive knowledge about the input entity? For instance, in some of the prior works (e.g., Hu et al. 2018), they learn models to simultaneously classify categories on a predefined label hierarchy, including both abstracted classes such as "Dog" and concrete classes such as "English Setter". 2. It seems that from Section 3 on, the paper uses the term "Category" to stand for the leaf concept (most specific) and the term "Concept" as shorthand for "Ancestor Concept". It would be better to mention this explicitly to avoid confusion. 3. The example in Figure 2 is not very clear and hard to follow. It might be better to simplify the figure by using a smaller hierarchy as an example. Also, it would be good to have a paragraph in Section 3.1 to describe what in the right figure has been modified, using the concrete examples of Figure 2. 4. In Equation 1, why do we need a $\psi$ activation function that is linear? What does linear mean here: is there an additional linear weight in $\psi$ besides v? 5. Why are we using an MSE for the concept classifiers? I assume we can use binary cross-entropy for them? Minor: * The aspect ratio of Figures 2 and 3 needs to be adjusted. It is hard to recognize text and symbols on the stretched figures. 
* The notation $\hat{h}$ in the text is bolded but the one in Equation (1) is not bolded. * A recent work that also leverages hierarchical information in the label text to learn visual concept embeddings, which is closely related to the topic of this paper: Learning to Represent Image and Text with Denotation Graph. EMNLP 2020 <doc-sep>The paper proposes a method to learn concept classes along with their concept superclasses. The proposed method relies on an ontology which they heuristically re-organize by essentially pruning nodes that have few descendants and large semantic overlap. The network proposed to model the ontology essentially just consists of a learned multiplicative gate at each level of the ontology with a standard xent loss over concepts and a regularizing term that indicates if the category is an ancestor concept. The experimental results claim gains over some baselines, e.g., combined acc. of 69.05 vs chosen baseline 66.15 on ImageNet12 for ResNet-50 features at the cost of an 18.2% increase in parameters. Overall, while some of the empirical results seem competitive, I am most concerned with the weak foundations of the motivation of the setup. The work reads like a proposed solution that's trying to look for a problem / motivation and as a result struggles to find its footing in explaining modeling choices & results. * The paper uses as motivation that many networks use a softmax head over semantic categories at the leaves of an ontology and claims this is therefore why models using such networks do not learn that, say, English Setter is a dog. This is a shallow argument for incorporating concept hierarchies since such models clearly would still not be learning deeply what the concept of a dog is, only encoding weak priors introduced by the ontology, a knowledge base external to the network. The connection to learning relationships like "is-a" relations doesn't ultimately fall out from the proposed method; instead you just get a list of likelihoods that correspond to superclasses that contributed to a concept prediction. The model is not learning the relation, just the co-presence of these superclasses. * The argument that works like Deng et al (2012, 2014), which for example propose label relation priors like HEX graphs, only either predict fine-grained labels or superclasses exclusively, but not both simultaneously, is another example of where this paper falls short in its problem setup. This work doesn't answer the question of _why_ one would even want to predict both simultaneously well. If we believe that a superclass is unlikely present, why would we still predict the child classes? Even if there are reasonable arguments for this, they are absent in this paper. * The approach to creating the "compressed concept hierarchy" largely felt like a description of what was done, again, rather than why. Unless I missed it, I also expected a baseline for ablation that doesn't use the "compressed" hierarchy, but just uses it as-is. * It's a bit strange to me why an MSE loss is used for the indicator of whether a concept is an ancestor. Why use an unbounded error in L2, even if (or especially if) you are squeezing through a sigmoid? What is the intuition? (One way to write down the binary cross-entropy alternative is sketched after this set of reviews.) * I would be curious to know if the improvements in results that we see in Table 1 are just due to increased model capacity (params/compute), i.e., how does it compare to the Deng et al baselines? The comparisons discussed are only made with respect to the ResNet-50 & Inception-V4 backbones. 
I will also note that the paper could improve on its clarity in writing. As an example, from Figure 2, it's unclear what exactly changed between the LHS and RHS, how, and why it's meaningful; and in Figure 3, it's not obvious without work how the input, output and z-term relate. | All reviewers recommended rejection after considering the rebuttal from the authors. The main weaknesses of the submission include poorly motivated claims and designs, and insufficient experimental comparisons. The AC did not find sufficient grounds to overturn the reviewers' consensus recommendation. 
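To make the loss question raised in the reviews above concrete, here is one way to write down the alternative the reviewers suggest: softmax cross-entropy over leaf categories plus binary cross-entropy over multi-hot ancestor-concept indicators, in place of an MSE regularizer. The shapes and names are illustrative and are not taken from the paper's Eq. 1.

```python
# Illustrative sketch of the reviewers' suggested alternative (not the paper's
# actual objective): leaf softmax cross-entropy plus BCE on ancestor indicators.
import torch
import torch.nn.functional as F

def hierarchical_loss(leaf_logits, concept_logits, leaf_target, ancestor_mask, lam=1.0):
    """leaf_logits: (B, num_leaves); concept_logits: (B, num_concepts);
    ancestor_mask: (B, num_concepts), 1.0 for every ancestor concept of the leaf."""
    leaf_loss = F.cross_entropy(leaf_logits, leaf_target)
    concept_loss = F.binary_cross_entropy_with_logits(concept_logits, ancestor_mask)
    return leaf_loss + lam * concept_loss

leaf_logits = torch.randn(4, 1000)            # e.g. ImageNet leaf categories
concept_logits = torch.randn(4, 300)          # e.g. compressed WordNet concepts
leaf_target = torch.randint(0, 1000, (4,))
ancestor_mask = torch.zeros(4, 300)
ancestor_mask[:, :5] = 1.0                    # toy ancestor chain per example
print(hierarchical_loss(leaf_logits, concept_logits, leaf_target, ancestor_mask))
```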
This paper claims to design a new GNN architecture that achieves state-of-the-art performance with lower memory consumption and latency. More specifically, the proposed model uses memory proportional to the number of vertices in the graph, $O(V)$, in contrast to competing methods which require memory proportional to the number of edges, $O(E)$. The paper claims that the new architecture enables each vertex to have its own weight matrix, thus following a novel adaptive filtering approach. The experiments find that the proposed efficient model achieves higher accuracy than competing approaches and strong baselines across six large and varied datasets. Moreover, the experiments demonstrate that the proposed method achieves lower latency and memory consumption for the same accuracy compared to competing approaches. ### Strengths: 1. This paper applies the idea of "basis weights" and thus effectively uses different weight matrices for different nodes. 2. This paper provides a long section to help interpret the proposed architecture and compares it with GCN, GAT, and PNA. ### Questions: 1. Since it is noted that different aggregators might result in outputs with different means and variances, for the experiments in Table 1, did you normalize the combination weights $w$ or the different aggregated outputs? 2. I am curious how you implemented the "weighting of bases" per node, i.e., the multiplication between the combination weighting coefficients $w$ (different for different nodes) and the aggregated outputs corresponding to different basis weight matrices. Is this step before or after the aggregation? Although the theoretical complexity of this step is $O(V)$, I think the choice of actual implementation might significantly affect the practical time efficiency. ### Weaknesses: 1. The major weakness of this paper, in my opinion, is that it seems to have misunderstood the real bottleneck to scaling up GNNs, and the proposed architecture may not be suitable for large graphs (i.e., more than a million nodes). - The paper compares with many existing approaches for solving the scalability issues of GNNs on large graphs. However, this paper does not tackle this problem. Firstly, the theoretical memory complexity of the proposed architecture is still $O(V)$. Thus on a graph with more than a million nodes (which happens in some node classification or link prediction benchmarks), the memory consumption is still too large for conventional GPUs. Secondly, the main experiments (Table 1) are conducted on four graph regression/classification benchmarks and only one node classification benchmark. However, the graphs in the four graph-level benchmarks are small. The experimental exploration and evidence on large graphs are insufficient. - For graph sampling approaches, the paper claimed that `We evaluated a variety of sampling approaches and observed that even modest sampling levels, which provide little benefit to memory or latency, cause model performance to decline noticeably.` The corresponding experimental results are shown in Table 8 in the appendix. However, all four benchmarks used in Table 8 are graph-level tasks (graph regression or classification), where the sizes of the input graphs are small. For example, ZINC and MolHIV are molecule graphs, and CIFAR consists of super-pixel sampled graphs whose sizes are at most 150 (Dwivedi et al., 2020). Applying sampling strategies to these small input graphs is not reasonable, and it is not surprising to observe significant performance degradation. 
Actually, the performance of some sampling algorithms on relatively large graphs for node classification is strong, e.g., GraphSAINT (SAGE aggregator) can outperform the "Full-batch" GraphSAGE on *ogbn-products* (see [leaderboard](https://ogb.stanford.edu/docs/leader_nodeprop/)). The quoted sentence above is not appropriate. - To sum up, I think the architecture proposed in the paper does not solve the scalability problem on large graphs. Thus the authors should make this point clearer in the related work and experiment sections. 2. Another potential problem, in my opinion, is the spectral interpretation in Section 4.2. The paper claims that `Our approach corresponds to learning multiple filters and computing a linear combination of the resulting filters with weights depending on the attributes of each node locally.` The paper compared two formulas: (1) the ideal linear combination of filters: $y=\sum_{b=1}^B w_b\odot g_{\theta_b}(\mathbf{L})\mathbf{X}$; and (2) the filtering performed by the proposed model (corresponding to EGC-S, Eq. (2)): $y=\sum_{b=1}^B w_b\odot (\tilde{D}^{-\frac12}\tilde{A}\tilde{D}^{-\frac12})\mathbf{X}\Theta_b$. However, I think the latter formula is not a combination of multiple filters. There is only one filter, $(\tilde{D}^{-\frac12}\tilde{A}\tilde{D}^{-\frac12})$ (which is left-multiplied with $\mathbf{X}$), and the weight matrix $\Theta_b$, which is right-multiplied with $\mathbf{X}$, should not change the relative magnitude of signals of different frequencies (a toy numerical check of this point is included after this set of reviews). It is not the same as changing $g(\mathbf{L})$ in the former formula, where the matrix left-multiplied with $\mathbf{X}$ changes. This paper proposes a novel architecture to linearly combine the aggregated outputs using different weight matrices for each node. The model is memory efficient since the memory consumption is $O(V)$ instead of $O(E)$. However, the proposed method does not solve the scalability problem of GNNs when applied to considerably large graphs. And it is unclear when the provided memory efficiency is necessary and thus when the proposed method would be the first choice. In terms of performance, the improvements compared with PNA are also not very convincing. The significance of this paper might be limited if the efficiency/performance improvements are marginal. I would encourage the authors to explore the theoretical understanding of the proposed architecture further. Currently, section 4 is well-written, but the conclusions are limited and may have some flaws. In general, I could not recommend the current manuscript for acceptance. <doc-sep>This paper introduces Adaptive Filters that enable some of the benefits of Message Passing architectures, but while maintaining a memory consumption that scales with the number of nodes. The authors claim that this architecture is not only better performing and less memory-hungry, but also more efficient on GPUs through the use of sparse matrix multiplication. The idea of the model is that you have a number of "filters" (MLPs or linear layers) applied to each sending node's latent. These are then weighted and summed by a weighting vector calculated as a function of the receiving node. Since there are no functions that take as input both sending, receiving or edge inputs, the memory will scale with the number of nodes. You, in essence, get an efficient pseudo-attention mechanism. 
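For concreteness, here is a rough sketch of how the per-node weighting of B basis weight matrices described above can be written with only O(V) extra memory: a single shared sparse aggregation, B dense basis transforms, and per-node combination weights applied afterwards. This is my reading of Eq. (2) as quoted in the first review, not the authors' code, and the ordering of weighting and aggregation here is exactly the implementation detail the first reviewer asks about.

```python
# Rough sketch (my reading of Eq. (2) above, not the authors' code): one shared
# sparse aggregation, B dense basis transforms, then per-node combination
# weights computed from each node's own features, so no O(E) tensor is built.
import torch
import torch.nn as nn

class PerNodeBasisLayer(nn.Module):
    def __init__(self, d_in, d_out, num_bases=4):
        super().__init__()
        self.bases = nn.ModuleList([nn.Linear(d_in, d_out, bias=False) for _ in range(num_bases)])
        self.weighting = nn.Linear(d_in, num_bases)       # w_i from node i's own features

    def forward(self, x, adj_norm):                       # adj_norm: sparse normalised adjacency
        agg = torch.sparse.mm(adj_norm, x)                # single aggregation shared by all bases
        w = self.weighting(x)                             # (V, B) per-node weights
        outs = torch.stack([basis(agg) for basis in self.bases], dim=-1)   # (V, d_out, B)
        return (outs * w.unsqueeze(1)).sum(dim=-1)        # (V, d_out)

x = torch.randn(100, 16)
idx = torch.randint(0, 100, (2, 400))
adj = torch.sparse_coo_tensor(idx, torch.full((400,), 0.05), (100, 100)).coalesce()
print(PerNodeBasisLayer(16, 32)(x, adj).shape)
```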
Strengths: - The paper conducts a thorough analysis and comparison of alternative approaches and describes how some of the motivations behind the alternatives can be gained without the memory costs. - The paper conducts a thorough analysis of latency, which I appreciate. - I like that the authors consider hardware - in many cases, more efficient algorithms in FLOP terms are less efficient because they are not parallelisable. This is an important point and I encourage work in the GNN literature that addresses this. - The results are better than the baselines, which span a number of different graph benchmarks. I think the model description is quite general. Weaknesses: - To state the obvious, such a network is only really applicable to data where you do not need to consider edge features explicitly. While this is fine in principle, the paper doesn't really make this distinction and so the claim is probably broader than the evidence supports. - The text seems to imply that the computational efficiencies derive from the use of sparse matrix multiplies. But many new accelerators do not support sparse matmuls well at all, and instead are very strong on dense compute. This distinction should be clearer in the text - this is a *GPU* optimised GNN, not an accelerator optimised GNN (perhaps a more parallel GNN would count for something more general purpose). - I do not think these results are state of the art - simply that they are better than the baselines reported. Please remove the phrase state of the art. - Whilst I understand that this method is different from the literature, it is *very* related to concepts such as attention. Thus, I think the impact may be limited. Some nits: It would be nice to have more detail on the OGB and Zinc results - perhaps explain why this model does not perform as well on MolHIV. I think this is a thorough piece of work, but I am concerned that the impact is limited for the following reasons: - I am not convinced that the results presented reflect the current state of O(V) scaling GNNs. You cite "Training graph neural networks with 1000 layers", which I believe scales O(V), and I think achieves better results on OGBN-ARXIV. - The ideas seem very related to existing work. This isn't to suggest that the ideas are identical, more that the contribution appears to be incremental. That being said, I think the analysis is beneficial to the community, so I give it a weak accept. <doc-sep>The authors propose a different formulation for a graph neural network, focused on achieving accuracy equal to anisotropic GNNs without anisotropic mechanisms. This paper is a mixed bag. On one hand, I appreciate the authors' vision: take some things we've learned about GNNs (heads, aggregators), revisit first principles (isotropic GNNs), try to come up with something new that blends them. In addition, I appreciate their take on computational efficiency: they correctly seize upon the notion that a well-understood algorithmic kernel can have meaningful computational impact, and they build on its advantages (e.g., Section 3.2, aggregator fusion). On the other hand, many of the conclusions leave something to be desired. The analyses and descriptions sometimes gloss over important factors, and I wish the results delivered a stronger takeaway to match the ambition. For all the talk about computation and memory efficiency, the actual numbers are not all that substantial. Strengths: - The authors' direct challenge to accepted folk knowledge about anisotropic GNNs is healthy and welcome. 
As models evolve over time, it is important for the community to revisit fundamental positions on model components. The paper opens a lot of areas for other researchers to build off it, which I see as a major strength. - The proposed model is clearly explained and fairly easy to follow. The interpretations are useful and provide good intuition for how to place their proposed model in the context of other current work. - The appendices actually provide a lot of useful content, sometimes more than the paper itself. The rationale in appendix C, for instance, was far clearer than the paper's 'aggregator fusion' section in 3.2 (even without the algorithm). Weaknesses: - Some of the evaluation results are weak. The accuracy-normalized results seem to demonstrate that EGC-S is not all that much faster or smaller than GCN. They also seem to indicate that parameter-normalized results do not correlate particularly well with memory footprint across models, which weakens the argument for O(V) vs. O(E). - Despite a nominally primary argument of the paper being about memory efficiency and computational performance, very few quantitative results support that argument. I agree that there is definitely a lot of potential for this approach to be efficient, but I wish the paper had strong support for it. - The authors fail to disambiguate different types of memory effects. For instance, while high-watermark memory footprint (which is mostly what is measured in their experiments) is important, they overlook the effects of parallelism. This leads them, amongst other things, to dismiss sampling methods almost out of hand. The memory behavior of sampled methods is vastly different from that of non-sampled GNNs when dealing with distributed execution or graphs that are substantially larger than single-device memory capacity. Or the focus on O(V)/O(E), which glosses over the fact that the average degree is small (between 2 and 13 on their chosen OGB datasets) and ignores other 'constants' on the same order. This is technically correct but misleading. None of these invalidate the authors' methods, but it's somewhat disingenuous. Also, it very much felt like parts of the paper were aimed at different goals. The intro and background lean heavily into hardware efficiency, while other parts of the paper (S3 outside of the last paragraph; S5.1-5.3) seem to ignore it. The authors take a different tack with their proposed GNN, which is welcome. They explain it clearly, and the paper's writing and organization are solid. The results support their claim that they can achieve competitive accuracy, but the quantitative computation and memory results are somewhat underwhelming. In my view, the contribution of this paper is its approach and challenge to convention, which feels good enough to publish, even if it left me wanting something more substantive. <doc-sep>This paper investigates whether incorporating anisotropy (treating neighbors differently using latent functions and thus resulting in $\mathcal{O}(E)$ memory cost) is necessary for boosting GCN's accuracy. They argue that the proposed EGC with $\mathcal{O}(V)$ memory requirement can achieve higher accuracy than prior anisotropy-based works (e.g., GAT) using adaptive filters, and thus achieve (1) better accuracy than vanilla GCN; (2) less memory cost and latency than anisotropy-based methods. I am convinced by the introduction part. The hardware efficiency and memory cost are arguably two bottlenecks in GCN inference and training, and they are often correlated. 
Strengths: * The recap of algorithm-hardware co-design is clean and to the point and should promote the understanding of hardware acceleration in the GCN community; potential references also include GCoD (HPCA'22) and G-CoS (ICCAD'21). * The comparison table gives the reader high-level information about the propagation differences among various GCN methods and their corresponding memory requirements. * The adaptive filter is a kind of meta-learning approach that allows different basis filters to explore different latent spaces, which are finally combined via a weighted sum. I suppose the experiments are a fair comparison, with a similar number of parameters, as Fig. 3 shows. * A clean codebase is provided. Questions: * I have one question about the sampling-based methods. As Table 2 shows, GraphSAGE achieves the lowest GPU training and inference latency and also the least peak training memory, which seems to contradict the argument in Sec. 2 that sampling-based methods are often ineffective. In addition, why is the inference time of GraphSAGE also lower than GCN's, given that we do not sample subgraphs during inference? I am also wondering whether EGC can be implemented in a sampling-based way. * Could you elaborate more on the potential impact on GCN hardware acceleration? Will EGC-S or EGC-M pose unique challenges/opportunities that commercial devices (CPU or GPU) cannot handle efficiently, and thus need further customized hardware architectures to leverage their full potential? I think the introduction and related works description clearly recap the background of both GCN algorithms and potential algorithm-accelerator co-design, and also the current dilemma in the GCN community, that is, the more accurate but less time/memory efficient anisotropic approaches vs. the less accurate but more time/memory efficient vanilla GCN approaches. The proposed EGC can alleviate such a dilemma by achieving both higher accuracy and efficiency. | The manuscript develops a new and simple graph neural network architecture. The proposal makes use of only O(V) (number of vertices) rather than O(E) (number of edges) memory, meaning that it may be useful for scaling to larger problems. The didactic figures are especially clear, and as is shown in Fig 1 the proposed architecture passes messages based only on the source vertex rather than based on source and target. This challenges common ideas in the field that passed messages ought to reflect a function of both source and target. In spite of this introduced simplification, the architecture performed better than or as well as a set of strong baselines on a set of 6 datasets. The manuscript also examines latency and memory consumption, showing that the method comes out favourably in this regard. One of the reviewers worries that the paper does not directly provide a solution to scaling network training to very large graphs; they note that several of the datasets that are examined do not contain large graphs. This is true, but the paper does not overclaim in this regard, and I agree with the majority of reviewers that the manuscript is worth publishing on the basis of having developed a well-performing approach that challenges the accepted assumptions in the field. While it may not be a direct solution, the counterintuitive results may help point the direction toward development of simple, effective approaches that do scale up. 
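The spectral-interpretation concern raised in the first review above can be checked numerically on a toy graph: the propagation matrix fixes the frequency response, and right-multiplying by any weight matrix only mixes feature channels. The snippet below is a toy verification of that algebraic point, not code from the paper.

```python
# Toy numerical check of the spectral point raised in the first review above
# (not code from the paper): U^T (S X Theta) equals diag(lam) (U^T X) Theta, so
# the frequency response is set by S alone and Theta only mixes channels.
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T + np.eye(20)                     # symmetric adjacency with self-loops
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))              # D^{-1/2} A D^{-1/2}
lam, U = np.linalg.eigh(S)                   # graph "frequencies" and Fourier basis

X = rng.standard_normal((20, 8))
Theta = rng.standard_normal((8, 4))
lhs = U.T @ (S @ X @ Theta)
rhs = np.diag(lam) @ (U.T @ X) @ Theta
print(np.allclose(lhs, rhs))                 # True: Theta cannot reshape the spectrum
```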
This paper proposes two modifications to the recently introduced multiplicative filter networks (and their bandlimited variant BACON): 1) by introducing skip connections it enables explicit layerwise bandwidth control, 2) by modifying the initialization scale it makes the spectral supports of layers more or less disjoint. The benefit of doing this is that it enables coupling with standard multigrid / frequency marching approaches to solve inverse problems. Thanks to the introduced modifications the low-frequency content of the function implemented by the network need not change when fitting higher frequencies. The authors show how this leads to high-quality reconstructions of molecules in synthetic CryoEM experiments. The problem addressed by the paper is very clear, the paper is well written, the numerical experiments seem well executed, and the results are compelling. The interventions in the architecture and training are simple. They yield solid (but not staggering) improvements over earlier work. Cryo-EM is an important application but the setting treated here is somewhat simplified (no shifts, no heterogeneity, only moderate noise SNR = 0.1, and a coarse initial orientation estimate). It would be interesting to see how the proposed architecture fares on other problems. As far as I can tell the paper is technically correct: there are no formal results, the architecture description is clear and simple, and the successful experiments strongly corroborate the correctness of the derivations. In terms of the narrative, I have the impression that the importance of achieving perfect scale orthogonality is a bit overstated. It can certainly play a role in inverse problems like Cryo-EM which are linear (or conditionally linear when given the particle orientations) and where the frequencies don't mix (by the Fourier slice theorem). (I doubt that it need be ideal.) It may however be a liability or at least not as useful in problems with strong scale coupling. The discussion of limitations is adequate. 
In addition, a suitable initialization is proposed that creates nested levels of detail in the covered Fourier spectrum (again, to my understanding, targeting the low-dimensional regime of maybe 2D or 3D coordinates). Results for image fitting are improved over previous methods such as BACON. An example application to CryoEM reconstruction is presented that seems to outperform a well-known approach (characterized as SOTA). The main contribution is a scale-separated version of MFN/BACON. The applications to CryoEM are also interesting, but the rather brief evaluation with only a few example reconstructions moves the methodological contribution more into focus. To explain my view of strengths and weaknesses, I would first discuss my understanding of the technical aspects a bit more in depth: I understand the main goal of the paper to be decoupling scales in an MFN-style coordinate network. To my understanding, this is already achieved by the short-cut connections, which yield a superposition of the low-frequency part, solely controlled by low-frequency parameters, with a high-frequency part that depends on both low- and high-frequency parameters. For progressive coarse-to-fine optimization, this seems to be sufficient, while at the same time being conceptually and technically very easy to understand and implement. A suitable initialization makes sure that progressive parts of the spectrum are covered (by introducing corresponding control parameters); this scheme seems to work only in low to moderate dimensions (as it needs to cover Fourier-coefficient space with 2^d or 3^d patches of coefficients). Consequently, it is not fully clear how the decoupling would affect higher-dimensional applications, but they are not the target of the paper (and a lot of previous work, where similar representations are used for example for reconstruction from 3d point clouds). I think that the concern for scale separation is a valid goal, and results on image fitting in the paper at least indicate a subtle advantage of the decoupled representation (although a strong explanation is missing; the case for CryoEM is clearer as multi-scale fitting is commonly used there). It would be useful to understand more clearly how important this is in an overparametrized regime, as multi-scale optimization is often used as a heuristic to counter bad local minima in non-overparametrized, non-convex optimization scenarios. From a conceptual point of view, I do not fully understand the boundaries (as in representational capacities and ability to find solutions) between linear scale space approaches (mip-maps, wavelets), MFN-approaches (linear in $x$, non-linear in $\\theta$), and the current proposal. If, hypothetically, the next-higher frequency band were fully separated by duplicating the required low-res coefficients, would the resulting method be substantially different from an ordinary hierarchy of Fourier basis functions? How different (how much better) are results in comparison to a simple base-line (say, a linear wavelet or pyramid representation)? It would be useful to see a direct base-line comparison (or maybe arguments for why this is clearly inferior; I do not know this area too well). Summarized as a list, I would see the following strengths... + Simple and straightforward idea. Easy to implement. + Canonical extension of MFN/BACON (users of those methods probably want to know about this one). + Good results with minimal (extra) effort (seems to yield SOTA on CryoEM). + Well written paper.
...and weaknesses: - Experimental evaluation remains a bit anecdotal (I guess that the CryoEM fitting is very expensive to set up and conduct). - Very simple, maybe incremental idea (could also be seen as a limitation of the contribution). - Limited explanation of "why" and "how far" it works. - Could have more comparisons against a base-line. Overall, I would guess that the paper might be a simple but frequently employed addition to MFN-methods and thus potentially impactful (with many citations and applications). For this reason, I tend towards a positive recommendation. If my understanding of the limitations in terms of dimensionality is correct (see discussion question above), it would be useful to discuss this more in the paper. It might also be useful to mention the limitations of multi-scale optimization for avoiding local minima (in my experience with shape matching & reconstruction problems only very vaguely related to the CryoEM example, it helps somewhat but does not eliminate fundamental issues with bad local minima). <doc-sep>The authors treat the problem of coarse-to-fine optimization for solving inverse problems, which are particularly important for 3D reconstruction. They note that the existing state-of-the-art coordinate networks used to solve these problems are ill-suited for multiscale optimization since they can forget low-frequency details learned during earlier stages in the training scheme. The authors propose residual multiplicative filter networks which (1) introduce skip connections between layers with different output scales so that coarse features learned early in the optimization process are explicitly not forgotten and (2) use an initialization scheme that allows for control over the frequency spectra learned by each layer, meaning that the network can avoid learning redundant frequency information in finer-scale layers. Several experiments are presented for 2D and 3D reconstruction problems where the proposed method is shown to be superior to existing approaches in capturing details at several scales. STRENGTHS: - I like the intuitive breakdown of the intermediate representations as a sum and shift of frequency spectra from the previous layers. This provides a straightforward motivation for the proposed method. - The visual representation in Figure 6 gives a clear and concrete example of why coarse-to-fine optimization is important for 3D reconstruction problems. - Detailed background is given on 3D molecular reconstruction problems, which is useful for the larger ML community that may not be familiar with such domains. WEAKNESSES: - I do not understand the benefits of having the outputs at each scale "shift" very little (e.g. as in the results in Table 1). I would think that as long as the final output PSNR is satisfactory, then the internal shift in frequency representation does not matter. Looking at Table 1, it seems that even though fair BACON has quite a bit of shift in its frequency representations at each scale before and after optimizing the other scales, the output still has good PSNR. - I believe that a more thorough background and introduction to coordinate-based networks in general could be given. While some details are given (e.g. that the networks take 2D or 3D coordinates and output some value like RGB), it was not immediately clear to me what the purpose of these networks (e.g. compressing a high-dimensional signal) was compared to other models (e.g. generative models).
The authors provide a small list of limitations in the second paragraph of the conclusion. However, the section reads as a justification for some design and experimental choices rather than a discussion about the shortcomings of the proposed method. I would suggest that the authors make a more explicit limitations (sub)section. | The paper studies Multiplicative Filter Networks, which are coordinate neural networks in which each layer applies a multiplicative (Hadamard product) filter and a sinusoidal nonlinearity. The paper shows how introducing residual connections and initializing appropriately can lead to networks where the frequency content of the image separates over layers. This leads to a learned version of classical “coarse-to-fine” reconstruction methods, which the paper terms Residual Multiplicative Filter Networks. The paper illustrates its proposals with experiments on image approximation and on cryo-EM reconstruction. Reviewers found that the paper presents a simple idea, which can be easily adopted whenever a coarse-to-fine reconstruction is desired, and as such is likely to see followup work. The main questions concerned the necessity of a coarse-to-fine approach in applications where one ultimately seeks a reconstruction at just a single scale, and the cryo-EM experiments, which show good performance compared to a baseline, when the coordinate network model is integrated into a larger system. Overall, the reviewers found that paper presents a natural modification to MFNs which improves both their interpretability and applicability in inverse problems in imaging. |
This paper introduces a supervised disentanglement method to learn dynamical systems. The method relies on the provision of privileged information (the true parameters of a sequence) in order to disentangle them from observations. The method is evaluated on three toy datasets. - My main issue is the limited novelty of the proposed method. This method is a straightforward extension of the unsupervised disentangled state-space model (Miladinovic et al.) to a supervised one, where the privileged information regarding domain parameters is explicitly fed to the model. - Certain claims (e.g. regarding disentanglement) are made without proper quantitative and/or qualitative investigation(s). - It is claimed that the proposed supervised disentanglement method improves performance over the unsupervised method. However, there are no comparisons with the closest unsupervised method (e.g. Miladinovic et al.). Therefore it is hard to judge whether the proposed supervised method truly performs better or not. - The results on OOD generalization cannot be considered OOD, as the parameter ranges used to create the OOD datasets highly overlap with the ranges of the training datasets (Table 1, Appendix). I expand on these points in the following: - Regarding line 1 in the contributions section: The treatment of domain parameters as factors of variation is one of the main proposals of DSSM in Miladinovic et al. DSSM seeks to disentangle these true parameters (also referred to as domain-invariant state dynamics) from observations only. Therefore, I believe it's not the first work to consider this setting. - The main contribution of this work is the supervised disentanglement of sequential data. However, the authors do not investigate how good the disentanglement is. This leaves room for interpretation, i.e., whether it is truly disentanglement that is helping the model in achieving good performance. I believe both quantitative and qualitative analysis of disentanglement would further strengthen the claims made in this paper. - The true parameters (factors of variation) are explicitly provided to the network for training. The method is not directly comparable to Locatello et al. 2019, as only a few labels were used in Locatello et al. 2019, which resulted in a semi-supervised disentanglement setting, in contrast to the fully supervised setting in this work. - The authors claim that the supervised disentanglement of the sequential model is better than the unsupervised disentanglement done in DSSM (Miladinovic et al.). I would appreciate it if the authors could back it up with some empirical evidence. It is important to compare results with DSSM (or even the Kalman VAE) to see the true benefits of supervision. - The method is practically limited, as the privileged true parameter information is not readily available in real-world systems. Thus, as acknowledged by the authors, this method can only work for simulated systems where these variables are known beforehand. - The ranges of the parameters used to create the OOD dataset highly overlap with the ranges used to create the training dataset. In my opinion, this is not OOD, as it is very likely that the test sample comes from the range which is used for training. I suggest the authors use ranges that are completely outside the ranges of the training distribution, i.e., the extrapolation generalization regime (or even the interpolation regime, where the parameters are sampled from a subset of the training range but that subset range is not seen during training).
- Have the authors tried a Gaussian distribution (correspondingly, an L2 loss) for the decoder? I wonder how the results might differ from the Laplace distribution. - The prediction quality is reported using the perceptual metrics LPIPS and SSIM. These metrics compare the deep feature space and statistical properties of the images, respectively. I think these metrics are not sufficient for evaluating the predictions of the dynamics. If possible, kindly report RMSE and/or NLL. - Fig. 1 caption: Do the input-output dimensions differ? I don't think the labeling in the figure is correct, as there are some inconsistencies. E.g., in a single time step: $x_1$ → $x_{n+1}$ and $x_n$ → $x_{n+o}$. Typos (minor): pg2: "This can be extend to" pg2: "high-dimemnsional video rendering" pg2: The sentence "we directly assess using the downstream prediction task." seems incomplete. pg4: "though as equivalent to the phase space of the system." pg4: check sentence structure of "being a state-of-the-art model in long-term video prediction," pg5: "on three well studies dynamical systems," pg9: "prediction is based both on them which" I have some reservations regarding the novelty of this work. Moreover, certain empirical results do not back the claims made in this paper. Therefore, I am inclined towards rejection. <doc-sep>The paper studies the performance of dynamical systems learned from data with a focus on out-of-distribution (OOD) evaluations. The authors consider the question of whether disentangling dynamical system parameters in the latent space, treated as privileged information available from the reference (ground truth) simulations, can improve the generalization of the models. The authors carry out experiments on several dynamical systems: the pendulum, the Lotka-Volterra system and the three-body problem. Additionally, an experiment on video prediction of a swinging pendulum is performed. The authors found that additional disentanglement can improve the generalization performance of the models and, in the video prediction setting, leads to better long-term predictions based on structural and perceptual image metrics. **Strengths:** * Clear statement of the underlying hypothesis being tested * Clear presentation of the results and supporting information * Extensive sweeps over hyperparameters **Weaknesses:** * Improvement of the models with disentanglement in the phase space setting appears marginal; based on the provided visualizations it is not clear that there is a systematic way in which models with disentanglement perform better. A more expressive analysis of the errors might be helpful to assess this aspect (maybe the distribution of errors across the dataset for several fixed samples?) * It's hard to assess how much variance in the performance is present in the video prediction metric; this is a general challenge with selecting best-performing models, as they completely mask away the error bars; (providing several model instances would help to evaluate the significance better) * While marginal improvements are presented in coarse performance metrics, an insight into the type/class of errors that are being reduced would be very interesting. * One potentially important hyperparameter (the time step) was not varied, which often significantly affects the prediction accuracy. The authors present a clear investigation of how disentanglement of the domain factors may affect the performance of learned dynamical models. The suggested experimental evaluation is sound, but the current results seem a little marginal.
With additional results/modifications I believe this work could be useful to a wider audience, but my initial rating is marginally below the threshold. <doc-sep>In this paper, the authors propose a supervised approach to disentangle domain parameters from the dynamics in a deep latent variable model like a VAE. Extending VAEs to dynamical systems is a relevant problem and has been a focus of interest in many recent works [1,2,3,4,5,6,7]. This paper identifies two issues for developing dynamical VAEs: i) out-of-distribution generalisation, and ii) long-term trajectory prediction. The main contribution is to address the aforementioned issues using a supervised loss defined between latent variables and domain parameters. The authors present empirical experiments to support the idea. Pros: - A relevant problem in dynamical VAEs that is sufficiently motivated in the paper. - Empirical experiments on three problems: LV, the video pendulum and the three-body problem. These demonstrate long-term trajectory prediction and OOD generalisation on an easy and a hard task. - The authors have done a hyperparameter search and presented some ablation studies. I find it an interesting work. However, I strongly feel the authors have not adequately demonstrated the benefit of disentanglement and are missing comparisons. My main concerns are below: - Lack of evidence on whether the supervised loss disentangles the dynamic parameters from the domain parameters. The authors mention that the evaluation of disentanglement is beyond the scope. I beg to differ, for two reasons: - long-term trajectory prediction doesn't necessarily benefit from the disentanglement of latent factors. There are several methods that achieve good performance in long-term trajectory prediction without any explicit form of disentanglement, for example, Hamiltonian Neural Networks, the Hamiltonian generative network (HGN), Symplectic RNN [4], Physics-as-Inverse-Graphics [7], Lagrangian Neural Networks [6], etc. In the related work section, the authors refer to HNNs and say disentanglement is not successfully addressed in such models. It would help if the authors could elaborate on that sentence here. HNNs learn Hamiltonians in a data-driven way and can make long-term predictions. So why do they need to address disentanglement in the first place? It is unclear what added gain comes from supervised disentanglement and how it is advantageous over other state-of-the-art methods for long-term trajectory prediction or OOD generalisation. - Can the supervised loss ensure the domain components are fully disentangled from the dynamics? I think it is critical to demonstrate whether the domain variables are disentangled from the dynamics in any meaningful way. One could, for instance, fix the domain variables and draw samples by slowly changing the dynamic variable, and vice versa. Or, better, report disentanglement metrics. - A weak baseline. As referenced above, several works on extending VAEs to dynamical models have shown empirical and theoretical (symplectic structure) arguments for long-term trajectory prediction. It would be worth comparing those methods and demonstrating any benefit in using a supervised approach. In addition, models like SINDy [5] can discover dynamical parameters in an unsupervised way and have demonstrated benefits on long-term trajectory prediction. Without comparison, it is not evident what the immediate benefits of the supervised setup are. If it is OOD generalisation, the authors should at least show this as a limitation of existing approaches.
- It is not apparent what makes the interval of domain parameters an easy or hard problem. It would be beneficial to discuss this from the dynamical-systems perspective. - The choice of loss in the supervised disentanglement needs more explanation. In Section 3.2 it is L1, and in Section 3.3 it is L2. - In Table 9, the results of VAE-SD and VAE-SSD are unstable in some cases. But this is not the case with the LSTM or VAE. The authors should provide some discussion here and a potential explanation of the effect. Minor comments: - In Table 1, the domain parameters of the train/val/test set are in the same range. It is likely for a model to perform well on val/test if it has seen sequences of the same parameters in training. Shouldn't the two be selected differently? - Please number the equations. Technical inconsistencies: - In Section 3.2, the loss function is inconsistent with Figure 1. According to Figure 1, the input to the VAE is x_n, and the prediction is x_{n+1}. The loss is a typical VAE loss plus a supervised disentanglement term. There are no dynamics there. If the reconstruction term is supposed to be a prediction of x_{n+1}, please add appropriate subscripts on x or z in $\\mu_x(z;\\theta)$. If this is not the case, please provide details on how dynamics are taken into account. - Please use consistent scripts. In Section 3.2, the k components of the latent variable z are written with a subscript as z_{1:k}, while in Section 3.3 the latent variables use superscripts, s^{1:k}. - In the loss formulation of Section 3.2, the domain parameters $\\xi^{i}$ are associated with sample $x^{i}$. As far as I understand, the time steps in a sequence share the domain parameters. It would be helpful to use a suitable script to express this consistently. - In the loss formulation of Section 3.3, the prediction model is in the state space. The domain parameters are shared over T; why use the prediction model on $s_t$ instead of the $d-k$ components of $s_t$? # References: [1] Chang MB, Ullman T, Torralba A, Tenenbaum JB. A compositional object-based approach to learning physical dynamics. [2] Sanchez-Gonzalez A, Bapst V, Cranmer K, Battaglia P. Hamiltonian graph networks with ODE integrators. [3] Toth P, Rezende DJ, Jaegle A, Racanière S, Botev A, Higgins I. Hamiltonian generative networks. [4] Chen Z, Zhang J, Arjovsky M, Bottou L. Symplectic recurrent neural networks. [5] Champion K, Lusch B, Kutz JN, Brunton SL. Data-driven discovery of coordinates and governing equations. [6] Cranmer M, Greydanus S, Hoyer S, Battaglia P, Spergel D, Ho S. Lagrangian neural networks. [7] Jaques M, Burke M, Hospedales T. Physics-as-inverse-graphics: Joint unsupervised learning of objects and physics from video. In my view, this paper proposes a fair approach to a relevant problem. However, there are several concerns. - The benefit of disentanglement is not demonstrated. The long-term generation is not sufficient to support the claim. If a fully unsupervised approach can work equally well, what is the incentive for the supervised loss? Therefore, I think it is critical to compare with some of the methods outlined above. - The contribution is marginal, as it simply introduces a regularisation term and provides empirical results. Simplicity is generally good and not a downside. However, it should be supported by proper justification and, if possible, by a theoretical claim. The choice of L1 in 3.2 and L2 in 3.3 is not properly explained. - There are technical inconsistencies that leave room for ambiguities.
I have come to the conclusion that this paper has concerns that need addressing. I, therefore, give a score of 5. # Post Rebuttal I have changed my score from 5 to 6. <doc-sep>The paper introduces a VAE-based disentanglement model for dynamical system prediction, which is trained with supervision from domain parameters. The authors conducted experiments on simulated datasets and showed good performance for OOD cases and long-term predictions. - The novelty of this work may not be enough for ICLR acceptance standards in that the authors applied existing VAEs with minor modifications to known problems. - I think the comparison with unsupervised disentanglement models is quite unfair because the proposed model is trained using strong supervision about the data. Furthermore, the results showing that supervised disentanglement methods outperform unsupervised ones are trivial and not particularly impressive. Instead of the baselines designed by the authors, it would be better to add some comparisons with existing papers for dynamical system prediction (particularly in Figures 3 and 4). - I am not sure whether the performance differences in Figures 3 and 4 are statistically significant because the results of some models exhibit quite high variances and the number of examined models (i.e., 5) seems small. It would be better to conduct some statistical tests to show that the differences are meaningful. - The experiments were conducted only on simple simulated datasets. I think some experiments on real-world and/or more complex data are necessary to show the applicability of the proposed method. - I think a deeper analysis of the disentangled representations is not out of scope and is necessary because the paper heavily relies on VAEs for disentanglement learning. (i) Are the latent dimensions obtained with the supervision (z_1:k) truly disentangled? (ii) What kind of information is encoded in the other features without the supervision (z_k+1:d)? It would be better to add quantitative results based on existing disentanglement metrics and/or visual results (latent traversals, embedding space visualization). - Regarding Figure 7, it would be better to add a proper explanation of why RSSM is better for the initial timesteps than RSSM-SD. - It would be better to improve the presentation quality of Figure 5. It is difficult to identify the differences between the lines in the current version because they largely overlap. Simply changing the linear axis scales into log scales may be helpful. - Is the reconstruction loss on page 4 replaced by the prediction loss as described in the caption of Figure 1? If so, please modify the reconstruction loss on page 4 to accurately show the prediction loss. I think the underlying technical contributions are quite small, while the empirical results are not particularly impressive. I thus find it difficult to argue for acceptance of the work. <doc-sep>This paper introduces a supervised loss to encourage dynamical systems predictors to retain systems' parameters (e.g. appearing in the underlying ODE) in their latent space. It presents multiple experiments to evaluate the advantages of this approach, including better long-term forecasting ability as well as improved prediction performance for out-of-distribution parameters that had not been seen during training. ## Contribution The tackled problem of out-of-distribution generalization for the forecasting of dynamical systems is relevant and valuable to the community, as motivated in the paper.
Models that generalize well in this setting still need to be discovered. This is an important issue as a forecasting system overfitting the training distribution cannot possibly have retained the true dynamics, in this case the ODE / PDE, of the observed phenomenon. The proposed method is one of the first steps in this challenging direction. However, to the best of my understanding, I find the contribution of this paper to be overly limited for acceptance, with numerous limitations that are described below. ### Supervision and Disentanglement One of the main claims of the paper is that the proposed method makes models disentangle the system's parameters, but this claim is not sufficiently supported. It is clear that the method makes model learn the system's factors of variations by design, but no experiment indicates that it does separate them as well, which is necessary for disentanglement. In this regard, I disagree with the statement at the end of Section 1 removing from the scope of the paper the inspection of the learned representations, given the claims of the paper. Experiments investigating the latent space could include, for example, the manipulation of the latent variables associated to the systems' parameters to assess whether they are actually disentangled, similarly to e.g. [1, Figure 1]. The advantage of the considered framework of experiments in this setting is that all the data is simulated: one could numerically compare the resulting sequence from the manipulation of the latent state with the actual simulated sequence generated with the corresponding parameters, in line with [2, Sections 5.2 and 5.3] in another context of disentanglement. Regardless of the disentanglement property, an experiment evaluating the ability of the models learned with the proposed loss to retrieve the true parameters of the system from the observation of the sequence would greatly emphasize the utility of the method. Indeed, estimating the parameters of dynamical systems is an active line of research which this paper directly follows, given that the method is supervised on these parameters. ### Novelty and Significance The novelty of the proposed method is limited, given prior works on supervised disentanglement in other contexts (e.g. [3] cited in the paper). The discovery that the supervision of the system's parameters improves forecasting performance is interesting but expected; an opposite result may have been questionable. I stress that this is not a significant problem per se, given that evidencing this behavior in the context of dynamical systems is valuable to the community. However, this limited novelty is to be considered with the lack of significance in the presented empirical study. The lack of significance first lies in the numerical results, which all show non-significant or marginal improvements for the proposed method against the baselines: VAE-SSD performs similarly to VAE-SD, and VAE-SD is only marginally better than VAE and is far from closing the gap between out-of-distribution and in-distribution performance. Furthermore, qualitative animated results that I checked in the Hard OOD setting provided as supplementary do not match the example of Figure 5 as VAE and VAE-SD seem very close compared to the difference between them and the ground truth, making the improvement look thin. To my understanding, these mixed results may be the consequence of choices in experimental design, as argued in the next point of this review. The experiments also lack significance in their design. 
The considered out-of-distribution parameter ranges are restricted and close to the training parameters. Without further discussion in the paper to contextualize this choice, it would seem that out-of-distribution sequences might be close to in-distribution sequences, thus questioning the obtained results. I would suggest that the authors either extend the considered ranges of parameters or explain how the current parameters are relevant. A possible direction to improve this paper in this regard might be to consider a semi-supervised setting, like [3], instead of a fully supervised method like in the current version. Real-world simulations can be expensive to run and the available labeled data in this context may be limited, thus rather motivating a method working with sparsely labeled data. ### Choice of Models The choice of models to apply the proposed supervision may be questionable and explain the mixed results obtained in the experiments. To the best of my knowledge, forecasting models of the kind of the VAE and CNN-VAE in the paper are not widely adopted in the community; I would be interested in references that the authors could provide to support this choice. Instead, state-of-the-art variational models often rely on sequential latent variable generative models, like [4, 5, 6, 7, 8] to name some works in the last five years. Moreover, the use of ODE-like recurrent schemes may be considered as well, as they have been shown to be well suited to the prediction of dynamical systems (see e.g. [9, 10, 11]). RSSM is the only baseline in this paper following this line of work, and is also the only model presenting a substantial improvement with the proposed supervision. I believe that this is no coincidence, given that a questionable choice of model like the MLP may induce suboptimal results with the introduced method. I encourage the authors to strengthen their empirical evaluation by considering more robust and standard models. ## Other Remarks and Questions ### Questionable Claims Several other claims in the paper may be questionable, as listed in the following. > System identification [...] requires knowledge of the underlying system to be computationally effective. [Page 1] It would seem that the proposed method does require knowledge of the underlying system as well, since it relies on supervising over the system's parameters. > [We treat] the ground truth domain parameters from simulations as privileged information which, to the best of our knowledge, has not been applied to dynamical system prediction previously. [Page 2] This may be a wording issue, but privileged information has already been leveraged for dynamical systems for the last few years, cf. for example [10, 11, 12], even though this privileged information is not necessarily the system's parameters. The authors might consider further discussing this point. > The problem is that [VAEs] usually lack in competitive performance. Without references to support this claim, I would strongly disagree given the references mentioned above [4, 5, 6, 7, 8]. ### Number of Experiments Figures 3 and 4 are said to show the top 5 models of each architecture, but I could not understand the details of this selection. Does this correspond to the top 5 best-performing sets of hyperparameters? Or is it the top 5 over a given number of experiments for the same set of hyperparameters? ### LPIPS Could the authors justify the choice of LPIPS for the experiments in Section 5?
LPIPS is a perceptual metric for realistic images, making it a priori less relevant for synthetic datasets like these pendulum sequences. The authors might rather highlight PSNR which is a standard metric for this type of datasets and is already used in the appendix. ### Writing The paper is mostly clear and easy to read, but I find the description of the models to be confusing regarding their nature and the considered architectures (for instance, the VAE is underspecified in the main text), which raises issues in the motivation of the modeling choices in the paper as mentioned above. Many figures are hard to read in greyscale; I recommend that the authors improve their readability to make them as accessible as possible. Typos: - the reference to Saxena et al. (2021) at the end of Section 1 should be between parentheses; - title of Section 3.2: "disentanglment" should be "disentanglement"; - title of Section 3.3: there is an extra parenthesis at the end of the title; - Section 3.3: "which can be though as" should be "which can be thought of as"; - there is an extra comma and no parentheses are needed in the last sentence of page 4; - Section 4.3: "We also observe that VAE-SSD model the in-distribution data" should be "We also observe that VAE-SSD models the in-distribution data". ## References [1] I. Higgins et al. $\\beta$-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. ICLR 2017.\\ [2] J. Donà et al. PDE-Driven Spatiotemporal Disentanglement. ICLR 2021.\\ [3] F. Locatello et al. Disentangling Factors of Variations Using Few Labels. ICLR 2020.\\ [4] R. G. Krishnan et al. Structured Inference Networks for Nonlinear State Space Models. AAAI 2017.\\ [5] M. Fraccaro et al. A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning. NIPS 2017.\\ [6] E. Denton et al. Stochastic Video Generation with a Learned Prior. ICML 2018.\\ [7] J. Chung et al. A Recurrent Latent Variable Model for Sequential Data. NIPS 2015.\\ [8] Y. Rubanova et al. Latent Ordinary Differential Equations for Irregularly-Sampled Time Series. NeurIPS 2019.\\ [9] R. T. Q. Chen et al. Neural Ordinary Differential Equations. NeurIPS 2018.\\ [10] S. Greydanus et al. Hamiltonian Neural Networks. NeurIPS 2019.\\ [11] Y. Yin et al. Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting. ICLR 2021.\\ [12] M. Raissi et al. Physics-Informed Neural Networks: A Deep Learning Framework For Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations. Journal of Computational Physics. 2019. From the limitations that are described in this review, I think that this paper needs very significant changes to be accepted, especially because of the questionable claims and insufficient experimental results. Nonetheless, I am looking forward to discussing my opinion with the authors and other reviewers. I believe that this paper follows an interesting line of research and that further work could make it ready for publication at a next conference. ### Post-Rebuttal Update I acknowledge the authors' response and thank them for their extensive answer. As explained in my follow-up response, I find that the proposed improvements are marginal and insufficient to raise my score. Therefore, I maintain my strong recommendation to reject the paper. | This manuscript tackles an interesting and significant line of research of long-term prediction and out-of-distribution generalization in time series models. 
I strongly believe this problem is an important one to solve. However, in its current form, its novelty is marginal, and the experiments fail to decisively show advantages. It also lacks systematic improvements and error analysis. Further work could make it ready for publication at a future conference.
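To make the objective discussed throughout the reviews above concrete, here is a minimal sketch (my own reading, not the authors' code; the penalty weight and the L1 choice are illustrative assumptions, and the paper reportedly uses L1 in one section and L2 in another) of an ELBO-style loss with a supervised term tying the first k latent dimensions to the ground-truth domain parameters of a simulated sequence:

```python
import numpy as np

def supervised_disentanglement_loss(recon_nll, kl, z, xi, lam=1.0):
    """recon_nll, kl: scalar reconstruction/prediction NLL and KL terms of the ELBO;
    z: (d,) latent sample for the sequence; xi: (k,) true domain parameters."""
    k = xi.shape[0]
    supervision = np.abs(z[:k] - xi).sum()      # pin z_{1:k} to the domain parameters
    return recon_nll + kl + lam * supervision   # z_{k+1:d} remain unsupervised
```

The open questions raised by the reviewers (whether this actually separates z_{1:k} from the remaining dimensions, and how it compares to unsupervised alternatives) concern the behaviour of exactly this extra term.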
The paper proposes an active learning algorithm in which labels are not acquired when the model chooses to abstain from predicting. [Note: confidence score of 1 (@authors + @AC). This is largely outside my comfort zone, with many proofs, which I did not study in detail. I would recommend discarding my opinion.] I think lines 35-38 already start this paper off on the wrong foot. The objective of active learning is to learn the decision boundary with minimal labels; what the model does with points close to the decision boundary or with high uncertainty once this is learnt could be considered another problem. If the model can learn the decision boundary with high accuracy given more queries around the decision boundary, then this is justified. Strengths - The introduction makes clear what the paper is trying to achieve. Weaknesses - The paper is not easy to read. - The proposed scenario does not seem reasonable. - No experimental results whatsoever. - Lots of propositions and theorems are stated in the paper, but all the proofs are in the appendix. I have only skimmed these. Originality: Certainly seems new, but a more directed related-work section would be appreciated. Quality: I find the theoretical exposition somewhat lacking in places, e.g. for Proposition 3 and the statement that follows about the algorithm's superiority over any uncertainty-based AL method. No experimental results. Clarity: Neither well written nor well organised. Language is imprecise and unclear. Parts of the paper make strong statements without backing. Significance: Difficult to assess the impact of the method. The impact of the paper, however, will be small, given the problems above. N/A <doc-sep>The paper proposes an active learning algorithm that can avoid sampling from regions of the input space with high label noise. The algorithm satisfies two important properties: 1) it achieves exponential improvements compared to passive learning with respect to an evaluation metric that penalizes abstentions (Chow's excess error); and 2) the algorithm is computationally tractable for finite pseudo-dimension function classes. Strengths: - The paper employs in a creative way techniques from the contextual bandit literature to extend the idea of Puchkin et al. and propose a computationally tractable algorithm. Moreover, the result is particularly remarkable since it does not require a condition constraining the amount of label noise, but rather captures it in the bound. - The analysis for deriving the results is non-trivial and some of the connections to quantities from the contextual bandit literature (e.g. eluder dimension, disagreement coefficient) may be of independent interest for the active learning community. - The paper contains several results that help to position the proposed algorithm in the broader active learning literature. For instance, the analysis of Section 3 confirms that the algorithm is minimax optimal (albeit not substantially better than passive learning) with respect to the standard excess risk. Weaknesses: - While the paper is generally easy to follow, certain details regarding the algorithm or the analysis could be discussed in more detail in the main text (see the Questions section). - Minor remarks: There are a few typos in the paper (e.g. lines 259, 266, 274, etc.). Also, the pseudocode of Algorithm 1 can be made a bit more precise and easier to follow (perhaps add a notation for the labeled set; Q_t, x_t, y_t are not defined when they first appear in step 4; it is unspecified how $\\hat{f}_1$ is selected, etc.).
The paper generally addresses some of the pertinent limitations of the analysis (e.g. the focus on finite pseudo-dimension function classes, the realizable vs. agnostic case, etc.). See the Questions section for other limitations that could also be discussed in the paper. <doc-sep>This paper studies the pool-based active learning problem. The main contribution is to propose a computationally efficient algorithm to train a rejection model. Under the realizable case, the model enjoys $\\epsilon$ Chow's excess risk with $\\widetilde{O}(\\mathrm{polylog}(1/\\epsilon))$ label complexity. The guarantee is achieved without any low-noise assumption, which is commonly used to achieve exponential savings in label complexity in the literature. Although a similar rate (for learning with abstentions) has already appeared in the literature, the proposed method is more efficient (or practical) than the previous one. Besides the main result, the authors also show that (a slight modification of) the proposed method enjoys minimax optimal label complexity for the standard excess risk with the low-noise assumption. Furthermore, this paper has shown a constant label complexity in a special case (with a finite hypothesis set) and presented the guarantees with model misspecification. ### Strength: Overall, I think this is a nice paper with fruitful results. Specifically, the strengths of this paper are listed as follows: + novelty & significance: although the algorithm framework shares a similar spirit to the previous work [Krishnamurthy et al. 2017] in standard active learning with abstention, the new criterion for label querying is interesting to me. A similar rate for active learning with abstention has been achieved by [Puchkin and Zhivotovskiy, 2021], but a computationally efficient algorithm is always what we desire. + clarity: this paper is well written and clearly structured for the most part. Although there are fruitful results regarding Chow's excess risk and the standard excess risk (under different conditions), the authors have clearly organized them to make the results easy to follow. ### Weakness: In general, I like the results of the paper, but I still have some reservations about the assumption and the computational cost, as follows: - about the realizable assumption: although the realizable assumption has frequently appeared in the active learning literature, the most related work on active learning with abstention seems not to require such an assumption (Theorem 1.1 of [Puchkin and Zhivotovskiy, 2021]). It seems to me that the realizable assumption is the price for the efficient algorithm, since Algorithm 1 requires approximating $\\eta(y=+1|\\mathbf{x})$ with the function $f(\\mathbf{x})$ (via the ucb and lcb). So, I think it would be necessary to make a clearer comparison with the previous work. - about the efficient algorithm: - issue on the parameter setting: the algorithm takes the disagreement coefficient $\\theta$ as input. I am not sure whether such a coefficient can be calculated efficiently in general (maybe an upper bound for $\\theta$ is enough in special cases, but the realizable assumption could be violated). - issue on the estimation of the lcb and ucb: although the authors have referred to [Krishnamurthy et al. 2017] for the calculation of the lcb and ucb, I think it would be nice to discuss their computational costs, since efficiency is one of the main contributions of this paper (for example, how hard is it to compute the lcb or ucb for an $\\mathcal{F}$ containing $f_\\star$?).
This paper has discussed its limitation regarding the realizable assumption in Section 4.2. It has shown that the same exponential saving in label complexity is achieved with a misspecified model space as long as $\\epsilon$ is less than the approximation error $\\kappa$. The results partially address the limitation on the assumption, but I think the paper would become even stronger if the authors could show a similar convergence rate for Chow's excess risk compared with the best model in the hypothesis class when $f_*$ is not in $\\mathcal{F}$. <doc-sep>The paper studies active learning of general concept classes. A lower bound is known in this regime that rules out savings in label complexity over passive learning. However, [PT21] showed that with the additional action of abstention, active learning does provide exponential savings in terms of the error rate. This work follows this research line, and the main contribution is a computationally efficient algorithm that achieves label complexity comparable to [PT21]. The main algorithm relies on efficient implementations of regression oracles, which have been developed in prior works. Strengths: + Active learning is a very useful tool to reduce labeling cost, and this paper studies an interesting and practical extension. + The core contribution of an efficient learning paradigm is important. + The paper is well written and easy to follow, with the right amount of reminders and pointers. Weakness: - The computational efficiency is phrased in terms of the number of calls to an oracle, leaving the runtime of that oracle unsettled. Please provide a concrete computational cost analysis to justify the main contribution. - It is true that [PT21] relies on minimizing an empirical 0/1 loss, which is NP-hard. Can you give more intuition on why the 0/1 loss is vital for their analysis, and why the regression oracle approach in the paper works as well? Yes. | In this paper, the authors develop the first computationally efficient active learning algorithm with abstention, while maintaining the exponential savings in terms of label complexity. Furthermore, the proposed algorithm enjoys other nice properties, such as recovering minimax rates in the standard setting. The algorithm is based on novel applications of techniques from contextual bandits, and the analysis is nontrivial. On the other hand, the authors should improve their paper by addressing the concerns of reviewers, especially the realizable assumption.
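As a purely schematic illustration of the mechanism the reviews above describe (my own sketch, not the paper's Algorithm 1; the actual querying criterion, thresholds, and use of the regression oracle may differ), a confidence-bound-based learner with abstention can decide, per point, whether to predict, abstain, or spend a label:

```python
def decide(lcb, ucb, margin):
    """lcb, ucb: lower/upper confidence bounds on eta(x) = P(y = +1 | x);
    margin: an abstention band around 1/2 (hypothetical parameter)."""
    if lcb > 0.5:
        return "predict +1"      # confidently positive: no label needed
    if ucb < 0.5:
        return "predict -1"      # confidently negative: no label needed
    if ucb - lcb <= 2 * margin:
        return "abstain"         # tight interval straddling 1/2: the point is genuinely
                                 # noisy, so abstain rather than spend a label
    return "query"               # wide interval straddling 1/2: a label is informative
```

As far as I can tell from the reviews, this is the sense in which the algorithm avoids sampling from regions with high label noise: labels are only requested where they can still resolve the prediction, while noisy regions are absorbed by abstention.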
This paper proposes a Hermite variational auto-encoder which uses the Ornstein-Uhlenbeck semigroup on p(z_i|z_{i+1}), where i denotes the latent layer number. It has a clear theoretical inspiration and a solid analysis of variance reduction. Pros: Quality: The paper's general theoretical motivation and analysis are of high quality. Clarity: The paper's presentation is clear. Originality: This paper provides a new perspective and uses mathematical tools such as the Hermite expansion to inspire and propose a new method for variance reduction which prevents the dying-unit problem in hierarchical VAEs. Although motivated by advanced tools, the application of the method to the vanilla version of the hierarchical VAE (in terms of implementation) seems very simple. Thus, the method looks easy to adopt. Cons and questions: Significance: 1) Does the method work for a vanilla VAE with only one level of z? It seems that it is only applicable to the hierarchical version, as the operator is applied to p(z_i|z_{i+1}), and with only one level it loses the point due to the single Normal prior. This may limit the application impact, as VAEs are much more widely adopted in different applications compared to hierarchical VAEs. 2) I am not sure that having no units die at all is desired (such as shown in the last column of Table 1 or Figure 3). Being Bayesian with the prior, there is a natural model selection behavior (implicit Occam's razor); thus, behavior such as the method's active units (40, 40, 40, 24) in Table 1 may not be desired and is rather a bit weird, as only the last layer has dying units. Behavior such as VAE+KL (2, 3, 11, 37) looks more natural to me, as a simpler model is needed higher in the hierarchy. 3) The experiments only compare to the Ladder VAE in the number of dying units but not in terms of ELBO performance. As LVAE is one of the best-known works in this domain (and is also mentioned first in the related work section), this is weird. In the LVAE paper, the MNIST performance is reported as around -81 to -82, while in this paper it is about -85, which is significantly worse. Although there is a chance this is due to minor setting differences, I doubt the method's performance can match LVAE. (Again, as with 2), I don't think that purely comparing the number of units without reporting performance makes sense.) 4) In terms of ELBO performance, most of the time it does not match simple KL annealing either. 5) There is more highly related work analyzing the bias-variance trade-off, such as "Tighter Variational Bounds are Not Necessarily Better", that is not discussed in the paper. <doc-sep>1. Summary This paper studies the training of deep hierarchical VAEs and focuses on the problem of posterior collapse. It is argued that reducing the variance of the gradient estimate may help to overcome posterior collapse. The authors focus on reducing the variance of the functions parameterizing the variational distribution of each layer using a layer-wise smoothing operator based on the Ornstein-Uhlenbeck semigroup (parameterized by a parameter $\\rho$). The operator requires additional Monte-Carlo samples. The authors provide an analytical analysis of bias and variance. Last, they train multiple VAE models, measure posterior collapse, and observe a phase-transition behaviour depending on the parameter $\\rho$. 2. a Strong Points This paper introduces a theoretically grounded solution to the problem of posterior collapse. In particular, it is discussed that variance may be an issue.
Great effort was invested in studying the behaviour of the Hermite VAE in theoretical terms, and the authors provide analytical results on the bias and variance of this estimator. 2. b Weak Points * Complexity In the main text, it is written that "*experiments show that 5 or 10 samples suffice*". This is a major drawback for Hermite VAEs, and the complexity of the algorithm is not discussed, nor is it studied empirically. Given 5 MC samples, I interpret that HVAE is 5 times more expensive than other approaches -- please clarify this point. * Empirical study of the variance The problem of the variance is discussed in the paper but set aside in the experimental section. I would expect the authors to measure the variance (and/or SNR) of the gradients for the HVAE objective, the VAE, and advanced estimators such as STL and DReG. A study is required to corroborate the claim that reducing variance overcomes posterior collapse. * Experiments on posterior collapse I am surprised to see that none of the existing methods (KL warmup and freebits) allows overcoming posterior collapse (Figure 1). At least using the right amount of freebits should improve the results (the number of freebits is not reported). Furthermore, the authors should report the KL divergence in the benchmark experiment. * Experimental protocol I don't understand why the VAE models trained in Section 5 only have 2 layers whereas HVAE uses 4 layers: this is not a fair comparison. Furthermore, LVAE should be studied on the basis of posterior collapse -- not only in terms of likelihood. 3. Recommendation Unfortunately, based on the current form of the paper, I recommend rejecting this paper. 4. Recommendation Arguments Despite the good theoretical contributions, I do not find the experimental section to be strong enough to support the claims. In particular, the cost induced by the additional MC samples is not discussed and methods are hence not compared on the same basis. 5. Questions to the Author - What is the complexity of HVAE? Do the VAE models use multiple MC samples as well? - Why use only 2 layers for the VAE models? - How are the freebits and KL warmup applied in Figure 1? 6. Feedback Your work is very relevant and the theoretical insights are very interesting; this work would greatly benefit from an improved experimental section. On the first page, two typos: - you defined $q(z | x)$ and not $q(x, z)$ - The KL divergence in equation 1 should depend on $q(z_i | z_{i-1})$ and $p(z_i | z_{i+1})$ <doc-sep>Post-rebuttal update --------- Thank you for your response. Now I understand that the algorithm works by smoothing the Gaussian parameters $\\mu_i,\\sigma_i$ w.r.t. the centered Gaussian rv (as described in my last reply, second part of bullet point (1)), so my original concern regarding the bias _in the Gaussian parameters_ does not hold. However, I still cannot recommend acceptance at this point, because of a newly discovered issue in the theoretical analysis: The analysis in Section 3 does not take into account the impact of smoothing on the ``downstream'' nonlinear layers. The text only considers two layers of stochastic latents and the KL part of the ELBO, but in the deeper case, the smoothing of $\\mu_i(z_{i+1})$ will additionally influence the layers below $i$, through the nonlinear functions $\\mu_{i'},\\sigma_{i'}$ for $i'<i$. More concretely, consider the following scenario: $\\mu_i(z_{i+1})\\equiv z_{i+1}, \\sigma_i(z_{i+1})\\equiv \\epsilon$, where $\\epsilon$ is very small.
Further assume that $z_{i+1}$ is high-dimensional and approximately follows $\\mathcal{N}(0,I)$, so $\\\\|z_i\\\\|_2 = \\\\|\\mu_i(z_{i+1})+\\sigma_i\\varepsilon_i\\\\|_2 \\approx \\\\| z_{i+1} \\\\|_2 > 100$ with probability $1-\\epsilon_1$, where $\\epsilon_1$ is also very small. In this case, it is possible to achieve a low KL in the original ELBO, by using a $\\mu_{i-1}$ which only has sensible values in the region $B := \\\\{z_i: \\\\|z_i\\\\|>100\\\\}$; in the complement set $B^c$, $\\mu_{i-1}$ can be "arbitrarily" bad so long as its impact on the ELBO does not outweigh $\\epsilon_1$, the probability its input falls there. However, in the smoothed estimator with $\\rho=0$, the input to $\\mu_{i-1}$ only has norm $O_p(\\sigma_i(z_{i+1}))=O_p(\\epsilon)$, so the value of $\\mu_{i-1}$ on $B^c$ will have a far higher impact, easily exceeding the original by $O(1/\\epsilon_1)$. To summarize, *it is possible to construct models where the ELBO has a reasonable value, but the smoothed objective behaves catastrophically*. Moreover, even in the shallow case, $z_i$ will be fed into a final decoder block to generate the reconstruction image, so a similar issue exists, although it will be in the reconstruction likelihood part of the ELBO as opposed to the KL part. A less important issue is that parts of the analysis are written in a confusing way. Apart from the abuse of notation $U_\\rho$ which leads to my original confusion, in Section 3 the $\\hat{\\mu}_p$'s should have a suffix of $z_1$, to signify the fact that they are coefficients of a function that depends on $z_1$ (see the last response from the authors). Also it is unclear to me why there is no mention of $\\mu_p^4$ in the analysis of the variance of an estimator for $\\mu_p^2$. But given the aforementioned issue, I don't think it is necessary to look further into this case. Summary ------- This work proposes to smooth the mean and variance parameters in the decoder of hierarchical VAEs with the O-U process. It is shown that the smoothing procedure reduces the variance of the ELBO, alleviates posterior collapse, and improves model likelihood on CIFAR-10 under a fixed number-of-parameter budget. The idea of investigating the impact of ELBO variance on hierarchical VAE performance is sensible, and the experiments seem to show improvements. However, I have concerns regarding the theoretical claims, and the empirical results also seem to need clarification. Major Concerns -------------- - The claim that the smoothing doesn't change the expectation (of functions acting on the latents) doesn't seem correct. Props. 1 and 2 only hold when the expectation is taken wrt the standard normal distribution, while all but the top-level latents (i.e., $z_i$ for $i<L$) come from a mixture of Gaussians. Intuitively it also seems incorrect: what if $\\rho=0$? - The variance analysis works by assuming $\\sigma_q$, the decoder variance, is constant. This ignores the problem of unbounded likelihood [1], where the posterior variance goes to zero, thus driving the ELBO and its variance to infinity. It would be helpful to include a plot of the decoder variances in the most realistic model, to see if this issue is relevant in modern hierarchical VAEs (and thus whether the analysis here provides a complete picture). - The conclusion of the analysis does not seem helpful: the bias is $O(1-\\rho^2)$ and the variance is $O(\\rho^2)$, so it is unclear from the bound whether there will be a $\\rho$ that decreases the overall MSE (illustrated below).
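To spell out the quantity in question (my own rephrasing; the constants below are not from the paper), the concern is that the analysis only yields upper bounds of the form

```latex
\mathrm{bias}(\rho) \;\le\; C_1 (1-\rho^{2}),
\qquad
\mathrm{Var}(\rho) \;\le\; C_2\, \rho^{2},
\qquad
\mathrm{MSE}(\rho) \;\le\; C_1^{2}(1-\rho^{2})^{2} + C_2\, \rho^{2},
```

and since both sides are only upper bounds with unknown constants, comparing the bound at some intermediate value of the smoothing parameter with the bound at the unsmoothed end does not by itself establish that any choice of the parameter reduces the true MSE.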
Minor -------------- - It is worth mentioning that there are several types of posterior collapse and not all of them are undesirable [2]: sometimes it is superfluous units rightfully pruned [3, 4]. This also implies that the number of active units is not a good measure of model quality; it is helpful to include reconstruction error in Section 5.1. - The observed phase transition of KLD connects to the fact that ELBO-trained VAE acts like a thresholding operator; see [2]. - Why didn't Table 3 mention NVAE [5] and IAF-VAE [6], both of which have better BPD values? Seeing where those models are on the #parameters-BPD curve helps to put the results here in perspective. References ---------- [1]: Mattei and Frellsen, Leveraging the Exact Likelihood of Deep Latent Variable Models, in NeurIPS 18. [2]: Dai et al, The Usual Suspects? Reassessing Blame for VAE Posterior Collapse, in ICML 20. [3]: Lucas et al, Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse, in NeurIPS 19. [4]: Dai and Wipf, Diagnosing and Enhancing VAE Models, in ICLR 19. [5]: Vahdat and Kautz, NVAE: A Deep Hierarchical Variational Autoencoder, in NeurIPS 20. [6]: Kingma et al, Improved variational inference with inverse autoregressive flow, in ICLR 16. | This paper develops a smoothing procedure to avoid the problem of posterior collapse in VAEs. The method is interesting and novel, the experiments are well executed, and the authors answered satisfactorily to most of the reviewers' concerns. However, there is one remaining issue that would require additional discussion. As identified by Reviewer 1, the analysis in Section 3 is only valid when the number of layers is 2. Above that value, "it is possible to construct models where the ELBO has a reasonable value, but the smoothed objective behaves catastrophically". Thus, the scope of the analysis in Section 3 deserves further discussion. Given the large number of ICLR submissions, this paper unfortunately does not meet the acceptance bar. That said, I encourage the authors to address this point and resubmit the paper to another (or the same) venue. |
This paper presents a survey of methods for enforcing low-rankness and sparsity in neural network weights and proposes SPARK, an alternating algorithm for creating low-rank and sparse weight tensors from a pre-trained network. Baseline method accuracies are retained or slightly improved while parameter count is reduced by a large factor. Overall the ideas and overview presented seem solid. However, several omissions make me worry about how well this contribution is situated in the literature. 1. The term Spark is well established in the sparsity literature and defined as something else entirely (see e.g. https://www.pnas.org/content/pnas/100/5/2197.full.pdf). I would heavily encourage the authors not to create more confusion about terms than there already is, especially when working in the same field, and rename this method to something else. 2. Arguably the seminal contribution in the most recent neural network era pertaining to low-rankness of weights is https://arxiv.org/abs/1404.0736 . The fact that it is not mentioned makes me worry about how many other references are missing and potential comparisons. E.g. why is the method of Yu, Liu, Wang, Tao not transposed to the architectures evaluated here and compared in the table? 3. The sparsity literature is vast and includes some extremely relevant contributions, all of which have been omitted. The following can almost be drop-in replacements for the alternating algorithm described in the paper: - http://proceedings.mlr.press/v28/richard13.pdf - https://arxiv.org/abs/1206.6474 - https://arxiv.org/abs/1111.1133 This list is certainly not exhaustive and it will not do to simply cite these papers. There are more in the references they list and the ones listed are too similar not to be compared. Contrary to the algorithm presented, the above contributions lend themselves to convergence analysis and guarantees due to their convexity. Possibly solid contribution, but high uncertainty on the originality of the contribution due to lack of references to the surrounding literature. <doc-sep>The paper proposes a novel approach for 'model compression': reducing the size and computational cost of a neural network model by converting weight matrices to (a) be sparse and (b) low-rank. While prior works have considered both sparsity (i.e., pruning) and low-rank-ness before, the proposed method utilizes both simultaneously: approximating the weight matrix as a sum of two matrices, one that is low-rank and one that is sparse. This leads to an improved performance-compression trade-off: indeed, in some cases, this approach seems to have a regularizing effect and actually improves the accuracy of the model. ### Strengths - This is a well-written and well-motivated paper. The algorithm is introduced and derived systematically. I especially appreciated the thorough discussion and analysis in Section 3. - The experiments are convincing and impressive. ### Weaknesses These are relatively minor: - While the paper appears to do a reasonable job of covering related work, it would have been good to have pointers in Section 4.2 for how the projection steps are similar to or differ from prior low-rank/sparse compression methods. - There are only two compressed versions reported for each architecture/benchmark. I realize that running the optimization+training is computationally expensive. But it would've been good to have a more complete picture of the performance-compression curve. 
For example, for Resnet-20, there are only models with >= 70% compression --- it would've been nice to know how performance degrades as you push for lower target rank/sparsity. Again, this is not a requirement for publication (because these experiments are expensive), but I'd recommend that --- if possible --- authors consider doing a more thorough exploration for Resnet-20 on CIFAR-10 (the lightest model+benchmark). - One of the things that's a little hard to glean is which is better at maintaining performance: pushing for lower rank or higher sparsity. Right now the algorithm requires both to be provided as input, but one can imagine doing a 'meta'-optimization on top to set these parameters to achieve the highest performance for a target FLOPs budget. While that would be a separate algorithm and out of the scope of this paper, it would be helpful to give readers a bit more intuition on which approximation causes a higher performance loss (and if it varies from layer to layer). - Finally, it is interesting to observe that for Resnet-20 and -56, the cheaper models actually lead to higher accuracy. I assume this is because of a regularizing effect, but perhaps the compressed models also have better optimization behavior. It would be nice to also report training set performance to verify this. Overall this is a great paper, and I believe it clearly meets the bar for ICLR. <doc-sep>The authors argue for and propose compressing neural networks using an additive combination of TT-decomposed and sparse tensors/matrices. To make this happen, the authors formulate an optimization problem and solve it using the ADMM framework. The authors provide compression results that are interesting (improved accuracy for a given compression rate); however, the method has an important shortcoming in that it requires setting the compression parameters by hand for each layer (ranks, sparsities), which limits its practical applicability. Additionally, the authors miss a large body of related work on: a) additive combinations of compressions, b) low-rank and tensor decomposition methods. The authors argue and empirically demonstrate that an additive combination of compressions is a good choice for neural network compression. Then the authors formulate an optimization problem and solve it using alternating directions of multipliers, which has three steps: a step over the NN parameters, a step over the compression weights, and a step over the multipliers. The authors justify the usefulness of this scheme using small empirical studies as well as by showcasing compression results. By itself, this would have been a good contribution if executed perfectly; however, there are several shortcomings that prevent me from accepting the paper. 1. Justifications for the claims. To justify the usefulness of the additive combinations, the authors perform several empirical studies over weights of pretrained NNs (ResNet-20, fig2) and show that the L+S scheme is much better than any other compounding (S->L or L->S). While this is an interesting empirical study, I would not be so sure to generalize it over all possible networks and use it as a trump card. In other words, an empirical study on a single network should not be generalized to all networks by saying "...L+S is the best choice...". Additionally, the authors do not report the rank+sparsity settings used for this experiment, which further questions the validity of such studies.
Here is one practical argument against L+S or any additive compression scheme: the overall compression ratio of any additive combination scheme is limited by the compression ratio of the worst term. For any L+S scheme, simply doing just L or S would get higher compression (but worse approximation error). Clearly, there is a compression-error interplay between using an additive scheme and using any single scheme, which should be studied more formally to make any long-standing claims. 2. The formulation and the choice of hyperparameters. The fundamental practical issue with the proposed optimization problem is the fact that users need to provide all rank and cardinality constraints per layer, which involves selecting among a combinatorial number of different settings: (ranks per layer $\times$ cardinalities per layer)^(number of layers). 3. Missing details on reported quantities. The authors report the overall compression ratio but do not report how these values were computed. While the compressed storage of tensor weights is straightforward to obtain, the storage of sparse matrices might require a different number of bits depending on how they are saved to disk. Please report clearly how the measures are computed. 4. The authors omit mentioning/comparing to a vast body of low-rank, pruning, and additive-combination literature. Some missing low-rank/tensor decomposition works: - [L1] Accelerating Very Deep Convolutional Networks for Classification and Detection (IEEE TPAMI 2016) - [L2] GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking, NeurIPS2018 - [L3] Automated Multi-Stage Compression of Neural Networks (ICCV Workshops 2019) - [L4] Low-Rank Compression of Neural Nets: Learning the Rank of Each Layer (CVPR 2020) - [L5] Factorized Higher-Order {CNNs} with an Application to Spatio-Temporal Emotion Estimation (CVPR 2020) Relevant pruning works: - [P1] “Learning-Compression” Algorithms for Neural Net Pruning (CVPR 2018) - [P2] Automatic Neural Network Compression by Sparsity-Quantization Joint Learning: A Constrained Optimization-based Approach (CVPR 2020) Relevant additive combinations work: - [A1] More General and Effective Model Compression via an Additive Combination of Compressions (ECML 2021) - [A2] Handbook of Robust Low-Rank and Sparse Matrix Decomposition. Applications in Image and Video Processing (CRC Publishers 2016) - [A3] Compressing by Learning in a Low-Rank and Sparse Decomposition Form (IEEE Access 2019) Importantly, many compression approaches use a variation of ADMM (e.g., L4, P1, P2 above, or the work of Ma et al. (2019)). More specifically, additive combinations for model compression have been studied in [A1], and generally, such a combination approach is very well studied in the image processing field. See for example the reference [A2]. The authors should mention these works and position the paper's contribution within the existing literature in a more rigorous way. My rating of the paper is based on the following issues: 1. Weakly justified claims 2. Practical difficulties of the proposed formulation 3. Missing details 4. Literature review and positioning <doc-sep>This paper introduces a new DNN compression technique. It consists in approximating the weights of a trained DNN by the sum of a low-rank and a sparse tensor. This is done by adding sparsity and low-rank constraints to the usual loss, the optimization problem being solved with ADMM. Experiments and comparisons with the state of the art show the effectiveness of the technique.
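For readers who want to picture the core approximation, here is a minimal alternating-projection sketch of $W \approx L + S$ under rank and cardinality constraints; this is my own generic illustration, not the authors' ADMM algorithm (the names, schedule and stopping rule are mine):

```python
import numpy as np

def low_rank_plus_sparse(W, rank, sparsity, n_iter=20):
    """Alternately project onto the rank-constrained and cardinality-constrained
    sets so that W is approximated by L + S. Generic sketch only."""
    L = np.zeros_like(W)
    S = np.zeros_like(W)
    k = int(sparsity * W.size)                     # number of nonzeros kept in S
    for _ in range(n_iter):
        # Low-rank step: truncated SVD of the residual W - S.
        U, s, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: keep the k largest-magnitude entries of W - L.
        R = W - L
        thresh = np.partition(np.abs(R).ravel(), -k)[-k] if k > 0 else np.inf
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S
```

As the reviews note, the paper interleaves such projection-style steps with training via ADMM and uses a TT decomposition rather than a plain SVD for the low-rank part.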
I think the paper addresses an important problem and brings an interesting contribution, even though the ideas (low-rankness and sparsity) are quite straightforward. The key elements of the method are clearly discussed and supported (even too heavily in my opinion). The proposed technique is challenged against the state of the art for three common architectures on CIFAR-10 and ImageNet. In my opinion this is an interesting contribution and I support its publication. But I feel that the paper could have been more convincing if it provided more extensive experiments, especially because the novelty is somewhat limited (the paper combines already existing techniques, even if this is done cleverly). I detail here some criticisms that could be addressed to improve the contribution. 1. The authors state that they exhaustively analyze the "design knobs and factors" on how to optimally combine low-rankness and sparsity for DNN compression, and this appears to be one of the main contributions of the paper. While I agree with the three asked questions, I think that the answers could be better supported. For instance, the answer to Question 2 compares SVD and TT on a single experiment, and only in terms of approximation error. Instead, I was expecting more experimental settings, a greater variety of decompositions (e.g., Tucker), and other metrics such as the final impact on the network's performance (which might not be exactly correlated with the approximation error, as remarked by the authors). Similarly, for the answer to question #3, I agree that using the loss instead of the approximation error might be preferable. But this should be supported by various experiments. And the authors do not compare the regularized and the constrained formulations. 2. Experiments and comparison with the state of the art. Even though the experiments and the baselines seem to assess the interest of the proposed technique, there could be more experiments and comparisons (in sections 5.2 and 5.3 or in the Appendix), in order to show the impact of the proposed "design knobs" on the final result, or to analyze the parameters, in particular the sparsity level and ranks. The computational load should be properly discussed and given for all the techniques, especially because it could be a drawback of the proposed technique. 3. Minor comments - Sections 1 & 2 contain several repetitions and redundancies - Figure 3: the way the dimensions are reshaped before SVD is not clearly explained - Answer to question #3: I would say that the constrained and penalized formulations are very similar, and even equivalent for a given Lagrangian parameter. What makes them really different is the way they are parameterized, and it is often easier to set a constraint parameter than a penalty weight. Could this be better discussed? - Optimization: why not consider proximal gradient instead of ADMM? Summary Of The Review - Fairly good paper, but the overall quality could be improved - Significant contribution, but it might be slightly oversold - Convincing experimental validation, but limited to only 3 experiments, with few additional results | The paper proposes a neural network compression technique based on sparse and low-rank approximations. The paper received mixed reviews, with one accept, one reject, and two borderline accepts. Most reviewers have appreciated the effort devoted to the evaluation. Three reviewers are nevertheless worried about the limited novelty and two of them found the positioning in the literature unclear, with many missing references.
In particular, one reviewer makes a strong case against the acceptance of the paper. The authors have made a significant effort to address the issues raised by the reviewers with a very long rebuttal. The area chair has read in detail the responses, the points raised by the reviewers, and the paper itself. He/she tends to agree with the issues raised by the reviewers about the positioning of the paper in the literature and the missing baselines. The rebuttal was very helpful and addresses some of the concerns. There are still some remaining issues: - the discussion about related work is relegated to an appendix. Yet, it is critical for positioning the paper and a discussion within the main paper would be more appropriate. - there is no assessment of the statistical significance of the results. Hyper-parameters are fixed to some ad-hoc values and it is unclear what the effect of different hyper-parameter choices is upon the method and other baselines. - for reproducibility purposes, providing code with the submission would be very helpful, especially given the empirical nature of the contribution. Overall, this is a borderline case, which, unfortunately, would require additional work before being ready for acceptance.
This paper proposes Semi-ViT, a semi-supervised learning approach for vision transformers. The proposed method consists of three stages: first un/self-supervised pre-training, followed by supervised fine-tuning, and finally semi-supervised fine-tuning. At the semi-supervised fine-tuning stage, Semi-ViT adopts two techniques to improve the performance: an exponential moving average (EMA)-Teacher framework and a probabilistic pseudo mixup mechanism. Semi-ViT achieves comparable or better performance than the CNN counterparts in the semi-supervised classification setting. The authors also show promising scaling-up experiments; for example, Semi-ViT-Huge achieves an impressive 80% top-1 accuracy on ImageNet using only 1% labels, which is comparable with Inception-v4 using 100% ImageNet labels. Strength: The proposed method is clearly written. It can be understood easily. Weakness: Lack of novelty. The proposed three-stage training, EMA teacher, and the probabilistic Pseudo Mixup are all well-known techniques (the specific technique in probabilistic Pseudo Mixup is new, but Pseudo Mixup is very natural). All the experimental results are as expected. I did not learn much new here. For the comparisons in Figure 1 (a,b), I'm not sure whether the baseline methods (SimCLRv2, PAWS, EMAN) are also trained in a three-stage manner (e.g., the third semi-supervised stage for SimCLRv2 can be just standard semi-supervised learning with an EMA teacher). If not, the merging of three techniques together in Semi-ViT makes this comparison unfair to other methods. The results with 100% data for Semi-ViT in Table 1 should be reported. Whether it is better than, equal to, or worse than the baseline, it is a valuable point for making a fair comparison with the baselines (the MAE paper only reports the 100% data results). Similarly, 100% data results should be reported in Table 8. yes. <doc-sep>Recently, pseudo-labeling has demonstrated powerful results in many domains, including object detection, speech and image recognition, NLP and others. The current paper continues a series of works on pseudo-labeling in the context of the ViT architecture and on understanding the different aspects of a successful pipeline for ViT models with respect to scaling and reducing supervised data. First, the authors proposed probabilistic mixup, which allows using filtered pseudo-labeled data to augment non-filtered pseudo-labeled data: the mixup weights are not sampled from a Beta distribution; instead, the pseudo-label score defines them. This scheme is shown with many experiments and ablations to be very effective and to give consistent, significant improvements. Second, the authors confirm that FixMatch is an unstable training scheme both with and without self-supervised pretraining in the regime of low supervision (1% or 10% of ImageNet is used as labeled data). Third, the authors demonstrated that self-supervised pretraining is complementary to pseudo-labeling and that combining them improves results by a lot, especially in the 1% labeled data setting (this result was shown in several domains too, e.g. speech recognition). Finally, the authors show great scalability of pseudo-labeling for ViT models (with self-supervised pretraining, supervised finetuning and then EMA pseudo-labeling finetuning) and reach impressive results with only 1% labeled data of ImageNet compared to ImageNet supervised baselines.
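Before listing strengths and weaknesses, here is a rough sketch of how I picture the two ingredients; the function names and the exact confidence-based weighting are my own guesses for illustration, not the authors' code (the paper's weighting rule may differ):

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    # EMA-Teacher: the teacher's weights track a moving average of the student's.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def probabilistic_pseudo_mixup(x_keep, y_keep, p_keep, x_all, y_all, p_all):
    """Mix confident (filtered) pseudo-labeled images with arbitrary (unfiltered)
    ones, with the mixing coefficient driven by pseudo-label confidence rather
    than a Beta(a, a) sample. y_* are soft labels, p_* are confidence scores."""
    idx = torch.randperm(x_all.size(0))
    x2, y2, p2 = x_all[idx], y_all[idx], p_all[idx]
    lam = (p_keep / (p_keep + p2)).clamp(0.0, 1.0)      # confidence-based weight (my guess)
    x_mix = lam.view(-1, 1, 1, 1) * x_keep + (1 - lam).view(-1, 1, 1, 1) * x2
    y_mix = lam.view(-1, 1) * y_keep + (1 - lam).view(-1, 1) * y2
    return x_mix, y_mix
```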
**Strengths** - Very well and clearly written paper with all necessary details and deep explanations - Comprehensive experimental study of pseudo-labeling for ViT and proper ablations showing consistent results across the board - New idea of probabilistic mixup which gives consistent experimental improvement across the board for different scenarios and pipelines - Ablations on FixMatch confirming training instability in the low-supervision setting - Ablations showing the complementary property of pseudo-labeling and self-supervised pretraining - Impressive results with 1% labeled data only - Demonstration of the scaling property of pseudo-labeling for the ViT architecture **Weaknesses** - [not important] Absence of some recent literature on the theoretical justification of pseudo-labeling and of similar EMA studies and stability analyses in other domains (see the Questions section for more details) - Absence of an investigation into to what extent filtering is important in the pipeline (given that mixup with filtered data helps) - this could be another baseline for the probabilistic mixup justification - [maybe future work?] Absence of a study of how many epochs of supervised training / supervised finetuning are needed before starting the EMA pseudo-labeling process. Limitations are listed in the conclusion section. <doc-sep>This work proposes a three-stage training framework for pure ViTs, including un/self-supervised pre-training, followed by supervised fine-tuning, and finally semi-supervised fine-tuning. EMA and a probabilistic pseudo mixup mechanism are used and the results are competitive. + The paper is well written and easy to follow. + First to use pure ViT for SSL. - The proposed training pipeline is not new compared with former works, such as [14]. - The improvements are based on existing works (i.e. EMA-Teacher) that are easy to come up with in the semi-supervised domain. The necessity to design special methods for SSL on ViTs needs to be clarified. <doc-sep>This paper proposes a semi-supervised framework for vision transformers, in which the authors introduce two techniques to improve the robustness and performance of ViT in semi-supervised learning. They are 1. EMA-teacher network updating, where the teacher is the moving average of the student network. 2. Probabilistic Pseudo Mixup, which is a novel mix-up method under a pseudo-labelling-based SSL framework. Strengths: 1. This paper is well written, and the core idea is easy to understand. The proposed method and formulation are clean, straightforward, and easy to re-implement. 2. The proposed method effectively improves semi-supervised training for ViT. Compared to the baseline, both EMA-teacher updating and Probabilistic Pseudo Mixup achieve significant improvements. 3. The Probabilistic Pseudo Mixup is novel and provides a new direction for employing mix-up in ViT under a pseudo-labelling-based SSL framework. 4. The experimental results are remarkable. Compared to fully supervised finetuning after MAE, this paper is only 2% lower with only 10% of ImageNet data. In addition, the proposed method works well under various self-supervised pretraining pipelines. Weaknesses: 1. It will be good to show more ablation studies over some hyper-parameters, such as the momentum decay and confidence score. 1. The large-scale self-supervised pre-training (MAE) may generate more carbon emissions. | This paper explores Semi-ViT, a semi-supervised learning approach for vision transformers. Semi-ViT builds on a three-stage pipeline, as in SimCLRv2.
The authors introduce a probabilistic mixup for the semi-supervised finetuning stage, which gives consistent experimental improvements. Semi-ViT shows strong empirical results, as it achieves 80% top-1 accuracy on ImageNet using only 1% labels, which is comparable with Inception-v4 using 100% ImageNet labels. Demonstrating that ViT + semi-supervised training can reach 80% top-1 accuracy with 1% of ImageNet labels is novel and of potential interest to the SSL community. I therefore recommend acceptance. However, I would encourage the authors to clarify that the three-stage pipeline is not a contribution of the paper and to focus the novelty on the probabilistic mixup and the experimental study.
This paper proposes a new parameterization (HaBa) of image data for dataset condensation. The proposed method decomposes a dataset into two components: data hallucination networks and bases, and considers the relationship between different condensed data points. Experiments show that HaBa achieves a better compression rate and better cross-architecture generalization performance. This paper is well-motivated and focuses on an essential problem (the parameterization of the condensed data) for current dataset condensation methods. The proposed method is novel, and the results are encouraging. - Originality: Decomposing the condensed data into bases and hallucinators is novel and interesting. Related work focusing on the parameterization of the condensed data: https://arxiv.org/abs/2205.14959, https://arxiv.org/abs/2206.02916. - Quality: Most arguments are well-supported, and the experiments demonstrate the effectiveness of the proposed method. Some additional ablation studies and experiments are needed to make the paper more convincing. - Clarity: This paper is well-written and easy to follow, though the position of some figures and tables can disrupt the reading experience. Some training details can be added to the appendix. - Significance: The experiments demonstrate the effectiveness of the proposed method and can inspire some future work on the parameterization of the condensed data. The authors may want to discuss the limitations of their methods. Here are some of my incomplete thoughts. - There are many hyperparameters in the proposed method, which may impede its practical usage. - The parameterization only works for image data; one may want to consider other modalities. - When scaling the method to many bases and hallucinators, there may be some optimization issues. <doc-sep>The paper investigates different ways to parametrize the synthetic learned/distilled dataset in Dataset Distillation [1]. In particular, they propose to reparametrize the dataset as $\{G_j(z_i)\}_{ij}$, where the $z_i$'s are the "bases" (i.e., latents in the usual generative modelling terminology), and the $G_j$'s are the "hallucinators" (i.e., generators). (I took the liberty of using the variable $z$ rather than the paper's $\hat{x}$, for better clarity.) This reparametrization can essentially be applied with any Dataset Distillation method. In particular, the authors show that combining it with MTT (the SOTA Dataset Distillation work) can learn better distilled datasets **with the same total number of parameters but with a much larger dataset size**. In fact, the performance gain mostly comes from the increased dataset size (see below). [1] Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A Efros. Dataset distillation. arXiv preprint arXiv:1811.10959, 2018. I will first discuss what I believe are two important issues of the current paper, and then provide a list of strengths and weaknesses. ### **Parameter count and dataset size** It is important to note that + The method essentially is a **compression** of the distilled dataset in terms of #parameters, but **not in** dataset size, the usual quantity of interest in Dataset Distillation. The authors unfortunately do not make this clear in most cases when claiming improvements, which in reality come at the cost of a larger distilled dataset size. + It is unclear why #parameters should be a meaningful metric for Dataset Distillation. The paper does not provide arguments for it, or compare with any image compression methods.
+ When comparing with the same **distilled dataset size**, the proposed approach often obtains slightly inferior performance (than the original pixel parameterization). This runs directly contrary to the authors' claim that optimizing pixels directly is difficult for learning relations between samples (lines 29-33,43-44,121-124,329-331). ### **Improper credit assignment in writing** I believe that it greatly damages the integrity and openness of the academic community to + Refer to this task as Dataset Condensation (DC) [2] rather than Dataset Distillation (DD) [1] + Write as if the DC paper [2] introduces this task (entire paper, esp. introduction and abstract; DD is only mentioned once in related work and as a row in results table), when + Both DD [1] and DC [2] talk about the same task, with the DD paper predating the DC paper by **2 years**, and the DC paper citing the DD paper and acknowledging that they investigate the same task, + Neither the DD paper nor the DC paper explicitly gives a name to the task, but each calls its proposed method DD or DC, and seems to refer to the task as DD or DC. It is **extremely misleading** to people not familiar with this area, and essentially assigns credit in an obviously wrong way. I sincerely hope the authors did not do this purposefully. Admittedly, some other papers do the same thing, but it is no excuse to keep doing the harm. I strongly urge the authors to revise the paper in this aspect. ### **List of Strengths and Weaknesses** **Strengths:** + The authors show that, by reparametrization with deep generators in Dataset Distillation training, it is possible to further compress a distilled dataset into fewer parameters, without losing much performance. + Therefore, with the same parameter count, the parametrization achieves better Dataset Distillation performance (at the cost of larger distilled datasets). If parameter count turns out to be a useful metric in the future, this can potentially have better use cases, after validating its benefits against other image compression techniques. **Weaknesses:** + ~The stated motivation for proposed reparametrization is that (original) pixel parametrization don't learn relations. This claim is not immediately clear why it is true, and is not verified. In fact, results in paper show that parametrization does not help when the dataset size is kept the same.~ + ~The motivating example in Section 1 is misleading.~ + The paper mentions reduced storage cost, but it is unclear why it should matter when the distilled dataset often only contains tens or hundreds of images. + ~Writing is somewhat misleading in that whenever performance improvement is mentioned, the increased dataset size is almost never mentioned.~ + ~No comparison with standard image compression techniques.~ + Missing comparison with other DD parametrization work. Please see my questions and suggestions below. [1] Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A Efros. Dataset distillation. arXiv preprint arXiv:1811.10959, 2018. [2] Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. Dataset condensation with gradient matching. arXiv preprint arXiv:2006.05929, 2020. ---- The author's comment addressed most of my concerns (struck-through text above). I have adjusted my score accordingly. I do not think that the limitation is sufficiently discussed. As I emphasized above, the authors should make it clear that the claimed improvements come with the cost of a much larger distilled dataset size, the one metric that Dataset Distillation research cares about the most.
Furthermore, the added generator components should increase both the training time and the training GPU memory cost. The training time effect is not really discussed other than in a sentence in the checklist, which is not part of the paper. <doc-sep>This paper proposes a novel algorithm for dataset condensation and introduces a dataset factorization approach, HaBa, which frames the factorization as a hallucinator-basis problem. This paper also introduces a pair of adversarial contrastive constraints to increase the diversity of generated images and inject more discriminant information into the factorization. ### Strengths 1. This paper is well written and easy to follow, and the presentation is clear. 2. The overall idea is novel and the experimental results are convincing. ### Weaknesses I have to be honest that I'm not an expert in this area, but I have a few questions about this paper. 1. Since the authors have mentioned that the early dataset compression methods are inspired by knowledge distillation, I'm wondering if these compressed data can be used for KD methods. 2. Although this paper has performed experiments across different architectures, I'm wondering about the effect of the model size on the final performance (i.e., will the performance gap between the compressed data and the whole data be greater or smaller on ResNet-18 than on ResNet-101?). adequately addressed | The reviewers originally had concerns but these have been well addressed by the authors in a thorough rebuttal and there is a consensus for acceptance. We encourage the authors to incorporate all the comments from the reviewers in the final version.
This paper proposes a way to speed up the training of GBDTs via a quantization technique. The authors adopt the idea of quantized gradients that is widely used in the neural net training/compression literature and show that it is possible to reduce the precision to as low as 2-3 bits and yet achieve similar performance in terms of accuracy. The overall speedup in training time can be up to 2x. Strengths: - Practical and simple approach to speed up GBDT training for both CPU and GPU; - The paper is clearly written with a nice introduction to GBDT training preliminaries and techniques, which is especially useful for non-experts - Section 6 describes system implementation details which is quite interesting. I especially liked the idea of packing the gradients and hessians. - It is quite surprising to see that lowering the precision that much (down to 2-3 bits) would preserve the accuracy. It definitely has research/practical value and raises an interesting question: are GBDTs overparameterized? It is not a secret that the ensemble size in production environments can be huge. And if that is the case, then should we consider compressing them? Weaknesses: - Authors provide theoretical analysis of the method (i.e., error bound after quantization) which is great. However, there are some limitations: 1) the main theorem is only valid for decision stumps; 2) the bound is presented for EACH GRADIENT. How about their summations or a bound for histogram counts? Moreover, the error can be proportional to $\max|g_i|$ (plus its square), which I believe is pretty high, and I am not sure it is a useful result. - Novelty. The training of GBDTs is done in the traditional way: gradient+hessian, feature histograms, etc. So, there is no contribution from that angle. The application of quantization here is also straightforward and done via binning and rounding. However, it seems there is no previous work which combines them, whereas this paper shows its huge benefits. ----- After Rebuttal ------------ My main concerns regarding this paper were addressed during the rebuttal and I'm willing to increase my score to 7. Overall, I think that this paper is a nice contribution with both theoretical and practical results. Especially, the experimental results suggest strong implications: one can significantly quantize (compress) gradients and still learn pretty accurate boosted trees. Novelty seems to be fine: although quantization is a well-known technique in the NN literature, this paper seems to be the first to propose applying it to trees, and it does a nice job of that. <doc-sep>This work proposes a low-precision training algorithm for GBDT based on gradient quantization and theoretically analyzes the necessary precision of the gradients, showing that it can be quite low without hurting performance. The authors also conduct extensive experiments on both CPUs and GPUs and the results show great improvements. Pros: 1. The article is well written, quite clear. Cons: 1. The advantages and limitations of the approach could be better underlined. 2. The work is limited in its novelty. 1. There are some other limitations that I think should be more discussed 2. The work is limited in its novelty. <doc-sep>The paper proposes a quantised version of Gradient Boosted Decision Trees. The paper proposes to quantise the high-precision gradients in a rather simple way. The paper considers the problem of how many bits are really needed for gradients in GBDT to achieve reasonable performance. It is shown that with the low-precision gradients most arithmetic operations may be replaced by integer operations.
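As a rough illustration of what the "binning and rounding" of gradients mentioned above can look like, here is my own generic low-bit quantization sketch with stochastic rounding, not the paper's exact scheme:

```python
import numpy as np

def quantize_gradients(g, n_bits=3, rng=None):
    """Quantize a float gradient vector to low-bit signed integers via uniform
    binning plus stochastic rounding (generic sketch, not the paper's scheme)."""
    rng = np.random.default_rng() if rng is None else rng
    n_levels = 2 ** (n_bits - 1) - 1             # e.g. 3 bits -> integers in [-3, 3]
    scale = np.abs(g).max() / n_levels + 1e-12
    scaled = g / scale                           # now in [-n_levels, n_levels]
    low = np.floor(scaled)
    # Stochastic rounding keeps the quantized gradient unbiased in expectation.
    q = low + (rng.random(g.shape) < (scaled - low))
    return q.astype(np.int8), scale

# Usage: q, s = quantize_gradients(grad); the dequantized value is roughly q * s.
```

Presumably this is what lets histogram accumulation run on small integers, which is where the reported speedups would come from.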
Strengths. The paper is clearly written. The results, both theoretical and numerical, are convincing. Weaknesses. The idea of performing training with quantised gradients is not new. However, the papers introducing quantised learning, which appeared 5-6 years ago (Binarised neural networks; results of M. Courbariaux and Y. Bengio), are well cited. It would be interesting to discuss the cases (if they exist) where quantised gradients lead to much worse performance than continuous ones. | The reviewers conclude on an interesting paper with broad messaging that does make sense -- I do subscribe to it as well. I recommend it for acceptance, noting that interactions with reviewers were the occasion to provide additional remarks that have to be used to craft the camera-ready, in particular for the remarks made to PPQi (mostly the technical part of this discussion).
## Summary In this paper the authors introduce the notion of stable weight decay. The stable weight decay property can be defined in dimension 1 as follows: the effective learning rate represents an amount of time elapsed between two iterations. The weight decay factor normalized (in log space) by the time elapsed should be constant across iterations. From their framework, the authors also propose SGDS, a minor modification to SGD where the amount of L2 regularization is increased with momentum, in order to compensate for the larger step sizes that the momentum will yield. When applied to Adam, the authors derive AdamS, supposed to work better than the previous AdamW, which already improved how weight decay and Adam interact. AdamS has the stable weight decay property, unlike Adam or AdamW. The AdamS weight decay amount is scaled by the denominator of Adam, taking the average over all dimensions to make it isotropic. The authors test their methods on vision tasks, where Adam is known to underperform compared to SGD, and their method AdamS achieves significant gains compared to AdamW. ## Review The reformulation introduced in equation (3) is an interesting alternative view of weight decay, but I find it is not really used by the authors. The definition of the weight decay rate can be done from equation (2). The authors introduce the notion of stable weight decay and seem to right away assume it is a desirable property. In particular, there is no theoretical justification that this is the case. The benefit from SGDS is limited, except for hyper-parameter tuning (as only the first few iterations of SGD are "unstable"), but it serves as a nice illustration of the new concept. The part on adaptive methods is more interesting, but the authors deviate significantly from their theoretical framework. Verifying the stable weight decay property is actually not optimal, because it is not isotropic. The authors' trick is to average the value of the moving average of the squared gradients along all dimensions before using it to rescale the weight decay. The experimental section of the paper focuses only on vision. It would have been interesting to see the effect of AdamS on other types of tasks. Overall I think the idea introduced by the authors is interesting, although the theory is not completely coherent. The experiments show significant improvements but could have been on a more diverse set of tasks. Still, I recommend acceptance. ## Remarks - In the Introduction, talking about adaptive methods: "are a class of dominated methods to accelerate", I'm not sure what dominated means here. - What is the point of having $\beta_3$? After equation (4), the authors say "SGD with momentum is $\beta_3=0$"; that doesn't seem right: if $\beta_3=0$, then the gradient is completely ignored. ============= Update after rebuttal and discussion with AC and reviewers. After discussing with the other reviewers and AC, I have come to share their concerns with the overall fragility of the paper. We agreed that the method is sound and likely to work better than AdamW, but the proofs are not sufficient. In particular, the authors should strive to provide experiments on different training sets (ImageNet) with learning rate cross-validation. The authors do not systematically compare across learning rates, which makes it hard to interpret the results as being conclusive. In fact only CIFAR-10 is evaluated with multiple learning rates. The remark by Ilya Loshchilov should also be addressed.
Note that you cannot use stochastic noise as a justification, because for a heavily overparameterized neural network, the amount of stochastic noise at the optimum is zero (i.e. perfect fitting of the training set). But there is another explanation: Intuitively, for a fixed $\beta_2$, $v_t$ goes to zero as the current gradient goes to zero, and the ratio of the gradient to $v_t$ will converge to some constant, which prevents convergence. $v_t$ goes to zero at the same speed as the gradient but with a delay of $1 / (1 - \beta_2)$. If there is no convergence, then the gradients won't actually go to zero. The only way to prevent $v_t$ from going to zero is to have $\beta_2 \rightarrow 1$ (i.e. the previously mentioned delay going to infinity), but in that case $\bar{v}_t$ won't go to zero either. This is only an idea of a possible justification and I would encourage the authors to think carefully about these stability issues in the next revisions.
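To make this intuition concrete, here is a quick back-of-the-envelope calculation of mine with a toy geometrically decaying gradient, written for the Adam-style ratio $g_t/\sqrt{v_t}$ (so the constants are only a sketch): if $g_t = r^t$ with $\beta_2 < r^2 < 1$, then

$$ v_t = (1-\beta_2)\sum_{s \le t} \beta_2^{\,t-s} g_s^2 \;\approx\; \frac{1-\beta_2}{1-\beta_2/r^2}\, r^{2t}, \qquad \frac{g_t}{\sqrt{v_t}} \;\longrightarrow\; \sqrt{\frac{1-\beta_2/r^2}{1-\beta_2}} \;>\; 0, $$

i.e., the effective step does not vanish even though both the gradient and $v_t$ do, precisely because $v_t$ lags the gradient.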
It seems like the main complaints of the other reviewers are the lack of more difficult workloads and the lack of theory. I personally don't find the lack of theory very important. I think the novelty comes from the simple observation, which no one to my knowledge has come to before, and the experiments support the idea empirically (which I think is what actually matters). I also find it a bit uncomfortable penalizing the authors for not running experiments on ImageNet, and I think the variety of architectures that the authors tried compensates for this. I do agree that a more modern set of workloads (transformers, or even the same setup as AdamW) would have made the paper much stronger. I increased my score to a 6 because I think the paper in its current form is enough to get accepted, but there are still improvements that could be made to make it much stronger.<doc-sep>In this paper, the authors study the effect of weight decay across different optimizers. When one uses weight decay, the learning rate which multiplies the weight decay is different from the effective learning rate (the term that multiplies the gradient-dependent contribution). The authors propose to adjust for this by having the effective learning rate multiply the weight decay term, i.e., $\Delta\theta = -\eta_{\mathrm{eff}}\,\lambda\,\theta - \eta_{\mathrm{eff}}\,F(\mathrm{gradient})$. For SGD, this implies that one can roughly equate weight decay and L2 regularization by rescaling the L2 parameter. For the case of Adam, the effective learning rate is anisotropic, which they fix by taking the mean of the vector $v$. The authors show that after correcting for the effective learning rate, which they call AdamS, they can get the same performance as what one gets with SGD on CIFAR-10 and CIFAR-100. I find the discussion about $\rho$ and $R$ confusing (they are not defined precisely and using the small lr/lambda approximation seems unnecessary) and they do not really add much. Similarly, the term "unstable" seems a little strong given that there is nothing bad going on with the training of such models. They also make comments about reinterpreting weight decay as flattening the loss and increasing the learning rate, which they don't pursue further nor connect with the main point, and it seems a little out of context. The main conceptual point of the paper is simply that the weight decay should have the same effective learning rate as the gradients; it would be nice if the authors could make this more clear and more central. The paper's main point, then, is to replace the learning rate in the weight decay coefficient by the effective learning rate, and they show that this might be enough to bridge the gap between Adam and SGD. I think this is an interesting problem, but given that this is their unique point they should probably back it up with more experiments, since they only check it for the CIFAR datasets (4 experiments total). If the authors could do one extra experiment on ImageNet (for example with a standard Resnet-50), that would raise my score to weak accept. <doc-sep>Summary: This paper presents a novel weight decay regularization, stable weight decay, which applies a bias correction to the decoupled weight decay in adaptive gradient descent optimizers. They empirically found that both L2 regularization and decoupled weight decay are unstable weight decay. The proposed stable weight decay can fix this issue from a dynamical perspective. Experimental results on benchmark datasets show that the proposed scheme outperforms popular and advanced optimizers in generalization.
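For concreteness, the update I understand the paper to propose (scaling the decoupled decay by the mean of Adam's second-moment denominator, as the other reviews describe) looks roughly like this; this is my own pseudo-implementation, not the authors' code:

```python
import torch

def adam_step_with_decay(p, grad, exp_avg, exp_avg_sq, step,
                         lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                         weight_decay=1e-2, stable=True):
    # Standard Adam moment estimates with bias correction.
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    m_hat = exp_avg / (1 - beta1 ** step)
    v_hat = exp_avg_sq / (1 - beta2 ** step)

    if stable:
        # "Stable"/AdamS-style decay (my reading): rescale by the mean of the
        # Adam denominator so the decay sees an isotropic effective learning rate.
        p.data.add_(p.data, alpha=-lr * weight_decay / (v_hat.mean().sqrt().item() + eps))
    else:
        # AdamW-style decoupled decay: multiplied by the raw learning rate only.
        p.data.add_(p.data, alpha=-lr * weight_decay)

    # Gradient step with the usual per-coordinate effective learning rate.
    p.data.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)
```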
Weight decay has been a basic technique in most optimizers, and indeed there are not many studies of its effect on performance. Also, a recipe for how to obtain the optimal weight decay is still missing. I think this paper has done a decent investigation of this topic. The overall paper is easy to follow and seems technically sound. However, one major issue in my mind is that the conclusions are only supported by the empirical results, which nevertheless look promising. No formal theoretical claims or results have been reported. This hurts the paper in terms of theoretical novelty. Even though some shallow analysis has been presented in the draft, it is not enough. Moreover, a couple of statements can be arrived at by directly observing existing algorithms, and are not that technical. For example, Statement 1, which says "Equation-1-based weight decay is unstable weight decay in the presence of learning rate scheduler.", can be quickly summarized even from an intuitive sense without any derivation, which makes it trivial. There is no need to give Definition 1 formally for Stable Weight Decay, as that doesn't sound like a definition. In the draft, the authors can directly say constant or time-varying weight decay. Stable doesn't just mean constant, though I can completely understand what the authors really meant in the paper. Statement 4, in my opinion, is one of the key results in the draft. However, the current theoretical analysis is not enough to support this conclusion. More formal theoretical analysis is required. A minor point: in Eq. 3, how did the authors arrive at $-2t-1$ in the superscript of the weight decay rate, not $-2t+1$? Additionally, I am a little confused about the statement that the effect of weight decay can be interpreted as flattening the loss landscape of $\theta$ by a factor of $(1-\eta\lambda)$ per iteration and increasing the learning rate by a factor of $(1-\eta\lambda)^{-2}$ per iteration. Can the authors provide more detail, a bit more derivation, to show this? *************************************** After reading the rebuttal and considering carefully, I think the authors' response addressed some of the issues in my mind, so I raised the score. However, in terms of theoretical foundation, the current paper draft is still only marginal, and requires substantial improvement. | The paper proposes a novel way to define a weight-decay-like update rule. Empirically, the authors claim that it improves generalization when applied to momentum-based optimizers and optimizers with coordinate-wise learning rates. This paper has been thoroughly discussed, both in public and private mode. The strength of this paper lies in the possible gain in generalization performance due to the proposed change. The weaknesses are: - the very confusing and unscientific motivation of the proposed change - the experiments are not fully convincing In more detail, we all found the discussion on "stable" and "unstable" weight decay extremely confusing. The claim of the paper is that "stable" weight decay should be preferred over the "unstable" one. However, to validate a scientific claim it is necessary to carry out an empirical or theoretical evaluation. The theoretical one is simply missing: a number of propositions and corollaries are stated with some simple mathematical facts completely disconnected from the optimization or generalization issues. As it is, removing these arguments would actually make the paper better.
On the empirical side, there is no experiment that supports the simple claim that "the unstable weight decay problem may undesirably damage performance". Instead, what we see are experiments in which the modified update rule seems to perform better, but they don't actually show that "stability" or "instability" are the specific issues at play here. Indeed, any other explanation is equally valid and the experiments do not support any specific one, but rather they can only support the claim that the proposed algorithm might be better than some other optimization algorithms. The *specific reason* why this is happening is not clear. Turning to the empirical evaluation, the discussion elicited the fact that, apart from CIFAR-10, the experiments are carried out without tuning of the learning rates. Hence, it is difficult for us to even validate the claim of superiority of the method. I don't subscribe to the idea that a deep learning paper requires experiments on ImageNet to be valid. Yet, given that there is no supporting theory in this paper, the empirical evaluation should be solid and thorough. For the above reasons, the paper cannot be published at ICLR.
This paper studies the problem of source detection in an epidemic when one observes the underlying graph and a snapshot of the population at a given time, i.e., which nodes are infected and which are not. For an SIR (or SEIR) model, the authors propose to use a GNN for this task. The learning procedure is then the following: given a fixed graph G, the authors create a dataset of snapshots by running an SIR process on G. I am not convinced this problem should be solved with a machine learning approach. In most practical cases, we only have access to one snapshot for a given graph and learning is impossible. The authors here solve this issue by simulating many SIR processes, but techniques like the one described by Shah and Zaman without any learning seem much more appropriate. The authors should compare their results to the results obtained by Shah and Zaman. [No rebuttal given by the authors] Score unchanged.<doc-sep> This paper proposes to use a graph neural network to infer the source of an epidemic in a network. Given a snapshot of the epidemic, the goal is to determine patient zero without full information about the mechanics of the epidemic, but rather by learning from historical data. Overall evaluation: My evaluation is borderline. Although the application is timely and of interest, there seems to be little methodological novelty (simply an application of an existing GNN architecture) and, for being an empirical paper, the numerical experiments are not entirely convincing (more details below). The theoretical results add to the contributions. Pros: 1 - Timely application of GNNs to a network problem of interest. Neutral: 1 - The theoretical statements add to the contributions, although they are quite simple in essence. For example, Theorem 2 is computing the probability that a given node belongs to a triangle in an ER graph. Proposition 1 seems to be more of an observation than a formal statement. Indeed, Reaction-Diffusion dynamics and MPNNs are both non-linear processes on networks. It is unclear if Proposition 1 is saying anything more fundamental than this. Cons: 1 - No methodological innovation or domain knowledge embedded in the design of the architecture. The authors talk about "several modifications" but these seem to be more accessories (skip connections and batch normalizations) than incorporation of expertise in the architecture or the loss. 2 - For a heavily empirical paper, the experiments are not comprehensive enough. Generalizability across graph types, real-data experiments (instead of simulated data on a real graph), learning from an epidemic mechanism and testing on another one, and learning with just a few observations (how realistic is it to observe 20,000 epidemic spreads on the same graph?) are some of the areas that one would expect an empirical paper to cover and that are missing in the current version of the manuscript.<doc-sep>An S(E)IR epidemic propagates on a graph, and the goal is to detect its source (P0) only from the observation of the state (S,E,I,R) of every node of the graph at some time $T > 0$. This version of the source detection problem was first studied by Shah and Zaman (2011) for SI epidemics, as listed in Section 2. The current paper claims to (i) establish new fundamental limits on this problem, showing in particular that after some time the source detection becomes difficult, and (ii) demonstrate the ability of graph convolutional networks to solve the problem and validate the results on real data.
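For concreteness, the kind of training data described above can be generated in a few lines; this is my own illustrative sketch of one discrete-time SIR simulation producing a (snapshot, source) pair, not the authors' pipeline or parameters:

```python
import random
import networkx as nx

def sir_snapshot(G, beta=0.3, gamma=0.1, T=10, seed=None):
    """Run one discrete-time SIR epidemic on G from a random source and return
    (node -> state at time T, source). Illustrative only."""
    rng = random.Random(seed)
    source = rng.choice(list(G.nodes))
    state = {v: "S" for v in G.nodes}
    state[source] = "I"
    for _ in range(T):
        new_state = dict(state)
        for v in G.nodes:
            if state[v] != "I":
                continue
            for u in G.neighbors(v):                 # infect susceptible neighbours
                if state[u] == "S" and rng.random() < beta:
                    new_state[u] = "I"
            if rng.random() < gamma:                 # recover
                new_state[v] = "R"
        state = new_state
    return state, source

# Repeating this on a fixed graph builds the (snapshot, P0-label) dataset the GNN is trained on.
G = nx.erdos_renyi_graph(200, 0.05, seed=0)
snapshot, p0 = sir_snapshot(G, seed=0)
```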
(i) The theoretical part consists of two short theorems, but their proofs are problematic. - Theorem 1 builds on an approximate analysis in Newman (2018), which requires a number of assumptions and approximations (for example, a mixing assumption that allows replacing expectations by ensemble averages in the ordinary differential equations describing the evolution of $S$, $I$ and $R$; the limitation of time $t \rightarrow 0$). These assumptions and approximations might not be valid on all graphs (e.g., a line graph has the largest eigenvalue around 2, but it is difficult to imagine that $O(1)$ nodes get infected in $\Theta(\log(N))$ time). The proof cannot rely on implicit assumptions and approximations whose error is not rigorously estimated. Maybe the theorem is true for some class of graphs including Erdős-Rényi graphs, but even in that case a more involved proof is needed, since the current techniques only work for early times $t$ and not for large times as in (5). Some computations could be clarified; for instance, if $\langle \psi^{(1)} \cdot I(0) \rangle$ denotes the average (over the uniform prior of patient zero over the $N$ nodes) of the scalar product between $\psi^{(1)}$ and the one-hot vector $I(0)$, why is it equal to $\lVert \psi^{(1)} \rVert_1$ instead of $\langle \psi^{(1)} \cdot I(0) \rangle = \frac{1}{N} \sum_{i=1}^{N} \left( \psi^{(1)} \cdot I(0) \right)_i = \frac{1}{N} \lVert \psi^{(1)} \rVert_1$? - The proof of Theorem 2 is flawed. It starts with the statement that "if P0 is in a triangle, we may miss it $2/3$ of the times." Why would all nodes of a triangle be equally likely to be the source? With the same rationale, why could we not argue that if P0 is in an edge, we may miss it $1/2$ of the times, which would give an upper bound of $1/2$ (as long as $|G_I|>2$) and would contradict the simulation results? The next statement that "in $G_I$ all nodes have degree $k \approx p|G_I|$" appears incorrect: suppose that the graph is an E-R graph $G(N,p)$, that $R_0$ is very large and that the infection spreads in a snowball way. Then most of the early infected nodes in $G_I$ have node degree $pN$, and not $p|G_I|$. - Now, the fact that P0's detection becomes harder over time is an important message, especially these days, but it is not a surprising result that when the infected set is a constant fraction of the population, it is hard to detect P0. This difficulty was already reported in the initial paper by Shah and Zaman (2011). - In Section 3, the authors contend that compared to the SI model, the removed state introduces additional uncertainty about the temporal order of infections. Why? Since the state of each node is known, having 3 classes (S,I,R) instead of two (S,I) gives more information, which should ease the task of detecting P0. (ii) The comparison of the GNNs used by the authors against state-of-the-art message passing algorithms is made only with the DMP method of Lokhov et al. (2014), but not with the (in general more accurate) belief propagation method of Altarelli et al. (2014), also cited in Section 2. There is no comparison with the rumor centrality method developed by Shah and Zaman (2011) either, in terms of accuracy and speed. Also, it would be interesting to see the comparison at times other than $T=30$. - In terms of speed, training and inference should clearly be separated. It may be misleading to report in the abstract that GNNs are 100 times faster than state-of-the-art methods: that applies only to inference.
It would be more accurate to report that GNNs are 100 times faster for inference and twice as fast for training compared to the DMP method (as is well described in Section 5.1). - It may be unrealistic to assume that all 4 states (S,E,I,R) can be detected for each individual; the exposed state in particular might be very hard to detect. Otherwise the simulations on real data seem well-done. It is unfortunate that, with the non-interpretable theoretical results, the simulation results do not give much insight either. For example, it would be interesting to compare the accuracy results to the size of the infected set in the simulations. Maybe it would be more meaningful to normalize the rank by $|G_I|$ instead of $N$. - In Figure 2, how are the theoretical curves computed? Equation (7) in Theorem 2 depends on $|G_I|$, which is not directly linked to $T$ nor to the epidemic parameters. Is $|G_I|$ computed based on the simulation results? If so, why then are the confidence intervals not given for these curves? - In Figures 4 and 5, how is the size of the set $G_I$ evolving over time? - The paper should be proofread; it contains quite a few typos or vague statements, for instance: Theorem 3 in the appendix is actually Theorem 1 in the main paper; the Figure 2 caption does not read well (no verb in the sentence: While accuracy drops below...); Figure 4 caption: cycles significantly reduces accuracy of P0 -> cycles significantly reduce the accuracy of the detection of P0; Bottom of p4: where each edge has independent an probability $p$ -> where each edge has an independent probability $p$; etc.<doc-sep>Summary: Backtracking the source of an epidemic (Patient Zero (P0)) is one of the important research topics of the current era that helps with efficient resource allocation. Many of the existing works in this domain use graph-theoretic measures or message passing algorithms to tackle this problem. In contrast, this paper uses the recently emerged Graph Neural Networks (GNNs) to learn and efficiently locate P0. It models disease spreading as a contagion process over a graph. While considering cyclic graphs, the paper shows an upper bound on the accuracy of finding P0 and presents a bound on the time horizon after which inference becomes difficult. Experiments on different real-world and synthetic networks show interesting results. Comments: The framing of P0 detection in terms of a learning problem and the proposal of GNNs for solving it is a valuable contribution. Also, the theoretical bounds on the time horizon and accuracy are quite interesting. Furthermore, Proposition 1 shows an interesting observation that Reaction Diffusion Dynamics is structurally like Message Passing Neural Networks. Such a mapping may open new ways to study information diffusion over networks from a learning perspective. Overall, this paper is well-written and structured and friendly to reviewers. I would suggest the following comments for further improving the quality of the paper: 1. Although the experimental details are given in Section 4.1 and Appendix A.1, it would be useful to add dataset tagging details on how nodes in each simulation were tagged for training the GNNs. 2. Since all the experimental results are based on randomly chosen P0 and S(E)IR simulation parameters, I encourage the authors to add an ablation study. It would be more helpful to see results on different combinations of parameters. Also, it would be helpful to see the statistical properties of P0, such as its degree, the number of triangles it lies in, etc. 3.
To ensure that the GNNs work as intended, it would be helpful to see learning curves of the GNNs on each type of random graph. 4. I encourage the authors to release their code and pre-trained models in a format that is easy for other researchers to reuse. 5. The use of \citet and \citep has been mixed throughout the draft. The authors are encouraged to revise the citations accordingly. 6. The introduction section is lacking in terms of problem motivation. It’s mentioned that the problem is hard; however, there is no discussion of the computational complexity of existing graph-theoretic methods and the challenges that need to be addressed. 7. In Eq. 3, $\alpha$ is undefined. | The paper introduces a GNN approach to solve the problem of source detection in an epidemic. While the paper contains some interesting new ideas, the reviewers raised some important concerns about the paper and so the paper should not be accepted in its current form. In particular, - the paper does not motivate the ML approach to the problem - the experiments are limited for an empirical paper - the method used in the paper is not very novel - the proofs presented in the paper are not formal enough
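As a side note for readers unfamiliar with the training setup the reviews criticize (simulating many epidemics on one fixed graph and labelling each snapshot with its source), the following is a minimal sketch of how such (snapshot, P0) training pairs could be generated. The graph type, infection parameters, and all names are illustrative assumptions, not the authors' code.

```python
import random
import networkx as nx

def simulate_sir(G, source, beta=0.3, gamma=0.1, t_max=30, seed=None):
    """Discrete-time SIR process on graph G started from a single source node.
    Returns a dict mapping node -> state in {'S', 'I', 'R'} at time t_max."""
    rng = random.Random(seed)
    state = {v: "S" for v in G}
    state[source] = "I"
    for _ in range(t_max):
        infected = [v for v in G if state[v] == "I"]
        if not infected:
            break
        newly_infected, newly_removed = [], []
        for v in infected:
            for u in G.neighbors(v):
                if state[u] == "S" and rng.random() < beta:
                    newly_infected.append(u)
            if rng.random() < gamma:
                newly_removed.append(v)
        for u in newly_infected:
            state[u] = "I"
        for v in newly_removed:
            state[v] = "R"
    return state

# Many simulations on the same fixed graph, each labelled with its source (P0).
G = nx.erdos_renyi_graph(n=200, p=0.05, seed=0)
dataset = []
for i in range(20000):
    p0 = random.Random(i).choice(list(G.nodes))
    dataset.append((simulate_sir(G, source=p0, seed=i), p0))
```

Whether learning on such simulated pairs transfers to a single real outbreak on the same graph is exactly the concern raised in the first review.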
This paper presents a variant of stochastic sequence neural networks, in the family of VRNN and SRNN. The paper adopts the CW-VAE framework and completes the optimization process under the stochastic sequence neural network framework. The authors test it in the speech domain. The experiments show that it outperforms VRNN and SRNN on the benchmark datasets. Strength: 1. The proposed method is clear. The experimental results support that the proposed method is better than the VRNN and SRNN methods. Weakness: 1. The novelty is limited. The main ideas, clockwork RNNs and the CW-VAE, have already been proposed. 2. The hierarchical latent variable idea has been proposed in Stochastic WaveNet (https://arxiv.org/pdf/1806.06116.pdf) and STCN (https://arxiv.org/pdf/1902.06568.pdf). This paper didn’t compare the proposed method with these two methods. 3. In the paper https://arxiv.org/pdf/1902.01388.pdf, the authors point out the evaluation problem of stochastic sequence neural networks. They found that the stochastic sequence neural network has an unfair advantage over the deterministic model when s!=1, and that with some tricks the deterministic model can catch up with the stochastic model's performance. But the authors didn’t discuss or address this issue in this paper. The proposed idea is reasonable, but the paper doesn't conduct a comprehensive study comparing against related works. <doc-sep>This paper proposes to put various models under the same experimental setting and compare their rates at compressing speech. The models of choice are vanilla LSTMs, variational RNNs, stochastic RNNs, Clockwork VAEs, and WaveNets. The results are also compared against regular compression algorithms, for example, FLAC. The paper is well motivated. I do think benchmarking different latent models is worth doing, and reporting the compression rate is the right metric. The experiments themselves are fine, but the evaluation metric is a little confusing. All models, except the vanilla LSTM and WaveNet, have hidden variables to marginalize, so I'm not entirely sure how the likelihoods are computed. Marginalization is difficult, as the paper argues. It's unclear whether, for example, the numbers in Table 1 are simply the values of the variational lower bound, or if any approximation is done to marginalize the hidden variables. The paper also attempts to answer why one model can be better than the others. The paper looks into phonemes and speaker genders, but the message is not clear. The presentation is fine. The majority of the paper is spent reviewing the models. I have mixed feelings about visualizing the models the way it is done in Figure 1. Figure 2 is a much better representation, laying out the conditional assumptions for both the encoders and decoders. I do understand it would take up a lot of space, but it might be worth putting a figure in the appendix. It's also worth talking about the independence assumptions and where uncertainties are baked in. The paper is well motivated. The experimental design is fine, but it's unclear how the evaluation metric, the likelihood, is computed without marginalizing the hidden vectors. The presentation is fine. <doc-sep>This paper presents an exploration of the use of latent variable models as generative models of speech. Noting that such models work well in the image space, but not so much in the speech space, the authors move on to adapt the Clockwork VAE (a video LVM) as a speech model.
In the process, the authors present a series of useful technical solutions to various issues that arise in this domain transition. This and other generative models of speech are later compared in the experiments section. The results show that this approach is potentially viable. The performance of the proposed speech LVM is good, albeit with increased computational complexity (hopefully something to solve in the future). In addition, it is shown that the resulting latent representation is correlated with phonetic structure, which is a pleasant bonus that other speech generative models (e.g. WaveNet) lack. Strengths: This paper addresses a series of issues that need to be resolved to apply an LV model like the Clockwork VAE to data like speech. I find that sequence of steps to be instructive, and the overall target to be one worthy of exploration. I think that collectively all the engineering described here is a significant amount of work and I appreciate it all being in one place. Weaknesses: This paper seems quite detached from the community that would find it most interesting. Speech generation is something that has been studied for a very long time, so I would have expected to at least get some sense of how this approach would compare (on various aspects) with modern practice. Of course, WaveNet is one model that can serve as a well-recognized benchmark, but there is a lot more to compare with here. I am also uncomfortable with the use of bps as a performance measure of the generative power of these models. I would have liked to hear some examples and understand how these models differ in their outputs. This paper provides an insightful exploration of how one can use an LVM as a speech generative model. Although not completely achieved here, I feel that this paper shows some intriguing progress towards that goal, and towards speech models with semantically meaningful latent states. I think many researchers in this area will find interest in all the engineering that was put to work in this paper, which might also be useful outside of this particular problem. On the downside, this paper doesn't feel like it addresses any deep scientific questions (although it touches on some near the end), and it mostly reads like a to-do list to get the Clockwork VAE to work with speech. | This paper presents the application of the hierarchical latent variable model CW-VAE, which was originally developed in the vision community, to the speech domain with meaningful modifications, and provides an empirical analysis of the likelihood as well as discussions of the likelihood metrics. The reviewers tend to agree that it is a promising direction to study hierarchically structured LVMs for speech, and that the introduction/adaptation of CW-VAE is useful. There was some discussion on the suitability of the likelihood evaluation, and it appears that a fair comparison with WaveNet should take place at s=1 (single sample), a resolution level the proposed method does not yet scale up to. On the other hand, an important potential use case of the model is representation learning for speech, as it is a common belief that at a suitable resolution the features should discover units like phonemes. But I find the current evaluation of the latent representations by LDA and KNN to be somewhat limited, and in fact there is no comparison with suitable baselines in Sec 3.2 in terms of feature quality. A task closer to modern speech recognition (e.g., with end-to-end models) would be preferred.
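On the marginalization question raised in the second review above: the two standard options for models with latent variables are to report the variational lower bound (ELBO) itself, or to tighten it with importance sampling. In my notation (not necessarily the paper's), the $K$-sample importance-weighted bound is

$$\log p_\theta(x) \;\geq\; \mathbb{E}_{z^{(1)},\dots,z^{(K)} \sim q_\phi(z \mid x)}\left[\log \frac{1}{K}\sum_{k=1}^{K} \frac{p_\theta(x, z^{(k)})}{q_\phi(z^{(k)} \mid x)}\right],$$

which recovers the ELBO at $K=1$ and approaches the true log-marginal likelihood as $K \to \infty$. A bound in nats is converted to bits by dividing by $\ln 2$; presumably some quantity of this kind underlies the reported compression rates, and stating explicitly which bound (and which $K$) is used would resolve the reviewers' confusion.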
The authors address the problem of how to use unsupervised exploration in a first phase of reinforcement learning to gather knowledge that can be transferred to new tasks, so as to improve performance in a second phase when specific reward functions are available. The authors propose a model-based approach that uses deep neural networks as a model of the environment. The model is PETS (probabilistic ensembles with trajectory sampling), an ensemble of neural networks whose outputs parametrize predictive distributions for the next state as a function of the current state and the action applied. To collect data during the unsupervised exploration phase, they use a metric of model uncertainty computed as follows: the average over all the particles assigned to each bootstrap is computed, and the variance over these computed means is the metric of uncertainty. The authors validate their method on the HalfCheetah OpenAI gym environment, where they consider 4 different tasks related to running forward, running backward, tumbling forward and tumbling backward. The results obtained show that they outperform random and count-based exploration approaches. Quality: I am concerned about the quality of the experimental evaluation of the method. The authors only consider a single environment for their experiments and artificially construct 4 relatively similar tasks. I believe this is insufficient to quantify the usefulness of the proposed method. Clarity: The paper is clearly written and easy to read. Novelty: The proposed approach seems incremental and lacks novelty. The described method for model-based exploration consists in looking at the mean of the prediction of each neural network in the ensemble and then computing the empirical variance across these means. This approach has been used before for active learning with neural network ensembles: Krogh, Anders, and Jesper Vedelsby. "Neural network ensembles, cross validation, and active learning." Advances in neural information processing systems. 1995. The model used, PETS, is also not novel, and the proposed methodology of having first an unsupervised learning phase and then a new specific learning task is also not very innovative. Significance: Given the lack of a rigorous evaluation framework and the lack of novelty of the proposed methods, I believe the significance of the contribution is very low.<doc-sep>The authors build upon the PETS algorithm to develop a state-uncertainty-driven exploration strategy, for which the main point is to construct a reward function. The proposed algorithm is then tested on a specific domain to show some improvement. The contribution of this paper may be limited, as it needs a specific setting, as shown in Figure 1. Furthermore, this paper is a bit difficult to follow, e.g., the algorithm is not described until the 5th page. I summarize the pros and cons as follows. Pros: - The idea of including exploration for PETS is somewhat interesting. Cons: - The paper is a bit difficult to follow. Just to list a few places: 1. The term "unsupervised exploration" was mentioned a few times in this paper. I am not sure if this is an accurate term. Is there a corresponding "supervised exploration" used elsewhere? 2. When you introduced r_t in Section 3.3, how did you use it next? Was it used in Phase II? 3. For the PETS (oracle) in Figure 4, why are the settings different for the forward and backward tasks? 4. What does "random" mean in Figure 4?
- The novelty of this paper is somewhat limited, as it requires a specific setting and has been applied in only one domain. - There are a few grammar mistakes/typos in this paper. 1. What is "k" in the equation for r_t? 2. "...we three methods..." on Page 6.<doc-sep>The paper performs model-based reinforcement learning. It makes two main contributions. First, it divides training into two phases: an unsupervised phase for learning the transition dynamics and a second phase for solving a task that comes with a particular reward signal. The scope of the paper is a good fit for ICLR. The paper is very incremental: the ideas of using an ensemble of models to quantify uncertainty, to perform unsupervised pre-training and to explore using an intrinsic reward signal have all been known for many years. The contribution of the paper seems to be the combination of these ideas and the way in which they are applied to RL. I have the following observations / complaints about this. 1. The paper is very sparse on details. There is no pseudocode for the main algorithm, and the quantity v^i_t (the epistemic variance on page 5) isn't defined anywhere. Without these things, it is difficult for me to say what the proposed algorithm is *exactly*. 2. Sections 1 and 2 of the paper seem unreasonably bloated, especially given the fact that the space could have been more meaningfully used as per (1). 3. The experimental section lacks any kind of uncertainty estimates. If, as you say, you only had the computational resources for three runs, then you should report the results for all three. You should consider running at least one experiment for longer. This should be possible - a run of 50K steps of HalfCheetah takes about one hour on a modern 10-core PC, so this is something you should be able to do overnight. 4. The exploration mechanism is a little bit of a mystery - it isn't concretely defined anywhere except for the fact that it uses intrinsic rewards. Again, please provide pseudocode. As the paper stands now, the lack of details makes it difficult for me to accept. However, I encourage the authors to do the following: 1. Provide pseudocode for the algorithm. 2. Provide pseudocode for the exploration mechanism (unless subsumed by (1)). 3. Add uncertainty estimates to the evaluation or at least report all runs. I am willing to reconsider my decision once these things have been done. | Strengths The paper proposes to include exploration for the PETS (probabilistic ensembles with trajectory sampling) approach to learning the state transition function. The paper is clearly written. Weaknesses All reviewers are in agreement regarding a number of key weaknesses: limited novelty, limited evaluation, and aspects of the paper that are difficult to follow or sparse on details. No revisions have been posted. Summary All reviewers are in agreement that the paper requires significant work and that it is not ready for ICLR publication.
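To make the disagreement metric described in the first review above concrete (average each ensemble member's particles, then take the variance across those per-member means), here is a minimal numpy sketch. The array shapes, the reduction over state dimensions, and all names are my own assumptions, not the authors' implementation.

```python
import numpy as np

def disagreement_reward(predictions):
    """predictions: array of shape (n_members, n_particles, state_dim), the
    next-state particles produced by each bootstrap member of a PETS-style
    ensemble for a single (state, action) pair.

    Returns a scalar intrinsic reward: the variance across the per-member
    mean predictions, averaged over state dimensions."""
    member_means = predictions.mean(axis=1)       # (n_members, state_dim)
    across_member_var = member_means.var(axis=0)  # (state_dim,)
    return float(across_member_var.mean())

# Example with random numbers standing in for model outputs.
rng = np.random.default_rng(0)
fake_predictions = rng.normal(size=(5, 20, 17))   # 5 members, 20 particles each
r_t = disagreement_reward(fake_predictions)
```

A sketch of this kind is also the sort of pseudocode the third reviewer asks for: it pins down exactly which quantity plays the role of the intrinsic reward r_t.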
The paper introduces a new combination of MCTS with generalized policies for probabilistic planning. The generalized policies are based on the recently developed ASNets (Toyer et al. 2018), and are used in the simulation phase or in the action selection phase of UCT. The main idea of the paper is intuitive and its details are presented in a very clear manner. As mentioned by the authors, the rationale behind this combination of UCT with ASNets is to obtain the best of both worlds, exploiting during search the reactive knowledge learnt by the ASNets, but also overcoming through the search the potential weaknesses of the inductive learning approach embodied by these ASNets. All of this is adequately analyzed and discussed in the experiments. Even though only three domains were tested, the experimental results and their explanations are sound and insightful. The paper concludes with a short overview of related research, together with some interesting ideas for future work. The topic is well suited for the context of HSDIP; the idea is intuitive and clear; the presentation of the paper is well organized; the results are competitive with other methods based on generalized policies for probabilistic planning, and are well analyzed. Besides all of these reasons, the topic of generalized planning has been drawing a good amount of attention recently, and I believe this paper could spark a nice discussion and provide some interesting future work ideas in the workshop; I would therefore argue for acceptance. A few minor remarks: - Could you please put Figure 1 at the top of the column so it does not break the paragraph? - In Subsection 4.2, in the case-definition of Ranked-ASNet, the different cases are not mutually exclusive, i.e. it might happen that a node falls into both of the first two cases. - It could be good to provide references for RTDP, LRTDP, and AO*. - It could be interesting to discuss the connection with the recent work by Issakkimuthu, Fern and Tadepalli (ICAPS 2018) about Deep Reactive Policies for probabilistic planning in the Future Work part. <doc-sep>The paper is well written and clearly structured, and ASNets are an interesting framework for developing base policies. I think the paper will be an excellent one to have in the workshop. I also have some criticism that I hope will be perceived as constructive. The authors read as being at pains to distance themselves from MCTS as a label... unfortunately, I think the algorithm is best described as an instance of MCTS. There is a fixed tree policy that looks ahead from the current state, incrementally evaluates possible trajectories, and selects those that are judged to be more promising. Then, at the leaves of the lookahead, a base policy is simulated to obtain an upper bound on the cost-to-go. Conceptually, DP-UCT + ASNets is pretty much like UCT + Random Walks. I am not totally convinced about the experimental evaluation. I would have expected the authors to compare on benchmarks on which we know DP-UCT or alternative algorithms, like Bonet and Geffner's Anytime AO*, perform well. For instance, the Canadian Travelling Problem or some of the simpler domains of the latest IPPC. That would allow testing whether ASNets can produce better cost-to-go estimates than the hand-coded heuristics proposed by T. Keller and co-authors. Also, it would show whether the results generalise beyond domains like Exploding Blocks World, which in their original formulation do not make a great deal of sense.
Working with RDDL was difficult until recently, when libraries and tools made in Python for parsing and simulating execution became available. I think that if ASNets can match the performance of other MCTS algorithms that rely on domain-specific knowledge, or of DRL algorithms like Value Iteration Networks, the authors would have a very compelling demonstrator of their approach. | Dear Authors, thank you very much for your submission. We are happy to inform you that we have decided to accept it and we look forward to your talk in the workshop. Please go over the feedback in the reviews and correct or update your papers in time for the camera-ready date (May 24). Best regards HSDIP organizers
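The second review's description of the method as UCT with a learned base policy can be made concrete with a generic sketch. The code below is written as reward maximization for brevity (probabilistic planning is usually phrased as cost minimization), uses plain UCB1, and is in no way the authors' DP-UCT + ASNets algorithm; every function name and parameter is an assumption for illustration.

```python
import math
import random

def uct_plan(root_state, actions_fn, step_fn, is_terminal_fn, policy_fn,
             n_iterations=500, horizon=50, c_uct=1.4, seed=0):
    """Generic UCT sketch in which a learned policy (policy_fn) drives the
    simulation phase: rollouts from leaf nodes sample actions from the
    learned policy instead of uniformly at random.

    actions_fn(state)         -> list of applicable actions
    step_fn(state, action)    -> (next_state, reward), a sampled transition
    is_terminal_fn(state)     -> bool
    policy_fn(state, actions) -> list of probabilities over `actions`
    """
    rng = random.Random(seed)
    visits, action_visits, q_value = {}, {}, {}   # assumes hashable states

    def rollout(state):
        total, s = 0.0, state
        for _ in range(horizon):
            if is_terminal_fn(s):
                break
            acts = actions_fn(s)
            if not acts:
                break
            a = rng.choices(acts, weights=policy_fn(s, acts))[0]
            s, r = step_fn(s, a)
            total += r
        return total

    def simulate(state, depth):
        if depth >= horizon or is_terminal_fn(state):
            return 0.0
        acts = actions_fn(state)
        if not acts:
            return 0.0
        if state not in visits:                    # leaf: expand, then rollout
            visits[state] = 0
            for a in acts:
                action_visits[(state, a)], q_value[(state, a)] = 0, 0.0
            return rollout(state)
        def ucb(a):                                # selection: UCB1
            n = action_visits[(state, a)]
            if n == 0:
                return float("inf")
            return q_value[(state, a)] + c_uct * math.sqrt(
                math.log(visits[state] + 1) / n)
        a = max(acts, key=ucb)
        next_state, r = step_fn(state, a)
        ret = r + simulate(next_state, depth + 1)
        visits[state] += 1
        action_visits[(state, a)] += 1
        q_value[(state, a)] += (ret - q_value[(state, a)]) / action_visits[(state, a)]
        return ret

    for _ in range(n_iterations):
        simulate(root_state, 0)
    return max(actions_fn(root_state),
               key=lambda a: q_value.get((root_state, a), float("-inf")))
```

The other integration point the first review mentions, using the learned policy in the action selection phase, would instead bias the `ucb` term with `policy_fn`'s probabilities (PUCT-style); that is a design choice, not something dictated by the reviews.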
The authors propose a new algorithm, contrastive entity linkage (CEL), to identify duplicates and variations of entities in catalogues. The authors introduce the concept of base entities and entity variations. A base entity is defined using a set of attributes, and all variations must have the same values for the base attributes, but differ in the non-base attributes. The key idea is to mine significant phrases from the unstructured attributes of a record, such as the title of a product. The significant phrases are added as a new attribute, and a classifier is trained using the new "variational" attribute. Experiments show that inclusion of the variational attribute improves entity resolution results. pros: - the work is in an important area as entity resolution of near duplicates remains a challenging task. - the method is unsupervised so it can be easily applied in new domains - the method improves the performance of any ER system (as it defines a new feature) cons: - the distinction of base and variational attributes is unclear in practice (see below) - no discussion of hyper-parameter tuning - hard to replicate: no reference to open code, several details not fully specified The key contribution of the paper is the VarSpot algorithm to identify variational attributes (contrast features). The main idea is to mine word ngrams whose frequency is larger than expected based on the frequency of the individual parts of the ngram. This idea is similar to the significant terms query in ElasticSearch. The evaluation focuses on three datasets: Amazon/Google software products, groceries, and musicbrainz/lastfm. The evaluations show that the contrast features improve the entity resolution performance on all three datasets for identification of duplicates and variants. The evaluation compares results with and without the contrast features, showing that the three ER systems considered in the evaluation (SILK, Magellan and DeepMatcher) benefit from the contrast features. In all experiments random forest consistently outperforms logistic regression, so it doesn't seem useful to include both. The algorithm has two hyper-parameters: the threshold alpha to prune the significance of an ngram, and the length of the ngrams. The paper does not discuss how these hyper-parameters were optimized, or the sensitivity to them. The distinction between base and variational attributes is unclear. In many cases, unstructured fields such as titles or descriptions may include both base and variational attributes (how are they distinguished?). Also, variational attributes may appear in structured fields too (e.g., memory size can be a structured attribute). In these cases it is unclear how the ngrams are identified. This part should be made clearer in the paper. In summary, the paper presents an interesting variant of an old problem, and presents a simple method to extract a useful feature from the unstructured attributes in records. The evaluation shows promising results, but is not thorough, as it should evaluate the hyper-parameters that are used in constructing the feature. The paper is clearly written and accessible to a wide audience. The related work is incomplete, as there isn't a related work section or a discussion of relevant work on mining significant phrases. <doc-sep>The paper is about finding variational attributes for catalogue named entities. An example of such an attribute is "capacity" for a memory card (e.g. Sandisk flash drive "64GB"), and identification of such attributes helps in duplicate detection in e-commerce cataloging and search.
The proposed approach is unsupervised: they first detect some candidate entity variations (pairs of entities with high similarity scores) and then, in each pair, detect the "significant" phrase that "contrasts" one entity from the other as the contrastive feature. The significant phrases (ngrams) are estimated exhaustively from a corpus with a PMI-like metric. The authors experiment with three entity linking systems with diverse architectures, ranging from rule-based to logistic regression and neural-based models, on three domains (music, grocery and software catalogs). Results are promising and show that most systems benefit from these features. The paper is mostly well written: the problem has been defined and motivated well and the approach is presented with a smooth structure and flow. However, the presentation of results and analysis is unclear in parts. The novelty of the approach is modest and is mostly around the detection of contrast features and significant phrases. These approaches, which show promising improvements against the baseline, could be expanded with more recent efforts in using deep semantic representations in NER and extraction of multi-word expressions. Experiments are fairly extensive and support the proposed approach well. The analysis is not extensive and should be improved. Altogether, I found the paper an interesting work in progress which requires improvements in (a) novelty and (b) analysis. Questions and suggestions: 1. Evaluation of candidate pairs is based on data that is annotated in a post-extraction fashion (the annotator labels the output of the system). So if you don't have a gold-standard set (all possible pairs), how do you compute "recall" there (to compute the F score in Table 4)? 2. Did you experiment with richer models of semantic similarity using embeddings, etc.? 3. Despite the preceding explanation of the notation elements, the formal definition of the 3-way entity linkage is not easy to understand and doesn't connect with the rest of the section. 4. In the core extraction of significant phrases and contrast features, ngram frequency is the major factor (along with some thresholding). It is not clear why the authors are comparing their interpretability against "frequent phrases" (which are a fairly similar approach). Please elaborate more on the Table 3 comparison. 5. Please provide more details and analysis on the results of the three-way classification, especially around the confusion matrices. Are the improvements similar for the different classes? What kinds of duplicates does the CF model extract that the No-CF model doesn't, etc.? 6. How would you analyze the last column of Table 3 (higher rate of incorrect class for contrast features)? Post-rebuttal comment: After reading the authors' responses to my and other reviewers' comments and also checking the new draft, I am going to raise my rating of the paper. Thanks for your willingness to improve your work. <doc-sep>Update after response: Thanks to the authors for responding to my comments, in particular adding the parameter analysis. While I still wonder about the limited scope of the solution, the research is nicely done and the problem formulation is now clearer. ------------- Pros: - novel problem definition - experiments on multiple datasets - nice usage of the feature extractor in multiple downstream systems Negatives - problem formulation does not include some assumptions - potential lack of generalizability This paper describes the problem of entity resolution in an environment where there is a wide variety of entity variations.
I thought this was a rather novel problem formulation. They introduce the notion of contrastive entity linking to solve this problem. In particular, they define a blocking mechanism for identifying entity variations and a feature extraction algorithm for identifying entity attributes that are core to the entity or that are part of the entity's variation. These can then be used to drive a classifier. My main criticism of the paper is its potential lack of generalizability. While it's applied in three different domains, the datasets are essentially of the same kind, namely, product databases which already contain unique entities. From my reading, this assumption is not stated in the problem definition. The problem definition could be more precisely worded. In section 3.3, two assumptions are stated about catalogs, namely, that they ensure that records refer to distinct entities and that entity variations (i.e. record variations) are more similar to each other than base entities. These are important assumptions that make the problem much easier than what was outlined in the problem definition. In terms of evaluation, the paper didn't seem to report a number of critical parameters, namely, the bucket size threshold and the similarity threshold used during the experiments. I appreciated the experimental setting of using the feature extractor in a number of downstream entity linking systems. There are a couple of pieces of related work. First, for entity resolution I think this approach bears similarity to [1]. There's been quite a bit of work in the NLP community on identity (see e.g. [2]) that would be useful to discuss. Overall, I thought the paper was a nice contribution. Minor comments: - The paper was easy to read. - It would be good to check the usage of the words record, entity and product; they get confused in places. - It would be nice if the annotated data were also made available with the paper. [1] Zhu, Linhong, Majid Ghasemi-Gol, Pedro Szekely, Aram Galstyan, and Craig A. Knoblock. "Unsupervised entity resolution on multi-type graphs." In International Semantic Web Conference, pp. 649-667. Springer, Cham, 2016. [2] Recasens, Marta, Eduard Hovy, and M. Antònia Martí. "Identity, non-identity, and near-identity: Addressing the complexity of coreference." Lingua 121.6 (2011): 1138-1152. | This paper addresses the problem of unsupervised duplicate resolution of attributes for e-commerce and proposes a new approach for this, which the authors call "contrastive entity linking". Overall, the reviewers agree that the paper deals with an important problem, and that it is well-written and motivated.
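As a concrete illustration of the PMI-like significance scoring the reviews describe (keep an ngram if it occurs more often than its unigram parts would predict, subject to a threshold alpha), here is a minimal sketch for bigrams over product titles. The exact scoring function and threshold semantics are my guess at the general idea, not the paper's VarSpot definition.

```python
import math
from collections import Counter

def significant_bigrams(titles, alpha=1.0, min_count=2):
    """titles: list of strings (e.g., product titles).
    Returns bigrams whose observed frequency exceeds the frequency expected
    from their unigram parts, scored with a PMI-style statistic and pruned
    with the threshold alpha."""
    unigrams, bigrams, total = Counter(), Counter(), 0
    for t in titles:
        tokens = t.lower().split()
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
        total += len(tokens)

    scored = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        p_observed = c / max(total - 1, 1)
        p_expected = (unigrams[w1] / total) * (unigrams[w2] / total)
        pmi = math.log(p_observed / p_expected)
        if pmi > alpha:
            scored[(w1, w2)] = pmi
    return scored

# Toy example: on real catalogue data, phrases like "64 gb" score highly
# and become candidate values of the "variational" attribute.
titles = ["sandisk cruzer 64 gb usb flash drive",
          "sandisk cruzer 16 gb usb flash drive",
          "sandisk ultra 64 gb usb flash drive"]
print(significant_bigrams(titles, alpha=0.0))
```

A sensitivity sweep over `alpha` and the maximum ngram length on a held-out validation split would be the natural way to address the hyper-parameter concern raised in the first review.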
- Good results with a simple baseline, compared to several relevant existing works - Seemingly good overview of the current literature (I am not knowledgeable in either auto-encoders or unsupervised anomaly detection) The inference and post-processing section (3.3) is way too short and does not allow one to fully understand the ins and outs of the supervision. For instance, I am confused by the "foreground mask $F$" at inference. What is this? How does $f_{MF}$ work? I also think that the authors could motivate some of the design choices a bit more, such as the training regimen and the choice not to mix different noise scales. I would also have liked to see a discussion of the (total) training time, training stability, and inference of the different methods. <doc-sep>* The focus on improving reconstructions to help UAD is a simple and logical one. The information bottlenecks in VAEs are known to affect reconstructions, and hence also UAD. By focusing on using skip connections (U-net) and designing suitable noise, this work shows promising results. * The paper is well written, with a balanced overview of relevant literature. * Baseline experiments are shown with appropriate methods, and the performance improvements are considerable. * The noise generation model is simple and effective; the study showing the influence of noise magnitude and coarseness provides additional insight. * The simplicity of the noise model is actually a strong point in this work. However, one can't help but wonder if there would be more gains if a better-suited noise model could be used for this work. Have the authors considered other noise models that perhaps take the spatial information into account? Or, at the other extreme, would these results hold for noise models used in self-supervised learning, such as random masking? * How are the optimal thresholds obtained for the Dice computation? Is the test set used for choosing this threshold, as mentioned here: > Secondly, we calculate ⌈Dice⌉, a Dice score which measures the segmentation quality using the optimal threshold for binarization found by sweeping over possible values using the test ground truth. > * Will the same denoising strategy work for other types of anomaly detection? The BraTS dataset is challenging, but as the experiments also show, thresholding + MF does sufficiently well. Could this be the case because of the multi-modality nature of the dataset? * Source code for the work is not available. I encourage the authors to provide a repository so that the results can be reproduced. <doc-sep>Although (as the authors state) DAEs have been used before in the context of anomaly detection, they did not receive as much attention as alternative models like VAEs or GANs. The authors are the first to propose a UAD method based on DAEs that improves upon multiple existing baselines through simple modifications of the noise generation process. This result is potentially interesting for the community. Apart from that, the manuscript shows that a different post-processing further improves a previously published thresholding baseline, which remains relevant for future UAD papers with brain MRIs. In general, the paper is well-written and easy to follow. The proposed method is evaluated only on a single dataset, which is not enough to judge the generality of an unsupervised anomaly detection (UAD) method. While the results showcase the potential of DAEs in UAD and are thus still interesting for the community in my opinion, adding another dataset as done in many related works (e.g.
MS lesion datasets to see if the proposed method works with smaller anomalies, too) would be important to determine how practical the method is. After all, the goal of UAD methods is to detect *multiple* types of anomalies after training on a healthy cohort, so hyperparameter settings tuned on a single anomaly type annotated in the validation set may not work for other anomalies. Apart from the question of whether the hyperparameters chosen with the annotated validation set generalize to other datasets/anomalies, I think the comparison with baselines could be made more transparent by describing the hyperparameter selection and tuning budget more clearly. Was each baseline tuned manually, or were values adopted from related work? | This paper presents an in-depth analysis of denoising autoencoders for anomaly detection. The reviewers raised some initial questions in their comments, which were mostly addressed by the authors in their rebuttal. All reviewers now agree that the paper is ready for publication at MIDL. Based on their recommendation, I'm happy to accept this work.
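For readers who want to see what the noise magnitude and coarseness knobs discussed in these reviews might look like in practice, one common way to realize them is to sample low-resolution Gaussian noise, upsample it to the image size, and at test time use the reconstruction residual as the anomaly map. The sketch below is an assumption about that general recipe rather than the paper's exact implementation; `model` stands for any trained autoencoder.

```python
import torch
import torch.nn.functional as F

def coarse_noise(batch, magnitude=0.2, coarseness=16):
    """Additive Gaussian noise whose spatial scale is controlled by
    `coarseness`: noise is sampled on a grid downscaled by that factor and
    bilinearly upsampled to the input resolution, then scaled by `magnitude`."""
    b, c, h, w = batch.shape
    low = torch.randn(b, c, max(h // coarseness, 1), max(w // coarseness, 1),
                      device=batch.device)
    noise = F.interpolate(low, size=(h, w), mode="bilinear", align_corners=False)
    return batch + magnitude * noise

def anomaly_map(model, batch):
    """Pixel-wise residual between input and reconstruction, used as the
    anomaly score map in reconstruction-based UAD."""
    with torch.no_grad():
        recon = model(batch)
    return (batch - recon).abs().mean(dim=1, keepdim=True)  # (b, 1, h, w)

# Training step sketch: reconstruct the clean image from its corrupted version.
# loss = F.l1_loss(model(coarse_noise(images)), images)
```

With such a setup, the magnitude and coarseness sweeps (and the single-scale versus mixed-scale question raised in the first review) reduce to varying two arguments.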
This paper extends the "infinitely differentiable Monte Carlo gradient estimator" (or DiCE) with a better control variate baseline for reducing the variance of the second order gradient estimates. The paper is fairly clear and well written, and shows significant improvements on the tasks used in the DiCE paper. I think the paper would be a much stronger submission with the following improvements: - More explanation/intuition for how the authors came up with their new baseline (eq. (8)). As the paper currently reads, it feels as if it comes out of nowhere. - Some analysis of the variance of the two terms in the second derivative in eq. (11). In particular, it would be nice to show the variance of the two terms separately (for both DiCE and this paper), to show that the reduction in variance is isolated to the second term (I get that this must be the case, given the math, but it would be nice to see some verification of this). Also, I do not have good intuition for which of these two terms dominates the variance. - I appreciate that the authors tested their estimator on the same tasks as in the DiCE paper, which makes it easy to compare them. However, I think the paper would have much more impact if the authors could demonstrate that their estimator allows them to solve new, more difficult problems. Some of these potential applications are discussed in the introduction; it would be nice if the authors could demonstrate improvements in those domains. As is, the paper is still a nice contribution.<doc-sep>Thank you for an interesting read. This paper extends the recently published DiCE estimator for gradients of SCGs and proposes a control variate method for the second order gradient. The paper is well written. The experiments are a bit too toy, but the authors did show significant improvements over DiCE with no control variate. Given that control variates are widely used in deep RL and Monte Carlo VI, the paper can be interesting to many people. I haven't read the DiCE paper, but my impression is that DiCE found a way to conveniently implement the REINFORCE rules applied infinitely many times. So if I were to derive a baseline control variate for the second or higher order derivatives, I would "reverse engineer" from the exact derivatives and figure out the corresponding DiCE formula. Therefore I would say the proposed idea is new, although fairly straightforward for people who know REINFORCE and baseline methods. For me, the biggest issue of the paper is the lack of explanation of the choice of the baseline. Why use the same baseline b_w for both control variates? Is this choice optimal for the second order control variate, even when b_w is selected to be optimal for the first order control variate? The paper has no explanation of this issue, and if the answer is no, then it's important to find an (approximately) optimal baseline for this second order control variate. Also, the evaluation seems quite toy. As the design choice of b_w is not rigorously explained, I am not sure the better performance of the variance-reduced derivatives generalises to more complicated tasks such as MAML for few-shot learning. Minor: 1. In DiCE, given a set of stochastic nodes W, why did you use marginal distributions p(w, \theta) for a node w in W, instead of the joint distribution p(W, \theta)? I agree that there's no need to use p(S, \theta) that includes all stochastic nodes, but I can't see why using the marginal distribution is valid when the nodes in W are not independent. 2.
For the choice of b_w discussed below eq (4), you probably need to cite [1][2]. 3. In your experiments, what does "correlation coefficient" mean? Normalised dot product? [1] Mnih and Rezende (2016). Variational inference for Monte Carlo objectives. ICML 2016. [2] Titsias and Lázaro-Gredilla (2015). Local Expectation Gradients for Black Box Variational Inference. NIPS 2015.<doc-sep>In this paper, the author proposed a better control variate formula for second-order Monte Carlo gradient estimators, based on a special version of DiCE (Foerster et al., 2018). The motivation and the main method are easy to follow and the paper is well written. The author followed the same experimental setting as DiCE, numerically verifying the advantages of the newly proposed baseline, which can estimate the Hessian accurately. The work is important due to the need for second-order gradient estimation in meta-learning (Finn et al., 2017) and multi-agent reinforcement learning. However, the advantage of the proposed method is not verified thoroughly. In the only real application demonstrated in the paper, the same performance as with the second-order baseline can be achieved using a simple trick. Since this work only focuses on second-order gradient estimation, I think it would be better to verify its advantages in various scenarios such as meta-learning or sparse-reward RL, as the author suggested in the paper. Finn, Chelsea, Pieter Abbeel, and Sergey Levine. "Model-agnostic meta-learning for fast adaptation of deep networks." ICML 2017. Foerster, Jakob, et al. "DiCE: The Infinitely Differentiable Monte-Carlo Estimator." ICML 2018. <doc-sep>Overview: This nicely written paper contributes a useful variance reduction baseline to make the recent formalism of the DiCE estimator more practical in application. I assess the novelty and scale of the current contribution as too low for publication at ICLR. Also, the paper includes a few incorrect assertions regarding the control variate framework as well as action-dependent baselines in reinforcement learning. Such issues reduce the value of the contribution in its current form and may contribute to ongoing misunderstandings of the control variate framework and action-dependent baselines in RL, to the detriment of variance reduction techniques in machine learning. I do not recommend publication at this time. Pros: The paper is well written modulo the issues discussed below. It strikes me as a valuable workshop contribution once the errors are addressed, but it lacks enough novelty for the main conference track. Issues: * (p.5) "R_w and b_w are positively correlated by design, as they should be for variance reduction of the first order gradients." This statement is not true in general. Intuitively, a control variate reduces variance because when a single estimate of an expectation of a function diverges from its true value according to some delta, then, with high probability, some function strongly correlated with that function will also diverge with a similar delta. Such a delta might be positive or negative, so long as the error may be appropriately modeled as drawn from some symmetric distribution (i.e., is Gaussian). Control variates are often estimated with an optimal scaling constant that depends on the covariance of the original function and its control variate. Due to the dependence on the covariance, the scaling constant flips sign as appropriate in order to reduce variance for any delta.
For more information, see the chapter on variance reduction and the subsection on control variates in Sheldon Ross's textbook "Simulation." The fact that a control variate appears to work despite this is not surprising. Biased and suboptimal unbiased gradient estimators have been shown to work well for reasons not fully explored in the literature yet. See, for example, Tucker et al.'s "Mirage of Action-Dependent Baselines", https://arxiv.org/abs/1802.10031. Since the authors claim on page 6 that the baseline is positively correlated by design, this misunderstanding of the control variate framework appears to be baked into the baseline itself. I recommend the authors look into adaptively estimating an optimal scale for the baseline using a rolling estimator of the covariance and variance to fix this issue. See the Ross book cited above for the full derivation of this optimal scale (the standard result is also restated in the note following this paper's reviews). * The second error is a mischaracterization of the use and utility of action-dependent baselines for RL problems, on page 6: "We choose the baseline ... to be a function of state ... it must be independent of the action ...." and "it is essential to exclude the current action ... because the baselines ... must be independent of the action ... to remain unbiased." In the past year, a slew of papers have presented techniques for the use of action-dependent baselines, with mixed results (see the Mirage paper just cited), including two of the papers the authors cited. Cons * Much of the paper revisits the DiCE estimator results, arguing for and explaining those results again rather than referring to them via a citation. * I assess the novelty of the proposed contribution as too low for publication. The baseline is an extension of the same method used in the original paper, and does not generalize past the second order gradient, leaving the promising formalism of the DiCE estimator as infinitely differentiable still unrealizable in practice. * The experiments are practically identical to those in the DiCE estimator paper, also reducing the novelty and contribution of the paper. *EDIT: I thank the authors for a careful point-by-point comparison of our disagreements on this paper so that we may continue the discussion. However, none of the points I identified were addressed, and so I maintain my original score and urge against publication. In their rebuttal, the authors have defended errors and misrepresentations in the original submission, and so I provide a detailed response to each of the numbered issues below: (1) I acknowledge that it is common to set c=1 in experiments. This is not the same as the misstatements I cited, verbatim, in the paper that suggest this is required for variance reduction. My aim in identifying these mistakes is not to shame the authors (they appear to simply be typos) but simply to ensure that future work in this area begins with a correct understanding of the theory. I request again that the authors revise the cited lines that incorrectly state the reliance of a control variate on positive correlation. It is not enough to state that "everyone knows" what is meant when the actual claim is misleading. (2) Without more empirical investigation, the authors' new claim that a strictly state-value-function baseline is a strength rather than a weakness cannot be evaluated. This may be the case, and I would welcome some set of experiments that establish this empirical claim by comparing against state-action-dependent baselines.
The authors appear to believe that state-action-dependent baselines are never effective in reducing variance, and this is perhaps the central error in the paper that should be addressed. See response (3). Were the authors to fix this, they would necessarily compare against state-action-dependent baselines, which would be of great value for the community at large in settling this open issue. (3) Action-dependent baselines have not been shown to be ineffective. I wish to strongly emphasize that this is not the conclusion of the Mirage paper, and the claim repeated in the authors' response (3) has not been validated empirically or analytically, and does not represent the state of variance reduction in reinforcement learning as of this note. I repeat a few key arguments from the Mirage paper in an attempt to dispel the authors' repeated misinterpretation of the paper. The variance of the policy gradient estimator, subject to a baseline "phi," is decomposed using the Law of Total Variance in Eq (3) of the Mirage paper. This decomposition identifies a non-zero contribution from "phi(a,s)", the (adaptive or non-adaptive) baseline. The Mirage paper analyzes under what conditions such a contribution is expected to be non-negligible. Quoting from the paper: "We expect this to be the case when single actions have a large effect on the overall discounted return (e.g., in a Cliffworld domain, where a single action could cause the agent to fall off the cliff and suffer a large negative reward)." Please see Sec. 3, "Policy Gradient Variance Decomposition", of the Mirage paper for further details. The Mirage paper does indeed cast reasonable doubt on subsets of a few papers' experiments, and shows that the strong claim, mistakenly made by these papers, that state-action dependence is always required for an adaptive control variate to reduce variance over state dependence, is not true. It should be clear from the discussion of the paper to this point that this does _not_ imply the even stronger claim in "A Better Second Order Baseline" that action dependence is never effective and should no longer be considered as a means to reduce variance from a practitioner's point of view. Such a misinterpretation should not be legitimized through publication, as it will muddy the waters in future research. I again urge the authors to remove this mistake from the paper. (4) I acknowledge the efforts of the authors to ensure that adequate background is provided for readers. This is a thorny issue, and it is difficult to balance in any work. But since this material represents a sizeable chunk of the paper and is nearly identical to existing published work, it alone leads me to lower the score for the novelty of the contribution. Perhaps the authors could have considered placing the extensive background materials in the appendix and instead summarizing them briefly in the body of the paper, leaving more room for discussion and experimental validation beyond the synthetic cases already studied in the DiCE paper. (5), (6) In my review I provided specific, objective criteria by which I have assessed the novelty of this paper: the lack of original written material, and the experiments being nearly identical to those in the DiCE paper. As I noted in response (4) above, this also reduces the space for further analysis and experimentation. | This paper extends the DiCE estimator with a better control variate baseline for variance reduction. The reviewers all think the paper is fairly clear and well written.
However, as the reviews and discussion indicate, there are several critical issues, including the lack of explanation of the choice of baseline, the lack of more realistic experiments, and a few misleading assertions. We encourage the authors to rewrite the paper to address these criticisms. We believe this work will make a successful submission with proper modifications in the future.
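For reference, the textbook control-variate facts the fourth review appeals to (see, e.g., Ross's "Simulation") can be stated in two lines. For an estimator $f(X)$ and a correlated variate $h(X)$ with known mean $\mathbb{E}[h(X)]$, the controlled estimator

$$\hat f_c \;=\; f(X) - c\,\bigl(h(X) - \mathbb{E}[h(X)]\bigr)$$

is unbiased for any $c$, and its variance, $\operatorname{Var}(f) - 2c\operatorname{Cov}(f,h) + c^2\operatorname{Var}(h)$, is minimized at $c^* = \operatorname{Cov}(f,h)/\operatorname{Var}(h)$, which gives $\operatorname{Var}(\hat f_{c^*}) = (1 - \rho_{f,h}^2)\operatorname{Var}(f)$. Since $c^*$ inherits the sign of the covariance, variance is reduced for either sign of the correlation, which is the reviewer's point; $c = 1$ is a common but not necessarily optimal choice.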
The paper considers online algorithms for classic graph problems when predictions regarding the requests are provided to the algorithm. The key contribution of the paper is to define a new notion of error to measure the quality of predictions provided for such problems. The new error attempts to capture the intuition that if following the predictions leads to substantially worse performance than the optimal solution, then such predictions should suffer from large errors. To make this notion precise, the paper defines the error as the cost of a minimum-cost edge cover in an associated hypergraph, where the edge costs capture the amount of excess cost required to satisfy incorrectly predicted requests. For the online traveling salesman problem (and its dial-a-ride generalization), the paper provides a framework to “combine” a known robust online algorithm with an offline TSP algorithm on the predicted requests to obtain consistent and robust algorithms. As in much prior work, the algorithm is parameterized by an alpha that represents the trust of the decision maker in the predictions and simultaneously provides a competitive ratio of (1+alpha) if the predictions are correct, and also $O(\rho/\alpha)$ if the predictions are incorrect, where $\rho$ is the competitiveness of the base algorithm. Using the new error measure, the paper also reanalyzes the algorithm by Azar et al. for online Steiner tree/forest with predictions and shows that it obtains a total cost of at most O(1) OPT + f(k, error) [as opposed to the logarithmic dependence on the error in the multiplier for OPT in Azar et al.], yielding tighter guarantees for certain predictions. The paper is well motivated and the new error formulation is pretty intuitive for graph problems. Overall I quite like the paper and find the contributions to be non-trivial and interesting. The paper is well-written and reads well. The experimental section is also well written and the experiments are pretty comprehensive for a primarily theoretical paper. None <doc-sep>This paper introduces a new notion of error for algorithms with predictions in the context of online metric graph problems. The predictions considered are the set of all requests, i.e., the offline problem instance. At a high level, the "cover error" introduced in this paper measures the optimal solution cost on the set of requests that appear in the prediction but not in the real instance, and vice versa. This is formalized as a min-cost bipartite hypergraph cover problem to account for these two types of error via "detours" from the true instance. The authors give instantiations of this general error measure for online TSP, online Steiner tree/forest, and online facility location, and develop algorithms that use predictions and whose performance is parameterized by the cover error. Strengths - The cover error has several benefits, which the authors do a good job of highlighting. In particular, it accounts for the asymmetry in the impacts of errors in which we predict requests that aren't there and errors in which we miss real requests in our predictions. - The authors develop algorithms parameterized by the cover error for various online graph problems, achieving bounds that can significantly improve on the worst case if the cover error is small.
Weaknesses - Naturally, more complex error measures can more precisely capture the performance of algorithms with predictions. I worry that in some sense the definition of this error measure almost makes it a tautology that there exist good algorithms if the error is small. More basic error measures may be loose in certain respects, but they tell us something clear about the relation between the prediction and the instance, which we may or may not be able to leverage algorithmically: they have some meaning somewhat separate from the algorithmic task. It seems that saying the cover error is small is basically an algorithmic fact: there is a small set of errors which we can cover by work which we must do anyway. I think the discussion of limitations is adequate. <doc-sep>This paper gives a new error measure for online algorithms with predictions for metric problems of two kinds: (a) online TSP and Dial-a-Ride, and (b) online Steiner tree and forest. The two categories differ in their notion of online: the first category has requests released over (continuous) time, while the second category just has an online sequence of requests but no actual notion of time. The error measure is innovative and tries to address some shortcomings of previous measures. In particular, the paper compares to two papers from AAAI 22 and SODA 22. The first paper uses set differences to characterize error, and the non-erroneous part of the prediction needs to exactly match the actual input locations. The second paper relaxes this notion by allowing the common set between predictions and actual input to not match exactly and uses a minimum cost matching between these points to quantify the quality of the match. At a high level, the deficiency being addressed in this paper is that this minimum cost matching (or the stricter requirement that the common part matches exactly) forces the cardinality of the common parts of the predicted and actual sets to be equal. So, for example, if there are many real input locations that are all close to a single predicted location, the previous error measures have large error, while one can argue that the prediction is actually a good one in a qualitative sense. To address this, the new error measure introduced in this paper (roughly speaking) replaces the edges in the matching with hyperedges with one point on one side (predicted or actual) and multiple points on the other side. The cost of the hyperedge is defined in a natural way based on the problem at hand. Using this new measure, the paper gives the following results: (a) for online TSP and Dial-a-Ride, it gives online algorithms that degrade gracefully with the new notion of error, and (b) for Steiner tree and forest, it shows that the algorithm from the previous SODA 22 paper gracefully degrades with this notion of error. Although the algorithm in (b) is the one from the previous paper, the analysis with respect to the new error is new. Strengths: 1. I think the online algorithms with ML predictions (or more generally data-driven algorithms) area is a very important one. This paper is exactly in the direction that is particularly important for this area, which is to explore new paradigms and frameworks that are natural and have interesting technical challenges.
As the paper correctly claims, there is no unanimity on what the correct measure of error is (as against fairly universal acceptance of the notions of consistency and robustness) and this is a significant handicap because each problem requires its own specialized techniques to showing smoothness bounds with error. So, I really like what the paper sets out to do, which is to give a reasonable notion of error that applies to many (at least metric) problems, and then give somewhat general techniques for obtaining smooth algorithms with respect to this error measure (at least techniques that extend beyond single problems). 2. The algorithms for online TSP and Dial-a-Ride with predictions are new. These are important problems in their own right, so initiating the study of online algorithms with predictions for these problems is an important step irrespective of the particular error measure being used here. Weaknesses: 1. I did not completely understand why the paper eliminates the difference sets in the error measure that existed in the previous definitions. For instance, if the predictions are completely bogus, then it seems the actual location of the predictions should not matter. In particular, one should be able to recover robustness guarantees from error dependent bounds, but that does not seem to be the case here. I suspect it is possible to keep the current definition but only apply it to a common part (chosen in an adversarial manner) and allow difference sets outside the common part, thereby recovering robustness. There is no explicit discussion of limitations. This can be included, perhaps in the conclusions section. <doc-sep>The paper studies learning-based online graph problems, focused on online TSP and its variants, online Steiner tree and forest, and online facility location. In each problem, there is a sequence of node requests arriving online, and an algorithm must move or select certain edges to serve these requests. Moving or buying edges come with costs, so the goal is to minimize the total cost, measured against the offline OPT. Classic algorithms are focused on the worst-case competitive analysis and generally do not consider real world data distributions. In the algorithms with prediction model, the algorithm is instead provided with a predicted sequence of requests. We’d like to show that if the prediction is good, a learning-augmented algorithm can improve upon the classic worst-case guarantee. Otherwise, the algorithm should be robust and retain a worst-case competitive ratio. The main contribution of the paper is a new error metric of the learned predictions (based on hypergraph edge cover). The paper claims that this metric bypasses some difficult cases of prior related work, and provides algorithms that work well given good predictions under this metric. Empirical evaluations demonstrate that the algorithms achieve good performance on real data. Strengths: ----- The paper gives a novel notion of error metric of learned predictions in online graph problems. Given the “dummy request example” (line 189 below), I think this new notion is reasonably motivated. In particular, this example demonstrates a difficult case where the prediction is conceptually good, but not captured by the error metric of recent work, including Azar et al. [Online graph algorithms with predictions. SODA 22]. The technical claims in the full paper are sound (but I have not checked the proof details in the supplement). The paper also gives experimental evaluations. 
Weaknesses: ------ The paper is not well written, particularly the early sections. Some statements appear vague and only get specified (much) later, and sentences hard to parse. Below, I list a few for the author(s) to fix. I find they seriously obstruct my reading. * Line 90: “Since we can use cost functions based…” This sentence seems very vague to me. Is it a feature that prior work such as [12] doesn’t have? My understanding is that the matching-based error of [12] is also rather general and independent of the problem (but can be applied to all these node-arrival online graph problems). * Line 92: “Further, it allows to integrate both actual and predicted release dates for online-time problems.” How? Up to this point, the cover error has not been fully specified, so I couldn’t tell why this is true. * Line 97: “Although previously studied…” Regarding this dummy requests example, I suggest it’s written concretely right here in the intro. (I get that it’s defined later in line 189 and below) * C* is undefined in Theorem 1. * Line 131: “These bounds hold simultaneously for [...] the bounds provided in [12]” — What does this mean? (I get that it holds for any k, but not the second part.) Is it suggesting that these bounds recover some of the results in [12]? Finally I suggest that the definition of the cover error be given earlier in the intro section. Minor errors: ----- Line 77: “In all our problems the cost of a hyperedge [...]” In my first reading, this sentence feels long and a bit hard to parse. (What does “anchor” mean here?) Line 80: “E.g….” This is not a complete sentence. Line 85: “unavoidably” -> “unavoidable” Line 89: I suggest using an itemize here to list these points Line 90: “problem-independent” -> “problem-independently” Figure 1: maybe state that the numbers on the nodes are just indices, and do not correspond to the order of the request sequence. Overall, I believe the paper is technically novel and has potential to make an impact. However, as it currently stands, it likely requires significant revision to improve its presentation. I have listed some concrete suggestions above. | "Algorithms via (ML-based) predictions"---especially for online problems---is a young, fast-growing, important area. Of course, the predictions will usually not be perfect and will involve some sort of error. As this area is nascent, it is vital to develop and analyze different forms of error and for various fundamental problems, which this paper does well. In particular, this work develops a new notion of error for two types of "metric" problems in the above genre: online TSP and Dial-a Ride, and online Steiner tree/forest. The first type has arrivals over continuous time, while the second has an online request-sequence (as is typical in the algorithmic study of online problems). The error measure addresses some shortcomings of previous measures, and compares to recent works from Xu et al. (AAAI '22) and Azar et al. (SODA '22). Xu et al. use set-differences to characterize error; the non-erroneous part of the prediction has to exactly match the input locations. Azar et al. relax this by allowing the common set between predictions and actual input to not match exactly and use the cost of a min-cost matching between these points to quantify the extent of the match. The gap addressed in the present paper is that these two types of works force the cardinality of the common parts of the predicted and actual sets to be equal. 
The present paper's error measure essentially replaces the matching with hyperedges e, each with one vertex on one side (predicted or actual) and multiple vertices on the other side; the cost of e is defined based on the problem. This paper develops the following results parametrized by this error: for online TSP and Dial-a-Ride---online algorithms that degrade gracefully with the error, and for Steiner tree/forest---showing that the algorithm of Azar et al. does indeed degrade gracefully with this error. The paper was generally appreciated by the reviewers; the authors are encouraged to take the review comments into account.
The paper proposes a way to learn set functions (i.e., find a subset $S \subseteq V$ that maximizes some utility function $F_\theta$ that we want to learn) when we are only given examples of optimal subsets $S^*_i \subseteq V_i$ from an optimal subset oracle. This is different from other works that learn from function value oracles that provide actual utility values $f_i$ for specific subsets $S_i$. The paper proposes a permutation-invariant architecture. It makes the problem tractable through mean-field variational inference and further amortizes inference via an additional neural network. It reports great empirical results compared to several baselines. --- Thanks for addressing my questions and updating the paper! This was requested as an emergency review. I will try my best to offer a meaningful review (ofc), but I'm very much looking forward to discussing with the other reviewers. The approach seems original (I am not very well acquainted with prior literature in this area though). The quality and clarity of the paper is great overall, and it seems like a significant contribution. I very much like the argument that it is easier to acquire data for the optimal subset oracle than the usual function value oracle in many practical cases, as trying to obtain calibrated utility values from human labellers sounds improbable. Thus, I think this idea is of great practical importance. The empirical validation looks sensible. I recommend acceptance with low confidence given the nature of this review and that my background is not aligned with the subfield. The "broader impact" should be removed from the "Limitations & Broader Impact" paragraph title because it does not address broader societal impact (which is okay, but the title is misleading). <doc-sep>The paper addresses the problem of learning set functions. There are two variations of this problem: one is with a function value (FV) oracle, in which the supervised data takes the form of (set $S_i$, function value $f_i$ of set $S_i$) and the goal is to learn the function mapping. One has to gather this information for a large number of sets, making the process prohibitively expensive, and often the FV oracle is not feasible. The main focus of the paper is on the variation of the problem with an Optimal Subset (OS) oracle, wherein the data takes the form of $(V_i, S_i^*)$ pairs where $V_i$ is a subset of $V$ (the global candidate set) and $S_i^*$ is the optimal subset of $V_i$ that maximizes the utility. This setup implicitly captures the FV oracle and is more practical. Given such data from the OS oracle, they propose a method based on variational inference to learn the mapping. The first step is to cast the problem as maximum likelihood estimation by replacing the utility function $F_{\theta}(S,V)$ with a probability distribution $P_{\theta}(S|V)$ such that the probability of $S$ given $V$ is proportional to the utility. They list some basic (natural) properties that such a distribution should satisfy: a) permutation invariance, b) varying ground set, etc. Further, they want to have a minimum prior assumption and a scalable learning algorithm. They give a distribution based on energy-based modeling (EBM) which satisfies the aforementioned properties and admits efficient training and inference. Directly learning with the EBM is difficult for reasons such as the intractable partition function; to circumvent this difficulty they propose approximate maximum likelihood learning using a variational approximation of $P_{\theta}(S|V)$ by a product of independent Bernoulli distributions.
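To make the construction above more tangible, here is a rough sketch of a permutation-invariant energy $F_\theta(S, V)$ with a mean-field Bernoulli approximation of $P_\theta(S|V)$. Module names, sizes, and the update rule are my own guesses for illustration, not the authors' implementation; in the paper this inner loop is further amortised by an additional network that predicts the Bernoulli parameters directly, which is what makes the approach scalable.

```python
import torch
import torch.nn as nn

class SetEnergy(nn.Module):
    """Permutation-invariant utility F_theta(S, V) in DeepSets style."""
    def __init__(self, d_in, d_hid=128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, d_hid))
        self.rho = nn.Sequential(nn.Linear(d_hid, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, 1))

    def forward(self, V, mask):
        # V: (n, d_in) element features; mask: (n,) soft/hard membership in S.
        pooled = (self.phi(V) * mask.unsqueeze(-1)).sum(dim=0)  # sum-pooling gives permutation invariance
        return self.rho(pooled).squeeze(-1)                     # scalar utility (negative energy)

def mean_field_probs(energy, V, n_steps=10):
    """Coordinate-ascent-style mean-field update of Bernoulli membership
    probabilities psi, so that q(S|V) = prod_i Bern(s_i | psi_i) roughly
    tracks P_theta(S|V) proportional to exp(F_theta(S, V)).
    (Soft masks are used as a relaxation of the expectation; a sketch only.)"""
    psi = torch.full((V.shape[0],), 0.5)
    for _ in range(n_steps):
        for i in range(V.shape[0]):
            inc, exc = psi.clone(), psi.clone()
            inc[i], exc[i] = 1.0, 0.0
            gap = energy(V, inc) - energy(V, exc)   # marginal gain of including element i
            psi[i] = torch.sigmoid(gap).detach()
    return psi  # select S, e.g., as the top-k elements or those with psi > 0.5
```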
The method is evaluated on product recommendation, set anomaly detection, double MNIST, CelebA, compound selection in AI-aided drug discovery, and other real-world datasets. The results show that the proposed method significantly outperforms the baselines for learning set functions. Strengths: S1: It studies the problem of learning set functions under the OS oracle, which is practically important but has been addressed by a limited number of works. S2: The reduction of this problem to maximum likelihood and later to a variational approximation is novel and makes it amenable to the application of this well-developed machinery. S3: The empirical evaluation is extensive and beats the closely related baseline comprehensively. Weaknesses: W1: There are two steps of approximation that introduce their own approximation error. It is not clear how these errors propagate and affect the eventual solution. I believe that with more approximation layers the error (sub-optimality) is higher. W2: Lack of theoretical results to shed light on the sub-optimality of the proposed algorithm. I think the paper could be much better with these. yes <doc-sep>This paper introduces EquivSet, an algorithm for learning set functions that satisfies the following desiderata: permutation invariance, varying ground set, minimum prior, and scalability. EquivSet learns set functions under the optimal subset oracle using a maximum likelihood paradigm. Specifically, the authors use an energy-based model (EBM) to define the set mass function. The usage of the EBM satisfies the minimum prior requirement. A DeepSets-style architecture is also used to satisfy the permutation invariance constraint on set functions and to handle sets of arbitrary cardinality. However, learning EBMs over sets introduces some difficulties, which the authors alleviate by proposing a mean-field variational inference approach in the style of variational autoencoders. Amortization is also used to ensure scalability. The proposed method is well motivated and tackles the set representation learning problem from a perspective not yet considered. The EBM approach and the variational inference approach introduce more complexity compared to the widely used DeepSets and Set Transformer models. Additionally, Set Transformer, which is trained similarly to DeepSets, is not used as a baseline. Set Transformer is a simple baseline that normally outperforms DeepSets and is easy to train. Hence I recommend the authors include it in the baselines. Finally, it seems the SAB and ISAB layers of Set Transformer could be used in the model in place of the DeepSets-style architecture. Many set representation papers show that these attention-based backbones perform much better than DeepSets. **The authors have answered my questions satisfactorily, hence I increase my initial score.** Limitations are sufficiently outlined in the Limitations and Broader Impact section. <doc-sep>This paper proposes a learning framework for set functions. It combines a set of procedures and methods to obtain a unique set of properties for the learned function, as listed in the introduction. Strength: This paper appears to be sound. It is well-written. Algorithms and procedures are clear. Weaknesses: My main concerns are the limited experimental results, and also the assumptions made about the data distribution and their applicability in practice. The authors have not discussed whether their assumptions about the data apply to the datasets they have used in their experiments.
Experiments presented in Tables 2 and 3 seem to be limited. Can authors please explain how they chose these particular set of datasets? Results on more datasets could make the paper more convincing. Yes, the discussion seems adequate. <doc-sep>The authors tackle the task of set function learning under a weakly supervised setting, where the data points consist of sets and their optimal subsets. The proposed model combines an energy-based model with amortized variational inference to allow for several features (permutation invariance, varying grounds sets, a minimum prior approach, as well as scalability) and is further extended by a Gaussian copula variant to allow for correlation modelling. The model is evaluated on several data sets from different domains. ## Strengths - The overall model is well motivated. Each of the desired requirements for the model is discussed in detail and tackled in a principle manner. - The overall storyline of the paper is well executed. The individual developmental steps and their problems are presented in a coherent structure. - The paper includes a wide range of experiments. ## Weaknesses - The reported decimal places in Table 1/2/3 suggest a measurement precision that is not valid given the reported standard deviations, which give us a precision of only the first two decimals for most reported settings. ## Other - The number of runs and whether standard deviation or standard error are reported should be included in the caption of the tables. The runs are mentioned in the text, but the deviation vs error is not at all discussed (also not in the appendix). - l302 claims "two real-world datasets". Why this specific formulation for MNIST/CelebA when the other experiments are also on real-world data? - (very minor) l78 "delicate framework": The wording here feels suboptimal. It can be read in its negative connotation of being a rather fragile framework that requires great care to be trained. The rest of the paper does not suggest this to be the case, allowing for the positive meaning of the word. However, in the introduction, the reader does not know yet which of the two meanings in his/her mind the paper should be read. ### Typos - l41 lacked $\\to$ lacking Limitations and societal impact are discussed. | Reviewers have expressed strongly in favour of acceptance, two improving their score after the rebuttal and discussion. I’m happy to recommend acceptance. |
The paper presents "a decentralized KG representation learning approach", named decentRL, which encodes each entity from and only from the embeddings of its neighbors. This approach can therefore account for new entities that have no known features to initialize their embeddings, but that do have known links to other entities in the graph. The main contributions: - The paper outlines the decentralized attention network (DAN), an adaptation of the graph attention network (GAT). GAT considers the direct neighbors of an entity in generating its embedding, and computes attention scores based on the similarity between each neighbor and the focus entity. Assuming that no meaningful embedding may be available for new entities, DAN represents each entity via the embeddings of its direct 1-hop neighbors. It then considers its 2-hop entity neighbors for computing attention-based embeddings. - The paper further adapts the optimization process - alternately optimizing the representation of the target entity and its neighbors. Experimental results are presented on the tasks of entity alignment across Knowledge Graphs (KGs) and entity prediction (KG completion), using two datasets per task. Competitive performance is obtained in the general case, with improvements when new entities are considered. Pros: - The approach is sensible, intuitive, and yields good results in practice. Cons: - My main concern is the review of related work, which seems partial; e.g., the discussion is focused on GAT, but the best competing method in the results section is AliNet, which is nowhere described. Also, there exist other methods like DeepWalk which also rely solely on structure information. - The paper does not read easily. - It is not clear if the code and data will be released. At least, the dataset with the train-test split should be released for comparison purposes. Comments and typos: - Can you motivate the open-world scenario, where only structure information is available for new entities, and they have no known features? - Table 1: write Hits@1 instead of H@1 - robuster --> more robust - Oppositely --> In contrast?<doc-sep>=== Summary === This paper proposes a "decentralized" method for representation learning in knowledge graphs that doesn't explicitly depend on a learned embedding for the entity node of interest, e_i. Rather, the embedding for e_i is constructed in a distributed fashion (similar in motivation to the distributional hypothesis/skip-gram word embeddings) from its neighbors via a second-order attention mechanism. The main idea is that this is better for "cold start" problems in which unknown entities might have no features, which makes building any representation that explicitly depends on entity-centric features hard. === Strengths === - Relevant and timely topic, as leveraging knowledge graphs with initially unknown entities etc. is an important problem (especially with constantly growing KGs). - Some mixed empirical results w.r.t. other compared models, but generally positive. === Concerns === - It's unclear to me why a purely "decentralized" representation is desired, especially if the experiments aren't purely on unknown-entity settings.
A natural baseline to me would seem to be a standard GCN/GAT with "entity dropout", i.e., during training time you introduce entities for which some redundancy must be gained by its neighborhood. Another approach would be to use a framework reminiscent of label propagation to deal only with imputing missing features at test time. Fully dropping all entity-specific features seems like overkill, and potentially harmful. After all, at test time, a majority of the KG will be known. - In general, in addition to the comment above, though I am not intimately familiar with their details, it appears that none of the considered baseline methods (AliNet, RSN, etc) are specifically designed to accommodate missing entities. As this is a major claimed contribution, it would be useful to either clarify this, or explain how it is handled differently in these networks. - I'm a bit perplexed by the consistently underperforming H@10 results relative to AliNet---and I'm not sure I find the justification in the paper (data augmentation) that convincing (can you clarify why this would explain worse performance @10, but not @1 or MRR?). - Overall, though there might be something here (I am willing to be convinced otherwise...), the paper fails to convince me of its significance at this stage. I feel that it would benefit greatly from an overall more compelling re-write. === Minor Comments === - In terms of readability, Table 2 is a bit too small. - Lemmas 1 and 2 don't add much to the paper in my opinion---I would recommend moving them fully to the appendix. === Response After Rebuttal === I thank the authors for their responses to my comments. After reading the response as well as the other reviews, I still stand by my original rating. I still find the motivation and empirical results non-compelling, given the current version of the paper.<doc-sep>This paper presents a method for knowledge graph embedding based on graph attention networks (GAT). The key idea is to avoid using the information for a node (i.e., its representation vectors) when computing the attention weights for the neighbors of the node. The paper argues that this approach can better generalize to unseen nodes where no pre-defined features/information is available. As such, the paper does not include the representations for a node $e$ from prior layers in the aggregations to compute $e$'s representations in the next layers, leveraging the representation vectors of the nodes from prior layers to obtain attention weights for the current layer. The paper also proposes to a self-learning method to learn the parameters by optimizing the mutual information of the final and initial embedding vectors for the nodes. A distillation approach is also employed to use the initial embedding vectors as the teachers and the final embedding vectors as the students. The proposed method is applied to two downstream tasks, i.e., entity alignment and entity prediction, leading to competitive performance with many prior works (the learned node embeddings still need to be aligned using task-specific losses). Some experiments on unseen entities and ablation studies are also conducted to demonstrate the benefits of the proposed method. Overall, this paper is well written. It introduces an extension of GAT to address the unseen entity issue and the experiments seem to demonstrate its benefit. However, the motivation and approach of the paper should be better justified to make it more convincing. I have several comments/questions as follow: 1. 
The technical novelty of the paper seems incremental, as the DAN mechanism is a simple extension of GAT, while mutual information maximization and distillation are already applied in prior work. 2. The paper seems to assume that for unseen entities, although pre-trained embeddings are not available, unseen entities are still connected to some seen entities, so unseen entities' embeddings can still be obtained via the averages of the embeddings of the seen neighbors. As such, how do we handle two unseen entities that are neighbors of each other? More importantly, as the current method does not use any information specific to the nodes, e.g., node content (so only the connections of nodes are employed), can we just include the unseen entities in the graph and retrain the whole model? This is certainly more expensive, but as the paper is mainly considering downstream task performance, this might be a method to address unseen entities in this work. In general, as node embeddings are initialized randomly without considering node content in this work, the proposed method for unseen entities does not seem significant and convincing to me. 3. Relatedly, as the node embeddings $\mathbf{e}_i$ are randomly initialized, what kind of knowledge can $\mathbf{g}_i$ expect to learn from considering $\mathbf{e}_i$ as a teacher? In general, without guidance from some specific downstream task, it is unclear which information the model would learn when the training finishes. Maybe the auto-distiller should be jointly trained with downstream tasks, and the model could be better justified with ideas from the information bottleneck? A discussion of the connection between the proposed method and the information bottleneck would also be helpful. 4. How does the performance change if we directly use $\mathbf{e}_i$ (not its copy) in Equations (7) and (8)? This might help to better justify the model design.<doc-sep>This work proposes a GNN-based model for learning KG embeddings purely from the embeddings of each entity's neighbours, which would enable learning entity embeddings for previously unseen entities. The model is based on a modified version of a graph attention network (GAT) which only considers the embeddings of neighbouring nodes. Despite the advantage of decentRL over existing approaches in its applicability to previously unseen entities, and despite the results on entity alignment and entity/link prediction showing a lot of promise, I've found the paper quite hard to follow due to factual inaccuracies and English style and grammar issues, which is why I believe it is not ready for publication in its current form. I encourage the authors to revise the paper and resubmit to the next big ML/graphs conference. Detailed comments and questions below: Sec. 1\\ The statement "then TransE and many KG embedding models (Wang et al., 2014; Dettmers et al., 2018; Nguyen et al., 2018; Kazemi & Poole, 2018; Sun et al., 2019), learn representations in a Skip-gram (Mikolov et al., 2013a) manner" is factually incorrect, since the data and learning objective for learning word embeddings differ greatly from those for learning KG embeddings. Analogical properties of word embeddings are an implicit by-product of the skip-gram training objective, whereas in e.g. TransE this property is explicitly imposed on relation representations through its score function. The score function of other KG embedding models mentioned (e.g. ConvE, Dettmers et al.
2018 or SimplE, Kazemi & Poole 2018) is of a different type and does not even impose the analogical property of relations. Sec. 3.1\\ "Intuitively, if e_i is the embedding of an unseen entity, it is rarely useful in computing the attention scores (as it is just a randomly initialized vector). Thus, purely relying on its neighbors may be a good choice." - Aren't the neigbours randomly initialised as well? Sec. 3.2\\ After being introduced, n_i is not used anywhere. What is its role in Eq. 3? Sec. 5.4 - As one of two main results sections, entity prediction results which include standard link prediction models (Tables 7 and 8) should be moved from the Appendix into the main body. - Given that TransE and DistMult haven't been state-of-the-art for quite a while, it would be interesting to see how decentRL performs with recent state-of-the-art link prediction models, such as TuckER or RotatE. Other comments: Writing quality should be improved, as the paper is hard to follow. Please have your submission proof-read for English style and grammar issues. ===============================================================================================================\\ After rebuttal:\\ I have read the authors' response, but since the actual body of the paper has not changed much from the original submission, I stand by my original rating. | This paper brings interesting ideas (decentralized setting, auto-distillation) but it does not meet the very high requirements that a publication at ICLR requires. Three main reasons for that: 1/ Motivation & justification: Ultimately the paper is advocating for a pure decentralized approach "which encodes each entity from and only from the embeddings of its neighbors" with the main motivation being to represent better on unseen entities at training. This is quite radical and leads to a complex model and training procedure for a benefit and justification that are not very clear. Are there that many unseen entities in general? What would periodically retrain the whole model do? The computational cost associated to DecentRL should be discussed with regards to that. Some implementation details in appendix A.2 seems rather critical and are not motivated. 2/ Missing comparisons and references: as noted by several reviewers, it would be helpful to have comparisons of other methods that are dealing with missing entities. Some much simpler heuristics could be tried for instance (retraining the model, averaging neighbors, etc.). A discussion with DeepWalk, that is really an adaptation of CBOW for KG should also be added. 3/ Clarity could be improved. Thanks to reviewers' comments, the clarity has increased but could still be worked on as noted by several reviewers. For instance, the analogy with CBOW right in the intro is confusing: in the 2nd paragraph, CBOW is used as a common manner for methods that are limited, but in the 3rd paragraph, CBOW is also used as an intuition for DecentRL. Some content from supplementary material like the description in A.1 would add a lot of clarity if added earlier. We encourage the authors to use the many comments from the reviewers to improve further the paper. |
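For readers trying to picture the "decentralized" attention mechanism debated in the decentRL reviews above, here is a schematic sketch of an aggregation step in which an entity's embedding is built only from its neighbours, and the attention query likewise avoids the (possibly randomly initialised) embedding of the entity itself. This is my reading of the idea, not the authors' code; the real DAN reportedly uses the 2-hop neighbourhood to score the 1-hop neighbours and stacks several such layers, which I collapse into a simple neighbour mean here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecentralizedAttention(nn.Module):
    """Aggregate an entity purely from its neighbours' embeddings.

    Unlike a standard GAT layer, the query used for the attention scores is
    the mean of the neighbour embeddings rather than the embedding of the
    centre entity itself, so an unseen entity with a random embedding can
    still receive a meaningful representation.
    """
    def __init__(self, d):
        super().__init__()
        self.W = nn.Linear(d, d, bias=False)

    def forward(self, neighbour_emb):            # (num_neighbours, d)
        query = neighbour_emb.mean(dim=0)        # context built without the centre entity
        keys = self.W(neighbour_emb)             # (num_neighbours, d)
        att = F.softmax(keys @ query, dim=0)     # (num_neighbours,) attention weights
        return att @ neighbour_emb               # new embedding of the centre entity
```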
The paper proposes a novel adversarial transferability attack (i.e. an adversarial attack where a surrogate model is used to attack an unknown model). The proposed method works by modifying the iterative procedure used to find adversarial examples, such that it tends to find adversarial examples in flatter regions of the loss surface. The authors conduct a thorough study of the method. The paper is well written, and the evaluation is reasonably good and thorough. Results show that the method helps to improve the transferability of adversarial examples. Nevertheless there are a few things in the evaluation that could be improved: 1. The authors use I-FGSM as one of the baseline attacks, but it’s not clear whether they do random restarts (as described in https://arxiv.org/abs/1706.06083 ). It would be useful to clarify this, and if the authors don’t do random restarts, then to add an attack with random restarts. 2. Most evaluation is done on undefended models. It would be interesting to study attack performance when the source and/or target model is adversarially trained. While the authors mention a few defended models from Tramer et al. 2018, it has since been shown that ensemble adversarial training is not a particularly strong defence. It’s much better to perform multi-step PGD adversarial training, as in https://arxiv.org/abs/1812.03411. Note that it has been shown that the denoising used in https://arxiv.org/abs/1812.03411 is not a good defense; nevertheless, the authors of that work do provide an adversarially trained model without denoising. Reasonably good paper. There are a few potential improvements to the evaluation procedure. <doc-sep>This paper proposes an adversarial attack, RAP, to boost the transferability of adversarial examples, based on the intuition of a connection between the flatness of the loss landscape and model generalization. Experimental results show that the proposed attack is more effective against real-world APIs. The paper is easy to follow and the motivation is clear. However, the technical novelty of the paper is limited. The proposed attack method mainly leverages a min-max framework to search for regions that may have high adversarial transferability loss and then minimizes the loss for those regions so as to achieve “flat” loss regions. The algorithm used to solve the min-max optimization is standard and follows existing work. There is no convergence guarantee, nor any transferability guarantee. Empirically, from Table 3, the targeted attack success rate improvement is usually not significant. Since the paper aims to propose a more transferable attack, it would be interesting to see if it can attack existing defenses, including the gradient-obfuscated ones (comparing with the BPDA attack [1]) and adversarially trained models. [1] Athalye, Anish, Nicholas Carlini, and David Wagner. "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples." International conference on machine learning. PMLR, 2018. Overall the paper is well motivated, and the problem statement is clear. However, the technical novelty is limited and more baselines for improving adversarial transferability need to be compared with. It would be interesting to see if the proposed method can boost adversarial transferability for different tasks, such as object detection, as well. <doc-sep>This paper focuses on the transferability of adversarial examples and proposes to boost the transferability by reversing the adversarial perturbation. The motivation is that the flatness of the loss landscape can help to alleviate the overfitting to surrogate models, and thus improve the transferability.
Specifically, instead of purely minimizing the adversarial loss at a single adversarial point, this paper injects the worst-case perturbation at each step of the optimization procedure. Experimental results show that the proposed method surpasses the SOTA transfer-based attack methods by a clear margin, which demonstrates its effectiveness. Strengths: - The writing is clear and the proposed method is easy to follow. - The idea of seeking a flat region of the loss landscape to improve transferability is novel. Instead of purely minimizing the adversarial loss at a single adversarial point, the paper proposes to find a flat region of the loss landscape by bi-level max-min optimization, so as to eliminate the overfitting problem of adversarial attacks. - The experimental results demonstrate the effectiveness of the proposed method. By injecting the proposed attack method into existing attack methods, the paper improves the transferability of adversarial examples by a large margin in both untargeted and targeted attacks. Weaknesses: - The evaluation of defense methods is somewhat limited; the paper only considers ensemble adversarial training, and many advanced defense methods are missing (such as feature denoising [1] and NRP [2]). - A drawback of the proposed method is that it requires conducting T update steps in the inner maximization, which increases the overhead of adversarial attacks. Suggestions: - In Section 4.2, the paper plots visualizations of the loss landscapes of targeted attacks; the loss landscapes of untargeted attacks are also needed. [1] Feature denoising for improving adversarial robustness. CVPR 2019. [2] A self-supervised approach for adversarial robustness. CVPR 2020. Overall, the paper proposes a novel adversarial attack, which is effective and easy to follow. I think the paper is marginally above the acceptance threshold, although there are some weaknesses. <doc-sep>This paper proposes a min-max formulation to improve attack transferability. The key idea is motivated by the fact that the smoothness of the loss landscape could improve model generalization ability. Thus, a reverse adversarial noise (for landscape smoothing) is injected so as to compensate for the effect of an adversarial attack. Numerous experiments are provided to demonstrate the effectiveness of the reverse adversarial perturbation (RAP). Strengths: + The bi-level min-max formulation is novel for generating transferable attacks. + There are extensive experiments involving a variety of model architectures and baselines. Weakness: - The relation between the flatness of the loss landscape with respect to (w.r.t.) the input and model generalization should be more carefully studied. My doubt arises from the paper [https://openreview.net/pdf?id=BylKL1SKvr]. That work showed that model transferability (from the source domain to the target domain) relates to the smoothness of the loss landscape, but this conclusion holds for the loss landscape w.r.t. model parameters. I understand why the authors draw a connection between the flatness of the loss landscape and model generalization in the context of attack generation. However, has this been well studied, and is it believed to be a grounded conclusion? - The use of a min-max formulation for attack generation has been studied in [https://arxiv.org/pdf/1906.03563.pdf]. This related work should be covered and discussed in Related Work.
- If the min-max formulation (2) is replaced with EoT (expectation over transformation), namely, n^{adv} is randomly generated from e.g., Gaussian distribution, then this will lead to an EoT-type baseline. It will be better to cover this baseline as well to demonstrate the superiority of bi-level formulation. - Min-max attack generation could be difficult to tune. Thus, the computation time and hyper-parameter setups of the proposed approach should be clearly stated. - If the n^{adv} is regarded as model noise to flatten the loss landscape, then how does the attack perform compared to RAP? This is another baseline to verify the usefulness of the flatness of loss landscape w.r.t. `input' rather than 'model parameters'. I think this is an Okay submission, but several technical and experimental questions remain. | This paper studies the transferability of adversarial attacks in deep neural networks. In particular, it proposes the reverse adversarial perturbation (RAP) method to boost attack transferability by flattening the landscape of the loss function. The reviewers acknowledge the strengths of the paper, which include effectiveness of the simple RAP method proposed and the extensive experimentation presented. However, a number of outstanding concerns still remain. Some of them include the technical novelty of the paper, insufficient theoretical justification of the proposed method, lack of grounded justification between flatness of the loss landscape and model generalization under the specific context of attack transferability, similarity of the optimization problem with some existing work, potential difficulties of the min-max attack generation problem, among others. As it stands, this is a borderline paper that is reasonably good, but not great. Addressing the outstanding concerns will make the paper more ready for publication in ICLR. |
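For concreteness, the bi-level RAP procedure described in the reviews above can be sketched roughly as follows. This is an illustrative PGD-style implementation under my own assumptions about step sizes, projection, and losses, not the authors' code; `loss_fn` is the adversarial objective the attacker minimises, e.g. the cross-entropy to the target class for a targeted attack, or the negative true-class cross-entropy for an untargeted one.

```python
import torch

def rap_attack(model, loss_fn, x, y, eps=8/255, steps=40,
               inner_steps=8, eps_n=8/255, alpha=2/255):
    """Sketch of a transfer attack with a Reverse Adversarial Perturbation.

    Outer loop: update the adversarial perturbation delta.
    Inner loop: find a worst-case (loss-increasing) perturbation n_adv around
    x + delta, so that minimising the loss at x + delta + n_adv pushes delta
    towards a flat region of the loss landscape.
    (Clipping to the valid pixel range is omitted for brevity.)
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Inner maximisation: reverse perturbation that *increases* the adversarial loss.
        n_adv = torch.zeros_like(x, requires_grad=True)
        for _ in range(inner_steps):
            loss = loss_fn(model(x + delta.detach() + n_adv), y)
            grad, = torch.autograd.grad(loss, n_adv)
            n_adv = (n_adv + alpha * grad.sign()).clamp(-eps_n, eps_n).detach().requires_grad_(True)
        # Outer minimisation: standard attack step taken at the shifted point.
        loss = loss_fn(model(x + delta + n_adv.detach()), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()
```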
This paper proposes a new text-to-image generation method that aims to tackle entangled textual inputs via multi-tailed word-level initial generation, to create region-contextualized text representations for region-aware image refinement, and to introduce an iterative multi-headed mechanism that allows multiple different changes at each stage of refinement. --- Strengths --- 1. Entangled sentence-level representations and region/semantic-insensitive image refinement are two noticeable issues in the text-to-image generation task. I am glad that the authors would like to tackle these problems in a unified framework. 2. Experiments validate the performance of each proposed component to a certain extent. 3. This paper is generally well-written and easy to follow. --- Weaknesses --- 1. The disentanglement of word-level attributes at the level of generating a small-size image may not be a necessity, in my opinion. Word-level features can be modulated onto the visual features in many ways other than disentangling the sentence-level input. I would like to see more evidence for this assumption. Moreover, the performance of the disentanglement was not fully validated; the proposed MTWIG looks like a redundant representation of the sentence-level features, rather than a ``disentangled'' representation. 2. The spatial dynamic memory is an extension of DM-GAN with more region-contextualization; its design may just be one of several choices to enable region awareness, and may not be the only way to fulfill it. The authors should provide more discussion of the motivation, design logic, and/or advantages of the proposed SDM, as well as of the other two proposed modules. 3. Moreover, the three proposed components are loosely related to each other, without a concrete idea that integrates them together. This paper is overall well-written and easy to follow, but the ablation study and discussions may be inadequate to validate the performance, properties, and novelty of the proposed modules. <doc-sep>The paper proposes a new method to tackle the text-to-image generation challenge. The authors identify a potential problem: current methods only use the sentence embedding at the beginning of the network to generate initial images, where different attributes may be entangled and hard to refine during the following stages. Based on this, the authors propose three components to address this limitation. Strengths: 1. The authors identify a potential entanglement problem in current methods that use the sentence embedding at the beginning. 2. A spatial dynamic memory module is proposed to capture the contextual information within image regions. Weaknesses: 1. The paper introduces the problem that current methods only use the sentence embedding at the beginning of the network to generate the initial image, where attributes may be entangled. I am confused about this assumption. I think these methods use the sentence embedding at the beginning mainly because the initial image resolution is quite small, and the sentence embedding is enough to produce the rough image. Then, these methods utilise word-level information to improve the details. These methods can also be modified to incorporate word-level embeddings at the very beginning of the network, e.g., at an image resolution of 4 x 4. It would be better if the authors could add experiments to support this assumption by verifying the existence of entanglement and the need to use word-level embeddings at the beginning. 2.
In MTWIG, the generations from different couples of words share the same conv and upsample blocks (if I am not right, please correct me); can this operation achieve the disentanglement claimed by the authors? Also, the following fusion operation combines these different features into a single image feature. Can this operation cause entanglement? 3. The proposed spatial dynamic memory module is based on the memory module introduced in DM-GAN, and the main difference shown by the authors is the utilization of region-contextualized representations. For me, the convolution operator can somehow also capture regional information. Could we achieve similar performance by adjusting the size of the convolution operator in the pixel-level representation? 4. I am confused about the FID score of DF-GAN (33.29 for COCO) reported in the paper, which is different from the value claimed in the DF-GAN paper (21.42 for COCO). 5. How is the size of the region used in the spatial dynamic memory module decided? Is the 8 × 8 grid map enough to capture the contextual information in the image? The current version of the paper may need experiments to support the claims made in the paper. <doc-sep>This paper is motivated by the observation that most existing text-to-image methods suffer from three limitations, and solutions are proposed to address the different limitations. Firstly, it introduces multi-tailed word-level initial generation to enhance the global sentence representation with distinct n-gram word representations. Second, the spatial dynamic memory module is proposed to create a separate region-contextualized text representation for each image region. Finally, it introduces an iterative multi-headed mechanism to make multiple distinct modifications to the prior image features. Strength: 1. MTWIG improves the initial generation with word-level representations, alleviating the entanglement of words in the sentence-level representation. 2. The region-specific word representation realizes the alignment between the text and image modalities, and the proposed three-level query in SDM is interesting. 3. The experiments can verify the effectiveness of the proposed modules to an extent. Weakness: 1. The MTWIG method seems unstable. As shown in Table 4 (A.2), MTWIG(ks=1) is better than MTWIG(ks=2) in terms of FID score on the CUB dataset, but MTWIG(ks=3) is better than MTWIG(ks=1). This may be because LSTM and TextCNN-like networks are poor at capturing n-gram word information. 2. The training epochs are different, e.g., between the experiments shown in Table 1 and Table 3. Thus, we cannot convincingly verify the effectiveness of the different components by fairly comparing the proposed method with the SOTA methods. 3. The COCO dataset has more examples than CUB. When the dataset is small, it is easy to produce over-fitting. The authors should consider performing an ablation study on COCO to verify the effectiveness of the different modules. This paper has a clear description of the three identified problems, but the ablation studies are not sufficient. For details, please see the weakness part of the main review section. From the experiments, it can be seen that the stability of the model is poor. The reproducibility of training complex models with a huge number of parameters is also an important consideration. | The reviewers' evaluations of this paper are borderline/negative. The AC considered the reviews, rebuttal, and the paper itself, and concurs with the reviewers.
The AC found that the paper is an extension of the previous work DM-GAN (DM-GAN: Dynamic Memory Generative Adversarial Networks for Text-to-Image Synthesis, CVPR 2019, https://arxiv.org/pdf/1904.01310.pdf). This work uses word features in addition to sentence features at the first stage of generation, while DM-GAN and other previous work don’t use word features in the first stage, but use them in the later stages when the feature resolution is higher. The authors improve the dynamic memory in DM-GAN into a spatial dynamic memory, and also change the image refinement process in DM-GAN into an iterative refinement. The proposed multi-tailed word-level initial generation, spatial dynamic memory, and iterative refinement are incremental changes to DM-GAN. Moreover, the proposed structure almost doubles the parameter size of DM-GAN (shown in Table 2), yet the evaluation results on COCO are similar to DM-GAN with only minor improvements. It is not clear whether the performance improvement comes from the increased number of parameters or the architecture design. Especially on the CUB dataset, with its limited number of images, the model can easily overfit with a larger number of parameters. The proposed method shares a similar network structure and dynamic memory blocks with DM-GAN, except for a few changes. Overall, the AC finds this paper not suitable for acceptance at ICLR in its present form.
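As an aside, for readers wondering what the MTWIG(ks=1/2/3) ablation discussed in the reviews above might refer to: my guess is a TextCNN-style extraction of several n-gram-level text codes, one per kernel size, each of which would condition its own initial generation branch before the branches are fused. A purely hypothetical sketch, not the authors' architecture:

```python
import torch
import torch.nn as nn

class MultiTailWordFeatures(nn.Module):
    """Guessed sketch of n-gram ("multi-tailed") word-level conditioning.

    Each 1-D convolution with kernel size ks pools word embeddings into one
    n-gram-level text code; downstream, each code would drive a separate
    initial-generation branch.
    """
    def __init__(self, d_word, d_cond, kernel_sizes=(1, 2, 3)):
        super().__init__()
        self.tails = nn.ModuleList(
            nn.Conv1d(d_word, d_cond, ks, padding=ks // 2) for ks in kernel_sizes
        )

    def forward(self, word_emb):                  # (batch, seq_len, d_word)
        x = word_emb.transpose(1, 2)              # (batch, d_word, seq_len)
        return [tail(x).max(dim=-1).values for tail in self.tails]  # one code per kernel size
```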
- The authors address a very relevant problem for medical applications of deep learning. To my knowledge this is a quite unique approach. - The authors show convincing qualitative and quantitative evidence. - I felt that the explanations in 2.2. NormGrad were not really sufficient to understand the proposed approach (in particular without the original paper describing NormGrad itself). I would strongly recommend revising this part and make it easier to follow and more self-contained. - Maybe the authors could add some thoughts about how easily this would generalise to related examples or use cases. <doc-sep>- The paper has successfully applied a recent saliency detection method NormGrad in the context of medical image analysis which shows promising results. - The evaluation was carried out on a fairly large dataset and includes comparison with a few other baseline methods both qualitatively and quantitatively which show advantages. - Insufficient clarity in the description and the discussion of the method. I need to read the original paper in the reference (Rebuffi et al. 2020) to get a more comprehensive understanding of the approach. Please try to address the following points more clearly: - What's the motivation of using a virtual identity layer? Why does it help? - What's the motivation and benefit of using the Frobenius norm? How does the Frobenius norm explain the merit of NormGrad? - What's the difference between the unfold and the flatten operation (Figure 1)? - When combining multiple heatmaps, were the same type of virtual identity layers used to compute them? What's the motivation of using the geometric mean to combine them? - Insufficient clarity in the evaluation of the method - Dataset: in the validation and testing set, how many samples are with foreign objects, how many samples not? - Pointing Game accuracy: - To compute this metric, was the location of the maximum value in the whole saliency map checked, or the locations of the maximum value of each attended area in the saliency map? How does it deal with the situation where there are multiple foreign objects in the image? - what value was set to the offset to the bounding box annotation in the experiment? - Table 2: please explain the difference of accuracy on the setting Bias Single and Bias Combined. - Insufficient validation of the method - The problem setting in this paper is binary classification (with or without foreign objects in an image). Since an image can contain multiple foreign objects, it would also be interesting to know the precision and recall of the computed saliency maps. Such as how many true foreign objects are missed? How many false positive detections in an image? For example in Figure 2, some foreign objects are not detected, which makes me question the explainability of the method. Why does it attend some foreign objects, but not some others? <doc-sep>The paper has a clear aim and focus, is technically sound and mostly easy to follow. Even though the methods are selected from recent computer vision literature, their application to chest x-ray quality assessment seems to be novel and tackles a relevant clinical problem. The evaluation of hyperparameters as well as the comparison of NormGrad to other methods is sound and extensive, the quantitative results are impressive. It is worth mentioning that the authors evaluate the methods on an open dataset and that the code developed in the scope of this study will be made publicly available. 
This is highly appreciated and follows the open science spirit of MIDL! One downside of the paper is a lack of qualitative results. Even though the authors extensively discuss their evaluation results qualitatively, only one example image is given. It would be great to see more examples in the appendix, including some worst cases. Unfortunately, the mathematical descriptions of Grad-CAM and NormGrad (Sections 2.1 and 2.2) are hard to follow. Replacing some text descriptions by equations might clear this up a bit. Please also refer to the “Detailed Comments” section below. Another issue arises regarding the authors’ claim that “this work is the first that uses NormGrad in the medical imaging context” (page 2). In Section 4, the authors refer to the recent study of Wang et al. (2020) that also applied NormGrad in the scope of chest X-ray imaging. Hence, the claim should be adjusted and rephrased, as it clearly does not hold. <doc-sep>1. The results and message of the paper are very clear. The authors intended to demonstrate the superiority of NormGrad for IQA in chest X-rays versus other techniques - this is clearly demonstrated. 2. The paper is generally well written, the literature review is expansive and covers the state of the art well, placing saliency maps in context well and thus motivating the application of NormGrad. 1. The authors make note of claims by Wang et al. opposing their findings in Section 4, which is good. However, there is little explanation of why this occurs. What is it about the authors’ experiments, dataset and configuration which causes this? Further exploration would then permit the reader to ascertain the strength and merit of the method for IQA. | The paper receives unanimously positive reviews from four knowledgeable experts. They all agree that, though the saliency detection method NormGrad was not invented by the authors, its use in the context of medical image analysis shows promising results, which is clearly demonstrated by the authors. They also express some concerns, which are largely addressed by the authors in the discussions. I, therefore, recommend the acceptance of this paper.
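Since two of the reviews above ask what the virtual identity layer and the Frobenius norm contribute, here is my schematic understanding of the core NormGrad computation (a simplified reading of Rebuffi et al. 2020, not the authors' code): for a virtual identity layer inserted at a spatial feature map, the weight gradient at each position is the outer product of the backpropagated gradient and the activation there, and its Frobenius norm factorises into a product of two vector norms. Combining maps from several layers with a geometric mean, as asked about above, would then just be the element-wise product of the (resized) maps raised to the power 1/L.

```python
import torch

def normgrad_identity_saliency(features, loss):
    """NormGrad-style saliency at a virtual identity layer (simplified sketch).

    `features` is an intermediate activation of shape (B, C, H, W), typically
    captured with a forward hook.  The gradient of `loss` w.r.t. the weights of
    an identity 1x1 conv placed at this layer would be, at each spatial
    position p, the outer product g_p a_p^T; its Frobenius norm equals
    ||g_p|| * ||a_p||, which is used as the saliency of position p.
    """
    grad, = torch.autograd.grad(loss, features, retain_graph=True)
    return grad.norm(dim=1) * features.norm(dim=1)   # (B, H, W) saliency map
```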
(a) This belongs to the literature of implicit bias/inductive bias, which has gained a great deal of attention among theoretical enthusiasts with an optimization leaning. (b) The paper is carefully laid out and argued, and is at a nice level of clarity and precision. (c) The mathematical argumentation seems to me correct; however, I haven't checked it line-by-line. (d) The situation being studied is very, very special and doesn't much correspond to the big kahuna, deep learning. Nevertheless the intellectual clarity of this special case is quite appealing. (e) The implied conclusion seems rather special as well. From one viewpoint it says that if you start from the get-go with perfect separation of a particular strong form, then the future evolution of the training can never spoil things. This is a very weak statement, but I suppose if we can't get results here, in a very special case that we can understand well, then the general situation is truly hopeless. Specific Comments. (1) Why is this max-margin if your constraints only consider one class? It seems to be more a matter of finding a minimum-norm vector aligned with all training data of that class. It's unclear why the concept of separating margins comes in. (2) “Theory III: Dynamics and Generalization in Deep Networks” by Banburski et al. also considers general deep ReLU networks and shows that the resulting margins are max-margin—requiring only separability, not orthogonal separability. In addition, that paper uses traditional DE methods rather than relying on lesser-known extremal sector techniques. Can you discuss or highlight why the simpler example in this paper might lead to insights not found in the other paper? (3) While the paper says that it is not directly applicable to deep nets, it draws motivation from the popularity of that literature. In that spirit, to justify such an evocation, can you show at least one experiment on a non-synthetic dataset such as MNIST/CIFAR/etc. (perhaps even simplified with hand-engineered preprocessed features and subsetted to two classes) that would support the potential connection to deep learning? (4) Can you provide any evidence of why datasets would become orthogonally separable? Is there some feature engineering procedure that tends to produce orthogonal separation? (5) In Figure 1, the variance is strange: it shows one big outlier, but the plotted projection shows two roughly equal-magnitude directions of variation. (6) It is unclear how Definition 2 relates to strict extremal directions as defined by the sign patterns. (7) G should be clarified: what G is and what it represents should be explained to make the results more insightful. <doc-sep>This paper studies the inductive bias of gradient flow for two-layer ReLU networks for classification problems. Under an orthogonally separable data assumption, it is shown that each node of the ReLU network will converge to one of two directions that linearly separate the data. I think the inductive bias of neural network training is a very important research problem and the result of this paper looks interesting. However, I also have the following concerns about this paper. Perhaps the most obvious limitation of this paper is that the orthogonally separable data assumption is too strong. Under this assumption, the classification problem can be solved trivially: one can simply randomly pick a training example and use it as the parameters in a linear predictor.
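For reference, my understanding of the assumption in question (taken from the reviews rather than checked against the paper): the data $\{(x_i, y_i)\}$ are orthogonally separable if $\langle x_i, x_j \rangle > 0$ whenever $y_i = y_j$ and $\langle x_i, x_j \rangle < 0$ (or $\le 0$ in some statements) whenever $y_i \neq y_j$. This is exactly why the remark above works: for any fixed training pair $(x_k, y_k)$, the linear predictor $f(x) = y_k \langle x_k, x \rangle$ satisfies $y_i f(x_i) = y_i y_k \langle x_k, x_i \rangle > 0$ for every $i$ (under the strict version of the second condition), so a single randomly chosen training point already separates the whole training set.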
It seems to be highly unlikely that this assumption can be satisfied by any challenging real-world problems. Moreover, the current submission lacks discussion and explanation of their results: (a) The result of this paper seems to be weaker than the result in Lyu & Li (2020), while also requiring much stronger assumptions than Lyu & Li (2020). Note that the inductive bias given in Lyu & Li (2020) is in the form of a maximum margin KKT point of the ReLU network (as a *nonlinear* classifier), however, the result in this paper is more related to the maximum margin solution of linear models, which in general may be much worse than the margin achievable by wide neural networks. Therefore I guess the most straightforward question the authors should clarify is whether under their setting w^+ and w^- indeed gives a KKT point of the *nonlinear* maximum margin problem given by Lyu & Li (2020). (b) To my knowledge most of the inductive bias results for classification problems (cross-entropy/exponential loss) do not rely on specific initialization methods (except certain assumptions to guarantee achieving zero training error) (Soudry et al. (2018), Ji & Telgarsky (2019b), Lyu & Li (2020)). Therefore the authors may consider providing more explanation on why they require the specific initialization. (c) In Section 6 it is mentioned that Li & Liang (2018) contain the training in the neighborhood of the (relatively large) initialization. While this is to some extent true, I find this comment not very convincing. When studying inductive bias, it is natural to restrict the training to a fixed training dataset, i.e. to treat the online SGD in Li & Liang (2018) as finite sum SGD by considering a uniform data distribution over training samples. In this case, since Li & Liang (2018) considers classification with cross-entropy loss, the weights will eventually go to infinity and therefore will not stay in the neighborhood of initialization forever, as is shown in Lyu & Li (2020). This is also true for other classification results in the lazy training setting including [1,2,3,4]. It seems that a combination of these results mentioned above and the result by Lyu & Li (2020), which has been discussed in Ji & Telgarsky, 2020 and [5], can already imply a much stronger result compared to this paper. [1] Zou, Difan, Yuan Cao, Dongruo Zhou, and Quanquan Gu. "Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks." arXiv preprint arXiv:1811.08888 (2018). [2] Nitanda, Atsushi, Geoffrey Chinot, and Taiji Suzuki. "Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems." arXiv preprint arXiv:1905.09870 (2019). [3] Cao, Yuan, and Quanquan Gu. "Generalization bounds of stochastic gradient descent for wide and deep neural networks." NeurIPS 2019. [4] Ji, Ziwei, and Matus Telgarsky. "Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks." ICLR 2020. [5] Moroshko, Edward, Suriya Gunasekar, Blake Woodworth, Jason D. Lee, Nathan Srebro, and Daniel Soudry. "Implicit bias in deep linear classification: Initialization scale vs training accuracy." arXiv preprint arXiv:2007.06738 (2020). <doc-sep>This paper characterizes the implicit bias of gradient flow of two-layer ReLU networks on orthogonally separable data trained on the logistic loss. 
The problem of characterizing the implicit bias of gradient descent on neural networks is an important one, and while the authors do make fairly strong assumptions on the data (data corresponding to the different labels lie in separate orthants), the proof is novel, interesting and non-trivial. The proofs are carefully carried out and seemed correct as far as I could verify. A few questions: 1. Is it possible to characterize what the outer weights (a's) converge to? If yes, I would suggest that the authors include this either in the main theorem, or as a comment after the theorem. 2. Does a similar result hold if the network also has bias variables? 3. Can linearly separable data be made into orthogonally separable data (by appropriate pre-processing) and by also training a bias term? 4. How are the lambda_j's chosen in the near zero initialization? The current description of choosing lambda_j on page 2 is quite vague. 5. I would also urge the authors to add the additional assumption about the positive and negative examples spanning the entire space in Section 2 along with the other assumptions. <doc-sep>This paper studies the inductive bias of two-layer ReLU networks trained by gradient flow. The main challenge is to analyze the global convergence of the flow dynamics. Under a special assumption that the data are orthogonally separable, the paper shows that the dynamics converges to a unique max-margin solution. I find that the paper is well written, modulo some assumptions on the data that would be better to be made more rigorous. The overall quality is good. The novelty compared to the literature is that this paper provides a global analysis of the non-linear non-smooth dynamics without going into the over-parameterized regime. In Theorem 1 of the paper, what does it mean « For almost all such datasets? » What is the probability distribution of (X,y)? Is there any more precise condition on \\lambda, which controls the norm of the initial weights? In Lemma A.3, what does it mean « genetic position »? I think what is needed is to assume that the probability that all the x_i lie on some hyperplane is close to zero. This assumption is crucial for (43) to hold, therefore I think it should be made more precise. Or to put it in less probabilistic terms, one may assume that the maximal number of samples {x_i} that lie on some hyperplane is smaller than the dimension of x_i. For clarity, does the experiment in 5.2 use the same data (X,y) as in 5.1? Some typos or confusions of notations are listed below: - \\ell^+(t) right before (18), is confusing, as \\ell^+ is a function of \\theta, I would suggest to use L^+(t) = \\ell^+(\\theta(t)) - Change W+(t) -> W_(t) in (24) - \\ell’_i(t) in (40) and (41) are also confusing, write L_i’(t) = \\ell_i’(\\theta(t)) ? - Change \\Sigma[i,j] -> \\Sigma[j,i] in (42), (43), etc. | The paper shows that under a very restrictive assumption on the data, ReLU networks with one hidden layer and zero bias trained by gradient flow converge to a meaningful predictor provided that the network weights are randomly initialized with sufficiently small variances. While there is some overlap with a paper by Lyu & Li (2020), the paper under review establishes its results for networks with arbitrary widths whereas using the results of Lyu & Li (2020) works, at least so far, only for sufficiently wide networks. The assumption on the data is anything but realistic and actually any "simple, conventional" learning algorithm can easily learn in this regime.
Nonetheless, getting meaningful results for neural networks is still a notoriously difficult task and for this reason, the paper deserves publication. |
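To make the data assumption discussed in this row concrete, here is a short sketch using the standard notion of orthogonal separability from this line of work (the paper's exact definition may differ in details), together with the trivial classifier the reviewers allude to:

```latex
% Orthogonal separability: same-class points have positive inner products and
% different-class points have negative inner products,
\langle x_i, x_j \rangle > 0 \ \text{ if } y_i = y_j,
\qquad
\langle x_i, x_j \rangle < 0 \ \text{ if } y_i \neq y_j .

% Under this condition, any single training point x_k with label y_k already
% yields a linear predictor that classifies every training example correctly:
f(x) = y_k \, \mathrm{sign}\big(\langle x_k, x \rangle\big).
```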
The paper studies bilevel optimization with a nonconvex inner problem, proposes an algorithm for such a problem, and theoretically shows its asymptotic convergence. The paper provides a comprehensive study of the problem and the result is complete. The paper lacks numerical verifications, and I have doubts about the setting of the selection map; my intuition is that it is a mapping from $\\mathcal{X}$ to $\\mathcal{Y}$. <doc-sep>This paper studies bilevel optimization. The authors show that existing algorithms are subject to approximation errors. The authors introduce a simple correction to these algorithms for removing the errors. Strength: A good background survey of bilevel optimization. The writing is reasonable. Weakness: 1. It says that many existing algorithms for bilevel optimization suffer from approximation errors. But it only analyzes the "Unrolled Optimization Scheme". 2. Step 9 in Algorithm 1 is using approximation, which is not precise. How can one implement it? 3. Any concrete example to show the other algorithms' issues would be much better. 4. It only solves the bilevel optimization to a stationary point and is not able to solve it to global optimality. <doc-sep>The paper analyses bilevel optimization problems with a non-convex lower level problem through the concept of a selection map that chooses a particular solution to the lower problem. They use this notion to define a new selection map based on gradient flows and analyze the resulting games and the differentiability of the selection using a new set of tools in Morse theory. They finally use the analysis to propose an algorithm for differentiating through the inner problem even when the unrolled optimization of the inner problem is solved for a finite number of steps. **Strengths** 1. The paper is well written and was comprehensible even for a practitioner like me without much prior knowledge about the specific theoretical tools used to analyze such problems. It also provided a nice overview of the existing approaches which I really appreciated. 2. As a practitioner, I think the primary contribution of the paper was showing that the rough equivalent of the implicit function theorem is actually the theoretically sound approach to take even in the case where the inner problem is non-convex (which is what we used to do even otherwise, but it’s very useful to actually have a theoretical justification for it ;) ) under reasonable assumptions. To my knowledge this is the first paper showing this. **Weaknesses** 1. Although I realize that it’s probably unreasonable of me to expect experiments from a theory paper, I would have appreciated some experiments showing the practical impact of applying the gradient correction on some toy problems. Yes <doc-sep>Bilevel optimization problems present an ambiguous model when the lower-level problem has multiple solutions. Typical ways to deal with this ambiguity are to consider either the so-called "optimistic" or "pessimistic" versions of the problem. In contrast, this paper proposes the use of a selection map to resolve the ambiguity.
Under certain conditions, the authors show the selection map is differentiable which they claim makes the model amenable to numerical computation. The strengths of the paper are related to the technical rigor of the work. The results look reasonable and as would be expected under the specified conditions, though I did not check the proofs. The weaknesses of the paper are that the notation is unclear and, more significantly, that the relevance of the selection map model is questionable. The authors present the pessimistic-BG problem using non-standard notation of separately presenting an upper-level and a lower-level problem. The typical notation is to write a single optimization problem where the lower-level problem is included as a constraint in the upper-level problem. The authors' notation is understandable to readers with familiarity with bilevel optimization, but their notation would be unclear to readers not familiar with bilevel optimization. However, the notation that the authors use to present the selection map within the BGS problem is unclear. As currently written, the authors' notation is unclear, and the resulting model does not resolve the ambiguity that they seek to resolve. The authors' notation does resolve the ambiguity if an additional assumption, which is not explicitly stated in the paper, is made -- specifically that the $y$ variable within the selection map is a kind of "initial condition" or "state variable". Regardless of the above issue about the meaning of how the selection map is used, there is questionable relevance of this model of using a selection map to resolve ambiguity in bilevel optimization problems with lower levels that have non-unique solutions. The reason this model is questionable is that depending on how this selection map is chosen the results of the model can be completely different, and so the selection map approach does not really resolve any ambiguity in which lower-level solution is used in the upper-level problem. The authors present no evidence that this would be a meaningful model. Even if the authors provide evidence that the model is meaningful, it is questionable that this represents an important or major contribution. Based on my understanding of the authors' unclear notation, their idea of using a selection map is equivalent to merely defining a different (parametric) objective function for the upper-level problem. As such, the authors' problem is equivalent to an optimistic bilevel optimization problem with a different (parametric) objective function. The technical results are also arguably: not surprising, not particularly difficult to derive or prove, and not interesting. Not applicable | This paper studies bilevel optimization problems and proposes techniques for disambiguating cases where the lower-level objective has multiple optimal solutions. The main points the reviewers raise in favor of acceptance are that: 1. The paper provides theoretical justification for techniques used to solve ambiguous bilevel optimization in practice, and this theory leads to algorithmic improvements. 1. The paper is well written. 1. The topic and results are of interest to the NeurIPS community. Reviewer oofy argues strongly for reject with the following main concerns: 1. The notation of the paper is confusing (and in particular, there are concerns about the use of the variable y when the bilevel games with selection (BGS)). 
Reviewer gXUJ also had some confusion about the BGS setup, and reviewer tW4t agreed that the use of y is inconsistent in the BGS setup. 1. Introducing a selection map doesn’t really resolve ambiguity in the optimization problem because choosing the selection map is essentially resolving the ambiguity by hand. In the rebuttal, the authors argue that the theoretical analysis is still interesting and useful because many algorithms used to solve bilevel optimization problems in practice are implicitly making a choice of selection map, and the theoretical analysis of the paper allows us to understand what those techniques are really optimizing for. Reviewer tW4t also found the example selection maps provided in the paper to be compelling. I am convinced by the author rebuttal and reviewer tW4t that the selection map is useful. As for the notational concerns in the introduction of BGS, I think the paper would benefit from added discussion on the role of y in the upper level. In particular, from the author responses and later sections of the paper, it appears that y plays the role of a warm-start or initialization for the agent optimizing in the lower level. This discussion should be included near to the introduction of BGS, since otherwise it is confusing why the selection map is not a function from X -> Y. |
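For readers less familiar with the setup these reviews discuss, here is a schematic formulation of the lower-level ambiguity and of the selection-map idea, with a gradient-flow selection written out. This uses generic notation and interprets $y$ as a warm start (as the meta-review suggests); it is not necessarily the paper's exact definition.

```latex
% Generic bilevel problem: the lower level may have many minimizers, which makes
% the upper-level objective ambiguous.
\min_{x \in \mathcal{X}} \; F\big(x, y^{*}(x)\big)
\quad \text{s.t.} \quad
y^{*}(x) \in \arg\min_{y \in \mathcal{Y}} \; G(x, y).

% Selection-map variant: a map s : \mathcal{X} \times \mathcal{Y} \to \mathcal{Y}
% picks one lower-level solution, e.g. the limit of the gradient flow on
% G(x, \cdot) started from a warm-start y_0,
\min_{x \in \mathcal{X}} \; F\big(x, s(x, y_0)\big),
\qquad
s(x, y_0) = \lim_{t \to \infty} y(t), \quad
\dot{y}(t) = -\nabla_y G\big(x, y(t)\big), \quad y(0) = y_0 .
```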
This paper proposes ConfounderGAN, a GAN whose generator can be used to create a noise to an image to make it *unlearnable*, by creating a spurious correlation between the image and the label. The proposed approach has been evaluated on several image classification tasks and the results show it can help reduce the accuracy of a model trained on noisy data in the *non-adaptive* setting. **Strengths** The paper tackles an important issue. The results seem promising. However, several issues need to be addressed to make the paper more convincing. **Weaknesses** - The proposed approach seems a lot like a data poisoning attack. However, discussions on data poisoning and its relation to this solution are missing. Instead, terms like encryption are used, which can be misleading. - The paper did not discuss the asymmetry between users and attackers as discussed in recent literature (e.g., [a]), which may give a false sense of security to users as these types of countermeasures have been proven to be ineffective. [a]- Data Poisoning Won't Save You From Facial Recognition. (Radiya-Dixit et al., 2021) Nothing to report. <doc-sep>This paper proposed using GAN to produce confounder noise for unlearnable examples. The proposed method address an important issue that personal data is being used for unauthorized machine learning training. Unlike existing methods that require bi-level optimizations with multiple backward passes, the proposed method can generate confounder noise in a forward pass after training, making it very practical in a real-world application. Empirically, the proposed method outperforms existing methods. Strengths - Well-motivated method for efficiently generating unlearnable examples, and the context of unauthorized machine learning training is well explained. - The proposed method is efficient and technically sound. Existing works rely on optimizations that may not be practical for a user to generate unlearnable examples on the fly. Using GAN, the unlearnable version of the image can be generated in a forward pass, which improves usability in a practical setting. - Comprehensive empirical evaluations of different datasets and models are appreciated. Results demonstrated the proposed method consistently outperforms existing methods. --- Weaknesses/Limitations: - For experiments on the different proportions of unlearnable examples, it does not make sense to compare Polyline 5 with others. Polyline 5 is still regarded as the 100% unlearnable case. It would be interesting to see $ D_{out,en} $ with $D_{nat}$. - Legends on Fig 5 (c-d) color is not clear for EMN v.s. the proposed method for only $ D_{in,en} $ - Line 215, "When the model trainer downloads these images..." I believe the goal of unlearnable examples is to make the model unable to predict the protected classes/users rather than high-performing models. - Line 237 "we first" -> "We first" - Once the data is released, the defender may not modifies the data anymore, and the model trainer can retroactively apply new models/methods [1]. An adaptive case should be carefully examined. - Comparison with DeepConfuse [2], which also able to generate unleranable samples for $ D_{out,en} $ [1] Data Poisoning Won’t Save You From Facial Recognition, ICML 2021 Workshop AML [2] Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder. NeurIPS 2019 --- After the author's response, I increased my rating score to 7. 
A detailed analysis of the adaptive method to reverse the original image is well explained and discusses the potential limitation of the proposed method. Based on the author's response, in practice, the owner should keep the parameters of ConfounderGAN private to prevent the model trainer reverse the unlearnable data. Please address the potential limitations in the Strengths And Weaknesses section. <doc-sep>This paper proposes a GAN that makes personal image data unlearnable by DL methods for data protection. The authors utilize the confounder property present in the noise produced by the generator. This property builds spurious correlations between images and labels, disallowing the model to learn the correct mappings. The discriminator is used to ensure that this generated noise is undetectable. The authors conduct experiments using six image classification datasets, 3 of which are natural object datasets and 3 are medical datasets. Specifically, a confounder based framework has been proposed for image data encryption. The paper has been very well written, introduction to concepts have been well laid out and the structure of the paper is clear. The authors have done a thorough study of the past works in this particular domain. There are, however, some typos throughout the paper. For example: Line 12 -> "thereby, remaining the normal utility", Line 14 -> "The experiments are conducted in six image classification datasets, including three natural object datasets and three medical datasets" (this reads slightly ambiguous so a better choice of word for including might be consisting/comprising), Line 237 -> sentence capitalization. There are very few possible negative societal impacts, that are not straightforward in nature. The authors have, however, discussed the positive impact, which is data privacy in DL. | All the reviewers were excited by the idea and a efficient method to solve very critical problem with rigorous experimental support. They all agreed that the paper is above bar for publications. We hope the authors will further improve the paper for camera ready submission. |
The authors expose the vulnerability of split Federated Learning and show how model extraction attacks can be launched by malicious clients querying the gradient information from server-side. They proposed five different variants of model extraction attacks (ME) on split federated learning. The attacks use different gradient schemes, including data crafting, data generating, gradient matching and soft label crafting. They also made different assumptions for the data such as no data, only auxiliary data (out-of-distribution data) and training data (in-distribution data). They interestingly find that in a 5-layer-in-server SFL, an ME attack can derive a surrogate model with over 90% accuracy, and less than 2% accuracy degradation. For the experiment they tried both fine-tuning the SFL models and training from scratch for different gradient consistency. They showed that the ME attacks can succeed even without any data when fine-tuning. They also concluded that the ME attacks would succeed better with the increase of layers on the server side. Strengths: - It’s an original work and perhaps the first one on the possible attacks on SFL systems. - Quality of the work is good, and the writing it’s quite clear. Weaknesses: - Some related work is missing. The authors mention that Split Learning is firstly introduced in reference "[Gupta and Raskar, 2018]" while the idea goes further back and there are already several papers on the field. For example (non exhaustive list): * Kang, Yiping, et al. "Neurosurgeon: Collaborative intelligence between the cloud and mobile edge." ACM SIGARCH Computer Architecture News 45.1 (2017): 615-629. * Yousefpour, Ashkan, et al. "Guardians of the deep fog: Failure-resilient DNN inference from edge to cloud." Proceedings of the First International Workshop on Challenges in Artificial Intelligence and Machine Learning for Internet of Things. 2019. * Liu, Peng, Bozhao Qi, and Suman Banerjee. "Edgeeye: An edge service framework for real-time intelligent video analytics." Proceedings of the 1st international workshop on edge systems, analytics and networking. 2018. * Surat Teerapittayanon, Bradley McDanel, and HT Kung. 2017. Distributed deep neural networks over the cloud, the edge and end devices. In Distributed Comput- ing Systems (ICDCS), 2017 IEEE 37th International Conference on. IEEE, 328–339. The defense mechanism is quite basic and it would be nice to see some strong defense mechanisms with more analyses and numerical experiments <doc-sep>The authors perform five different model extraction attacks in settings of split federated learning (SFL). Such attacks exploit gradient information from the server-side to conduct model extraction attacks. Besides, this paper proposed an approach to obtain the necessary shadow data for constructing attackers. ** strengths 1. Give a comprehensive investigation of model extraction attacks in split federated learning settings, and provide rich experimental results. ** weaknesses 1. Different attack approaches are explained in an abstract way. It is hard to understand the attack assumption, how to craft data, how the attacker works, etc. 1. Gradient matching model extraction (GM-ME) relies on the gradient of x_i, which is infeasible in reality. 2. Training-based model extraction (Train-ME)'s idea is done by previous work, using model transferability. see [1] [1] Fu, Chong, et al. "Label inference attacks against vertical federated learning." 31st USENIX Security Symposium (USENIX Security 22), Boston, MA. 2022. 
<doc-sep>Federated learning is a distributed learning paradigm that allows multiple clients and a server to collaboratively learn a machine learning model without data sharing. However, federated learning still suffers from model privacy concerns, since the server and the clients all have access to the full global model. Split federated learning is proposed as a solution to both communication reduction and privacy enhancement. In split federated learning, the machine learning model is split into two parts, one held by the server and the other one held by the clients. Since the clients have access to only part of the model, split federated learning was considered to be robust to client-side model extraction attacks. However, in this work, the authors find that this is a false sense of security. Specifically, a malicious client can perform model extraction attacks to learn a surrogate model that have similar accuracy/predictions compared to the target server-side model. ### Strengths - Five different model extraction attacks are proposed in this work. The authors consider attack different settings for the attack, e.g., whether the attacker (malicious client) has local training data. - The authors conduct extensive experiments to show the effectiveness of their model extraction attacks. It is impressive that the attacks do work in many cases and it is interesting to see the impact of different parameters, e.g., the number of queries, the number of layers in the server-side model, and model architectures. - The paper is generally well-written and easy to follow. ### Weakness - When evaluating the model extraction attacks with training data available, it is intuitive to compare the attacks with a simple baseline that uses the training data to train a model locally on the malicious client. I would expect the authors to perform a simple comparison with such baseline. This is essential because if the attacker can locally train a better model, then there is no motivation for the attacker to perform the model extraction attacks. - There is no analysis on the complexity of the proposed attacks. It is interesting to see some analysis on the attack cost, e.g., $O()$ notations with respect to the number of parameters in the server-side model. - There are some defenses in federated learning that leverages statistical analysis of the local gradients (known as Byzantine-robust FL methods). I am curious whether these methods are sufficient to defend against the proposed attacks. - In Section 5.3, it is claimed that "This indicates Resnet architecture is more resistant to ME attack". It would be interesting to see some explanation on why this is the case. - The potential societal impact may need to be stated more explicitly. - Writing issues. On line 3 of Algorithm 1, I would say it is better to average model parameters (i.e., $W$) instead of models (i.e., $C$). In Figure 4, the captions of subfigures mismatch. The negative societal impact may need to be stated more explicitly, e.g., via adding a paragraph in the discussion. <doc-sep>This paper investigates model extraction attacks on split federated learning. The paper assumes that the attacker cannot be able to access the model predictions from the server. Hence, the paper leverages the gradients to extract the server-side model. Five model extraction attack methods are proposed for different data assumptions. Strengths: 1. The paper investigates five model extraction attacks against SFL. 2. 
The evaluation considers several interesting scenarios, such as non-iid data distributions, adversarial attacks, and attacks without knowing model architectures. Weaknesses: 1. The paper claims that the client-side attacker cannot use the existing model extraction attacks in the split federated learning because the client cannot get the predictions from the server. However, in the inference phase, the clients can get the predictions. The proposed attacks can be further compared with the existing model extraction attacks through predictions in the inference phase. 2. The novelty of this paper is limited. Most proposed attacks leverage the existing model extraction attacks. Train-ME is the most effective attack. However, Train-ME is a very straightforward attack, which does not even need the gradient query. This may indicate that the model extraction attacks in SFL is not a trivial problem. 3. The no data assumption is confusing. If the attacker is a client, the attacker should at least have access to the client’s data. 4. It would great if the proposed attacks could be compared with some baselines. For example, since the attacker is one of the clients, what is the attack performance when the attacker uses the client’s data to train a model? 5. The experiment only considers 10 clients. However, in the real-world settings, the number of clients should be much large. What’s the performance of the proposed attacks when a large number of clients participate in the SFL? 6. The proposed attacks do not perform well on complex datasets (e.g., ImageNet, CIFAR100) and complex models (e.g., ResNet). 7. The scope of the paper is pretty narrow. The proposed attacks can be only applied to split federated learning. The authors addressed the limitations and potential negative societal impact. | The paper studies the vulnerability of split federated learning with model extraction attacks. The paper provides five attacks and evaluates them experimentally. The authors also provided additional experimental results during the author rebuttal. While the topic and techniques are interesting, reviewers raise concerns about the novelty, and lack of experiments on standard FL datasets (e.g., LEAF) or large number of clients. While authors addressed some of these concerns during rebuttals, the paper can benefit from (a) explaining the novelty of the contributions (b) clarifying the assumptions made in the paper (c) explaining if the paper considers cross-device or cross-silo federated learning (b) adding more experiments on standard FL datasets and tasks. |
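To make the query interface discussed in these reviews more concrete, below is a minimal, hypothetical sketch of a gradient-matching-style surrogate update in split learning. The `query_server` interface, the surrogate architecture, and the cross-entropy loss are illustrative assumptions, not the paper's actual attack implementation.

```python
import torch
import torch.nn.functional as F

def gradient_matching_step(x, y, client_model, surrogate, query_server, opt):
    """One illustrative attacker update: train a local surrogate of the
    server-side model so that its gradient w.r.t. the cut-layer activations
    matches the gradient returned by the real server."""
    # Forward through the attacker's (client-side) part of the split model.
    acts = client_model(x)

    # Query the server with smashed activations; assume it returns
    # d(loss)/d(activations) computed with the true server-side model.
    server_grad = query_server(acts.detach(), y)

    # Local forward/backward through the surrogate server-side model.
    acts_s = acts.detach().requires_grad_(True)
    loss_s = F.cross_entropy(surrogate(acts_s), y)
    (surrogate_grad,) = torch.autograd.grad(loss_s, acts_s, create_graph=True)

    # Match the surrogate's activation gradients to the server's.
    match_loss = F.mse_loss(surrogate_grad, server_grad)
    opt.zero_grad()
    match_loss.backward()
    opt.step()
    return match_loss.item()
```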
The paper provides an interesting negative result on that polynomial sized discriminator lacks sufficient discriminative power to distinguish the data distribution and generated distribution by a constant depth generator. The argument rigorously indicates that a small neural IPM over the discriminator network can still yield a large Wasserstein distance. The paper is well organized. Although there are many technical details, which are relatively hard to follow every bit, it is due to the rigor and complexity of the theory. I like the flow of the paper, especially building the theory from easier discrete cases to continuous generalization. Yet some improvement can be made in Section 3.2 and Section 3.3. For example, the connection between Section 3.2 and Section 3.1 is somewhat vague, and some high level idea of extending binary output to continuous output is helpful appearing before Theorem 3.6. The experiments are for illustrative purpose. However, I find it a bit confusing. Does figure 1 report the training loss or testing loss? If I understand correctly, we should achieve approximately zero training loss, while the Wasserstein distance between the data distribution and the generated distribution shows a nonzero gap. In figure 1, the Wasserstein distance is claimed to be large, without numerical verification. By the way, I am curious how is the loss $\\mathbb{E}[-\\log (D(X))] + \\dots$ is computed. The theory in the paper utilizes polynomially-sized Boolean circuit theory, which is an interesting connection. The paper does not have obvious contribution to practically trained GAN models, however, this hardness result provides revealing insights of GANs. In fact, the paper opens some directions to investigate and should be highly important to GANs. For example, if we change the architecture of the generator, does the discriminator in the paper still lack power? Maybe an easier (somewhat orthogonal) question is if the generator network is powerful in representing the data distribution in Wasserstein distance, does there exists good choice of discriminator (poly size for example) to guarantee the distribution recovery in Wasserstein distance. Overall, I am positive on the paper. <doc-sep>This paper studies the problem of learning generative adversarial networks using a ploy-size ReLU generator and discriminator under the standard Wasserstein-1 metric. The main result is that there exists a "bad" generator that can cheat all discriminators under the estimation of the Wasserstein-1 metric while being far from the data distribution under the true Wasserstein-1 metric. The proof relies on two assumptions. The first one is on the diversity of the target data distribution. The second one is a standard assumption in cryptography as claimed by this paper while I'm not familiar with cryptography. The results explicitly consider the computation complexity of the model and may show the learnability of the GAN model in some sense. The paper is interesting. However, I have the following questions about the presentation and its significance. 1. The assumption on the diversity of the data distribution is not discussed in detail, especially the one used in the main Theorem. The authors only say it is a large family of distributions while it is unclear and not so intuitive for readers. 2. How large is the $\\epsilon(m) poly(m)$ in Theorem 3.1? Is it meaningful in practice? Does the result provide any insight on training GANs since the estimation of the Wasserstein-1 metric is monitored during training? 3. 
As discussed in the conclusion, the paper only proves the existence of a "bad" generator while optimizing the GAN objective does not always lead to the worst case. The authors claim "MINIMAX OPTIMALITY $\\textbf{(PROBABLY)}$ DOESN’T IMPLY DISTRIBUTION LEARNING FOR GANS" while the paper does not discuss how likely we really meet this in practice, which limits the significance of the paper. 4. There is a gap between the experiments and theory. The authors train 4 different discriminators, which may not cover all of the possible ones. It would be better if the authors can conduct an example where the generator can cheat all of the discriminators. Overall, I think this is an interesting paper and currently, I tend to accept it if all of the concerns are well addressed. <doc-sep>The papers offers strong evidence that even if (population) minimax optimality is satisfied for a GAN the generator may not have learned the distribution with respect to the Wasserstein distance. More specifically, the authors show that there exits distributions that: + can be generated by simple generators nets (variants of randomized ReLU nets with constant depth and polynomial size and Lipschitzness) that are fed with simple seed distributions (e.g., uniform on a cube) + are discrete and far (in Wasserstein distance) from any "diverse" distribution + cannot be distinguished from (a simple) diverse target distribution using any polynomial discriminator network (polynomial size/depth/Lipschitzness) The proof is based on a cryptographic assumption, namely the existence of a local pseudo-random generators (they use a specific one proposed by Goldreich). The results are novel, interesting, and non-trivial. The construction draws connections between GANs and local PRGs which is interesting. The authors interpret the result of the paper as saying "minimax optimality does not imply distribution learning". This is by itself a rather inaccurate statement: distribution learning is naturally defined based on a measure of success. The authors have shown that minimax optimality does not imply learning w.r.t. Wasserstein distance (under some assumptions). The choice of Wasserstein is natural but somewhat arbitrary. For example, if we change the Wasserstein to another IPM (e.g,. one which is directly defined based on set of neural nets rather than Lipschitz functions) then the result would be false. In fact part of the success of GANs in applications like image generation can be attributed to the fact that they don't really optimize the usual notions of distance/divergence between distributions. This has been discussed to some extent in the conclusions of the paper, but I still think the message of the paper in other parts of the paper can be misleading. I think the presentation of the paper can be significantly improved. More specifically, some of the notations are hard to follow. Moreover, some background on PRGs are missing in the main text, making it hard to follow the paper for those who are not familiar with them (I suppose this is the case for most of the audience of this ICLR submission) My understanding is that the authors use random networks as their generators, and this enables to have generators that receive discreet seed distributions but the distribution of their output is continuous. Therefore my understanding is that one of the main reductions of the paper does not go through with the use of deterministic (e.g, ReLU) nets. Please elaborate. 
If this is the case, then it should be mentioned clearly, since most generators used in practice adopt deterministic networks. The experiments section is rather weak. For one, the target distribution is chosen to be discrete whereas in usual applications the target distribution is rather continuous (like images). ================= More comments + In Theorem 1, $\\gamma_m$ is not defined before. + throughout the intro (e.g., theorem 1), we see "polynomial discriminator" and "polynomial generator"; it would be helpful if you mention the parameter(s) w.r.t. which we are talking about in each case + it would be helpful to define the negl function in the main text since it's been used in the main theorem statements + In Thm 3.1 we see that the size of the parameters as well as W_{F*} depend on \\epsilon(m). This makes it hard to evaluate the strength of the bound. In other words, the generator is also getting more complicated as m grows. Can you demonstrate the power of your bound by choosing a good \\epsilon(m)? + In equation (1) we see $D_{d(m)}$. Should this be $f(D^*_{d(m)})$? This is a solid work and proves an interesting result. The presentation can be improved, and in general the paper is not easy to read. The experiments are quite weak. <doc-sep>The paper shows that if Goldreich’s PRG is able to fool all Boolean circuits of polynomial size, then we can construct a poly size ReLU network as a generator that outputs a distribution that has a constant $W_1$ distance to the target distribution but all poly-size ReLU networks cannot discriminate between the two distributions. The paper reduces the problem of "whether there exists a bad solution of the GAN objective" to the existence of local pseudorandom generators. This is an interesting and novel idea. Then the paper shows that in a certain artificial case, the min-max optimality cannot imply that the GAN learns the distribution. The paper is highly theoretical. My major concern is the significance of the result. It seems to be well-known and intuitive that fooling a weak class of discriminators does not imply exactly learning the target distribution. Besides, I would like to provide some suggestions for the paper writing. 1. In Theorem 1.1, the $\\gamma_m$ is not defined. This makes the theorem hard to understand. 2. Definition 5 is extremely difficult for people who are unfamiliar with cryptography to understand. What do "uniformly random k-uniform hyper-graph" and "k-ary predicate" mean? I also cannot understand what is meant by "circuits in P/poly". 3. In Lemma 2.4, the authors should indicate the failure probability, that is, with probability at least $1-\\delta$ the result holds. Otherwise the lemma just looks erroneous. 4. There are a lot of Lemmas that are spread across the paper that make the paper look somewhat broken and also increase the burden for readers to understand. For example, Lemma 2.1 is only used in the Proof of Theorem 3.6. Lemma 2.3 looks immediate from the definition and is never used in the paper. The authors may not want to present those Lemmas in the paper. The paper is highly theoretical. My major concern is the significance of the result. It seems to be well-known and intuitive that fooling a weak class of discriminators does not imply exactly learning the target distribution. <doc-sep>The paper leverages techniques from cryptography to prove a surprising result: a uniformly small error against all poly-size neural network discriminators cannot guarantee a small error in type-1 Wasserstein distance.
Major Comments: - The potential impact of this paper is strong: by revealing that the class of poly-sized discriminators may not be rich enough to achieve distributional learning, the paper motivates us to reconsider whether GANs are more suitable for feature learning rather than distributional learning. Given the successful empirical performance of GANs, one may also reconsider whether the Wasserstein distance is a reasonable objective function to use. - The assumptions imposed by the paper, such as poly-sized neural network discriminators and a diverse target distribution, seem natural. - To the best of my knowledge, it is novel to apply pseudorandom generator theory to study GANs. - The numerical part is simple but enough to support the validity of the theory. Minor Comments (several typos in the paper): - Definition 2: $\\mathbb E_{x\\sim q}(f(y))$ should be replaced by $\\mathbb{E}_{x\\sim q} (f(x))$? - Eq (1): $\\mathcal D_{d(m)}$ should be replaced by $\\mathcal D^{\\ast}_{d(m)}$? - Last sentence in the proof sketch of Theorem 3.2: we get that there exists a threshold $t\\in \\mathbb{R}_{\\mathrm{poly}(d)}$ for which $|\\mathbb{E}[\\mathcal{M}_\\tau(G(U_m))] - \\mathbb{E}[\\mathcal{M}_\\tau(U_d)]|,...$, I suspect the inequality for $|\\mathbb{E}[\\mathcal{M}_\\tau(G(U_m))] - \\mathbb{E}[\\mathcal{M}_\\tau(U_d)]|$ is incomplete here. The paper provides a new perspective on the important problem of whether GANs are able to achieve distributional learning. In general, the paper is well-written, and to the best of my knowledge, the proposed methodology is novel to this type of problem. Thus I recommend acceptance. | The paper provides a complexity-theoretic look at GANs. The exposition is multi-disciplinary, and in my personal opinion, it is an interesting look at GANs in the context of random number generators.
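For readers less familiar with the distinction these reviews rely on, here is a short sketch of the two discrepancies involved, written with standard definitions in generic notation (not the paper's):

```latex
% Integral probability metric (IPM) over a function class F:
d_{\mathcal{F}}(p, q) = \sup_{f \in \mathcal{F}} \;
  \mathbb{E}_{x \sim p}[f(x)] - \mathbb{E}_{x \sim q}[f(x)].

% Wasserstein-1 distance: the IPM taken over all 1-Lipschitz functions,
W_1(p, q) = \sup_{\|f\|_{\mathrm{Lip}} \le 1} \;
  \mathbb{E}_{x \sim p}[f(x)] - \mathbb{E}_{x \sim q}[f(x)].

% The negative result discussed above has the form: there is a generated
% distribution q with d_F(p, q) negligibly small when F is the class of
% poly-size discriminator networks, while W_1(p, q) remains bounded away
% from zero.
```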
This paper focuses on the parameter redundancy issue in large transformer architectures. Instead of pruning redundant parameters, it strengthens training them to make them contribute better performance. To this end, it proposes an adaptive learning rate algorithm SAGE, which automatically scales the learning rate for each parameter based on its sensitivity. The sensitivity is approximated by the dot product between parameters and their gradients. An exponential moving average is used to track the sensitivity scores to reduce uncertainty in mini-batch training. The algorithm is applied to fine-tuning pre-trained transformer models in benchmarks for natural language understanding (NLU), neural machine translation (NMT), and image classification. Strengths 1. The paper is well-written and easy to follow. The method section provides helpful intuitions behind the algorithm design. 2. The idea is interesting since it explores a different direction in dealing with redundant parameters. 3. Experiments use multiple benchmarks, including both language and vision, and results show noticeable performance improvements. 4. SAGE is orthogonal to the existing adaptive gradient methods. Jointly using them can bring more gains. Weaknesses 1. Table 1 shows that models with different percentages of redundant parameters have similar performance, implying that performance may not be proportional to "well-trained" parameters. It means that making more parameters "well-trained" does not necessarily improve the performance. This seems not totally in line with the paper's motivation. 2. Figure 5 is not straightforward to visualize the difference. The curve is drawn only on the right subfigure. 3. Sufficiently training all parameters seems a double-edged sword. According to Figure 3, models trained with SAGE are susceptible to parameter pruning. In general, we want efficient and compact models in deployment. Will SAGE be harmful to developing efficient deployment models? This paper proposes SAGE, an adaptive learning rate schedule to train redundant parameters more sufficiently to improve model generalization. SAGE, together with existing adaptive optimizers, shows effectiveness in a wide range of downstream tasks. The main concern is whether SAGE has adverse effects in getting efficient deployment models. <doc-sep>The paper argues that redundant or "useless" parameters are not an axiom we should take for granted, but rather a symptom of current optimization settings. The authors propose an adaptive learning-rate schedule that specifically aims to eliminate redundant parameters. Through extensive experiments on fine-tuning transformers, they show that indeed this method (SAGE) does reduce redundancy and also slightly improves results. I think that overall the paper is good as it raises and studies an important point: redundancy in parameters is not an axiom we have to accept. However, after this introduction and motivation, it reads more like a typical optimizer paper introducing a new optimizer based on hand-wavy intuitions. This is unfortunate and not rigorous. What follows are detailed comments: 1) The use of Taylor approximation is only good close to the operating point for nonlinear functions. However Theta_j,-j may be very large, and thus the approximation may be very bad. 2) I do not agree that the memory and computational costs of SAGE are "marginal". The EMA "I-hat" is a full copy of the model since we need one EMA per parameter, especially for large models this is substantial overhead. 
I would further appreciate timings to verify that indeed the computational overhead is "marginal". 3) It is not clear to me in what scale the quantities U and I-hat are, and hence, the multiplier to the learning-rate. Because of this, it is not clear how one needs to change the original learning-rate "eta" when one turns on SAGE. 4) I am confused by Fig5, it does not look like the two-moon dataset at all to me? Please image-search "two moon dataset" and compare. minor nitpicks: 5) Fig1: Percent should be [0,100]. 6) I do not agree that one can conclude from the results that SAGE is more effective for small datasets than for large ones, since the "points" betwen the datasets/tasks do not live on the same scale. 7) All experimental evaluation is about fine-tuning of already pre-trained transformer models. This should be more accurately reflected in the title, for example by replacing the word "training" by "fine-tuning". It would further be interesting to see how well it works when training from scratch; even if it does not work, it is still a valuable fine-tuning method, and stating this may save a lot of people a lot of resources and time. 8) In Figure 6, since we have per-parameter learning-rates, which one is shown for SAGE? 9) Typo: "smoothier" -> "smoother" For an "optimizer" paper, it is weak: hand-wavy motivation, no real derivation of update rule or even convergence proofs, and experiments only in one very specific domain: transfer of pre-trained transformer models. I am not convinced that it will be generally useful at all. However, the paper raises a very important point: we should not take redundant parameters as a necessity, and it does propose a method to avoid them. I find this point important enough to still suggest acceptance. <doc-sep>This paper proposes a method for scaling the learning rate during training in order to encourage all parameters in a neural net to be fully used. Specifically, the updates for parameters are scaled in inverse proportion to how much they affect the loss (their sensitivity); the scaling factor also depends on how stable the estimate of sensitivity is, so that high-sensitivity parameters whose role changes rapidly aren't down-weighted as much. This technique is shown to improve the performance of Transformer models on several different problems, even when used in combination with other adaptive learning rate methods (Adam, Adamax). Analysis experiments verify that the method has the intended effect: compared to the baseline, more parameters have higher sensitivity, and pruning is less effective. Strengths: - The idea is novel, straightforward, well motivated, and easy to implement. Computational cost is low. - Potential for large impact on model training best practice. - The experiments are very thorough, demonstrating gains across different settings, and showing that the technique achieves its goal. - The paper is extremely clearly and carefully written. Weaknesses: - The scaling formula seems somewhat ad hoc and hard to characterize. In particular, if the sensitivity of some parameter spikes on a given iteration, it will get a larger update than another parameter with the same moving-average sensitivity that has not spiked. - There are some hints that the gains might be partly related to regularization (eg, better results on IWSLT than WMT). It would have been good to test the combination with dropout or something similar. 
- The hyper-parameters for SAGE are exhaustively tuned, more so than for the baseline adaptive optimizers, so there’s a potential for bias. Figure 7 counters this possibility, but only by showing heatmaps, so it's hard to gauge the actual numbers. Details: - Table 3 should show results for your implementation of Adamax that’s comparable to Adamax-SAGE, in addition to the Devlin et al numbers. - Figure 2: Consider flipping these plots to the standard convention, and stacking them. - Figure 4: Why do we care that local temporal variation is lower in SAGE than the baselines? A simple adaptive learning-rate formula that seems to work really well, even on top of other optimizers, as demonstrated by very thorough experiments. Downsides are that the formula isn't particularly well justified, and there are a few potential experimental weaknesses. <doc-sep>The paper proposes the SAGE method. The idea is that neural networks have redundant parameters. Some approaches will prune these parameters which has the effect of not decreasing the performance but to decrease the number of parameters. The paper studies if it is possible to learn better these parameters in order to make them more useful for the network. The paper proposes a method that will allow to train differently the parameters of a network in order to have a better use of the weights. The paper is evaluated in NLP and image classification with transformers. Strengths: - **Writing:** The paper is well written and easy to follow. - **Tasks:** The paper evaluates its method with different optimisers and on different tasks in NLP and image classification. - **Results:** The results are quite good; the proposed method surpasses the baseline each time. Weakness: - **Architecture:** The method is evaluated only with transformer architectures. In the context of image classification it would be interesting to evaluate it with CNN and others architecture than ViT. - **Optimisation:** In image classification, the pre-training procedure used is quite sub-optimal since the paper of Dosovitsky et al. [1] many improvements have been proposed such as the DeiT approach [2]. It would be interesting to see if the proposed method still works when the model is trained with more regularisation and data-augmentation which may also lead to a better use of weights. - **Image Classification:** Having only results of models pre-trained on ImageNet-21k and fine-tuned with the proposed method on downstream tasks is not very usual in image classification. It would be interesting to have results on ImageNet only where the SAGE method is used during the training. [1] Dosovitskiy et al, An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, ICLR 2021 [2] Touvron et al, Training data-efficient image transformers & distillation through attention, ICML 2021 The idea of the paper is interesting and the proposed method seems effective. Nevertheless, more complete experiments in image classification would allow to better evaluate the interest of the proposed method. | The paper observes that the number of redundant parameters is a function of the training procedure and proposes a training strategy that encourages all parameters in the model to be trained sufficiently and become useful. The method adaptively adjusts the learning rate for each individual parameter according to its sensitivity (a proxy for the parameter's contribution to the model performance). 
The approach encourages the use of under fitted parameters while preventing overfitting in the well-fitted ones. Experimental results are presented covering a wide range of tasks and in combination with several optimisers, showing improvements in model generalization. The paper is very well written and easy to follow (as mentioned by Reviewers NSqH, 4pzE and sSHP). The authors provided a strong rebuttal including new experiments, like training using CNN based architectures (as requested by Reviewers sSHP and MzBV). Reviewer sSHP requested these results to be reported with STD, the AC encourages the authors to do so for the camera ready. Reviewer MzBV points out that the paper could be improved by giving a motivation of the update rule and proving convergence. However, still recommends accepting the paper due to the novelty in the idea of not taking redundant parameters as something inevitable and devising an effective strategy to improve it. This idea was also appreciated by the other reviewers. While the AC agrees that adding these points would improve the work, it takes as valid the point made by the authors. Namely, that the intuition behind the update rule is quite clear, and many other reasonable variants were ablated (in Appendix A.4.4). Furthermore, the empirical evidence shows that the method improves generalization. Reviewer NSqH points out that while SAGE improves the model’s generalization performance for lightly compressed models, its performance becomes more susceptible to pruning when the model is compressed heavier. While the authors responded with good points, the AC encourages them to follow the reviewer’s advice and incorporate further experiments studying this issue (e.g. other datasets). In sum, the paper proposes a simple and effective method that is able to improve generalization of large scale models. All four reviewers recommend accepting the paper. The AC agrees and encourages the authors to incorporate the requests mentioned above. |
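A minimal sketch of the sensitivity-scaled update rule described in these reviews, written as a generic per-parameter learning-rate multiplier. The exact scaling function, the EMA constants, and all names here are illustrative assumptions, not the paper's actual formula.

```python
import torch

# Sensitivity ~ |theta * grad| (first-order estimate of the loss change if the
# parameter were zeroed), tracked with an EMA; an "uncertainty" term keeps
# parameters whose sensitivity estimate is changing rapidly from being
# down-weighted too aggressively.
class SensitivityScaler:
    def __init__(self, params, beta1=0.9, beta2=0.9):
        self.params = list(params)
        self.beta1, self.beta2 = beta1, beta2
        self.ema_sens = [torch.zeros_like(p) for p in self.params]
        self.ema_unc = [torch.zeros_like(p) for p in self.params]

    @torch.no_grad()
    def scale_gradients(self):
        for p, s_bar, u_bar in zip(self.params, self.ema_sens, self.ema_unc):
            if p.grad is None:
                continue
            sens = (p * p.grad).abs()                      # instantaneous sensitivity
            s_bar.mul_(self.beta1).add_(sens, alpha=1 - self.beta1)
            u_bar.mul_(self.beta2).add_((sens - s_bar).abs(), alpha=1 - self.beta2)
            # Down-weight well-fitted (high-sensitivity) parameters, but less so
            # when their sensitivity estimate is unstable (high uncertainty).
            scale = (1.0 - s_bar / (s_bar.max() + 1e-12)) + u_bar / (u_bar.max() + 1e-12)
            p.grad.mul_(scale)

# Usage (assumed names): call scaler.scale_gradients() after loss.backward()
# and before optimizer.step(), on top of any base optimizer such as Adam.
```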
The paper claims that the previous sparse training methods mainly focus on MLP and CNN, and fail to perform very well in RNNs. Hence, the authors proposed an approach to train sparse RNNs with a fixed FLOPs budget. The proposed technique is based on defining a mask matrix $M$ and refining it during training. It is initialized randomly to have the desired sparsity level $S$. After each training epoch, a fraction $p$ of the weights with the smallest magnitude is removed, i.e., those locations are zeroed in the mask M. Next, the same amount of parameters are randomly added to M again. Moreover, a variant of the averaged stochastic gradient optimizer (SNT-ASGD) is developed for the training of sparse RNN to account for the effect of weight masks during training. They showed that in practice, the requirements for efficient sparse training of RNNs are different than CNN and MLP. Strengths: By adding some refinements and tweaks to the existing techniques (masking for sparse training and adapting the NT-ASGD), the authors were able to achieve good performance to train sparse RNNs. The paper has a rather extensive set of simulations and experimental setups to analyze the best setup which yields good sparse training, e.g., comparing uniform vs ER distribution for masks, sensitivity to hyperparameters, ... Moreover, they have considered a fairly diverse set of RNN architectures to evaluate their method. Weaknesses and questions: Compared to the existing methods, the technical novelty of the paper is minor. It can be seen as some tweaks and improvements to the existing ones (although I admit that those changes are essential for the method to work for RNN.). What is special about the method that makes it specific to RNN? In other words, is it possible to use the same method for sparse training of MLP and CNN? A minor issue with the paper is the FLOPS analysis the authors used. Effectively, they use the sparsity of the parameters as a measure of FLOPS, not the actual FLOPS that might depend on the sparsity structure, HW, or software implementation. It would be a good idea to directly mention and use total sparsity, instead of FLOPS which can mislead the readers. Some parts of the method are not clear enough, e.g., 1. In the paper, it is stated that "magnitude weight removal" is applied to non-RNN layers. Do the authors mean that for the parameters of RNN, this step is skipped? 2. In "cell weight redistribution", it is suggested that the "magnitude weight removal" is applied to the whole set of RNN parameters $\\{\\theta_1, \\ldots, \\theta_t\\}$. However, in "random weight growth", it is mentioned that the same number of weights is grown immediately after weight removal, i.e., $R$ and $P$ have the same number of 1's. So, does it mean that the number of 1's in mask $M_i$ for each weight $\\theta_i$ ($1\\leq i \\leq t$) remains fixed S during training? 3. Another aspect of training that is unclear for me is the parameters that are updated. Is $\\theta$ updated during training or only $\\theta_s$ is updated? As a result, if a weight is removed in one epoch and its value at the time of removal was $\\alpha$, and later regrown at another epoch, is its initial value set to 0 or started from its previous value before "weight removal", i.e. $\\alpha$? 4. Did the authors add any regularizer (e.g., $\\ell_1$) to the training loss to improve sparsity in their experiments? <doc-sep>In this paper, the authors propose an approach to train sparse recurrent models, and a sparse variant of the NT-ASGD. 
The proposed method mixes some interesting novel methodologies and achieves interesting empirical results on the Penn Treebank and WikiText-2 language modeling tasks. In general, the paper is well written and interesting, but in Section 3 many explanations about the rationale behind some architectural choices of the Selfish-RNN methodology are only partial, and sometimes the choices are justified only by empirical results (e.g. the cell weight redistribution). To me, a more theoretical explanation would significantly improve the manuscript's readability. In Section 4 many different approaches are considered, but a few points are not clear. The authors report the results of a "small" dense network, but no information about this model is given in the text. Reading the results reported in Table 5 of the appendix, I found it interesting that the performance of DSR improves significantly by using SNT-ASGD instead of Adam (it outperforms Selfish-RNN). This table shows how much the optimizer influences model performance. Even if the ablation study reported in Appendix A highlights the benefits of SNT-ASGD, the results reported in Table 5 show that the impact of this component is even more important than that of Selfish-RNN. Honestly, I think it is fairer to compare all the methods using the same optimization algorithm; therefore my suggestion is to move this table to the main paper and extend the analysis of these results. Reading the manuscript, it is not clear how the hyper-parameters considered in the experimental campaigns have been chosen. From the first part of Section 4.1, it seems that parameters such as the removal rate or the number of epochs are set without performing any validation on them. Even in Appendix D, the hyper-parameters (e.g. the learning rate or the batch size) used to test the RHN are just listed. The authors should insert a more extensive explanation of how the hyper-parameters of the various models/approaches considered in the comparison have been validated. To perform a fair comparison, the hyper-parameters of each model should be chosen according to its performance on the validation set. In this regard, it is also important to highlight how the hyper-parameters are chosen because some SOTA models achieved better results: for instance, on the Penn Treebank dataset, in "On The State Of The Art Of Evaluation In Neural Language Models", Melis et al. report perplexities on the test set of 59.7. Regarding the random growth strategy, which is motivated as a way of exploring the search space better: the reported results in the paper (and in Appendix L) show the benefits of using this approach, but honestly, to me, it is not clear if it helps in exploring the state space. In general, it is not clear why the model benefits from using the random growth approach. Moreover, in "Sparse evolutionary deep learning with over one million artificial neurons on commodity hardware", the gradient-guided growth strategy outperforms the other sparse training approaches considered in the paper, even in the RNN case. Therefore a more extended evaluation/discussion of this point is required. Another recently proposed approach that uses sparsity in recurrent models is defined in "Intrinsically Sparse Long Short-Term Memory Networks" by Liu et al.; the authors should compare this approach with the Selfish-LSTM.<doc-sep>Summary: The authors improve sparse training for recurrent neural networks by developing a greedy redistribution rule for gates and adapting the ASGD optimizer for sparse networks.
The work provides good results and a rich analysis of their and related methods. Strong points: - very rigorous experimental setup and analysis - Solid evidence for many new insights into some sparse training phenomena. The work provides broadens our understanding of sparse training. Weak points: - Some might complain that RNNs are outdated. I see this only as a minor weak point. Indeed, RNNs are not much used anymore, but many of the insights the paper provides are quite universal. - The fixed FLOPS only seems to be a by-product of the algorithm and particular network structure but not necessarily an algorithmic contribution. This makes the paper a bit confusing. Recommendation (short): This is a very solid paper with exemplary experimentation and analysis. It provides many unique insights that are very valuable for anyone who wants to work in the field of sparse training. I recommend accepting this paper. Recommendation (long): I think this paper is one of these papers, which is a very solid all-around. The authors invested quite a bit of time in creating rigorous experimental setups that test hypotheses. In particular, I like the graph analysis of sparse connective between networks. Findings of different initialization schemes and performance of other sparse training methods are precious and make the overall literature on sparse training robust. I can see that this paper may seem a bit boring and less impactful to some reviewers, but good science like this is not about being exciting but about providing rigorous results for a small problem. This paper does exactly that. I think any good conference should encourage good science by accepting papers like this one. Comments for authors: Solid work. Here some additional comments and questions. - Please feed your paper through a grammar/spellchecker. There are multiple errors which make the paper hard to read in some sections - It is not entirely clear why ASGD is needed for good performance. Can you elaborate, please? - Do you have any idea how does ER initialization relates to eigenvalues of recurrent matrices? If you can make a connection here, it would be a quite insightful addition to the paper since the top eigenvalue of the recurrent matrix determines the overall long-term behavior of the recurrent matrix and is known to influence behavior. - I would drop the fixed FLOPS contribution and focus on the other parts of the paper. You have more than enough contributions, and the space is better devoted to making the other contributions as clear as possible. - The cell weight redistribution algorithm description is unclear. A weight cannot have "more parameters, I think you mean to say gate-neurons with large magnitude weights gain more parameters over time. - The sparse topology algorithm: Is the correlation between weights computed overall test set outputs between two networks/weights? - Figure 3, unclear. What does Figure 3 (left) show exactly? It is unclear what random initialization means: different sparsity patterns, different weight values, or both? What does the seed do here? Does it affect sparsity pattern, data order, weight values, etc.? <doc-sep>In this paper, the authors studied the possibility of sparsity exploration in Recurrent Neural Networks (RNNs) training. The main contributions include two parts: (1) Selfish-RNN training algorithm in Section 3.1 (2) SNT-ASGD optimizer in Section 3.2. The key idea of the Selfish-RNN training algorithm is a non-uniform redistribution across cell weights for better regularization. 
The authors mention that previous sparse training techniques mainly focus on Multilayer Perceptron Networks (MLPs) and Convolutional Neural Networks (CNNs) rather than RNNs. This claim seems doubtful because one-time SVD + fine-tuning usually works very well for most RNN training applications in industry. Overall, this paper is carefully written and provides some interesting empirical results. However, due to the lack of some important information, it is hard to evaluate the contribution of this paper. Here are some of my questions. SNT-ASGD needs to save the weights $w_{i,t}$ from iteration $T_i$ to iteration $K$; will that cost additional memory? The authors mention that they picked the Adam optimizer for SET, DSR, SNFS, and RigL. Is Adam the best optimizer to build a strong baseline? I suspect Adam may not be the best optimizer for each of them. The authors need to give more information on the hyper-parameters, such as the learning rate. The selection of hyper-parameters usually significantly affects the convergence/generalization performance of an RNN model. For example, the learning-rate decay schedule has a big impact on performance when training on the Penn Treebank dataset. Can the authors report the training epochs and wall-clock time (e.g. in Table 2)? Sparsity typically makes modern hardware like GPUs perform poorly. That may be a concern, and it is the reason why researchers are studying structured sparsity. For future work, an analysis of the computation (FLOPs) to communication (memory access frequency) ratio seems to be necessary. | The authors introduce an approach to train sparse RNNs with a fixed parameter count. During training, they allow RNN layers to have a non-uniform redistribution across cell weights for better regularization. They also introduce a variant of the averaged stochastic gradient optimizer, which improves the performance of all sparse training methods for RNNs. They achieve state-of-the-art sparse training results on Penn Treebank and WikiText-2. The method achieves very good performance on sparse RNNs for challenging tasks. The paper is well written and provides solid analysis with new insights into sparse network models. Most reviewers believe it is a very solid paper. However, the technical novelty of the paper is limited. It can be seen as some tweaks and improvements of existing techniques, which seem to work very well. Since the number of papers that can be accepted is very limited, and since technical novelty is an essential criterion for published papers at ICLR, I propose rejection. |
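As a concrete illustration of the prune-and-regrow mask update summarized in the reviews above, here is a minimal NumPy sketch. The function name, the flattened-weight view, and the handling of just-pruned positions are my own assumptions for illustration; this is not the authors' code.

```python
import numpy as np

def prune_and_regrow(weights, mask, prune_fraction, rng):
    """One end-of-epoch update of the sparsity mask, as the reviews describe it:
    zero out the smallest-magnitude active weights, then regrow the same number
    of connections at random positions so the total sparsity stays fixed.
    Illustrative reconstruction only, not the authors' implementation."""
    active = np.flatnonzero(mask)
    n_drop = int(prune_fraction * active.size)
    if n_drop == 0:
        return mask
    # remove the n_drop active weights with the smallest magnitude
    drop = active[np.argsort(np.abs(weights[active]))[:n_drop]]
    mask[drop] = 0
    # regrow the same number of connections at random inactive positions
    # (whether just-pruned positions are eligible again is left open here)
    candidates = np.setdiff1d(np.flatnonzero(mask == 0), drop)
    grow = rng.choice(candidates, size=n_drop, replace=False)
    mask[grow] = 1
    return mask

# toy usage: a flattened weight vector at ~80% sparsity, replacing 30% of the
# surviving weights per epoch
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
m = (rng.random(1000) < 0.2).astype(int)
m = prune_and_regrow(w, m, prune_fraction=0.3, rng=rng)
print(m.sum())  # the number of active weights is unchanged
```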
This paper proposes an approach for automatically selecting a few frames from training videos to be annotated by humans and for using these annotations to train models for spatio-temporal video understanding tasks, such as spatio-temporal action detection and video object segmentation. 1. In contrast to prior work on active learning strategies for video understanding, which choose which videos to fully-annotate, the proposed approach chooses specific frames under a specific annotation budget. 2. The first contribution is an approach for active sparse labeling in videos, which utilizes frame-level uncertainty of the model to identify frames that need to be annotated. Motivated by the task of action detection, it also ensures that a) a set of temporally diverse frames are selected (instead of consecutive frames with redundant information), b) that background pixels don’t influence the selection (since the model might be fairly certain for them). 3. The second contribution is a training regime from learning from sparsely-labeled frames, which involves interpolating annotations in the rest of frames, and a loss (max-gaussian weighted loss) that discounts the penalty from wrong predictions at frames that are distant from the ones that have human annotations. 4. The proposed method is evaluated on three datasets: UCF-101 and J-HMDB for spatio-temporal action detection, and YouTube-VOS for video object segmentation, where it is shown that it outperforms other baselines for frame selection given a fixed annotation budget. Strengths ======== 1. The paper addresses an important challenge in the action detection literature, namely how to train systems with fewer labeled data. This is especially important for tasks that require fine-grained annotations, such as bounding boxes or segmentation masks, which is the focus of this paper. Focusing on spatio-temporal video understanding tasks and on selecting frames for annotation instead of whole videos clearly distinguishes this work from prior approaches. The review of prior approaches also seems adequate. 2. The proposed approach is simple, but well-motivated (both intuitively and based on ablations). It builds upon MC-dropout for uncertainty-based active learning, but also takes into account the nature of videos with the proposed Adaptive Proximity-aware Uncertainty, which outperforms other uncertainty-based selection methods (Fig. 4). The proposed loss also outperforms training simply with interpolated annotations or only on annotated frames (Fig. 5). 3. The proposed approach outperforms other frame selection baselines (such as choosing equidistant frames or using approaches that were proposed for images). 4. The paper contains multiple qualitative examples, which clearly demonstrate the issues of related baselines. 5. The proposed approach does not require an actor/object detector. Weaknesses =========== 1. Since this seems to be the first approach for active learning for these particular video understanding tasks (spatio-temporal action detection, video object detection), choosing the right baselines (in Table 1) is crucial. The evaluation could be strengthened by comparing with video selection methods, such as approaches from “What do I Annotate Next? An Empirical Study of Active Learning for Action Localization”[60]. 2. Method is only evaluated for selecting frames up to 9-10% frame annotation percentage: It would make sense to investigate how many annotated frames are needed to reach the fully-supervised performance (if possible). 3. 
Comparison with state-of-the-art: Table 3 is comparing methods with very different levels of supervision and architectures. Although it is nice that the proposed method leads to the best performance, it should be clear in the table that the comparison is not apples to apples. It would be helpful to add more details about the type of annotations (points?, temporal bounds? bounding boxes?) and the performance of each method given full-supervision if available (this will help clarify whether the models trained were much weaker than the capsule-based network). For completeness, the state-of-the-art fully-supervised method for each dataset should also be included. 4. More details about the training loss (margin-loss, binary-cross entropy loss) and the metrics (do they refer to temporal IoUs, spatio-temporal IoUs etc) are needed. Yes <doc-sep>The paper proposes an efficient learning paradigm for the task of video action detection where all the frames in a video need not be labelled with spatio-temporal annotations. The task is important since the cost of annotations increases as the task granularity increases from action classification to action detection. The paper uses an active learning scheme to select most informative frames to annotate, and manages to achieve the performance comparable to fully-annotated approaches while using a fraction of the labelled frames ## Strengths - The task itself is important and the conclusions are worth looking into for any action detection approaches - The approach is simple, intuitive and most likely general enough to be deployed in the setup of fine-grained video tasks ## Weaknesses - Dataset and baseline in Table 1 and 2 - The random baseline method achieves f-mAP@0.5 score of 69.3 with 10% data as compared to the proposed approach’s score of 71.7 - Given the claim that the proposed approach reduces the need for annotation by 90%, the baseline itself seems to be not that far away, do the authors have an estimate of how much annotation does the baseline method need to achieve the performance of fully-supervised approaches? - If the answer is, say less than 50%, does that say anything about the dataset itself? Do the authors think that experimenting with more complex dataset might help bring out the efficacy of their approach? - Non-activity suppression and relative area of spatio-temporal predictions (para on line 154) - The spatio-temporal annotations should have different relative area that they occupy in a frame, i.e, they lie on a spectrum same as MS-COCO object bounding boxes - Since the approach needs to contend with uncertainty being influenced by background, do the authors have some breakdown on performance w.r.t. relative area of spatio-temporal annotations? This is similar to AP_S, AP_M and AP_L evaluation procedure in MS-COCO object detection task - The reason for asking this is to check whether the algorithmic choice in the paper induces some preference over the relative area of spatio-temporal predictions - Annotation cost of far-away frames v/s nearby frames in a video? - The analysis made in the paper assumes a uniform frame annotation cost regardless of where the frame is located w.r.t. annotated frames - My guess is that, in real life, annotating chunks or blocks of frames is easier than annotating frames one-by-one far away from each other in a video. 
This suggests that the cost of annotating frames will be lower near the annotated frames, or the selection process can actually select a block of frames at a given location without sacrificing too much of the cost - This argument partially contradicts the design choices made in the paper about not choosing proximal frames using Gaussian kernels - Do the authors have any comment on the above? Did the authors perform real-life analysis of annotation cost on the UCF and JHMDB datasets and validate their assumption that annotation cost of a frame is same regardless of where it occurs in a partially annotated video? ### Low priority - Use of pre-trained weights on Charades dataset (line 215) - Do the authors have any intuition regarding the selection of Charades dataset? Is the approach expected to perform differently with different pre-training datasets such as Kinetics? - Scalability of the approach to larger datasets - Did the authors experiment with larger datasets to check whether the approach generalizes with scale of the datasets? ### Nice to haves: - It would be nice to have the word “utilize” substituted with “use” throughout the text. It will help reduce cognitive overload. - The supplementary has a lot of sections, but are not cross-referenced in the main paper. It would be nice to have that so the reader can go into details on a particular section if they want to Yes <doc-sep>This paper presents an iterative frame-selection approach to select a subset of the most typical/useful frames from all video frames for reducing the annotation cost for the task of video action detection. In particular, the paper introduces a frame-level scoring mechanism in terms of pixel uncertainty in a video aimed at selecting the most informative frames in the video. In experiments, the proposed approach was evaluated on action detection benchmark datasets UCF-101-24 and J-HMDB-21. Strengths 1. This paper works on the problem of active frame selection for the task of video action detection. Weaknesses 1. The novelty seems incremental. The definition of frame uncertainty for selecting video frames as well as the max-gaussian weighted loss for training the action detection network is either taken from or extended from the existing approaches. The paper does not provide any new insights. 2. The paper settings seem unfair. From the settings, seems like the proposed approach only works for easy datasets with a single object on a static background but not for complex video scenarios. Working on the problem of active frame selection in a simplified/reduced constraint setting seems to be unjustified. Also, the comparison with respect to the state-of-the-art approaches seems to be unfair. For example, this paper uses the I3D encoder head with pre-trained weights from the Charades while [23] uses the I3D network trained on the Kinetics dataset. The paper seems incremental and the experiment setting seems unfair. <doc-sep>This paper describes a novel training paradigm for action detection in videos. Technically, the approach consists in an sparse labelling model that implements a frame-level scoring module that tries to select the most informative/discriminative frames for action detection. With this sparse labels, the paper describes a training approach, with its associated loss function. An experimental evaluation is performed in two publicly available datasets, and the results reveal the benefits of the sparse active labeling technique. 
# Strengths - Writing a scientific article is not easy, but writing it well is really an art. This manuscript has been written in a careful and engaging way for the reader. Ideas are not masked in confusing paragraphs, but are clearly explained. It's a pleasure to be touched to review articles like this. I thank the authors for their efforts. - Section 2 adequately discusses previous works, showing the novelty of the proposed approach. Overall, the application of active sparse labeling for training action detection models in videos has not been previously explored. - The experimental evaluation follows clear experimental setups using publicly available datasets (UCF-101-24 and J-HMDB-21). The comparison with state-of-the-art models is fair and sound. - Active Sparse Labeling idea is worth to be shared with the rest of the computer vision community. The way this paper treats the sparse labels could be of interest for continual learning approaches, where models face to new tasks for which no annotations (or a few) could be at hand. Moreover, the learning with the Max-gaussian weighted loss could be applied to other semi-supervised pipelines, where the loss can offer a mechanism for assigning to the pseudo-labels a sort of confidence measure. - The experimental evaluation is well designed: specific experiments to validate the contributions claimed in the paper. It also reports comparisons with state-of-the-art semi-supervised methods, and a thorough ablation study. The paper defines the state-of-the-art for the semi-supervised setting on the problem, with just 10% of the frames being annotated with ground-truth. # Weaknesses - As it is pointed in the manuscript, as an AL model, scalability is a weakness of the model. We need multiple iterations to select the frames, and this is time consuming. It would be fundamental to know how much time needs the deep learning architecture between iterations. - I found some technical limitations that are worth to be discussed: a) It was not clear to me why does the model need an estimation of the uncertainty per pixel, that has to be obtained utilizing MC-Dropout. To run Eq. 1 should be highly time-consuming. Are there any other uncertainty measures that could be used? Have the authors used any alternative? b) Adaptive Proximity-aware Uncertainty (APU) objective is to select frames with temporal diversity, am I wrong? Eq. 3 presents a combination between the distance and the uncertainty with just a sum. Are both variables scaled? With a lambda of 0.5 the model assumes that the two terms are in the same range of values. An ablation study on the influence of lambda could be interesting. c) In the Non-activity suppression block, I wonder the influence of tau in the performance of the model. d) It is unclear to me how the proposed model is integrated in VideCapsuleNet [9] approach. Section 3.e needs to be extended. # Minor comments: - Please, punctuate all the equations. They are part of the text. Yes. The authors identified as the main limitation of the approach that as most AL is a time consuming model. Clarifications about the runtime are needed, as I pointed above. | Paper was reviewed by four reviewers and received: 2 x Weak Accepts, 1 x Reject and 1 x Accept. Reviewers argued that the task is interesting and approach is simple and well motivated. 
Some of the concerns raised included: (1) lack of additional comparisons and baselines, (2) lack of discussion regarding far-away vs. nearby frames for annotation, (3) lack of novelty, and (4) fairness of the evaluation. Three out of four reviewers were reasonably convinced by the rebuttal and argue for acceptance. [2s1J] remains concerned about (3) and (4). This was carefully considered by the AC. Because no specific papers were provided by [2s1J] to support the claims of lacking novelty, and given the remaining positive reviews, the AC is inclined to accept the paper. |
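For readers unfamiliar with the frame-selection idea debated in the reviews above (per-frame model uncertainty, discounted near already-chosen frames so that the selected set is temporally diverse), the following is a minimal, hypothetical sketch. The greedy loop, the Gaussian width `sigma`, and the function names are assumptions made for illustration, not the paper's exact Adaptive Proximity-aware Uncertainty formulation.

```python
import numpy as np

def select_frames(frame_uncertainty, budget, sigma=5.0):
    """Greedy frame selection: repeatedly pick the frame with the highest score,
    where the score is the model's uncertainty (e.g. aggregated from MC-dropout)
    discounted near already-selected frames by a Gaussian proximity term.
    Illustrative sketch of the idea discussed in the reviews."""
    T = len(frame_uncertainty)
    selected = []
    score = np.asarray(frame_uncertainty, dtype=float).copy()
    for _ in range(budget):
        t = int(np.argmax(score))
        selected.append(t)
        # discount frames close in time to the chosen one, encouraging
        # temporally diverse picks instead of a consecutive block
        proximity = np.exp(-((np.arange(T) - t) ** 2) / (2.0 * sigma ** 2))
        score = score * (1.0 - proximity)
    return sorted(selected)

# toy usage: a 100-frame video with a budget of 5 annotated frames
rng = np.random.default_rng(0)
uncertainty = rng.random(100)
print(select_frames(uncertainty, budget=5))
```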
Summary:
- This paper conducted a detailed study on how loss modeling affects the final performance of the pruned model. The authors first provided a unified view of various pruning algorithms (e.g., Magnitude Pruning, SNIP, OBD, and OBS), which can be categorized into three classes: weight magnitude, and linear and quadratic models of the loss function. In the experiments, the authors seek to answer the questions: 1) how well does each criterion preserve the loss; 2) how does the locality assumption affect the final performance; and 3) how does the loss relate to the final performance? Empirically, the authors found that the quadratic model preserves the loss the best, as expected. Also, the loss after pruning seems not to be strongly correlated with the performance after fine-tuning.

Overall: This paper is well-written and easy to follow. The authors did a great job of unifying the analysis of several pruning algorithms. More importantly, revisiting the loss modeling of network pruning is interesting, and it might invoke further research efforts in better understanding the pruning techniques developed in the past and also inspire researchers in designing improved pruning algorithms. However, I still have the following questions:
- The authors show that the loss after pruning does not correlate strongly with the accuracy after fine-tuning. In Figure 3, the change in the loss ranges from 0 to 5. Can you show the plot with a smaller range of the change in loss, e.g., 0~0.5? I believe a large change in loss means that the pruning results are very close to random, so the comparisons in this regime may not be meaningful.
- For testing the locality assumption, you introduce an L2 penalty on the changes. To me, this is more like a weighted combination of the original pruning criteria and the magnitude pruning criterion. Why not use some other techniques, such as a backtracking line search for determining the pruning ratio at each iteration?
- In equation (5), why do we need to take the absolute value? I think preserving the loss is only meaningful when the network has converged. If the model has not converged, then it would be preferable to prune those weights whose removal will decrease the loss. In this sense, the sign of the loss change should not be ignored.
- The third plot in the second row of Figure 1 shows that OBD with more iterations has a larger change in the loss. Do you have any explanation for this?

Rating:
- I vote for a weak acceptance due to the above reasons. I believe the studied topic in this paper is important and impactful for the pruning community. In the meantime, it would be great if the authors could propose some hypotheses on this phenomenon. To me, preserving the loss is a way to keep the pruned network close to the original solution, and in this regime, it's easy for the optimization algorithm to find a good enough solution. I will raise my rating if the authors can address my concerns well during the rebuttal period.

========================== After rebuttal =================================
Thanks very much for your efforts to address my concerns. I kept my score unchanged. I agree with most of the responses, except for the response to "L2 penalty, backtracking line search for determining the pruning ratio".
This paper is not proposing a practical algorithm but a revisiting, so I don't think the computational cost is a bottleneck in preventing you from using more advanced methods to get more robust conclusions.<doc-sep>Although the paper is covering an interesting topic, much of what's in the paper can be found in other works, and there's not a lot of novelty to the insights, nor a large breadth of experiments to justify it as a survey paper. - The linear and quadratic loss functions are not new. - The enforcing locality part is essentially why the pruning strength annealing schemes exist, this insight is not new and can be found in Zhu&Gupta, or in Bayesian settings like in the Molchanov paper, they suggest annealing from a Bayesian statistical perspective - The fact that this leads to multiple stages of pruning to be a good idea, is also known in the literature this paper cites. - The most meat of the paper is in the 'survey' part of it, investigating the results... but this section feels lacking since there are not a lot of experiments, and the insights of e.g. post-finetuning can be largely found in e.g. Blalock et al. I'm missing deeper insights/analysis here. What is the reason for this? How do we remedy this? What are the characteristics of networks that lead to this behavior? - It would have been great to see a lot more insight/experiments on this topic. The authors throw up a lot of hypothesis and suggestions/ideas throughout the paper, but don't back them up. E.g. If the ||Theta|| term is better of to be constant throughout the pruning procedure... can we somehow make an annealing scheme that keeps ||Theta|| small and constant throughout the pruning process, and show that that works well? This can be proven/shown somehow. I do think the paper is well written; and I encourage the authors to look further into this topic and come up with more novel insights/results and methods to improve pruning Other things/questions/suggestions: - In formulations (1), (2) and (5), (6). Why are the absolute brackets necessary? Especially for models that have not converged, why would you want to stay close to the original loss, as opposed to just decreasing the overall loss of the model? - For most of the discussion in section 3.2, the authors talk about the norm of the delta theta squared being large. But this largeness is relative to the 0th and first order term which the authors glance over. Under 'other considerations' for example, if the weights theta are large, the gradients likely follow suit. Thus the absolute magnitude of the weights might not matter, as it's the relative size of this to the gradient terms that should be considered. - Constraining the step size in section 3.2. Interestingly, if you take the assumptions that for each layer, the hessian of the loss w.r.t. the output is sqrt(lambda/2), and your input distributions to a layer are normalized, you end up with the weaker norm penalty. This is a crude approximation of the second order term, which would give this method a bit more of a theoretical foundation than just a regularization term. - 5.1 Convergence assumption. I don't get this part, both the OBD method, and the linear and quadratic loss terms depend on the locality, so all will also depend on the amount of steps taken. For OBD, as long as you recalculate the Gauss-Newton matrix, I don't see why this method is different when not doing fine-tuning. - 5.1 Convergence assumption. The result cited in appendix A is a very well-know result. 
How could this link explain the OBD performance on the VGG network? 'Could' is not strong enough to make it into a paper. Small tidbits: 3. Do pruning criteria better at preserving the loss lead to better fine-tuned networks? <- this sentence doesn't flow nicely. I would add a 'that' so you have "do pruning criteria that are better ..."<doc-sep>The paper develops two modified versions of the Optimal Brain Damage (OBD) criterion, namely LM and QM (linear and quadratic models), to measure the importance/saliency of model weights. It then compares these together with Magnitude Pruning (MP), and shows that, among these four criteria: 1. for the first three, using iterative pruning to enforce locality of the gradient calculation is important; 2. the best method in terms of training loss before fine-tuning does not necessarily lead to the best validation accuracy after fine-tuning. The paper's empirical investigation is valuable and appreciated. It helps me understand OBD more thoroughly, along with the assumptions behind it. It is also useful to know that using iterative pruning can improve these gradient approximation-based methods because of locality. 1. My primary concern is that the experiments do not seem to lead to useful guidance for future practice. The paper does conclude that, for the first three criteria, using iterative pruning is useful, but these three criteria are rarely used nowadays and MP is the mainstream, and simplest, method. The paper also didn't conclude which of the four criteria is in general best and recommended. From Table 2 it seems to be LM, but the paper did not conclude this way. This is possibly because the experiments are not run extensively on different datasets and architectures. 2. The paper compared the training loss before fine-tuning and the validation accuracy after fine-tuning. But I think the validation accuracy before fine-tuning is also a quantity worth investigating. 3. More importantly, the paper shows that the training loss (before fine-tuning) and validation accuracy (after fine-tuning) are not necessarily correlated, but did not give an explanation of why this could be the case through experiments, or give useful suggestions for achieving a good validation accuracy after fine-tuning. Overall I appreciate the empirical study, but I suggest conducting the experiments on more datasets and architectures, and extracting a useful conclusion to guide future practice. +++++++++++++++++ I appreciate the clarified messages of the paper, and would like to see them emphasized more clearly in the next version of the paper. But due to the limited experimental scale on ImageNet (added in the rebuttal, and, in my understanding, it only verifies one of the multiple observations mentioned in the paper), I'm still leaning towards rejection. I updated my score from 4 to 5.<doc-sep>The authors study the use of loss modeling to maintain model quality when inducing unstructured sparsity in deep neural networks. They study a range of different approximations and modifications that can help improve the quality of the approximation (taking local steps, avoiding large changes in weight magnitude, avoiding assumptions about convergence). The authors conduct a thorough empirical investigation that yields practical observations for the design of future pruning techniques. Pros: The paper is well written and well organized, and the empirical investigations are well done. The observations made by the authors are interesting and practically useful for the development of future pruning techniques. Namely, 1.
That including first order terms in loss approximations can relax the convergence assumption behind some existing loss modeling approaches to enable more flexible application of the pruning algorithm. 2. The quality of the local loss approximation can be improved by taking a series of smaller pruning steps. 3. That loss-preservation does not necessarily translate into accuracy preservation. Cons: It would be nice to see experiments in domains other than computer vision. For example, language modeling with RNNs or Transformers. Results at a wider range of sparsity levels for ImageNet would also have been useful, as it seems possible that these techniques could perform differently for high sparsity (>90%) than they do for moderate sparsity (e.g., the 70% sparsity reported in Figure 4). Comments: Another ICLR 2021 submission is highly relevant to your investigation: https://openreview.net/forum?id=rumv7QmLUue. Their theoretical/empirical results appear to corroborate your conclusions that loss preservation is not necessarily the best metric to optimize for when you care about accuracy preservation. | This paper presents a systematic breakdown and evaluation of several assumptions and algorithmic choices for pruning algorithms. As covered in the reviews, the evaluation and its conclusion offers a timely contribution to the broader community. In particular, this paper uncovers the observation that precisely modeling the loss (and hence minimizing the drop in loss after pruning) may not in fact yield improvements in pruning. This is an important observation as the community continues to propose new techniques with the justification that their improved performance results from improved loss modeling. A significant concern on the part of the reviewers is the limited practical prescription offered by the paper. Specifically, the paper does not propose a new algorithm. It also doesn’t necessarily identify why this interesting phenomenon emerges. For example, to the latter, it doesn't articulate what features of the network or loss landscape is indicative of this property. Ultimately, the decision for this paper is very challenging given the reviews. Whether or not a phenomena is interesting is an inherently subjective consideration. Moreover, without a clear technical prescription or path forward that can be evaluated on its merits, the reviews fall into two categories of either 1) those that --- by my estimation --- felt personally inspired by the work and 2) those that could not intuit the impact of the observation. A significant complication is that the narrative of the paper includes claims around addressing locality and convergence which, if not read with the understanding that contributions here are simply a synthesis of current work, appear as claims to novelty (when these techniques have no or limited novelty). This is a source of contention in at least one review. Given this partition, my recommendation is Reject. For future versions of this paper, I recommend that the authors narrow the claimed contributions to exclusively focus on the final observation that modeling the loss may not be as important as thought. The work in this paper on developing the ideas around convergence and locality can, instead, be cast as efforts to provide best available baselines for the topline claim. 
I believe these changes will eliminate a significant source of distraction, enabling readers (and reviewers) to avoid any attempt to evaluate the novelty of the locality and convergence narratives, which have indeed been considered in other work in various ways. An additional step that I highly recommend for this paper to unambiguously clear the bar is to identify with what the performance of pruning does correlate. Appendix C.4 provides an evaluation of two recent gradient preservation methods. Unfortunately, the paper did not present if, instead, the preservation of the gradient correlated with additional performance. In essence, the paper need not solve the mystery by providing a SoTA algorithm that exploits the right features of the problem for pruning. However, it would be valuable to provide a roadmap for future directions along with an articulation of the challenges down those directions. |
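To make the three families of pruning criteria discussed in these reviews concrete (weight magnitude, a linear/first-order model of the loss change, and a quadratic/OBD-style model), here is a generic per-weight saliency sketch. It assumes a diagonal-Hessian approximation and is not the paper's exact equations (1)-(6); note how dropping the first-order term recovers the classic convergence assumption the reviews debate.

```python
import numpy as np

def saliency_scores(theta, grad, hess_diag):
    """Per-weight importance for removing a single weight (delta_i = -theta_i),
    under the three families of criteria discussed in the reviews. The quadratic
    score uses a diagonal-Hessian (OBD-style) approximation; classic OBD assumes
    the model has converged (grad ~ 0), which reduces it to 0.5*H_ii*theta_i^2."""
    magnitude = np.abs(theta)                                          # magnitude pruning
    linear = np.abs(-grad * theta)                                     # first-order model of the loss change
    quadratic = np.abs(-grad * theta + 0.5 * hess_diag * theta ** 2)   # adds the second-order term
    return magnitude, linear, quadratic

def prune_by_score(theta, score, fraction):
    """Zero out the `fraction` of weights with the smallest saliency."""
    k = int(fraction * theta.size)
    idx = np.argsort(score)[:k]
    pruned = theta.copy()
    pruned[idx] = 0.0
    return pruned
```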
This paper proposes two datasets: VCE (samples containing emotion label distributions) and V2V (containing pairwise comparative labels in terms of degree of pleasantness). These datasets can be used to predict the emotional responses and wellbeing of viewers. The authors train standard CNNs on these datasets and observe that while these networks perform well, they leave a significant performance gap to be filled. Although not altogether novel (since similar small-scale datasets exist), these datasets are the largest of their kind, which would allow end-to-end representation learning. In my opinion, building such datasets would require significant effort, which adds to the value/contribution of these datasets. The authors have diligently designed the dataset collection/annotation protocols, although please see the Weaknesses section. Annotators were allowed to use audio signals in order to annotate, so the labels might not be guaranteed to have been grounded in the video signal alone. The authors did, however, ask annotators to take note of samples for which they relied heavily on the audio signal. I think making such choices might be somewhat ambiguous for annotators, and such ambiguity might ultimately be reflected in the annotations as well. Can the authors please share their views in this regard? Also, the authors remove the audio signal from the final dataset. Why not provide it to allow multimodal analysis if desired? In my opinion, removing the audio signals just hurts the overall utility of the dataset, unless the authors want to get another publication (saying this with all due respect) by later enhancing this dataset with an audio-included version. I think the authors should provide the accompanying audio. <doc-sep>For understanding how viewers feel while watching videos, the authors introduce two large-scale datasets for predicting the emotional state and wellbeing of viewers directly from videos. The dataset contains 60,000 videos with human annotations for 27 emotion categories. - The paper highlights the motivation and idea in a very efficient manner. - This paper introduces two large datasets for understanding human emotions after watching a video. - The backgrounds of the 400 annotators are not mentioned in the paper; if the majority of them are from a single locality/country, then the emotions they annotate can be biased, as a population from one background can feel completely differently from a population from a different background. - If a video contains mixed emotions, like first sad then happy, such videos will have very subjective labels depending on the person. <doc-sep>The authors introduce two datasets, Video Cognitive Empathy (VCE) and Video to Valence (V2V). The datasets contain over 60,000 short videos that were annotated with emotional content and general emotional valence by crowdworkers. The authors train machine learning models on the datasets and demonstrate that some models reach good performance, but that there is also room left for further improvement. The utility of the datasets is very clear, both for direct application as well as for more foundational AI research. The datasets are large and constitute a major contribution. I could not identify any significant weaknesses in this work. | Overall, the paper captures a really timely and increasingly important topic.
Although the authors mention that they want to make their dataset particularly useful for academics/future research, I believe that the topic is of great interest to companies, making the raised ethical considerations even more important. A critical reflection upon possible ‘unintended uses’, as well as upon possible biases in the data, can’t be emphasised enough. However, I appreciated that the authors have done a fantastic job in their response and in strengthening their article. Moreover, all reviewers recognise the utility and relevance of the proposed dataset, and the authors did provide satisfactory responses to all raised concerns/comments. I believe this paper will stimulate some interesting and hopefully critical reflections on how to use (or not use) the dataset and on future improvements to overcome dataset biases. |
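As a purely illustrative note on how annotations of this kind are commonly consumed in training, the sketch below pairs a KL objective against the annotators' emotion label distribution (VCE-style) with a margin ranking loss for pairwise pleasantness comparisons (V2V-style). Both loss choices and all names are assumptions, not necessarily the authors' training setup.

```python
import numpy as np

def emotion_distribution_loss(pred_logits, target_dist, eps=1e-8):
    """KL(target || prediction) against the annotators' empirical distribution
    over the 27 emotion categories; one common way to use label *distributions*
    rather than a single hard label (an illustrative choice)."""
    p = np.exp(pred_logits - np.max(pred_logits))
    p = p / p.sum()
    t = np.asarray(target_dist, dtype=float)
    t = t / t.sum()
    return float(np.sum(t * (np.log(t + eps) - np.log(p + eps))))

def pairwise_valence_loss(score_a, score_b, margin=1.0):
    """Margin ranking loss for a V2V-style comparison in which video `a`
    was judged more pleasant than video `b`."""
    return max(0.0, margin - (score_a - score_b))
```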
The authors propose an approach to estimating causal effect for populations for which only observational data exists, given that experimental data exists for a related population. **Strengths** * The authors focus on how to exploit situations in which multiple data sets (both experimental and observational) are given, and increasingly realistic situation for problems of interest in social science, medicine, and other areas. * The authors focus not just on obtaining point estimates, but on obtaining confidence intervals on those effects. * The paper is well-grounded theoretically but also features an empirical demonstration of the approach on a well-known data set. * The paper is clear about its assumptions. **Weaknesses** * The authors refer to observational estimates as "valid" or "invalid". Any estimate will have error due to bias and variance. More precise language would focus on the bias and variance of a given estimator, rather than a binary determination of "valid" or "invalid". * The experiments in Section 4 are more of a demonstration than a convincing empirical evaluation of the proposed approach. The experiments employ only a single data set (IHDP), a practice which recently has been strongly critiqued (e.g., Curth et al. 2021). The approach used by the authors on IHDP could be applied to other RCT data (see Gentzel et al. 2021), but is not. The result is that readers are left with little empirical evidence that the method works in practice, and theoretical treatment that makes a large set of assumptions that may (or may not) be valid. * The homogeneity of causal effects is a standard (though highly suspect) assumption of most methods for estimating average treatment effect from observational studies. The authors authors assume that average treatment effect varies among subgroups, but that enough homogeneity exists that valid extrapolations can be made. However, RCTs and observational studies often differ in ways that go far beyond the randomization of treatment assignment. Randomized experiments often involve other artificial conditions that are not replicated in observational studies. Meanwhile observational studies are often substantially different from each other (and very different from experimental settings). * For most of the paper, the authors oddly refrain from naming the proposed algorithm. This leads to odd linguistic constructions such as a section headers that read "Implementation and Evaluation of Meta-Algorithm" and "Meta-algorithm produces confidence intervals that cover the true GATE with nominal probability". Then, in section 4.2, the authors name the algorithm (Extrapolated Pessimistic Confidence Sets (ExPCS)). Use the name early and throughout the paper. * The authors use of the term "meta-algorithm" seems unnecessary. * The use of hypothesis tests, particularly given the large literature on the limitations of this basic framework, is open to criticism. To their credit, the authors are very clear about many of the limitations of their proposed approach. Clearly, the authors' approach requires that both experimental and observational data are available for a given (super) population of interest. The authors also assume that at least one observational estimate (among several) is "valid" across all subpopulations. This is a large assumption, given that any given estimate may have high bias or variance for any given subpopulation, and that those error properties are likely to vary substantially across different subpopulations. 
Finally, the authors assume that "every observational dataset has support for all groups." This seems unlikely. Furthermore, as the authors state clearly: "...we may reject an observational estimator due to failures in transportability, even if it yields unbiased estimates of the extrapolated effects." Again, the authors are fairly clear about the limitations and assumptions of their proposed approach. This makes an extensive empirical evaluation all the more important as a demonstration that, even with these assumptions and limitations, the approach can produce accurate estimates of causal effect. Unfortunately, the empirical evaluation is limited to a single data set with known issues. <doc-sep>This paper proposes a meta-analysis for reliably extrapolating group level causal effects from multiple observational datasets when experimental data is available for some subgroups. The first part involves falsifying effect estimates from observational data that are biased. This is based on hypothesis testing where the statistic developed compares effect estimates from observational data to that of the RCT for groups that have experimental data. Under the assumption that at least one observational data exists for each group providing a consistent estimator (and assuming that all estimates are pointwise asymptotically normal), the statistic allows to reject biased estimates efficiently. Following this confidence intervals are generated using a simple algorithm that conservatively estimates the intervals based on the intervals of the observational data. Proposed method is evaluated on semi-synthetic IHDP data compared to simple meta-analysis and baselines that do not consist of falsification. ------------------------------------ Post rebuttal update --------------------------------------------------------------------------------------------- I have read the full author response and I believe they address my major concerns with the paper. I have updated the score based on the response. Strengths: 1. The proposed falsification method is interesting although feels impractical due to concerns/clarifications I mention below. 2. The paper is well written, assumptions are clearly stated. 3. The simplicity of the approach is appealing. 4. Interesting experimental results. Weaknesses: 1. It is really unclear how often RCT data could be available that could provide consistent group level effect estimates. This implies that the design of the RCT itself needs to be explicitly targeted for estimating group effects. Hence I am not sure how practical this approach really is. 2. Although the IHDP results are interesting, and evaluation specific to the data is thorough, I believe just one semi-synthetic evaluation is fairly limited to be convincing. I believe the authors have adequately described limitations of their work. Based on my clarification questions, please consider adding comments on practicality in the limitations section. <doc-sep>Randomized controlled trials are high-standard with inclusion criteria in recruiting patients but may fail to include some heterogeneous patient subgroups in the full population. On the contrary, large-scale observational studies are likely to contain more diverse patient subgroups but they can be invalid to use due to hidden bias from some unmeasured confounders. The idea of this paper is to first validate the estimators from observational studies by comparing them with the estimator based on RCTs. 
This comparison is made on the patient subgroups that are observed in the RCTs. This step filters out the observational studies and their estimators that are inconsistent with the RCTs. After that, the authors use the non-rejected estimators and construct conservative confidence intervals to extrapolate the treatment effects for the subgroups that are not observed in the RCT.

Strengths:
1. The paper recognizes the advantages and disadvantages of RCTs and observational studies and lets them complement each other in the method proposed to estimate group average treatment effects. The idea is interesting and well motivated!
2. I am convinced by Assumption 2.3, that at least one observational estimator is asymptotically normal and consistent for both the validation and extrapolated effects, and Assumption 2.2, which says that the RCT estimator is also consistent. I think the assumptions are necessary so that, at least in large samples, we can pick out the consistent observational estimator to estimate both effects.
3. The experiments uncover both the power and the conservativeness of the proposed method. I still think the proposed idea is useful in practice.

Weaknesses:
1. The groups $I_R$ and $I_O$ are given instead of being learnt from the data. I don't think this is often the case in practice. We may know something about the effect heterogeneity, but it is too strong to assume our knowledge is close to the ground truth.
2. The proposed method (Algorithm 1) is not very novel and a bit trivial; it includes some standard techniques we often use in practice, e.g. constructing asymptotically normal estimators and doing a t-test. The authors do not do much to make the method powerful, i.e., less conservative. I feel all the theoretical results are expected and follow from the DML literature.
3. The method looks quite conservative in Figures 3 and 4. The interval width is similar to that of the simple Union baseline, which reports the union bounds of the confidence intervals of all observational studies, with no falsification procedure.

N/A <doc-sep>Randomized controlled trials (RCTs) are considered the gold standard for studying the causal relationship between treatments and outcomes, and Clinical Practice Guideline (CPG) policy recommendations are based on experimental results from RCTs. However, due to the cost (time and money) and ethical and methodological considerations, the populations in RCTs are narrow. Hence, the treatment effects outside of the support are missing. An alternative approach is to estimate treatment effects using observational data. However, the treatment effects estimated from these historical data might be biased due to a failure to control for confounding or the existence of selection bias in the data. This work considers the problem of providing unbiased treatment effects, along with confidence intervals, outside of the support of RCTs from existing observational datasets. Given RCT data and multiple observational studies, this work first provides a hypothesis-testing technique to remove the observational estimates that are not consistent with the RCT estimate. Under the assumption (Assumption 2.3) that there is at least one observational estimator that is a consistent estimator of the RCT, i.e., for which the strong ignorability assumption holds, confidence intervals on the extrapolated treatment effects outside of the support of the RCT can then be provided.
Through the experimental validation on the IHDP dataset, this work shows that the meta-algorithm can provide confidence intervals that cover the true GATE with a narrow width.
Strengths:
1. The paper is very well written: the problem formulation, notation, assumptions, and derivations are clearly provided.
2. Empirical results are compared with existing meta-analyses, and the comparison is comprehensive.
Weaknesses:
1. No real-world dataset is provided.
This work provides a meta-algorithm that can report unbiased causal effects outside of the support of the RCT with confidence intervals. This work's only potential negative social impact might arise when a biased estimator from an observational study that is not consistent with the RCT estimator is falsely accepted. The authors could provide more examples of what would happen in this case. | The authors propose an approach for estimating causal effects when both observational and limited experimental data exist. The authors propose falsifying effect estimates from observational data before using the effect estimates on other populations. This is an important idea that may improve the reliability of causal inference. The authors provide confidence intervals for the proposed procedure. The considered problem is of clear importance, and the simplicity of the approach is appealing (cPQd). There have been some concerns about the limited empirical evaluation (icYd). The authors provided additional numerical evidence during the rebuttal period. This evidence should be added to the appendix for the camera-ready version. Note: The reviewer most critical of the paper (rating 4, icYd) does not seem to have updated their score post-rebuttal. |
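A rough sketch of the falsify-then-extrapolate recipe as the reviews describe it: z-test each observational estimator against the RCT on the validation subgroups, discard the estimators that are rejected, and report a conservative union of the survivors' intervals on the extrapolated subgroups. The test statistic, the Bonferroni correction, and the names here are my assumptions; the paper's actual procedure may differ in detail.

```python
import numpy as np
from scipy.stats import norm

def not_falsified(obs_est, obs_se, rct_est, rct_se, alpha=0.05):
    """Keep an observational estimator only if none of its estimates on the
    RCT-supported (validation) subgroups differ significantly from the RCT
    estimates, using two-sided z-tests with a Bonferroni correction.
    Sketch of the falsification step, not the authors' exact statistic."""
    z = (np.asarray(obs_est) - np.asarray(rct_est)) / np.sqrt(
        np.asarray(obs_se) ** 2 + np.asarray(rct_se) ** 2)
    pvals = 2.0 * norm.sf(np.abs(z))
    return bool(np.all(pvals > alpha / len(pvals)))

def conservative_interval(survivor_est, survivor_se, alpha=0.05):
    """Union of the surviving estimators' normal confidence intervals for one
    extrapolated subgroup; conservative by construction."""
    z = norm.ppf(1.0 - alpha / 2.0)
    lo = min(e - z * s for e, s in zip(survivor_est, survivor_se))
    hi = max(e + z * s for e, s in zip(survivor_est, survivor_se))
    return lo, hi
```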
============================================ Final recommendation after rebuttal The authors gave a good rebuttal, and the current version of Fig 5 and the new Fig 6 are making the paper stronger in my opinion. However, I will stick to my previous rating as: a) the main weakness, the tradeoff in performance for few vs overall makes the contributions weak, especially since the cRT baseline is as simple as finetuning only the classifers with balanceed sampling. I would expect gains over that b) Fig 6 further shows the marginal gains over a weight-sharing baseline and makes the basic approach questionable c) there are no experiments on real-life long tailed datasets - a note here about iNaturalist: the argument the authors make about iNat makes some sense to me, and I want to thank them for replying. But this is exactly why we should test on real long-tail datasets, ie they dont behave as the artificially created ones. ============================================ The paper presents an interesting idea, transferring of knowledge between head and tail classes at the classifier level, ie create stronger classifiers for the tail classes by linear combinations of a tail classes' nearest neighbour classifiers with the current "weak" one. In general, although interesting conceptually, the approach doesn't seem to work better than the baselines overall, and the paper doesnt offer any further interesting analysis or insights for long-tailed recognition that would make the performance part be negligible. Strengths: *Long-tail recognition and learning from imbalanced data is an interesting, realistic and important problem * The authors propose an approach that helps learn better few shot classifiers increasing the performance on the tail classes, trading it off with slightly reducing performance overall Weaknesses: * The authors compare to strong baselines, and do indeed increase performance for tail classes, but in the end they harm overall performance. To get the "10%" margin mentioned in the abstract, they also reduce overall performance by 1.5% and head-class performance by around 3%. * It seems there is a tradeoff here, where to learn better tail classifiers you hurt head class performance. Would another hyper-parameter setup give the same med/many performance (not harm) and increase few shot performance? Maybe baseline performance (as horizontal lines for few/med/many/all) could be included in the hyperparameter exploration plots. * The authors do not present results on any real long-tail dataset, eg iNaturalist, but only on two smaller and artificially created LT datasets (which are the standard, but also many times accompanied with results on a real long-tailed dataset like iNat or faces) * The clipping hyperparameter $\\gamma$ seems to control performance a lot, more than the number of classifiers combined (K). Controlling this with a clamping parameter seems heuristic without further discussion. Although the parameters are ablated, and seem relatively stable, it is not discussed why clamping is so important and why "we consider γ to be our ‘control’ for performance trade-off". * Given that the input to the added 2-layer network Alpha-net is the ordered list of the K closest classifiers and the weak classifier, it is unclear why the authors choose to have one $A_k$ model per class. What would performance be if there was a single network for all classes? This is a missing baseline that is kind of needed to justify the added computational complexity. 
Notes:
* From the "three advantages" enumerated in Sec 1, I don't understand how the second is an advantage over other approaches; to me it is more of a way to make this approach work. Same with the third advantage; the coefficients are learned for this method. I don't understand how it is an advantage that they are learned more adaptively when related works don't have those coefficients in the first place.
* It is unclear what Figure 4 offers and it is hard to comprehend. Some more analysis (or a citation) explaining what the kernel density estimator is would be helpful.
* Do all $\alpha_i$'s for a weak classifier sum to 1 after clamping?<doc-sep># Summary
This paper focuses on how to transfer knowledge between classes. The authors propose to transfer classifiers instead of features. They propose to linearly combine the classifiers from rich classes to construct more robust classifiers for rare classes. The combination weights are predicted by a learned neural network for each rare class. The experimental results on two benchmark datasets outperform some existing methods.
# Strengths
- While the idea of constructing classifiers by a linear combination of other classifiers has been proposed for several different problems (e.g., zero-shot learning), its application to long-tailed classification seems to be novel.
- The idea is clean and clear. The approach part is well-written.
- The proposed method improves the performance of tail classes.
# Weaknesses
1. There is no comparison to existing methods (proposed for other problems) that linearly combine classifiers. Note that those methods can be applied to long-tailed classification with minimal modifications. For example, for ZSL, due to the lack of visual information for unseen classes, the combination weights can only be estimated from the semantic descriptions. Here, with the W_j for each class, one baseline is thus to replace the semantic descriptions by W_j or even visual features for estimating the combination weights. The authors should compare to those methods.
2. While the approach is mostly clearly written, the need for independent alpha-net modules for each tail class is unclear. Note that an alpha-net is a two-layer network with lots of parameters, and for tail classes, there are only a few labeled data instances. Learning an alpha-net for each class may be vulnerable to over-fitting.
- There is no equation for the training loss of the alpha-nets. It would be great to provide it. If I understand correctly, the loss is still a softmax loss across all the classes (head and tail), but only the alpha-net parameters (for the tail classes) are being learned.
- The alpha-nets seem to be learned from the data that have been used to train the classifiers in the first stage. As neural networks can usually achieve very low training error, alpha-nets with a one-hot output (i.e., only use V^j_0) may already lead to very high training accuracy. I'm not sure if alpha-nets learn anything meaningful.
- An ablation study on the algorithm design, for example, using a shared alpha-net for all the classes, should be included.
3. The related work and experimental comparison are insufficient. Only "one" paper has been compared against in Table 1 and Table 2. There have been many papers published in CVPR 2020 and ECCV 2020. The authors, however, cited NO papers published in 2020.
4. Can the authors provide more discussion on why the performance on medium, many, and all classes drops in comparison to the baselines?
For now, it seems that the performance improvement comes simply from trading predictions / adjusting the classifier strengths among classes: for example, increasing the classifier "norms" of tail classes.
# Minor
- The figures and captions can be improved. Specifically, Figures 1 and 2 are not self-contained: it is hard to understand the figures without looking at the main text.
# Justifications
While the proposed idea seems novel for long-tailed classification, the paper lacks comparisons to existing methods and comparisons to similar algorithms proposed for other problems (with minimal changes). There is no ablation study on why we need an alpha-net module for each class. There is no overall performance gain, making it hard to tell if alpha-nets really improve classifiers or simply trade predictions / adjust the classifier strengths among classes. I thus give a score of 3.
----------------------------- Post rebuttal -----------------------------
I read the authors' rebuttal and I greatly appreciate their efforts. The authors have done many more experiments and I would like them to incorporate these into the manuscript and modify it, even the methods, accordingly. I think these new materials can greatly strengthen the paper.
1) It seems that ZSL with the original classifier involved is quite strong (this could not happen in ZSL as there is no original classifier for the unseen classes). I would suggest that the authors further investigate this for a detailed comparison. These methods may even simplify the authors' method, and a connection to ZSL could strengthen the paper. For instance, (Changpinyo et al., 2016) showed that their method can outperform [1] and it will be interesting to have some further comparison.
2) It's nice that the authors compare the shared and non-shared alpha net. I still doubt why the non-shared alpha net won't over-fit given that there are only a few labeled data instances. A shared alpha net might be more suitable for robustness.
3) There is one difference to Kang's method. Kang's first stage stopped earlier, so the tail classifiers have not converged. Did the authors do the same thing?
4) One method that can simply trade off the accuracy is Kang's method. I think you can tune their hyperparameter to get a higher tail accuracy. The question then will be, what will their head accuracy be? Without a more grounded comparison among methods, my question still remains unsolved.
5) Besides ImageNet-LT and Places-LT, there are several CVPR/ECCV papers that outperform Kang's paper on CIFAR and iNaturalist but do not report on these two datasets.
I have increased the score to 4, but I think the paper needs significant work to incorporate my comments as well as the other reviewers' comments to be ready for publication. <doc-sep>Significance: This article is a useful contribution to transfer learning for tasks where there is not enough data available, showing a modest improvement over the other methods that employ transfer learning in the classifier space.
Novelty: The main contribution of this paper is the improvement of weak classifiers for classes without enough data, by combining the weak classifiers with the most relevant strong classifiers. This method finds the k closest strong classifiers to the weak classifier and then combines the weak classifier with these existing classifiers, without creating new classifiers or networks from scratch.
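For concreteness, a minimal sketch of this combination step as I understand it from the reviews; the softmax normalization of the coefficients and the optional clamping are my assumptions (whether the coefficients still sum to 1 after clamping is exactly the question raised in the first review), and none of the names come from the authors' code.

```python
import torch
import torch.nn.functional as F

def combine_classifiers(w_weak, w_neighbours, alpha_logits, gamma=None):
    """w_weak: (d,) weak tail classifier; w_neighbours: (K, d) its K nearest
    strong classifiers; alpha_logits: (K+1,) coefficients predicted by the Alpha-Net."""
    alphas = F.softmax(alpha_logits, dim=0)      # assumed: coefficients sum to 1
    if gamma is not None:                        # assumed: clamp, then renormalize
        alphas = alphas.clamp(max=gamma)
        alphas = alphas / alphas.sum()
    stacked = torch.cat([w_weak.unsqueeze(0), w_neighbours], dim=0)  # (K+1, d)
    return (alphas.unsqueeze(1) * stacked).sum(dim=0)                # stronger (d,) classifier
```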
Potential Impact: The approach presented in this paper is well evaluated in computer vision, but potentially useful in many other settings.
Technical Quality: The technical content of the paper appears to be correct.
Presentation/Clarity: The paper is generally well written and structured clearly. While this method is a clear winner on Few classes, it does not perform as well on Medium classes, as shown in Table 1. An explanation of this issue could strengthen the paper.
Reproducibility: The paper describes all the algorithms in full detail and provides enough information for an expert reader to reproduce its results. I would suggest the authors release their code on GitHub or another site to help other researchers reproduce their results.<doc-sep>This paper addresses the well-known long-tail classification problem. The argument made here is that most of the existing methods attempt to transfer knowledge in the feature space, which is true. Based on this motivation, the paper proposes a method to do the knowledge transfer in the model space instead. The idea is to apply a K-NN method in the model space to pick a group of strong classifiers, trained on head classes with sufficient training samples, that are closest to a weak classifier in the model space, and then take a linear combination of this group of strong classifiers and the weak classifier to form a stronger classifier for the tail classes, where only a few samples are available for training; the linear combination weights are learned by a simple neural network, called Alpha Net. Two datasets artificially truncated from ImageNet and Places, respectively, which were also used in peer work in the literature, were used to report the evaluations. The paper reads very well, except for a few grammatical errors. The presentation is clear and easy to follow. My major comments follow. The long-tail learning problem is not new, and the idea of knowledge transfer in the model space is not new either (e.g., Kang et al. 2019, referenced in the paper; in fact that reference was updated in 2020 with better results beating what this paper reports). Consequently, the novelty of this work is rather limited. Further, I have a strong reservation in considering the proposed method a technically sound approach. Conceptually, the idea of combining a group of closest strong classifiers with a weak classifier to form a stronger classifier in the model space rests on a proximity presumption. Regretfully, unlike in the feature space, where the proximity presumption is valid in general (unless the feature points are located close to the class boundaries), I am not convinced that the same proximity presumption is valid in the model space, as it is easy to give many counterexamples. Regarding the experiments, I would like to mention that the authors of the closest competitor, Kang et al. 2019, referenced in the paper, have updated their work on arXiv this year with results beating what was reported in the paper. Also, they used more datasets to evaluate their method than the two datasets used in the paper. So it is difficult to argue that the proposed method represents the state of the art. Overall, I am not convinced that the proposed method is technically sound and advances the state-of-the-art literature. --- I appreciate the authors' effort in responding to my comments. But the arguments in their response appear to be in conflict. Overall, I am still not convinced by their arguments. So I stay with my original review.
| The paper proposes to create models that address tail classes by computing a linear combination over models (concatenated weight vectors). Reviewers had grave concerns about the technical contribution, including justification of linear averaging of non-linear models, and about the experimental results, which improve on tail classes but hurt overall performance. As a result, the paper cannot be accepted to ICLR. |
This paper proposes two methods for instance-wise feature importance scoring, which is the task of ranking the importance of each feature in a particular example (in contrast to class-wise or overall feature importance). The approach uses Shapley values, which are a principled way of measuring the contribution of a feature, and have been previously used in feature importance ranking. The difficulty with Shapley values is that they are extremely (exponentially) expensive to compute, and the contribution of this paper is to provide two efficient methods of computing approximate Shapley values when there is a known structure (a graph) relating the features to each other. The paper first introduces the L(ocal)-Shapley value, which arises by restricting the Shapley value to a neighbourhood of the feature of interest. The L-Shapley value is still expensive to compute for large neighbourhoods, but can be tractable for small neighbourhoods. The second approximation is the C(onnected)-Shapley value, which further restricts the L-Shapley computation to only consider connected subgraphs of local neighbourhoods. The justification for restricting to connected neighbourhoods is given through a connection to the Myerson value, which is somewhat obscure to me, since I am not familiar with the relevant literature. Nonetheless, it is clear that for the graphs of interest in this paper (chains and lattices) restricting to connected neighbourhoods yields substantial savings. I have understood the scores presented in Figures 2 and 3 as follows: For each feature of each example, rank the features according to importance, using the plugin estimate for P(Y|X_S) where needed. For each "percent of features masked", compute log(P(y_true | x_{S \setminus top features})) - log(P(y_true | x)) using the plugin estimate, and average these values over the dataset. Based on this understanding, the results are quite good. The approximate Shapley values do a much better job than their competitors of identifying highly relevant features based on this measure. The qualitative results are also quite compelling, especially on images, where C-Shapley tends to select contiguous regions, which is intuitively correct behavior. Comparing the different methods in Figure 4, there is quite some variability in the features selected by using different estimators of Shapley values. I wonder whether there is some way to attack the problem of distinguishing when a feature is ranked highly because its (exact) Shapley value is high versus when it is ranked highly as an artifact of the estimator? <doc-sep>This paper provides new methods for estimating Shapley values for feature importance that include notions of locality and connectedness. The methods proposed here could be very useful for model explainability purposes, specifically in the model-agnostic case. The results seem promising, and it seems like a reasonable and theoretically sound methodology. In addition to the theoretical properties of the proposed algorithms, they do show a few quantitative and qualitative improvements over other black-box methods. They might strengthen their paper with a more thorough quantitative evaluation. I think the KernelSHAP paper you compare against (Lundberg & Lee 2017) does more quantitative evaluation than what's presented here, including human judgement comparisons. Is there a way to compare against KernelSHAP using the same evaluation methods from the original paper?
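To make the masking-based evaluation described in the first review concrete, here is a minimal sketch; the helper names and the masking baseline are mine, and `model_logp` stands in for whatever plugin estimate of log P(y|x) is used.

```python
import numpy as np

def log_odds_drop(model_logp, x, y_true, importance, mask_frac, baseline=0):
    """importance: per-feature scores (e.g. estimated L- or C-Shapley values).
    Masks the top-scored features and returns the change in log-probability."""
    x = np.asarray(x)
    k = max(1, int(round(mask_frac * x.size)))
    top = np.argsort(importance)[::-1][:k]   # indices of the top-k features
    x_masked = x.copy()
    x_masked[top] = baseline                 # e.g. zero pixel / pad token
    return model_logp(x_masked, y_true) - model_logp(x, y_true)

# Averaged over a dataset, a larger drop at a given mask fraction suggests the
# attribution method found features the model actually relied on.
```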
Also, you mention throughout the paper that the L-Shapley and C-Shapley methods can easily complement other sampling/regression-based methods. It's a little ambiguous to me whether this was actually something you tried in your experiments or not. Can you please clarify?<doc-sep>The paper proposes two approximations to the Shapley value used for generating feature scores for interpretability. Both exploit a graph structure over the features by considering only subsets of neighborhoods of features (rather than all subsets). The authors give some approximation guarantees under certain Markovian assumptions on the graph. The paper concludes with experiments on text and images. The paper is generally well written, albeit somewhat lengthy and at times repetitive (I would also swap 2.1 and 2.2 for better early motivation). The problem is important, and exploiting graphical structure is only natural. The authors might benefit from relating to other fields where similar problems are solved (e.g., inference in graphical models). The approximation guarantees are nice, but the assumptions may be too strict. The experimental evaluation seems valid but could be easily strengthened (see comments).
Comments:
1. The coefficients in Eq. (6) could be better explained.
2. The theorems seem sound, but the Markovian assumption is rather strict, as it requires that a feature i has an S that "separates" over *all* x (in expectation). This goes against the original motivation that different examples are likely to have different explanations. When would this hold in practice?
3. While considering chains for text is valid, the authors should consider exploring other graph structures (e.g., parsing trees).
4. For Eqs. (8) and (9), I could not find the definition of Y. Is this also a random variable representing examples?
5. The authors postulate that sampling-based methods are susceptible to high variance. Showing this empirically would have strengthened their claim.
6. Can the authors empirically quantify Eqs. (8) and (9)? This might shed light as to how realistic the assumptions are.
7. In the experiments, it would have been nice to see how performance and runtime vary with increased neighborhood sizes. This would have quantified the importance of neighborhood size and robustness to hyper-parameters.
8. For the image experiments, since C-Shapley considers connected subsets, it is perhaps not surprising that Fig. 4 shows clusters for this method (and not others). Why did the authors not use superpixels as features? This would have also let them compare to LIME and L-Shapley. | The paper presents two new methods for model-agnostic interpretation of instance-wise feature importance.
Pros: Unlike previous approaches based on the Shapley value, which had exponential complexity in the number of features, the proposed methods have linear complexity when the data have a graph structure, which allows approximation based on graph-structured factorization. The proposed methods present solid technical novelty in studying the important challenge of instance-wise, model-agnostic, linear-complexity interpretation of features.
Cons: All reviewers wanted to see more extensive experimental results. The authors responded with most of the requested experiments. One issue raised by R3 was the need for comparing the proposed model-agnostic methods to existing model-specific methods.
The proposed linear-complexity algorithm relies on the Markov assumption, which some reviewers noted may be invalid in practice, but this does not seem to be a deal breaker, since it is a relatively common assumption to make when deriving a polynomial-complexity approximation algorithm. Overall, the rebuttal addressed the reviewers' concerns well enough, leading to increased scores. Verdict: Accept. Solid technical novelty with convincing empirical results.
This paper considers the ridesharing matching problem and builds its solution upon NeurADP. The main contribution over NeurADP is that the action value of each agent (vehicle) takes into account the impact of its action on the neighboring agents within the same cluster, which is obtained through clustering of the intersections of the road network. The impact of an agent's action on its neighbors is measured by the neighboring agents' independent values weighted by their action probabilities conditional on the agent's action. Benchmarking was performed on the NYC taxi data set against NeurADP. Results for different values of the tolerance for delay, capacity, and number of vehicles are reported, and a significant improvement is demonstrated in all cases.
Strengths
- Ridesharing (pooling) is a challenging domain for RL/ADP. The paper presents an incremental step forward on top of NeurADP for the matching problem.
- The paper proposes a way to incorporate agent interactions with neighbors without relying on joint action values, which suffer from the curse of dimensionality.
- Different values of the problem configuration parameters are investigated in the experiments.
- The experiment dataset is public, so reproducing the results is possible.
Weaknesses and comments
- The paper uses the number of completed requests as the problem objective. In practice, a common measure for ride-pooling is the ratio between the sum of completed individual trip distances (as if they were fulfilled as single trips) and the total distance that the drivers actually travel to fulfill those requests. This metric measures both the number of requests fulfilled and the quality of the pooling.
- The proposed estimation of the agent interaction effect involves a handcrafted conditional agent probability based on a softmax over the distance between destination pairs. There is no empirical justification for this choice. Why should the probability be based on how far apart the action destinations are? And matching is ultimately a system decision, so shouldn't it be learned from data of system decisions?
- Figure 3: This may not be the impact of enforcing positive values. This just shows that CEVD learns the individual values better than NeurADP. It could be because of its consideration of the impact on neighboring agents. I suggest the authors do a more careful ablation study to separate the effects of the multiple algorithmic differences.
- Section 4: For finding the optimal $\lambda$, I don't see $\lambda$ appearing in the expression. How did you tune it exactly?
- The paper claims that the algorithm can be executed in a real-time setting. However, the decision epoch in practice is much shorter, only a few seconds. A run time of 60 seconds to compute a matching decision is way too slow for 'real-time execution'.
- There are too many mentions of "e.g., Uber, Lyft, and Grab". Once is good enough, and we all know they are well-known ridesharing companies.
The paper improves upon NeurADP by proposing a way to incorporate agent interactions without resorting to joint action values. While there is some merit in the technical contribution in this regard, there are a number of major issues in algorithmic justification and empirical validation. <doc-sep>This paper focuses on the ride-pool matching problem: efficiently allocating combinations of user requests to vehicles online under quality constraints and matching constraints. To this end, the authors come up with a conditional expectation based value decomposition (CEVD) method.
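For reference, a minimal sketch of the conditional-expectation weighting described in the first review; the function names, the sign of the softmax, and the temperature are my assumptions rather than the authors' implementation.

```python
import numpy as np

def conditional_value(i, a_i, neighbour_actions, V, dest, travel_time, beta=1.0):
    """Value of agent i taking action a_i, augmented with the independent values
    of neighbouring agents j, weighted by P(a_j | a_i): a softmax over how far
    apart the two actions' destinations are. V[i][a] is agent i's independent
    value for action a; neighbour_actions maps each neighbour j to its feasible
    actions; dest maps an action to its destination location."""
    total = V[i][a_i]
    for j, actions_j in neighbour_actions.items():
        d = np.array([travel_time[dest[a_i]][dest[a_j]] for a_j in actions_j])
        p = np.exp(-beta * d)   # here, closer destinations get larger weight (an assumption)
        p /= p.sum()
        total += float(np.dot(p, [V[j][a_j] for a_j in actions_j]))
    return total
```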
The proposed approach considers the impact of other agents' actions on an individual agent's value by computing conditional expectations, in order to improve overall performance. The experimental results verify that CEVD is effective, improving the overall number of requests served by 9.76% compared to the baseline, which seems promising. This work is quite innovative and the paper is generally well written. Some concerns are as follows:
-- The background part of the abstract is a little too long.
-- The first paragraph should give the definition of Approximate Dynamic Programming (ADP) rather than FV, because the definition of FV was already given in the previous paragraph.
-- The paper uses quite a lot of mathematical symbols; it is recommended to provide a table showing the meaning of each symbol to help the reader understand the paper.
-- I am very confused about the data features used in K-Means to cluster intersections into clusters. As stated in the last paragraph of page 5, the function M clusters locations. But in the second paragraph of Section 5 (Experiments), it says that the K-Means clustering is based on the average travel times between different intersections. So which feature is used: locations or average travel times? This needs to be explained clearly, and it is crucial to the experimental setup.
-- In the Section 6 conclusion, I highly recommend that the authors add more open issues and future directions for their work.
In general, the paper presents very solid work and is suited for publication at ICLR. <doc-sep>This paper studies the RPM (ride-pool matching) problem for on-demand transportation services. This problem has recently been studied in various papers, but it is hard to choose a good matching using only bipartite graph matching because of future demand, and it is an online decision-making problem. A recent breakthrough, NeurADP by [Shah et al. 2020], has shown good performance, but the approach proposed in this paper, CEVD, achieves a much larger performance gain (reported as 3.8%-9.76%), which has a significant impact on ToD services. An essential technique of the proposed CEVD is considering the effect of other agents (i.e., other vehicles) when estimating the value of actions.
# Pros and Cons
## Pros
- The performance gain achieved by the paper is quite significant from the application perspective.
- The idea of considering the effect of other agents seems to be correct and valuable for multi-agent decision-making problems like ToD systems.
- The computational efficiency is shown (i.e., the optimization is done within the batch time of 60 seconds).
## Cons
- Some notations are not clearly defined, and it is hard to follow the details of CEVD and its idea.
# Comments
To clearly understand the paper and clarify its score, I would like to clarify the following points.
(I) Many notations in Sec. 2 are not explained or defined. These missing notations or explanations make it hard for readers to follow the proposed study and to see the difference between NeurADP and CEVD. Below are some examples.
(I-1) Eq. (1): What is $[\mathcal{U}]^{c'}$?
(I-2) Between Eq. (2) and Eq. (3): What are $T^a(\cdot,\cdot)$ and $T^\xi(\cdot, \cdot)$?
(I-3) Page 4: What is $V^i(\big\langle r_t^{i,a}, r_t^{-i,a}\big\rangle)$? I cannot follow this $\langle\cdot\rangle$ notation.
(I-4) Page 4: What is the definition of $r_t^a$?
(I-5) Page 5 before Sec. 3.1: Does $\sum_{j\in E}$ mean $\sum_{j|(i, j)\in E}$?
What is the difference between Pr and P: $\mathit{Pr}(a_j\mid a_i, s)$ versus $\mathit{P}(\text{Agent j takes action g | Agent i takes action f})$?
(I-6) Page 5: What is the definition of $P^j$? Agent $j$'s probability?
(I-7) Page 6: What is the definition of $s_t^{i,f}$?
(I-8) Page 7: What is the definition of the bracket $[g_t, F_t]$ in $\xi_{t+1}$?
(II) What are the 'other issues' with the over/under estimation of individual values in FV and DJV (page 6)? Please give an example of such issues.
(III) In the optimization (Sec. 4), the proposed method estimates $\theta, \lambda, \alpha$ step by step. Please clarify the properties of this optimization problem. Are the resulting $\theta^\star, \lambda^\star, \alpha^\star$ globally optimal?
(IV) Please clarify the relation between the experimental results and the size $N$. That is, experimental results are reported only for $N=500, 750, 1000$. However, some research has modeled the ToD service with more agents (e.g., 3000 agents in [Alonso-Mora et al. 2017]), and therefore I'm interested in more details on this aspect. For example, the authors reported that "The average time taken to compute each batch assignment using CEVD is less than 60 seconds (for all cases)". Is this due to the size $N\leq 1000$ or not? Further, are the computational results affected by the clustering ($k$-means) or not?
(V) Please clarify the overall computation of CEVD (e.g., with pseudo-code or a system overview). What are the input and output? How is the ILP solver (CPLEX) used? Which part consumes the most time within the 60 seconds? I guess that such additional information would be helpful for readers.
To the best of my knowledge and experience, the achieved performance of CEVD seems to show important progress for ToD services. The computational times and settings are reasonable. However, the paper contains some unclarified notations. It is hard to follow the details of the proposed method. I want the authors to give more explanations and clarify some parts to help readers, which is also important to increase my review score. | This paper extends a recent approximate dynamic programming method (i.e., DP with neural networks) for a ride sharing problem. An elegant trick is proposed to obtain a more expressive function approximation without suffering a combinatorial explosion of the action space. While the idea is somewhat ad hoc in its implementation, and limited in novelty w.r.t. the ADP work that the paper builds on, the empirical performance improvement on the ride sharing problem is clear. Initially, the reviewers also raised several clarity and presentation issues, but the authors did a good job of addressing them in their rebuttal. The reviewers gave scores of 5, 8, 5. The main critique is limited novelty. During the discussion, we focused on the novelty of the approach, whether the ideas can be generalized beyond the very specific ride sharing problem, and whether the work is strong enough if viewed as an application paper. The conclusion, which my final decision is based on, is that currently the contribution is very specific to the ride sharing problem, and it is not clear whether this idea can be extended to more general optimization problems. This means that the scope of the algorithmic approach, taken with respect to the ICLR audience, is rather narrow. On the other hand, the current presentation does not meet the bar of a strong application paper, as there is not enough novelty in the problem and data.
My advice to the authors is to broaden their investigation and evaluation. Another option would be to target a venue that is more focused on the ride sharing problem. |
The paper studies the gradient flow dynamics of smooth homogeneous models under two types of weight-normalized parameterization: standard weight normalization (SWN) and exponentiated weight normalization (EWN). Thm 1 derives the dynamics induced in the unnormalized parameter space by gradient flow on the respective weight-normalized parameterizations. This result is a good starting point that highlights the different dynamics arising from the two parameterizations. However, in the remainder of the paper, there are several technical issues/confusions, outlined below (p.s., please number the equations):
1. In the proof of Proposition 3 and also Thm 2 (see e.g. the last equation on page 23 and the corresponding equations for GD in Appendix E.1.2, and similarly the last equation on page 20), the following equality is used, which is not true in general. Please clarify if I missed something: $\|w(t)\| = \|w(t_2)\| + \int_{k=t_2}^{t} \|dw(k)\|$ -- the triangle inequality would show that the RHS is an upper bound, but I do not see how we can get exact equality.
2. In the proof of Thm 3 (page 20), why does Proposition 2 imply that $w_u$ and its negative gradient are aligned in opposite directions? Specifically, why should there be a $t_2$ such that for all $t>t_2$, $\cos(-\nabla_{w_u} L, w_u) \le \epsilon$?
3. In Appendix D.2 (page 22), while bounding $\|w_u(t)\|$ for SWN, along with the above two concerns, I am also not sure how the two terms in $\|dw(t)\|$ from Thm 1 lead to the simplified bounds on $\|w_u(t)\|$ in the first non-theorem equation on page 22.
4. Finally, although not a technical mistake, I believe that the discussion comparing EWN and unnormalized GF (which I will simply call GF) is conceptually confusing. As the authors themselves note, EWN and GF both follow the *same trajectory*. EWN simply has a scaling factor of $\|w(t)\|^2$ which affects the "speed" along the trajectory, but the path itself is the same -- both have $dw(t) = -s(t)\,\nabla_w L(w(t))$ for different scalar speeds $s(t)$, which corresponds to the same path in the space of $w$ but with a different time warping. Thus, if one solves the differential equations indefinitely, both EWN and GF will trace the exact same path, albeit at different times, and will eventually lead to the same separator. But the plots and the discussion about Fig 5, for example, suggest that EWN and GF lead to different asymptotic solutions, which is not correct. Thus, when comparing EWN and GF, the message could be that EWN, when discretized, could lead to faster convergence - this is somewhat justified experimentally (from Fig 5) but not theoretically, as a true comparison needs an analysis of the discretized algorithm. Also, experimentally, to provide a correct comparison of speed in Fig 5, the number of iterations of the two methods (EWN and GF) should be matched, which is not the case in the current plots. On the other hand, it is simply wrong to phrase the message as "EWN and GF lead to different solutions asymptotically". <doc-sep>This paper analyzes weight normalization methods, including exponential weight normalization (EWN) and standard weight normalization (SWN), in contrast with unnormalized networks. Under a number of assumptions, the paper characterizes the asymptotic relation between weight norm and gradient norm at the node level (Theorem 2), which shows a distinction between SWN and EWN. It is then argued that EWN leads to sparser solutions (Proposition 3), which is potentially beneficial for pruning.
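For readers unfamiliar with the two schemes under discussion, the parameterizations as I read them (notation mine, not verbatim from the paper) both write a node's incoming weight vector as a radial part times a unit direction, with EWN exponentiating the radial part:
$$\text{SWN: } w_u = \gamma_u \,\frac{v_u}{\lVert v_u\rVert}, \qquad \text{EWN: } w_u = e^{a_u}\,\frac{v_u}{\lVert v_u\rVert},$$
with gradient flow run on $(\gamma_u, v_u)$ and $(a_u, v_u)$ respectively.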
The paper also shows a convergence rate for SWN that is slightly faster than the rates for unnormalized networks and for SWN from previous work, but under stronger assumptions. The paper verifies these results empirically on some toy examples.
Pros:
+ The exponential weight normalization method seems new.
+ The paper has some interesting findings regarding the asymptotic behavior of weight normalization methods (if the results can be justified properly).
Cons: The theoretical results are based on very strong asymptotic assumptions, which are not justified properly. The experiments are on very toy settings, which is far below the bar. Either the theory or the experiments need to be stronger for this paper to be a solid contribution.
- The assumptions (A1)-(A4) used throughout the paper are much stronger than those in previous work, such as Lyu & Li (2020). In particular, (A3) and (A4) are nonstandard. I'm not sure when these assumptions are expected to hold, and they are only empirically verified on an extremely simple dataset (4 examples).
- In Proposition 3, which is where it is shown that EWN leads to sparsity, there is an extremely strong assumption that the ratio of two gradient norms at two nodes stays constant forever after some point in training. How can this possibly be true?
---------- after rebuttal ----------
Thanks for the response and the updated manuscript. I'm raising my score from 4 to 5. I'm still leaning towards rejection since I still find the results quite subtle and I hope to see more empirical justification. In the updated Proposition 3, the sparsity-inducing property 3 assumes the existence of a time $t_2>t_1$ at which the ratio between the two weight norms deviates from $1/c$. However, it seems entirely possible that this ratio will have already converged to $1/c$ after time $t_1$; in this case the two weight norms grow at the same rate. It would be good to investigate this more carefully to see which case is more likely to happen. I'm also concerned that the advantage of EWN for pruning only shows up at extremely small loss values (Figure 7), and therefore the practical relevance shown in the current paper is not very convincing.<doc-sep>### Summary
This paper studies the inductive bias of gradient methods with normalization on smooth homogeneous models. The focus is on two normalization methods, standard weight normalization (SWN) and exponential weight normalization (EWN). The authors show two main results. The first characterizes the trajectory of normalized gradient methods, from which they provide theoretical evidence that EWN is biased towards sparse solutions. The second provides convergence rates for the normalized methods, which shows the difference between the convergence rates of normalized and unnormalized methods. The theoretical results are corroborated with experiments on several toy datasets.
### Reason for score
I am currently inclined towards accepting the paper because the results are novel, solid and should be interesting and useful for researchers working on the theory of deep learning. However, the score is only marginally above the acceptance threshold, because I have several concerns regarding the clarity and significance of the results. I am willing to raise my score if the authors address my concerns in the rebuttal.
### Pros
1. The theoretical results are solid, novel and the proof techniques might be useful in other inductive bias analyses.
2. The sparsity result for EWN is interesting and provides novel insights on pruning neural networks, as the MNIST experiments show.
3.
Most of the paper is clearly written.
### Cons (roughly ordered from major to minor comments)
1. It is not clear in which cases SWN and EWN are used in practice. The authors do not explicitly cite papers that use them. Therefore, it is not clear how to assess the significance of the results.
2. After Theorem 2 it is claimed that $\|w_u(t)\|$ is inversely proportional to $\|\nabla_u L(t)\|$. I am not sure why this is correct. If $\|w_u(t)\| = t$ and $\|\nabla_u L(t)\| = 1/t^2$ for all $u$, then the theorem's conclusion holds, but the claim after the theorem (mentioned above) does not hold. Am I missing something?
3. In Proposition 3, the assumption that the ratio of gradient norms is exactly $c$ from some $t$ onwards is very strong. The authors should comment on this. Does it hold in practice? Does the Proposition hold under weaker assumptions?
4. Most of the experiments are performed on very simple datasets with few points in the training sets. I think that experiments on other datasets (e.g., with 1000s of points) could strengthen the results.
5. In Proposition 1, $\eta(t)$ is said to be a constant but it seems to depend on the loss, which changes with time. What is the $L$ in the denominator of the learning rate equation? Is it the loss?
6. In Figure 1, the neighborhood of a point for different geometries is not formally defined. The current figures are not clear.
7. In several experiments, it is claimed that the loss achieved values of order $e^{-300}$. This seems like an unrealistic precision to reach empirically. Is there a mistake here?
8. The presentation of the normalization methods in the equations on page 2 is not very clear. Specifically, it is not clear why these equations result in a form of normalization. Can the updates be presented in a concise equation where the normalization is shown explicitly?
9. I think that the authors should provide more context for the pruning results in Section 6. Specifically, say why the insights on EWN from previous sections can be useful for pruning applications. <doc-sep>This paper analyzes the implicit bias of gradient descent with both standard weight normalization (SWN), which basically uses the gradients with respect to the radial part and spherical part of the weights, and exponential weight normalization (EWN), which further parameterizes the radial part using an exponential function. Under a few convergence assumptions, it is shown that for SWN, given a node in the network, the norm of the input weight vector is proportional to the norm of the gradient with respect to this weight vector, while for EWN, the norm of the weight vector is inversely proportional to the norm of the gradient. It is further shown that such an implicit bias implies that EWN induces sparse limiting directions, and empirical support is provided. I think the SWN and EWN analyses in this paper are interesting, and it is surprising that the two methods introduce opposite implicit biases. It is also interesting that EWN can find sparse or "simple" solutions. I have the following questions regarding experiments: 1. Can Proposition 3 be verified on MNIST? For example, can you compare the distribution of norms of weight vectors for EWN, SWN, and unnormalized gradient descent? 2. Can EWN also improve generalization or sparsity on more complicated datasets, such as CIFAR? | The main concern is that the results in this paper are based on strong asymptotic assumptions. (At least) more empirical results are needed.
This paper introduces a new component to the unsupervised machine translation framework, called cross-model back-translated distillation. The proposed approach is applicable to other unsupervised methods. Experimental results on several translation tasks show that the proposed approach improves the translation accuracy of standard unsupervised machine translation models, outperforming the cross-lingual masked language model.
- The analyses are interesting for understanding the proposed approach. Table 4 reports the diversity of the synthetic data, but what about its quality as parallel data? Can you use parallel data instead and conduct the same analyses, so that you can use BLEU score as an evaluation metric?
- Is it possible to apply the proposed approach to supervised NMT training, by creating BT data from monolingual data?
- Table 1 reports that your experiments achieve equivalent or better performance against the existing models with much less data. What about scaling up the monolingual data size 5x/10x more? Will the performance keep improving?
- "the translated products (x-y) of the UMT teachers." at p.6. What does this "x-y" mean?
Typo: p.3 5.2.In Appendix -> 5.2. In Appendix<doc-sep>This paper describes a method to enhance unsupervised machine translation through data augmentation. The idea is pretty straightforward, if not altogether intuitive: you begin by training two bidirectional (i.e., they can translate source to target and target to source) unsupervised MT systems A and B. The tested scenarios always have A and B be identical architectures trained with different initializations. They then produce synthetic source-target pairs by first having A (source->target) translate the provided source sentence x to y, and then having B (target->source) translate y back to z. They then train supervised MT on both x,y and z,y. The same procedure can be repeated with source and target reversed. The authors show substantial (1-2 BLEU) improvements with 3 different UMT systems in 5 low-data scenarios (En-Fr, Fr-En, En-De, En-Ro, Ro-En), all subsampled to 5M monolingual sentences for each language. In En-Fr, Fr-En and En-De, they are able to match the reported XLM results from Conneau and Lample 2019, despite using much less data. This simple idea is explored extremely thoroughly. The paper reads more like a journal paper that has undergone several stages of review than a conference paper. The authors make connections to and compare against a number of relevant ensembling strategies (to account for two systems being used) and back-translation-diversification strategies (to account for multiple sources being produced for the same target), and consistently show that only their specific recipe leads to the same levels of improvement. The authors really leave no stone unturned. The biggest knock against this paper is the relatively small data scenario. Having two UMT systems allows them to provide two source sentences (one original and one synthetic) for each target sentence (always synthetic), but how important is this when we have 25x more original source sentences? I can imagine arguments for why the high-data UMT scenario is unrealistic (many monolingual sentences implies the likely presence of parallel data), but those arguments aren't presented in the paper. The paper would be greatly strengthened by a full-data experiment for even just one or two of the language pairs. Beyond that, I have few concerns.
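For readers skimming, here is a pseudo-code sketch of the two-stage back-translation described above; the model API (`translate`) and variable names are illustrative, not the authors' code.

```python
def cbd_synthetic_pairs(A, B, mono_src):
    """A, B: two independently trained bidirectional UMT models.
    Returns synthetic (source, target) training pairs for the supervised stage."""
    pairs = []
    for x in mono_src:                              # original source-language sentence
        y = A.translate(x, direction="src->tgt")    # forward translation by A
        z = B.translate(y, direction="tgt->src")    # back-translation of y by B
        pairs.append((x, y))                        # original source, synthetic target
        pairs.append((z, y))                        # synthetic source, same target
    return pairs

# Per the review above, the same procedure is repeated with source and target
# reversed before training a standard supervised NMT model on all of the pairs.
```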
The paper is clear, easy to follow, and, as I said, very thorough. But I'll do my best to make some constructive criticisms: I think the Related Work section feels a little superfluous after all of the comparisons made to related work in the Background and in the Experiments. I think I would like to have seen more discussion of the highly related work in sections 5.3 and 5.4. In particular, a more detailed discussion of this method's relation to multi-agent dual learning would be worth giving up parts of Related Work that are already mentioned in Background (like pre-neural statistical unsupervised MT). It would be useful to specify how BLEU is calculated, to help readers understand just how useful the cross-paper BLEU comparisons in Table 1 are.<doc-sep>Summary: The paper proposes an additional stage of training for unsupervised NMT models utilizing synthetic data generated from multiple independently trained models. The generated synthetic data uses two stages of back-translation, with different models, in order to "diversify" the set of training data used for fine-tuning the models. This is similar to the approach in [1], but uses an additional stage of back-translation with a different model. The authors add this additional stage of training to unsupervised NMT models using different pipelines (PB unsupervised MT, Neural Unsupervised MT, XLM) and show that their approach improves all of these approaches by 1.5-2 BLEU on WMT En-Fr, De-En and En-Ro.
Strengths:
1. The paper is well written, the approach is simple, and it seems to improve quality by significant amounts in a variety of experimental settings.
2. The authors do a great job of comparing against several relevant approaches (sampling during back-translation, ensembling, multi-agent dual learning). The paper compares against most of the relevant approaches I could think of while reading the paper.
Weaknesses / Questions for authors:
1. As with any NMT model trained with synthetic data, it would be better to report results on source-original and target-original splits of the test data to provide a clearer evaluation [2,3]. Also clarify the BLEU scripts, tokenization and other post-processing used for evaluation.
2. The datasets used for experimentation are much smaller than the ones used for the baseline unsupervised-NMT approaches. It would be great to report results in the original training conditions (this is not a major limitation, however, since the proposed approach seems to improve over baselines trained with more data).
3. Did the authors try any experiments with unsupervised models utilizing parallel data in unrelated languages, similar to [4,5], or in real low-resource settings [6]? These are more practical conditions for unsupervised MT in true low-resource languages.
Recommendation: Overall, this is a good paper and I would recommend acceptance. While I would have also liked to see experiments in more realistic low-resource settings, the current paper does a good enough job of evaluating the approach in standard unsupervised NMT settings on related high-resource languages.
References:
[1] Data Diversification: A Simple Strategy For Neural Machine Translation, Nguyen et al.
[2] APE at Scale and its Implications on MT Evaluation Biases, Freitag et al.
[3] On The Evaluation of Machine Translation Systems Trained With Back-Translation, Edunov et al.
[4] Multilingual Denoising Pre-training for Neural Machine Translation, Liu et al.
[5] Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation, Siddhant et al.
[6] When Does Unsupervised Machine Translation Work?, Marchisio et al.<doc-sep>In this paper, two unsupervised agents are combined cross-model by exploiting the dual nature of the unsupervised machine translation model: the forward translation of agent_1 is combined with the backward translation of agent_2, and the additional synthetic translation pairs obtained this way are used to train a new supervised machine translation model. The results improve over multiple unsupervised machine translation systems, and the paper claims that more diversity is brought to the synthetic data, so a better translation model can be trained. The paper uses a reconstruction BLEU or BT BLEU [1] metric to compare the within-model setting with the cross-model setting, and finds that cross-model translation has a lower back-translation BLEU, which shows that the diversity is enhanced. Furthermore, CBD is compared with the ensemble method and achieves better performance. The proposed method is quite simple yet effective, but it is also a kind of data augmentation. In addition to these contributions, the paper also has some shortcomings:
1. The evidence in this paper cannot support the claim that the current performance bottleneck of UMT is due to a lack of diversity: the performance ceiling of UMT is still due to the lack of a clear supervision signal, which limits further performance growth. Because the training of CBD is divided into two stages, the diversity in the second stage only brings more training data to enhance the supervised machine translation model, rather than improving the unsupervised machine translation itself.
2. Source of the improvement: the second stage of the CBD method adopts (x_s, y_t), (z_s, y_t), (y_s, x_t), (y_s, z_t) synthetic translation pairs; it is not clear how much of the performance gain comes from the increased data and how much comes from the new model implementation (Ott et al., 2018). It is not appropriate to attribute all contributions to the diversity brought by CBD. I suggest that the authors train on the (y_s, x_t) data with the (Ott et al., 2018) model and report the comparison (in my experiments, the second-stage model implemented with fairseq and trained only on (y_s, x_t) surpasses both agents trained with XLM, due to the more efficient implementation in fairseq).
3. Unfair comparison with ensemble distillation: the authors need to compare CBD with a model trained on the synthetic data (y_s, x_t) produced by the ensemble of agent_1 and agent_2. In the training data (x_s, y_t), (z_s, y_t), (y_s, x_t), (y_s, z_t) for the second stage of CBD, using golden language sequences x_t as the translation target is stronger than using synthetic (silver) language sequences as the target. Therefore, it is necessary to report the real result of ensemble distillation. The current results are very unreliable. In addition, it is necessary to compare the training time of the CBD method and of ensemble distillation training (including the decoding process after the first stage of training) to show the efficiency of CBD.
4. Using a non-golden language sequence as the translation target is called pseudo-NMT (PNMT). The authors adopt a variety of model structures, which is slightly redundant. They could directly add the synthetic data decoded cross-model to continually train the original XLM model with a supervised translation objective (which is naturally supported in XLM, from my experience), and report the comparison between them.
5.
The essence of the CBD approach is a process of self-supervised training, so it is necessary to compare with the self-training/tri-training methods introduced in [2].
In general, the CBD method in this paper is a simple and effective data augmentation method for improving the performance of the model. However, because many important implementation details are missing, the source of the improvement is unknown despite the gains. In addition, the questionable comparison with the baseline models deepens my concern about the real improvement from this CBD method.
[1] Li, Zuchao, et al. "Reference Language based Unsupervised Neural Machine Translation." arXiv preprint arXiv:2004.02127 (2020).
[2] Sun, Haipeng, et al. "Self-Training for Unsupervised Neural Machine Translation in Unbalanced Training Data Scenarios." arXiv preprint arXiv:2004.04507 (2020). | This paper proposes an additional training objective for unsupervised neural machine translation (UNMT). The authors first train two UNMT models and use these models to generate pseudo-parallel corpora. These parallel corpora are then used to optimize the UNMT training objective. The experiments are conducted on several language pairs, and the authors also compare with several alternative methods. All the reviewers agree that the proposed method is straightforward and effective. The authors claim that the new training objective enhances "data diversification". This point has been questioned by the reviewers. Some reviewers are convinced by the response and some still have different opinions. From my point of view, the proposed method can also be considered a kind of combination of (pseudo-)supervised NMT and unsupervised NMT. The presentation and description of its key contributions seem unclear. However, we encourage the authors to revise their paper, and we believe the proposed method can inspire the MT community for further research. At the moment, the paper is not yet ready for publication.
The authors present a distributed implementation of signSGD with majority vote as the aggregation rule. The result is a communication-efficient and Byzantine-robust distributed training method. This is an interesting and relevant problem. There are two parts to this paper: first the authors prove a convergence guarantee for signSGD, and then they prove that under a weak adversarial attack signSGD will be robust to a constant fraction of adversarial nodes. The authors conclude with some limited experiments. Overall, the idea of combining low-communication methods with Byzantine resilience is quite interesting. That is, by limiting the domain of the gradients, one expects that the power of an adversary would be limited too. The application of the majority vote to the gradients is an intuitive technique that can resolve weak adversarial attacks. Overall, I found the premise quite interesting. There are several issues that, if fixed, could make this a great paper; however, I am not sure there is enough time between rebuttals to achieve this for this round of submissions. I will summarize these key issues below.
1) Although the authors claim that this is a communication-efficient technique, signSGD (on its communication merit) is not compared with any state-of-the-art communication-efficient training algorithm, for example:
- 1Bit SGD [1]
- QSGD [2]
- TernGrad [3]
- Deep Gradient Compression [4]
I think it is important to include at least one of those algorithms in a comparison. Due to the lack of comparisons with the state of the art, it is hard to argue about the relative performance of signSGD.
2) Although the authors claim Byzantine resilience, this is against a very weak type of adversary, e.g., one that only sends back the opposite sign of the local stochastic gradient. An omniscient adversary can craft attacks that are significantly more sophisticated, for which a simple majority vote would not work. Please see the results in [b1].
3) Although the authors reference some limited literature on Byzantine ML, they do not compare with other Byzantine-tolerant ML methods. For example, see [b1-b4] below. Again, due to the lack of comparisons with the state of the art, it is hard to argue about the relative performance of signSGD.
Overall, although the presented ideas are promising, a substantial revision is needed before this paper is accepted for publication. I think it is extremely important that an extensive comparison is carried out with respect to both communication-efficient algorithms and/or Byzantine-tolerant algorithms, since signSGD aims to be competitive with both of these lines of work. This is a paper that has potential, but is currently limited by its lack of appropriate comparisons.
[1] https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/IS140694.pdf
[2] https://papers.nips.cc/paper/6768-qsgd-communication-efficient-sgd-via-gradient-quantization-and-encoding.pdf
[3] https://papers.nips.cc/paper/6749-terngrad-ternary-gradients-to-reduce-communication-in-distributed-deep-learning.pdf
[4] https://arxiv.org/pdf/1712.01887.pdf
[b1] https://arxiv.org/pdf/1802.07927.pdf
[b2] https://arxiv.org/pdf/1803.01498.pdf
[b3] https://dl.acm.org/citation.cfm?id=2933105
[b4] https://arxiv.org/pdf/1804.10140.pdf
[b5] https://arxiv.org/pdf/1802.10116.pdf
########################
I would like to commend the authors for making a significant effort in revising their manuscript. Specifically, I think adding the experiments for QSGD and Krum is an important addition.
However, I still have a few major concerns that in my opinion are significant:
- The experiments for QSGD are only carried out for the 1-bit version of the algorithm. It has been well observed that this is by far the least well performing variant of QSGD. That is, 4- or 8-bit QSGD seems to be significantly more accurate for a given time budget. I think the goal of the experiments should not be to compare against other 1-bit algorithms (though, to be precise, 1-bit QSGD is a ternary algorithm), but against the fastest low-communication algorithm. As such, although the authors made an effort in adding more experiments, I am still not convinced that signSGD will be faster than 4- or 8-bit QSGD. I also want to acknowledge in this comment the fact that these experiments do take time and are not easy to run, so I commend the authors again for this effort.
- My second comment relates to comparisons with state-of-the-art algorithms in Byzantine ML. The authors indeed did compare against Krum; however, as noted in my original review, there are many works following Blanchard et al. For example, as I noted, https://arxiv.org/pdf/1802.07927.pdf (the Bulyan algorithm) shows that there exist significantly stronger defense mechanisms against Byzantine attacks. I think it would have been a much stronger comparison to compare with Bulyan.
Overall, I think the paper has good content, and the authors significantly revised their paper according to the reviews. However, several more experiments are needed to convince a potential reader of the main claims of the paper, i.e., that signSGD is a state-of-the-art communication-efficient and Byzantine-tolerant algorithm. I will increase my score from 5 to 6, and I will not oppose the paper being rejected or accepted. My personal opinion is that a resubmission to a future venue would yield a much stronger and more convincing paper, assuming more extensive and thorough comparisons are added.<doc-sep>This paper continues the study of the signSGD algorithm due to (Balles & Hennig; Bernstein et al.), where only the sign of a stochastic gradient is used for updating. There are two main results: (1) a slightly refined analysis of two results in Bernstein et al.: the authors prove that signSGD continues to converge at the $1/\sqrt{T}$ rate even with minibatch size 1 (instead of T as in Bernstein et al.), if the gradient noise is symmetric and unimodal; (2) a similar convergence rate is obtained even when half of the worker machines flip the sign of their stochastic gradients. These results appear to be relatively straightforward extensions of those in Bernstein et al.
Clarity: The paper is mostly nicely written, with some occasionally imprecise claims.
Page 5, right before Remark 1: it is wrongly claimed that signSGD converges to a critical point of the objective. This cannot be inferred from Theorem 1. (If the authors disagree, please give the complete details of how the random sequence $x_t$ converges to some critical point $x^*$. Or perhaps you are using the word "convergence" differently from its usual meaning?)
Page 6, after Lemma 1: The authors claimed that "the bound is elegant since ... even at low SNR we still have ... <= 1/2." In my opinion, this is not elegant at all. This is just your symmetric assumption on the noise, nothing more...
Eq (1): are you assuming $g_i > 0$ here? This inequality is false, as you need to discuss the two cases.
"Therefore signSGD cannot converge for these noise distributions, ..... point in the wrong direction."
This is a claim based on intuitive arguments but not a proven fact. Please refrain from using definitive sentences like this. Footnote 1: where is the discussion? Originality: Compared to the existing work of Bernstein et al, the novelty of the current submission is moderate. The main results appear to be relatively straightforward refinements of those in Bernstein. The observation that majority voting is Byzantine fault tolerant is perhaps not very surprising but it is certainly nice to have a formal justification. Quality: At times this submission feels like half-baked: -- The theoretical results are about signSGD while the experiments are about sigNUM -- The adversaries must send the negation of the sign? why can't they send an arbitrary bit vector? -- From the authors' discussion " we will include this feature in our open source code release", "plan to run more extensive experiments in the immediate future and will update the paper...", and "should be possible to extend the result to the mini-batch setting by combining ..." Significance: This paper is certainly a nice addition to our understanding of signSGD. However, the current obtained results are not very significant compared to the existing results: Theorem 1 is a minor refinement of the two results in Bernstein et al, while Theorem 2 at its current form is not very interesting, as it heavily restricts what an adversary worker machine can do. It would be more realistic if the adversaries can send random bits (still non-cooperated though). ##### added after author response ##### I appreciate the authors' efforts in trying to improve the draft by incorporating the reviewers' comments. While I do like the authors' continued study of signSGD, the submission has gone through some significant revision (more complete experiments + stronger adversary). <doc-sep>The paper proposes a distributed optimization method based on signSGD. Majority vote is used when aggregating the updates from different workers. The method itself is naturally communication efficient. Convergence analysis is provided under certain assumptions on the gradient. It also theoretically shows that it is robust up to half of the workers behave independently adversarially. Experiments are carried out on parameter server environment and are shown to be effective in speeding up training. I find the paper to be solid and interesting. The idea of using signSGD for distributed optimization make it attractive as it is naturally communication efficient. The work provides theoretical convergence analysis under the small batch setting by further assuming the gradient is unimodal and symmetric, which is the main theoretical contribution. Another main theoretical contribution is showing it is Byzantine fault tolerant. The experiments are extensive, demonstrating running time speed-up comparison to normal SGD. It is interesting to see a test set gap in the experiments. It remains to be further experimented to see if the method itself inherently suffer from generalization problems or it is a result of imperfect parameter tuning. One thing that would be interesting to explore further is to see how asynchronous updates of signSGD affect the convergence both in theory and practice. For example, some workers might be lost during one iteration, how will this affect the overall convergence. Also, it would be interesting to see the comparison of the proposed method with SGD + batch normalization, especially on their generalization performance. 
It might be interesting to explore what kind of regularization technique would be suitable for sign-based update methods. Overall, I think the paper proposes a novel distributed optimization algorithm that has both theoretical and experimental contributions. The presentation of the paper is clear and easy to follow. Suggestions: I feel the experimental part could still be improved, as also mentioned in the paper, to achieve competitive results. More experiments on different tasks and DNN architectures could be performed. | The reviewers noticed that the paper underwent many revisions and raised concerns about the content. They encourage improving the experimental section further and strengthening the message of the paper.
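To make the aggregation rule discussed in the reviews above concrete, here is a minimal sketch of one round of distributed signSGD with majority vote and a sign-flipping adversary. This is an illustrative reconstruction of the standard algorithm, not code from the paper; function and argument names are my own.

```python
import numpy as np

def sign_sgd_majority_vote_step(x, stochastic_grads, lr, byzantine_ids=()):
    """One illustrative round of distributed signSGD with majority-vote aggregation.

    x: current parameter vector; stochastic_grads: list of per-worker stochastic
    gradients evaluated at x; byzantine_ids: workers that flip the sign of their
    message (the weak adversary the reviews refer to).
    """
    votes = []
    for m, g in enumerate(stochastic_grads):
        s = np.sign(g)                      # each worker transmits only a sign vector
        if m in byzantine_ids:
            s = -s                          # adversarial worker sends the negated sign
        votes.append(s)
    aggregate = np.sign(np.sum(votes, axis=0))  # elementwise majority vote at the server
    return x - lr * aggregate                   # all workers apply the voted sign
```

As long as fewer than half of the workers flip their signs, the elementwise vote recovers the honest majority's sign, which is the intuition behind the robustness claim debated above.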
This paper presents a method for 3D scene reconstruction from a single image using implicit surface representations such as occupancy or SDF. The authors propose to incorporate loss functions on the spatial gradients to provide dense supervision in the 3D space in the case where 3D labels may be incomplete (e.g. open 3D meshes) or not well-defined everywhere. Experiments are performed on ShapeNet and ScanNet show that the proposed method can achieve competitive performance on single-image scene reconstruction tasks. Strengths: - I like the motivation of learning from 3D meshes that are not necessarily closed. This seems to allow 3D reconstruction neural networks to learn from a wider range of dataset resources, such as 3D scans or mesh reconstructions. - The experimental settings are described in a very detailed manner with justifications. The results from the proposed method seem to show some improvements upon baseline methods, more notably in scene reconstructions. Weaknesses: - Although learning from either closed or open 3D meshes is an interesting motivation, this only allows one to learn SDF. One cannot define occupancy for open 3D meshes, and the example in Fig 3 is misleading. How would you define occupancy near open wholes? This part is unclear. - The contribution is unclear and at most incremental. Two main components are described in the paper: 1. Loss functions for learning occupancy or SDF (Sec 3.1). In my understanding, the only novel term is the spatial gradient penalty for occupancy. The rest of the two terms are standard loss functions for learning occupancy, so the additional first term is proposed. The "spatial gradient" term for SDF is precisely the eikonal regularization (Crandall & Lions, 1983), and the remaining terms are standard losses as well. (It is also unclear why the loss in the background paragraph is not incorporated). The authors argue that conditioning the spatial gradients on pixels is novel. I think the authors should clearly state why conditioning on pixels is novel enough to be a standalone paper itself, as this to me is an overstatement and I don't see how simply conditioning for a different task is novel. Penalty on the spatial gradients have also been previously adopted for other single-image 3D SDF reconstruction tasks [A,B]. Also, the authors mentioned that "the spatial gradient $\\nabla_{x,y,z}\\hat{f}_\\Theta(x,y,z)$ can be conveniently computed without the sampling procedure" -- how? 2. The gradient expression of "spatial gradients" (Sec 3.2). My understanding is that this is basically treating the finite pixel differencing operation as a network op, which in nature is differentiable. I think this part is confusing in many aspects. First, I don't see why the authors emphasize that the formulation is closed-form, as the spatial gradient expression is clearly taken numerically and its derivative can be computed via automatic differentiation. If the authors meant to present the gradient of gradient expressions, please use second-order derivatives (e.g. $\\frac{\\partial^2 f(x)}{\\partial x^2}$). It is also unclear how the 3D case (Eq 7) is derived; the authors merely presented an equation without elaborating its meaning. It is unclear what exactly $h$ and $w$ are; the authors referred to Fig 4(c) but there are no explanations in the captions either, which prevents a total understanding. Finally, a very important reference of Spatial Transformer Networks [C] is missing, as it was the first to advocate differentiable sampling. 
- It is unclear what "engineering constraints" refer to (Fig 2 caption and before Eq 1). - What is the relationship between $x,y,z$ and $i,j$? These notations are cross-referenced throughout the paper but their distinction was never explained. In Fig 4, are the feature maps sampled according to $x,y,z$ or $i,j$? What does $\\nabla_{x,y,z}\\phi_{i,j}$ in Fig 4(a) mean? - Experiments: - The results from the proposed method has its own training recipe (architecture, optimization etc), and thus it is unclear where the better performance is coming from. (It could not be about the losses or spatial gradients at all, but a better design of architecture.) I think it is essential to see results where the baseline methods (e.g. OccNet, DISN) are retrained with the proposed losses incorporated. Would these methods yield a boost of performance? - Why are different baseline methods compared for different versions (low-res and high-res) of ShapeNet? I think it's sufficient to present just the high-res version, and have a more complete comparison with the baselines in the low-res table. - Fig 5: it is undefined what "positive/negative precision/recall" mean. How close is a surface/surfel prediction to the ground truth is considered a "positive precision"? The red/blue figures are meaningless without precise definitions. - Could the authors elaborate more what "amodal depth" means, why the surface are evaluated in terms of this new metric, and how they are visualized? Why is not the naive depth definition used to evaluate? In addition, if only depth were used to evaluate quantitatively, why would one care about reconstructing the entire scene, as one could alternatively go for a scene depth prediction task which yields better quality? - There is no conclusion section. What have we learned about this paper? Other minor problems: - Please use only a set of notations for occupancy labels, not both {0,1} and {+,-}. Also, please do not mix the use of $\\phi(I)$ and $\\phi$, where the former should be a function and the latter a variable. - Fig 6: why is the bed in AdelaiDepth only visible in view 0? [A] Jiang et al. "SDFDiff: Differentiable Rendering of Signed Distance Fields for 3D Shape Optimization." CVPR 2020. [B] Lin et al. "SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images". NeurIPS 2020. [C] Jaderberg et al. "Spatial Transformer Networks." NeurIPS 2015. I think this paper has an interesting motivation of learning 3D reconstruction from incomplete raw 3D scans / open meshes, but there are major flaws in the method description and the experiments (detailed above). I also don't think there are either sufficient novelty or insights in the paper. I think the submission needs much major revisions with additional experiments to validate the effectiveness of the proposed DSG. <doc-sep>This paper describes novel loss functions for learning to predict an implicit 3D scene representation from a single image. They argue that when working with real scan data of scenes (rather than single objects) it is difficult to generate accurate occupancy or signed distance function (SDF) ground truth as would be required for supervised learning. Instead, they propose to only use occupancy or SDF supervision near the surfaces of objects; elsewhere, they rely on constraints on the gradient of the occupancy or SDF adapted from Gropp et al. 2020. They perform a thorough evaluation on several benchmark datasets and compare against state-of-the-art competing methods. 
They show that they outperform competing methods, even though in some cases their method has access to less supervisory data. They also perform an ablation study to show the importance of various parts of the loss function. Their proposed loss functions are novel as far as I know. The idea of only applying supervision near surface boundaries, where the labels can be reliably produced, makes sense. They perform a thorough set of experiments including comparisons and an ablation study. They also derive the closed-form gradients of their loss function and show the importance of using them over numerical derivatives. Both the quantitative and qualitative results are convincing. One question I had was why there were no coefficients to balance the strength of the regularization terms in equations (1) and (2). This paper has novel and interesting contributions to the field of single-image 3D reconstruction. They provide convincing experiments to validate their contributions. The paper is also well-written and nicely presented. <doc-sep>This paper presents a new method to learn implicit 3D scene reconstructions from a single image input. The main improvement is a closed-form Differentiable Gradient Sampling. By taking the spatial gradient into consideration, the proposed method can back-propagate the loss on spatial gradients to the feature maps and allows training in cases without dense 3D supervision. pros: 1. the formulation of DGS is novel and interesting, with promising performance. 2. detailed experiments on both ShapeNet and scan data 3. detailed ablation study 4. better performance compared with previous methods. cons: 1. the overall learning loss is not novel (Eq 1 & 2), in particular the Eikonal regularization part. 2. some camera parameters are involved in DGS, which are a bit hard to get in general. 3. it is not clear what percentage of voxel occupancy/SDF values are known, and how much this rate would affect the learning. It will be better to have some discussion and ablations. IMHO, generally if the values of voxels close to the surface are known, it might be ok to learn even when other regions are missing. 4. from the results (Fig 9 & 10), DGS seems to be more likely to generate floating points; some discussion would be good to have here. Overall, I think the proposed method is novel and with reasonable performance. I am in favor of acceptance if the authors can provide some discussion about the cons listed above. <doc-sep>In this paper, the authors propose a new method for single view 3D reconstruction. A conditional (image feature prior) implicit representation framework is proposed to reconstruct a 3D scene from a single view. In this paper, the authors propose that the feature gradient is essential for watertight reconstruction and propose a differentiable gradient sampling method for the formulation. Experiments have been performed on both synthetic and real datasets. Superior results have been presented. ## Strength # Interesting and novel idea with the use of implicit representation Implicit representation has been extensively explored in 3D object reconstruction and novel view synthesis recently. It is interesting to see how we can use implicit representation in various applications, e.g. single-view reconstruction. # New sampling scheme for gradient The authors have proposed a new sampling scheme for computing the spatial gradient and thus a closed-form solution for loss propagation. Although the approach is new, I still have some concerns listed below.
# Good result The authors have shown good results in a variety of datasets, quantitatively and qualitatively. # Adequate ablation study An ablation study is also provided to support the effectiveness of the proposed method. ## Weakness # Generalization evaluation The authors provide a qualitative evaluation (single sample) on an unseen test image. It will be much better if more examples can be provided. Moreover, a quantitative evaluation will be more appreciated to show the generalization ability of the trained model. # necessity of the gradient sampling scheme. Though the proposed sampling scheme is new, I don't understand why it is necessary (i.e. eq 5). What will be the difference between this approach and a naive approach that compute a feature gradient map first, followed by simple differentiable sampling (eq.4). Could authors shed some light on the difference? If the naive approach is a reasonable approach, why the proposed method is essential in this case? The paper is well written and presented overall. The essential experiments are performed and the results are well presented. However, I have a question regarding the major contribution (see weakness). | The paper presents a new way to train the prediction of implicit 3D scene representations from a single view. The main innovations are a novel numerically stable and memory efficient formulation of the derivatives of a loss function based on the spatial gradients of the implicit field, and focusing the training on regions near the surfaces of objects. The method leads to good performance, especially when training on imperfect ground truth scan data. Concerns were raised about the novelty of the approach and its significance. These were adequately addressed in the author response and revisions. The experiments were found to be well described and executed, which increases the confidence in the approach and its potential impact. I recommend acceptance. |
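The loss structure debated in the reviews above (SDF supervision applied only near surfaces plus an Eikonal-style penalty on the spatial gradient) can be sketched roughly as follows. This is a generic PyTorch illustration with assumed tensor shapes and plain autograd; it is not the authors' exact DGS formulation, which back-propagates through sampled feature maps rather than through a coordinate MLP alone.

```python
import torch

def sdf_losses(f, points, gt_sdf, near_surface_mask):
    """Illustrative near-surface SDF loss plus an Eikonal regularizer (|grad f| = 1).

    f: network mapping 3D points (N, 3) to SDF values (N,); gt_sdf: ground-truth
    signed distances, assumed reliable only where near_surface_mask is True.
    """
    points = points.detach().requires_grad_(True)
    pred = f(points)

    # Supervise only near the surface, where labels from (possibly open) meshes are trustworthy.
    data_loss = ((pred - gt_sdf)[near_surface_mask] ** 2).mean()

    # Spatial gradient via autograd; the Eikonal term pushes its norm towards 1 everywhere.
    grad = torch.autograd.grad(pred.sum(), points, create_graph=True)[0]
    eikonal_loss = ((grad.norm(dim=-1) - 1.0) ** 2).mean()

    return data_loss + eikonal_loss
```

The relative weighting of the two terms (the missing balancing coefficients one reviewer asks about) would simply scale the second term here.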
Summary: This paper attempts to solve the problem of seeking novel policies in reinforcement learning from a constrained optimization perspective. This new perspective motivates two new algorithms to solve the optimization problem, which are based on the feasible direction and interior point methods. The authors provide empirical results on several MuJoCo benchmarks. Details: The idea of formulating the problem from a constrained optimization perspective is interesting. This new perspective motivates new and better algorithms to solve the optimization problem. However, I feel like the presentation is poor and the writing should be improved. What's the exact problem setting? The authors should clearly describe the problem setting before presenting the methods; even one paragraph would be helpful. A lot of algorithm details are missing: Q1. $\\bar{D}^q_W (\\theta_i, \\theta_j)$ is a metric for any state distribution $q$. What's the motivation of using $q = \\bar{\\rho}$? Q2. When computing the policy distance, what is $\\rho_{\\theta_i}$ in (4)? Is it the current policy, or a reference policy? Q3. I assume $\\theta_i$ is the current policy. According to (4), the algorithm uses $\\theta_i$ to get samples, and computes an importance correction ratio $q/\\rho_{\\theta_i}$ to approximate the distance. How is $q(s)=\\bar{\\rho}(s)$ computed? The authors propose to approximate $\\rho_{\\theta}$ using Monte Carlo methods. Does it mean the algorithm needs to approximate $\\bar{\\rho}(s)$ using the reference policies for each $s\\sim \\rho_{\\theta_i}$? Is there a computation issue? Q4. This goes back to Q1. Why use just the on-policy samples to estimate the distance? Is there any potential advantage to using $q = \\bar{\\rho}$? Q5. Learning the stationary distribution is a hard research problem itself. See recent work for example: Zhang, R., Dai, B., Li, L. and Schuurmans, D., 2019, September. GenDICE: Generalized Offline Estimation of Stationary Values. In International Conference on Learning Representations. I agree the stationary distribution can be approximated using MC methods, but it might need a lot of samples as the variance is very high. This makes me wonder how the algorithm is implemented in practice, and how the stationary distribution estimation subroutine affects the algorithm's performance. Other suggestions: If I understand correctly, this paper tries to solve the problem of finding a set of novel policies that solve a given task while exhibiting different behaviors. This seems also related to the exploration problem, as some works try to make the current policy different from previous policies to encourage exploration. See for example: Hazan, E., Kakade, S., Singh, K. and Van Soest, A., 2019, May. Provably efficient maximum entropy exploration. In International Conference on Machine Learning (pp. 2681-2691). It might be worth discussing how the novel policy seeking problem is related to the exploration problem. <doc-sep>This paper proposes a method that leverages constrained optimization for policy training to learn diverse policies given some references. Based on a diversity metric defined on policy divergences, the paper employs two constrained optimization techniques for this problem with some modifications. Experiments on MuJoCo environments suggest that the proposed algorithms can beat existing diversity-driven policy optimization methods to learn both better and novel policies. Generally, the paper is well-written and easy to follow.
Some concerns/comments: * The state distributions of the proposed CTNB and IPD are different: In the CTNB method, the trajectories keep rolling out until they reach some termination condition such as a time limit or failure behavior. However, in the IPD method, if the cumulative novelty reward is below some threshold, then the trajectories will be truncated. It would be helpful to compare CTNB with that extra termination condition added. * Using the divergence of policies to quantify the difference between policies does not seem to be a very innovative metric. Some related work could be: Hong, Z. W., Shann, T. Y., Su, S. Y., Chang, Y. H., Fu, T. J., & Lee, C. Y. (2018). Diversity-driven exploration strategy for deep reinforcement learning. It would be great if the authors could compare and explain the relationship between the proposed metric and some related ones. * The experiments would be more convincing if more locomotion environments were included, especially some higher-dimensional environments such as Humanoid and HumanoidStandup. Also, some other environments with a long-term/sparse reward setting could be more illustrative, such as mazes or Atari games. For some of those games, which are stage-based, IPD might terminate some rollouts if all reasonable policies are similar at the beginning of the trajectory. For a maze example, all good policies should choose to open the door at the beginning and then behave diversely. Other/Minor Comments: * The choice of r_0 can affect the performance: When sequentially training the policies, should r_0 be adjusted when training each new policy? * It would be more interesting if some visualization of Hopper policy diversity were included.<doc-sep>This paper aims at novel policy seeking which incorporates curiosity-driven exploration for better reinforcement learning. This paper first proposes to use a Wasserstein-based metric to calculate the difference between policies, and uses it to define the policy novelty. With these, the authors model novel policy seeking as a constrained Markov decision process (CMDP) and solve it using CTNB and IPD. 1. This paper makes it possible to consider the novelty issue dynamically. However, when training a policy according to the proposed CTNB or IPD, there should be some pretrained policies as preconditions; in other words, the proposed method needs some prior knowledge rather than learning a policy from scratch. This may be a limitation for its application. 2. Regarding Proposition 2, the single-trajectory estimation is unbiased; however, the variance seems to be large, and the influence of the estimation variance should be considered. 3. In formulas (5) and (6), is $r_{int, t}$ equal to $r_{int}$? If so, why use t? And what does moving average mean, since there are several kinds of moving averages? 4. In formulas (4) and (6), are the Ts the same? 5. Fig. 2 shows that in Walker2d and HalfCheetah, the proposed CTNB has lower novelty than PPO, which doesn't match the purpose of CTNB. 6. It seems not easy to tune the novelty threshold for different tasks, as it performs differently on different tasks. Can the authors provide some insight on how to tune this? 7. Five random seeds are not sufficient for experiments. <doc-sep>This paper proposes a novel constrained-optimization-based method to optimize the expected return as well as encourage novelty of a new policy in contrast to existing policies.
By modeling the problem as a constrained optimization problem, they can effectively avoid excessive novelty seeking, which is common in existing methods that model the problem with multi-objective optimization. To be specific, they first propose a novel metric to measure the novelty of a new policy. To estimate such a metric on sampled states with a dense online reward, they propose an importance-based estimator for the proposed metric. With the estimation of the novelty metric, they propose to formulate the problem as a constrained optimization problem. The novelty is constrained to be larger than a certain threshold r_0. In this way, the algorithm will only encourage larger novelty when the novelty is less than r_0, thereby avoiding excessive novelty seeking, which may hurt performance. They improve TNB proposed in (Zhang et al., 2019) with CTNB, where the ∇θg term exists only when the constraint is violated. They also propose another method based on the Interior Point Method (IPM). Since IPM is computationally expensive and numerically unstable, they made an adaptation to the RL setting by bounding the collected transitions in the feasible region. Overall, the method is intuitive and reasonable. I have the following questions: 1. The first contribution of this paper is proposing a novel metric to measure the novelty of the current policy in contrast to existing policies. Why propose a novel metric? Are existing metrics for measuring novelty not good? If so, can you verify your claim in experiments? 2. The hyper-parameter r_0. From Figure 3, we can see the different performance under different novelty thresholds r_0. The algorithm seems to be sensitive to r_0, which is of course reasonable. How did you choose r_0 for different environments? Did you consider a soft r_0 rather than a hard constraint (that is, maybe the constraint has a different weight for different r_0)? | This paper investigates the interesting problem of policy seeking in reinforcement learning via constrained optimization. Based on the reviewers' judgements, this is a good submission but it hasn't reached the bar of ICLR.
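The constraint-gated update that the reviews describe (the novelty-gradient term is active only when the estimated novelty falls below r_0) can be sketched as follows. This is a simplified illustration based solely on the reviewers' description; the exact way CTNB combines the two gradients (e.g., any normalization or bisector construction inherited from TNB) is not reproduced here.

```python
import numpy as np

def ctnb_style_direction(grad_reward, grad_novelty, novelty_estimate, r0):
    """Illustrative constraint-gated ascent direction.

    grad_reward: gradient of the task objective w.r.t. policy parameters;
    grad_novelty: gradient of the estimated novelty; r0: novelty threshold.
    The novelty term is switched on only when the constraint (novelty >= r0)
    is violated, avoiding excessive novelty seeking when it already holds.
    """
    direction = np.asarray(grad_reward, dtype=float).copy()
    if novelty_estimate < r0:          # constraint violated: also push novelty upward
        direction += np.asarray(grad_novelty, dtype=float)
    return direction

# parameters would then be updated as theta <- theta + lr * direction
```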
The paper proposes a method using piecewise-linear neural networks as the surrogate model (and the acquisition function) in a black-box optimization framework. At each step, the acquisition function is optimized by casting the learned neural network (and a set of constraints excluding the already-visited data points) into a mixed integer linear program, which can be then solved using off-the-shelf solver. The proposed method is empirically evaluated on a number of unconstrained and constrained tasks. ### Strong points - The declarative nature of the proposed method eliminates the need for developing an algorithm for solving the inner loop optimization (as long as the structural constraints can be represented in MIP formalsim). It also facilitates to adpating an existing method to similar variations by simply modifying the constraints. ### Aspects to be improved - The choice of a piecewise linear neural network for both the surrogate model and acquisition function is not properly motivated. In particular, it is not clear how the proposed approach maintains an exploration-exploitation balance. - The proposed approach does not demonstrate significant improvements in empirical evaluations. In the unconstrained setting, it is outperformed by the baseline, and in the constrained setting, solving the inner loop problem to optimality does not seem to provide an advantage. The argument of "ease of implementation" would have been convincing if implementing the alternative (evolutionary) approach was prohibitively difficult. But this does not seem to be the case. - The proposed approach relies on solving a MIP problem repeatedly. This does not allow the method to be applied to problems with more than a certain number of variables. The problems studied in the experiments have few number of variables. In particular, *TfBind(8, 4)* can be probably solved by simple enumeration. It is not clear to what extent the proposed method is applicable to larger problems. ### Question - When comparing different methods, each algorithm is evaluated in terms of the best reward observed after 1000 queries. At each iteration, the MIP solver is given a 500 seconds timeout, and the evolutionary aglorithms is given a budget of 10k queries. Does this give the inner loop algorithms equal opportunities for finding good solutions? Isn't it more fair to compare the methods subject to an equal overall time budget? The paper presents an interesting direction for using a declarative framework (i.e. mixed integer linear programming) to encode combinatorial structures in black-box optimization. The proposed method lacks a theoretical motivation, and the empirical evaluation does not demonstrate significant advantages over existing methods. <doc-sep>The paper develops a technique to optimize an unknown, black-box function "f" by leveraging a combination of neural networks with mixed-integer linear programming (MILP) methodology. More specifically, authors encode an approximation of "f" using a neural network with piecewise-linear activation functions, which is optimized using its associated reformulation as an MILP with no-good cuts. Numerical results evaluate the approach with respect to other baselines and neural network optimization mechanisms. Overall, the paper is well-written and suggests an interesting and relevant approach to address black-box optimization problems equipped with discrete domains. 
I found the basic framework to be well thought-out and, in my view, of potential value for a large array of settings within this field. My major concern, however, is that many design choices are somewhat unclear, and the paper often feels that lacks depth in more specific areas. In particular: (1) Optimizing inputs with neural networks is challenging with MILPs, often requiring more sophisticated implementations as that of Anderson et al., 2020 (as nicely emphasized by the authors). I wonder whether the fact that the authors were limited to a very simple network, with only a single layer, did not hinder some key insights on the numerical evaluation? For instance, wouldn't the function approximation be quite poor for larger instances? Also, is this approach really scalable? Perhaps my suggestion here is for authors to consider encodings that are much more efficient/scalable for MILPs (and other model-based approaches). For example, many black-box functions have a natural discrete structure, and authors could consider a decision tree, which is much "simpler" to optimize. Similar reasoning applies to the "no-good" constraints, which in this case are quite simple and only eliminate one solution at a time. This could really impact MILP performance (e.g., similar to combinatorial Benders cuts). Would there be cases where it is possible to eliminate several points from the acquisition domain? For example, suppose "x" are binary and represent subsets. If you eventually learn all subsets of size <= N, you could eventually replace those constraints by \\sum_i x_i >= N + 1. (2) The numerical experiments do not seem to reflect well the benefits of the approach. To the best of my understanding, the final conclusion is that problems are "easier" to model with NN+MILP, but performance improvements are marginal (if any). I believe this is indeed the case, but the paper lacks more concrete evidence of this statement. In particular, when comparing NN+MILP and NN+ConEvo, why not consider a problem class that the MILP has clear benefits? For example, a set-packing/set-covering acquisition domain, perhaps with other side constraints, or a scheduling feasible set (e.g., "f" could encode a weighted completion time with unknown weights, and the feasible set are the valid sequences). The authors could compare with any global optimizer (as opposed to RejSample and ConEvo) because it is well-known that MILP is one of the state-of-the-art techniques for these problems. The ideas are novel and significant, especially given the modeling expressivity provided by MILPs. However, the paper lacks some justification concerning the scalability of the approach and the fact that neural networks had quite a limited size. Moreover, in my view, the numerical experiments do not explore well the benefits of their approach. <doc-sep>This paper solves a constrained discrete black-box optimization problem that employs a surrogate model in modeling an unknown objective function. Unlike the formulation of standard Bayesian optimization, it constructs a surrogate model using a piecewise linear neural network. Under the assumption that a randomly-initialized neural network is able to produce an uncertainty for exploration (against exploitation), the proposed method optimizes a surrogate model directly with mixed-integer linear programming. It follows a spirit of Thompson sampling. Finally the authors conduct their method on several experimental circumstances and show the validity of their method. 
### Reasons to Accept + This paper is well-written and well-organized. + It solves a very interesting problem that involves constraints on a discrete search space. ### Reasons to Reject - I do not think that a piecewise-linear neural network models the uncertainty of the unknown objective appropriately. - Following the point described above, even though we train a neural network every iteration, it tends not to reflect a factor for exploration, which implies that the regression results are almost the same when the same observations are given. - A GP is a popular choice for a surrogate function, since it has sufficient expressiveness, being defined on an RKHS. I am curious whether the surrogate function used in this paper is sufficiently expressive for the unknown function. ### Questions to Authors Please answer the comments described in Reasons to Accept and Reasons to Reject. 1. Can I ask what the difference between no-good constraints and generic equality constraints is? If they are similar, could they also be considered in a continuous search space? 2. Is there any specific reason not to compare the proposed method to COMBO? In my experience, COMBO can be thought of as the state-of-the-art model now. To cope with a no-good constraint, you can just apply a rejection strategy in the acquisition function optimization step (i.e., local search with rejection sampling). It addresses a very interesting problem defined on a discrete search space with constraints. However, the choice of surrogate model is not convincing and an important baseline is missing. Thus, I would like to recommend rejection. <doc-sep>The authors present NN+MILP, a framework for the optimization of an expensive-to-evaluate blackbox function with a discrete combinatorially constrained domain. The acquisition problem of finding the surrogate minimum is solved to global optimality by solving a MILP formulation of the acquisition problem. The MILP formulation limits the considered neural network surrogate class to networks with piecewise-linear activation functions. However, it provides a simple declarative language for integrating the problem-specific constraints. The experiments cover both the cases of unconstrained and constrained optimization of the acquisition function. The unconstrained case compares NN+MILP to general-purpose algorithms for unconstrained discrete blackbox optimization. It shows that the global optimization often achieves better results than a local-search evolution-based method for solving the acquisition problem, while also solving the inner loop problem faster. Additionally, it demonstrates that even with the restrictive function class of a single-layer neural network, comparable performance to better-suited surrogate hypothesis classes can be achieved, thanks to the global optimization. In the constrained case, artificial subset-equality constraints are used to create a combinatorial domain for the blackbox function. NN+MILP performs similarly to NN+ConEvo, a manually adjusted local search method that ensures feasibility of the proposals at every step. Other methods employing random search for inner loop optimization or local search with a different surrogate hypothesis class perform much worse. Finally, a case study for the NAS-Bench-101 benchmark is provided. A novel MILP formulation for cells of a valid architecture design is described, which consists of an MILP formulation for directed acyclic graphs with added null operations to allow DAGs with a reduced number of cells.
Despite the generality of NN+MILP it outperforms a strong evolution based baseline. # Strengths This paper tackles an interesting problem of optimizing expensive blackbox functions even in the case of complex combinatorially constrained domains, by using a MILP formulation of a piecewise linear neural network. Even though such formulations have been used before (the authors mention neural network certification as an example), it offers a very flexible way of easily integrating combinatorial constraints, without handcrafting a local search method to maintain feasibility of solutions to the acquisition problem. The experiments show, that 1) solving the acquisition problem to global optimality can provide benefits over local search methods even in case of unconstrained domains 2) performance of handcrafted methods to ensure proposal feasibility in the constrained case can either be matched (NN+ConEvo in RandomMLP with subset constraints) or surpassed (RE in NAS-101), with a simpler way to implement the constrained domain in a general framework. With the flexibility of the framework this can be used for various other future problems that involve MBO in the presence of combinatorially constrained domains. # Weaknesses ### Limitations of surrogate model The surrogate model is limited to a) neural networks with piecewise linear activation functions, this is inherent to the MILP formulation and restricts the framework from using potentially better-suited surrogate hypothesis classes such as Random Forests. The surrogate model is also limited to b) a relatively small number of neurons in the network due to runtime limitations of the MILP. The dimensionality and number of constraints in the MILP formulation scales linearly with the number of nonlinearities in the network and as mentioned in the paper, the runtime scaling of the MILP solver is then often unpredictable. This means that the runtime comparison between methods is a crucial factor in the comparison, which leads to some questions. 1) In section 4.4 the authors reported the inner-loop optimization runtimes for the unconstrained case. To me it is very surprising that the local evolutionary search RegEvo has a larger average runtime than the global MILP optimization. It would be great if the authors could provide intuition on why this is the case. Additionally, is this also the case when comparing MILP to ConEvo in the constrained case, where those two methods are the main competitors? As the constrained case is the one this framework is designed for, this would be the more important comparison than in the case of an unconstrained domain. 2) In the experiments a single hidden layer with 16 neurons is used, and ablations with two slightly bigger architectures (32 and 16+16 neurons) are provided. While the slightly bigger architectures are still within the computational limits, the network probably cannot be scaled to much larger (deep) networks. In the ablations it is stated that for the TFbind8 experiment larger architectures did not improve performance, which suggests that the small surrogate model is already expressive enough in this case. It would be interesting to know whether this is also the case for a larger experiment such as the NAS case study. In any case, the runtime restriction could be a limiting factor in future applications, where more complex surrogates could be required, giving faster but less optimal local search methods an advantage over the global optimization. 
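To make the scaling argument above concrete, here is a rough sketch of a standard big-M MILP encoding of a single hidden ReLU layer: each neuron adds one continuous variable, one binary variable, and a handful of constraints, which is why model size grows linearly with network width. It is written with PuLP purely for illustration and is not the authors' exact formulation; in particular the big-M bound is chosen naively, whereas tighter formulations (e.g., Anderson et al., 2020, mentioned in the review) exist.

```python
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary

def encode_relu_layer(prob, x_vars, W, b, big_m=100.0):
    """Add big-M constraints encoding h = ReLU(W x + b) to an existing PuLP model.

    x_vars: already-declared input variables; W: list of weight rows; b: list of
    biases (assumed to come from a trained surrogate). Returns the hidden-unit
    variables so an output layer / acquisition objective can be built on top.
    """
    h_vars = []
    for j, (w_row, b_j) in enumerate(zip(W, b)):
        pre = lpSum(w * x for w, x in zip(w_row, x_vars)) + b_j
        h = LpVariable(f"h_{j}", lowBound=0)          # post-activation, nonnegative
        z = LpVariable(f"z_{j}", cat=LpBinary)        # z = 1 iff the unit is active
        prob += h >= pre
        prob += h <= pre + big_m * (1 - z)
        prob += h <= big_m * z
        h_vars.append(h)
    return h_vars

# usage sketch: maximize a linear readout of the hidden layer over binary inputs
prob = LpProblem("acquisition", LpMaximize)
x_vars = [LpVariable(f"x_{i}", cat=LpBinary) for i in range(8)]
h_vars = encode_relu_layer(prob, x_vars, W=[[1.0] * 8] * 16, b=[0.0] * 16)
prob += lpSum(h_vars)   # placeholder objective; real output weights would go here
```

Problem-specific feasibility constraints and no-good cuts on previously visited points would be added to `prob` in the same declarative way, which is the "ease of implementation" argument discussed above.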
### Integration of continuous variables In its current stage, the framework doesn't support mixed-integer domains because the no-good constraints do not naturally extend to this case, as stated in the conclusion. Therefore, in its current stage, the method should be called NN+ILP instead of NN+MILP, as the name otherwise suggests that mixed-integer domains can be used. ## Typos These do not affect my rating. 1) Before section 2.2: 'are' should be removed Strengths: The presented framework is flexible and easier to implement than other methods in the presence of combinatorially constrained discrete domains. The experiments show that the performance is also competitive with other approaches. Weaknesses: The MILP formulation limits the hypothesis class of models to neural networks with piecewise-linear activation functions, and runtime limitations restrict the number of neurons in the network. Mixed-integer domains are currently not supported by the framework, even though the name suggests otherwise. | The paper considers the problem of black-box optimization and proposes a discrete MBO framework using piecewise-linear neural networks as surrogate models and mixed-integer linear programming. The reviewers generally agree that the paper suggests an interesting approach but they also raised several concerns in their initial reviews. The response from the authors addressed a number of these concerns, for instance regarding scalability and expressivity of the model. However, some of these concerns remained after the discussion period, including doubts about the usefulness for typical applications in discrete black-box optimization and some concerns about the balance between exploration and exploitation. Overall, the paper falls below the acceptance bar for now but the direction taken by the authors has some potential. I encourage the authors to address the problems discussed in the reviews before resubmitting.
This paper first evaluates current gradient inversion attacks for both implicit and explicit model changes. It shows that when the batch norm mode, training epochs, skip connections or channel size differ, the results of gradient inversion attacks also change. Finally, it proposes angular Lipschitz smoothness and shows its positive correlation with the success of gradient inversion attacks. ### Strengths 1. It is the first paper to study gradient inversion attacks under different settings of training procedures and models, including batch-norm mode, skip connections and channel size. 2. Its second part tries to find another signal, which is able to indicate the leakage in the end. Such a signal, if it has good performance, would be useful in practice. 3. The paper is well structured. ### Weaknesses The main weakness of this paper is that some experiment results are not very reliable and some conclusions from the results are ambiguous. 1. Results of BN: (a). Why are most MSE numbers in Table 1 larger than 1? If all images lie in the 0-1 cube, MSE > 1 means nothing was recovered. However, results from previous work already show the success of image recovery under some BN settings. (b). What is the conclusion from the BN experiment results? 2. Results of skip connection: why does the gradient inversion attack fail for the ConvNet? [1] and its follow-up work all show success for the ConvNet. 3. Results of channel size: the result "the reconstruction quality worsens as the number of channels increases for BN set to the train mode" lacks an explanation. 4. Angular Lipschitz smoothness: The intuition from Theorem 1 is that a smaller L means a faster drop in one optimization step. However, it doesn't mean the minimum value of L_grad^x would be smaller, i.e., that the reconstruction result would be better. As the experiment details are missing in both the main text and the appendix, I hypothesize that this is likely due to an unfair optimization set-up, e.g., the larger L_grad might need more iterations to converge, etc. As for the experiment set-up, what gradient inversion attack is evaluated for those experiments? [1] Zhu, Ligeng, Zhijian Liu, and Song Han. "Deep leakage from gradients." Advances in neural information processing systems 32 (2019). -------After Discussion-------- I have read all responses and I'll keep my score. I thank the authors for the clarification and the additional experiment results. I hope the authors can improve their paper by incorporating these results in the revision and improve their writing by highlighting the conclusions for each setting (BN, skip connection, training stages, etc.) with clear conditions. Yes, the authors fairly describe the limitations. <doc-sep>This paper works on privacy attacks in the black-box FL setting, specifically using gradient inversion methods. The contribution is on image tasks and is empirically solid. Different aspects of models are systematically evaluated to identify the vulnerability to privacy attacks. The authors also propose a new measure, the angular Lipschitz constant, to measure the privacy risk. Strength: Overall the paper is well-written and tackles a clear task. FL privacy is an important and practical concern, especially in the black-box setting the authors have worked on. The motivation of an honest-but-curious host could be realistic. Empirically speaking, the ablation study of BN modes, number of channels, skip connection, etc. is thorough.
From a theoretical viewpoint, the proposed angular Lipschitz constant has nice properties (scale-invariance); the theorem has a reasonably justified premise. Weakness: My main concern is the impact of this paper: the experiments are limited to one simple dataset, which limits my confidence that the trends observed in this paper hold in a broader range of cases (see more comments in "Limitations"). All conclusions except the scale-invariance property of the new Lipschitz measure are empirical, which requires more experiments to support the trends claimed in the paper. The limitations are discussed in the paper. However, I believe there could be additional limitations that are worth consideration. For example, the experiments are very limited; only CIFAR datasets are tested, which contain only tiny images. It would be desirable to understand how gradient inversion attacks work on moderate-resolution images (e.g. CelebA or ImageNette) and maybe even on language samples. <doc-sep>The submission "Foreseeing Privacy Threats from Gradient Inversion Through the Lens of Angular Lipschitz Smoothness" is concerned with gradient inversion attacks in federated learning. The first part of this submission contains additional results and ablation studies about the success conditions of such attacks. The second part proposes a new measure of vulnerability that is claimed to be a key to client-sided defenses. I, unfortunately, have several points of criticism with this work, which I will list out below: * This submission analyzes gradient inversion attacks as developed around 2019/2020, but much of the analysis given in this submission has already been included in other work also evaluating these attacks since. Questions brought up concerning batch normalization have already been analyzed in works such as Huang et al., "Evaluating Gradient Inversion Attacks and Defenses in Federated Learning", from last year's NeurIPS. Solutions to the normalization question have then been proposed in Hatamizadeh et al., "Do Gradient Inversion Attacks Make Federated Learning Unsafe?". Whether labels are required was also discussed in Huang et al., and work such as Wainakh et al, "User Label Leakage from Gradients in Federated Learning" have made strides towards showing that label knowledge is not a requirement for a successful attack. The impact of training state on recovery success has been discussed several times, from Zhu et al, "Deep Leakage from Gradients", Geiping et al., "Inverting Gradients - How easy is it to break privacy in Federated Learning" and also appears in follow-up work such as Wei et al., "A Framework for Evaluating Gradient Leakage Attacks in Federated Learning". * The submission states that choices in Zhu et al., were based on the unavailability of automatic differentiation. Yet, AD was certainly already developed and present in 2018! These paragraphs in the related work are incorrect, and it is unclear what "computed the loss in closed form" here should mean. The existence of closed-form solutions (which Zhu et al. do not employ, as they also backpropagate gradients using AD) is unrelated to the smoothness of the loss function. Zhu et al. use smooth activations because their optimization strategy is based on an L-BFGS solver that nominally requires well-defined higher-order derivatives. Later works use adaptive first-order optimizers and hence drop this requirement. * I am doubtful about the proposed black-box setting.
The submission proposes a black-box setting in which the global model parameters are hidden from the user, but the user application requires these parameters to compute the update gradient. The parameters have to be sent to the user device, and the user has full control over their device, so the users necessarily have access to the model parameters as well. * Further, for a user to use known inversion attacks to gauge their loss of privacy is an inherently unsafe proposal. Even if a user uses these attacks themselves and finds that they cannot reconstruct their data, this is no guarantee of privacy! These attacks can only be used to prove vulnerability, never to ascertain safety. * The introduction mentions evaluation of additional models. However, in the experimental section I find only the ConvNet and ResNet variations used in previous work? * Minor: The cited work by Bauschke, Bolte and Teboulle is one of their few works where Lipschitz smoothness of the gradient in the classical sense is not actually required, in contrast to how this work is cited here. * I like the general idea of the section concerning angular Lipschitz smoothness, but the concept makes up such a small part of the submission that it is difficult to quantify its effectiveness. The submission shows some correlation with attack success if measured in LPIPS, but the correlation is not overly strong and it is unclear whether this is a strong argument for robustness. LPIPS scores are only a limited measure of safety. The main question here would be whether some threshold of angular Lipschitz smoothness would imply that no reconstruction succeeds. The measure discussed in Yin et al., "See through gradients" might be helpful here. Further, it would be necessary to discuss whether this measure is robust to adaptive attacks, meaning whether this proposed measure can be "gamed" by the attacker in some way. For example, angular smoothness might correlate with gradient obfuscation, which would undermine its effectiveness. * From a more general vantage point, I do think that this submission makes a categorical mistake about the nature of safety. The submission shows scenarios where the (even unmodified) baseline attack works more or less well and concludes from this that the attack should be evaluated in more scenarios, but this misses the asymmetric nature of safety research. It is not necessary for a single attack to work optimally in all scenarios; in each scenario, the question is whether an attack could exist that breaks privacy. Will small modifications, as discussed in this work, categorically defeat all attacks and prove a reliable defense in the future? Will angular smoothness defend against adaptive attacks that circumvent gradient matching losses? The submission discusses some limitations; however, limitations of the proposed measure are not discussed. There is not enough evidence presented that this really would be a key factor in future defenses. <doc-sep>This paper studied the gradient inversion problem where an honest-but-curious server aims to reveal clients' private data from model weights and gradients shared by clients in federated learning (FL). The paper first evaluated SOTA gradient inversion algorithms (mainly [6] with an additional BN-related loss term), under implicit model variations (different BN modes and training epochs with the same model architecture) and explicit variations (different model architectures).
The evaluation showed that the reconstruction results vary across model variations both qualitatively and quantitatively. The paper then proposed the angular Lipschitz constant as a measure to indicate the outcome of reconstruction, which was shown to be more strongly correlated with LPIPS and attack loss drop than the gradient norm. Strengths: - The paper addressed an interesting problem of evaluating gradient inversion algorithms for different models. A systematic experimental study may help gain insights about the mechanism of gradient inversion. - The paper provides a good summary of the optimization-based gradient inversion approach and related prior art in Section 2. - The paper showed some interesting results regarding the BN train mode. Firstly, the inversion achieves the best quantitative performance at early training stages (e=5). Secondly, the inversion achieves better results for ResNet18 than for ResNet18-2 and ResNet18-4, showing that wider networks are not necessarily easier to attack. Weaknesses: - The main contribution of this paper seems ambiguous. The title and abstract suggest that the paper mainly proposes a new measure, the angular Lipschitz constant, for assessing inversion attacks, but a large body of the paper focuses on re-evaluation of existing methods over model variations. While the re-evaluation is valuable, it should be more clearly stated whether it constitutes primary results or just a motivational study. - The scope of re-evaluation of SOTA algorithms in this paper overlaps with [10], which focused on evaluating gradient inversion attacks and defenses. In particular, [10] evaluated the algorithm in [6] with the BN loss term (as in equation (2) of this paper), and considered different settings on knowledge of BN statistics. What is the difference between the BN settings of this paper and [10]? A detailed discussion and comparison with [10] would be beneficial. - The re-evaluation of SOTA algorithms was performed on limited variations. If the empirical study is positioned as the primary contribution, a wider range of models may be considered, such as initialization schemes (pre-trained or random) and random seeds for implicit variations, and more architectures beyond ResNet18 with different widths for explicit variations. - The paper only considered CIFAR100 images in experiments, but not higher-resolution (e.g. ImageNet) images. - The paper motivates the proposed angular Lipschitz constant measure with the case that clients may only have access to gradients provided by the server. However, wouldn't the client need white-box access to compute gradients using private local data for model training in FL? If the client only has black-box access, what is the interplay between model training, gradient inversion, and risk assessment (Section 4)? And how does the client compute the measure (in Line 277) through random sampling? The paper discussed limitations in Section 5. | Several analyses provided in the paper are not novel and are already known in the literature. The black-box setting is not properly motivated and is impractical. The angular Lipschitz constant is a good contribution, but it seems to be only a small part of the paper. For these reasons, the reviewers are not convinced that the contribution of this paper is significant enough.
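As background for the attacks re-evaluated above, a bare-bones optimization-based gradient inversion loop looks roughly as follows. This is a generic PyTorch sketch of the gradient-matching idea only; the actual attacks discussed in the reviews add priors such as total-variation or BN-statistics terms, and details like the distance function and label handling vary between papers.

```python
import torch

def invert_gradients(model, loss_fn, observed_grads, label, input_shape,
                     steps=1000, lr=0.1):
    """Illustrative gradient-matching reconstruction of a single client input.

    observed_grads: gradients shared by the client; label: assumed known here
    for simplicity. A dummy input is optimized so that its gradients match the
    observed ones under a cosine-distance matching loss.
    """
    dummy = torch.randn(input_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred_loss = loss_fn(model(dummy), label)
        grads = torch.autograd.grad(pred_loss, model.parameters(), create_graph=True)
        num = sum((g * o).sum() for g, o in zip(grads, observed_grads))
        den = (sum(g.pow(2).sum() for g in grads).sqrt()
               * sum(o.pow(2).sum() for o in observed_grads).sqrt())
        match = 1 - num / den          # cosine-distance gradient matching
        match.backward()
        opt.step()
    return dummy.detach()
```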
The authors have previously published the dataset, which greatly facilitates reproducing this and related work. The techniques and results look promising and interesting for the MIDL audience. The glottis segmentation task is relevant for voice disorders and voice research and justifies real-time segmentation techniques. My main concern about the paper is that both the methods and the quantitative results are only briefly sketched. I am unsure whether to recommend accepting this MIDL contribution, because I believe the underlying work (as described in the IEEE Access paper) is highly relevant and interesting to the MIDL audience, or whether to (weakly) recommend rejection because the adaptations necessary for the Edge TPU are not really described, and also the quantitative evaluation has been stripped down too much to be understandable. | The paper summarises an interesting journal paper about Edge-TPU computing. The reviewer points out that the underlying publication is of high interest but criticises that the reduction to three pages reduced the readability. I agree with them and recommend acceptance, but would also encourage the authors to modify the paper where possible so that it is easier to understand.
This paper does unpaired cross-domain translation of multi-instance images, proposing a method -- InstaGAN -- which builds on CycleGAN by taking into account instance information in the form of per-instance segmentation masks. ===================================== Pros: The paper is well-written and easy to understand. The proposed method is novel, and does a good job of handling a type of information that previous methods couldn’t. The motivation for each piece of the model and training objective is clearly explained in the context of the problem. Intuitively seems like a nice and elegant way to take advantage of the extra segmentation information available. The results look pretty good and clearly compare favorably with CycleGAN and other baselines. The tested baselines seem like a fair comparison -- for example, the model capacity of the baseline is increased to compensate for the larger proposed model. ===================================== Cons / suggestions: The results are somewhat limited in terms of the number of domains tested -- three pairs of categories (giraffe/sheep, pants/skirts, cup/bottle). In a sense, this is somewhat understandable -- one wouldn’t necessarily expect the method to be able to translate between objects with different scale or that are never seen in the same contexts (e.g. cups and giraffes). However, it would still have been nice to see e.g. more pairs of animal classes to confirm that the category pairs aren’t the only ones where the method worked. Relatedly, it would have been interesting to see if a single model could be trained on multiple category pairs and benefit from information sharing between them. The evaluation is primarily qualitative, with quantitative results limited to Appendix D showing a classification score. I think there could have been a few more interesting quantitative results, such as segmentation accuracy of the proposed images for the proposed masks, or reconstruction error. Visualizing some reconstruction pairs (i.e., x vs. Gyx(Gxy(x))) would have been interesting as well. I would have liked to see a more thorough ablation of parts of the model. For example, the L_idt piece of the loss enforcing that an image in the target domain (Y) remain identical after passing through the generator mapping X->Y. This loss term could have been included in the original CycleGAN as well (i.e. there is nothing about it that’s specific to having instance information) but it was not -- is it necessary? ===================================== Overall, while the evaluation could have been more thorough and quantitative, this is a well-written paper that proposes an interesting, well-motivated, and novel method with good results. ========================================================================== REVISION The authors' additional results and responses have addressed most of my concerns, and I've raised my rating from 6 to 7. > We remark that the identity mapping loss L_idt is already used by the authors of the original CycleGAN (see Figure 9 of [2]). Thanks, you're right, I didn't know this was part of the original CycleGAN. As a final suggestion, it would be good to mention in your method section that this loss component is used in the original CycleGAN for less knowledgeable readers (like me) as it's somewhat hard to find in the original paper (only used in some of their experiments and not mentioned as part of the "main objective").<doc-sep>Post rebuttal: I am satisfied by the points mentioned by authors! 
---------------------------------------------------------------- Summary: The paper proposes to add instance-aware segmentation masks for the problem of unpaired image-to-image translation. A new formulation is proposed to incorporate instance masks with an input image to generate a new target image and corresponding mask. The authors demonstrate it on multiple tasks, and show nice results for each of them. Pros: 1. The formulation is intuitive and well done! 2. The idea of sequential mini-batch translation connects nicely to the old school of making images by layering. 3. Nice qualitative analysis, and good results in comparison with Cycle-GAN (an obvious baseline for the formulation). I would make an observation that two domains for translation (such as sheep to giraffe, jeans to skirts etc) are thoughtfully selected because Cycle-GAN is somewhat bound to fail on them. There is no way Cycle-GAN can work for jeans to skirts because by design the distribution for images from both set would be mostly similar, and it is way too hard for the discriminator to distinguish between two. This ultimately leads the generator to act as an identity mapping (easily observed in all the qualitative examples). 4. The proposed approach can easily find direct application in places where a user-control is required for image editing or synthesis. 5. The literature review is extensive. Cons: 1. My biggest criticism of this work is the absence of simple baselines. Given the fact that the formulation use an instance segmentation map with the given input, the following obvious baseline need consideration: Suppose the two domains are sheep and giraffe: a. given the input of sheep and its instance mask, find a shape/mask in giraffe from the training images that is closest (it could be same location in image or some other similarity measure). b. mask the input image using the sheep mask. Use giraffe mask and add corresponding RGB components of the masked giraffe (from the training set) to the masked input image. The above step would give a rough image with some holes. c. To remove holes, one can either use an image inpainting pipeline, or can also simply use a CNN with GAN loss. I believe that above pipeline should give competitive (if not better) outputs to the proposed formulation. (Note: the above pipeline could be considered a simpler version of PhotoClipArt from Lalonde et al, 2007). 2. Nearest neighbors on generated instance map needs to be done. This enables to understand if the generated shapes are similar to ones in training set, or there are new shapes/masks being generated. Looking at the current results, I believe that generated masks are very similar to the training instances for that category. And that makes baseline described in (1) even more important. 3. An interesting thing about Cycle-GAN is its ability to give somewhat temporally consistent (if not a lot) -- ex. Horse to Zebra output shown by the authors of Cycle-GAN. I am not sure if the proposed formulation will be able to give temporally consistent output on shorts/skirts to jeans example. It would be important to see how the generated output looks for a given video input containing a person and its segmentation map of jeans to generate a video of same person in shorts? <doc-sep>This paper proposes a well-designed instance level unsupervised image-to-image translation method which can handle the arbitrary number of instances in a permutation-invariant way. The idea is interesting and the results on various translation datasets are reasonable. 
Pros: * The proposed method process each instance separately to handle multiple instances. The summarization operation is a simple but effective way to achieve the permutation-invariant property. The context preserving loss is suitable for preserving the background information. * The paper is well written and easy to follow. Cons: * My main concern is about the comparisons with CycleGAN in Figure 4 to 6. Although the CycleGAN+Seg results are shown in Figure 9 indicating that the proposed method can handle multiple instances better. I think there should also be CycleGAN+Seg results in Figure 4 to 6, since the instance segmentation is an extra information. And in my opinion, the CycleGAN+Seg can handle the situation where there are only a few instances (also can be observed in the 1st row in Figure 9). Besides, CycleGAN+Seg can naturally handle the arbitrary number of instances without extra computation cost. Questions: * I wonder what will happen if the network does not permutation-invariant. Except that the results will vary for different the input order, will the generated quality decrease? Since the order information may be useful for some applications. Overall, I think the proposed method is interesting but the comparison should be fairer in Figure 4 to 6. | This paper addresses a promising method for unpaired cross-domain image-to-image translation that can accommodate multi-instance images. It extends the previously proposed CycleGAN model by taking into account per-instance segmentation masks. All three reviewers and AC agree that performing such transformation in general is a hard problem when significant changes in shape or appearance of the object have to be made, and that the proposed approach is sound and shows promising results. As rightly acknowledged by R1 ‘The formulation is intuitive and well done!’ There are several potential weaknesses and suggestions to further strengthen this work: (1) R1 and R2 raised important concerns about the absence of baselines such as crop & attach simple baseline and CycleGAN+Seg. Pleased to report that the authors showed and discussed in their response some preliminary qualitative results regarding these baselines. In considering the author response and reviewer comments, the AC decided that the paper could be accepted given the comparison in the revised version, but the authors are strongly urged to include more results and evaluations on crop & attach baseline in the final revision if possible. (2) more quantitative results are needed for assessing the benefits of this approach (R3). The authors discussed in their response to R3 that more quantitative results such as the segmentation accuracy of the synthesized images are not possible since no ground-truth segmentation labels are available. This is true in general for unpaired image-to-image translation, however collecting annotations and performing such quantitative evaluation could have a substantial impact for assessing the significance of this work and can be seen as a recommendation for further improvement. (3) the proposed model performs translation for a pair of domains; extending the work to multi-domain translation like StarGAN by Choi et al 2018 or GANimation by Pumarola 2018 would strengthen the significance of the work. The authors discussed in their response to R3 that this is indeed possible. |
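As an illustrative aside on the crop & attach baseline requested in the reviews above (one reviewer lays it out as steps (a)–(c)), a minimal version could look like the sketch below. The IoU-based mask retrieval, the helper names, and the placeholder hole mask (to be filled by any inpainting model) are assumptions made here for concreteness only; they are not details from the paper or the reviews.

```python
import numpy as np

def closest_mask(src_mask, tgt_masks):
    """Step (a): retrieve the target-domain instance mask that best matches
    the source mask, here crudely measured by IoU (an assumption)."""
    ious = [np.logical_and(src_mask, m).sum() / (np.logical_or(src_mask, m).sum() + 1e-8)
            for m in tgt_masks]
    return int(np.argmax(ious))

def crop_and_attach(src_img, src_mask, tgt_imgs, tgt_masks):
    """Steps (b)-(c): erase the source object, paste the retrieved target
    object's RGB pixels, and return the remaining holes to be inpainted."""
    idx = closest_mask(src_mask, tgt_masks)
    out = src_img.copy()
    out[src_mask] = 0.0                                   # remove the source instance
    out[tgt_masks[idx]] = tgt_imgs[idx][tgt_masks[idx]]   # attach target-object pixels
    holes = np.logical_and(src_mask, ~tgt_masks[idx])     # still-empty pixels
    return out, holes

# toy data standing in for real images and instance masks
rng = np.random.default_rng(0)
src_img = rng.random((64, 64, 3))
src_mask = np.zeros((64, 64), dtype=bool)
src_mask[10:30, 10:30] = True
tgt_imgs = [rng.random((64, 64, 3)) for _ in range(5)]
tgt_masks = []
for _ in range(5):
    m = np.zeros((64, 64), dtype=bool)
    r, c = rng.integers(0, 30, size=2)
    m[r:r + 25, c:c + 20] = True
    tgt_masks.append(m)

out, holes = crop_and_attach(src_img, src_mask, tgt_imgs, tgt_masks)
print(out.shape, int(holes.sum()))  # the hole pixels would go to an inpainting network
```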
Summary: This paper proposes a future frame prediction framework where the video generation can transition between different actions using a Gaussian process trigger. The framework consists of three components: an encoder which encodes the frame to a latent code, an LSTM which predicts the next latent code given the current one, and a Gaussian process which samples a new latent code. The framework can decide whether to switch to the next action by adopting the new latent code, depending on the number of frames passed or the variance of Gaussian. Strengths: The paper is easy to follow overall. The usage of Gaussian process to trigger the transition to the next action is reasonable and intuitive. Quantitative evaluations show that the method outperforms existing works for both reconstruction and output diversity for various datasets. Weaknesses and comments: There are quite a few typos in the writing, especially toward the latter part of the paper. I’d encourage the authors to do a thorough check to ensure the paper is typo-free. It seems switching actions at some fixed number of frames beats using the Gaussian variance for FVD, which is quite surprising. Can the authors provide some insights? Is it due to some inherent nature of FVD, or there’s still some room for improvement for the choosing criteria? How important is the heuristic of changing states when using GP? Currently it is triggered when the variance is larger than two standard deviations. How will it affect the performance if a different threshold is used? There’s a mistake in Table 1. The diversity score for DVG@15,35 is the best for KTH frames [10,25] (48.30), but DVG GP is bolded (47.71). This might also be an interesting point to discuss about why fixed number of frames performs better than GP. <doc-sep>### SUMMARY The authors propose to use a Gaussian Process (GP) to model the uncertainty of future frames in a video prediction setup. In particular, they employ a GP to model the uncertainty of the next step latent in a latent variable model. This allows them to use the GP variance to decide when to change an "action sequence", corresponding to a deterministic dynamics function implemented using an LSTM. ### STRENGTHS AND WEAKNESSES [+] Empirical results [+] Well-motivated model [+] Clear presentation [-] Experimental section could be improved (missing baselines in some tables, results seem to differ from those in the literature) ### DETAILED COMMENTS The paper proposes a novel approach for video prediction. Following the standard latent variable model setup used by many VAE-based video prediction models, the authors propose to use a GP to model the uncertainty in the latent space while also learning a deterministic dynamics model (LSTM) on this latent space. Then the GP is used to decide when a future frame has high uncertainty, and in those cases multiple latents can be sampled from the GP. In general the paper is clear and well-written. The experimental section could be improved. In particular, more details about how the comparison to some baselines was made would be appreciated. For example, the results for the VRNN model in Figure 4 and 5 do not follow the results in the literature, where it outperforms SVG and SAVP, and its unclear whether its due to an architectural change, suboptimal hyperparameters, or a different reimplementation. Further this model is missing from some other comparisons such as Table 1. For SAVP the results for Figure 4 seem much worse than those reported in the original paper. 
On the other hand, the authors did some ablation experiments and included different metrics to analyze the performance of their method. ### SCORE I vote for accepting the paper. The model formulation is clear, well-motivated and novel. The results are positive and overall it seems like a valid alternative to current approaches that will be of interest to the video prediction community. I would encourage the authors to provide a more thorough experimental section. ### POST-REBUTTAL UPDATE After reading the other reviews and the authors' rebuttal, I stand by my rating of 6.<doc-sep>In this work, the authors propose to apply Gaussian Processes to generate future video frames with high diversity. Specifically, they use variance of GP prediction as a trigger to control when we should switch to a new action sequence. Strength 1 The paper is written well, and the organization is OK 2 The idea of using GP for video generation sounds interesting Weakness 1 The way of using GP is kind of straightforward and naive. In the GP community, dynamical modeling has been widely investigated, from the start of Gaussian Process Dynamical Model in NIPs 2005. 2 I do not quite get the modules of LSTM Frame Generation and GP Frame Generation in Eq (4). Where are these modules in Fig.3 ? The D in the Stage 3? Using GP to generate Images? Does it make sense? GP is more suitable to work in the latent space, is it? 3 The datasets are not quite representative, due to the simple and experimental scenarios. Moreover, the proposed method is like a fundamental work. But is it useful for high-level research topics, e.g., large-scale action recognition, video caption, etc? | All three reviewers agree on accepting the paper and think that the proposed approach will be of interest for those working in vdieo prediction. The authors are asked to include the extra discussion with R3 as part of the paper and include the proposed changes by R2 to provide more thorough experimentation. The paper is recommended as a poster presentation. |
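To make the GP-based switching mechanism discussed in the reviews above a bit more tangible, here is a small, self-contained sketch of a variance-triggered rollout. The 1-D latent code, the RBF kernel, the specific threshold rule (current predictive std versus a running average), and the toy dynamics standing in for the learned LSTM are all assumptions made for illustration; they are not taken from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def rollout(z0, lstm_step, horizon=30, sigma_factor=2.0):
    """Follow the deterministic dynamics (lstm_step) while the GP fit on the
    past latents is confident, and sample a new latent (i.e. switch 'action
    sequence') when its predictive std exceeds sigma_factor times the
    running-average std seen so far."""
    zs = [float(z0)]
    stds = []
    for t in range(1, horizon):
        z_next = lstm_step(zs[-1])                 # deterministic proposal
        if t >= 5:                                 # need some history for the GP
            X = np.arange(t, dtype=float).reshape(-1, 1)
            y = np.array(zs)
            gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-3).fit(X, y)
            mu, std = gp.predict(np.array([[float(t)]]), return_std=True)
            stds.append(float(std[0]))
            if std[0] > sigma_factor * np.mean(stds):
                # uncertainty is high: sample from the GP instead of the LSTM
                z_next = float(np.random.normal(mu[0], std[0]))
        zs.append(float(z_next))
    return zs

# toy deterministic dynamics standing in for the learned LSTM
print(rollout(z0=0.1, lstm_step=lambda z: 0.9 * z + 0.05)[:10])
```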
This paper introduces learnable compressible subspaces, which attempts to learn a set of models that can be switched at inference time to adapt to different resource requirements. This work is motivated by previous work in neural subspaces and slimmable networks. It is evaluated on CIFAR10 and ImageNet and compared against other recent works for adaptable inference models. These results show under certain conditions LCS can maintain higher accuracies at larger sparsities compared to other works. === Strengths === Table 1 is useful in summarizing the related works and the claimed advantages for LCS. The experiment section listed a significant amount of detail on hyperparams and methods. The small discussion and figures on batch norm stats shifts were interesting and helpful for motivating group and instance norm for this application. This area of adaptive inference is becoming more and more important with larger models and specialized big-little architectures. === Weaknesses === It is unclear how these networks switch between models at inference time, which of course should depend on the type of compression used. For sparsity, this seems like it would require dynamically pruning the model at inference time which seems very dangerous. For unstructured sparsity, this may require special hardware for taking advantage of that unstructured sparsity. For quantization, this may require hardware can support fine-grained switching of the quantization bitwidth. My understanding is that batch norm stats have to be recomputed for NS and US in a post-training way but not necessarily at inference time. It doesn't seem fair to avoid this step since it can be done before model deployment and takes a fraction of the training time. Leaving it out in the evaluation also nullifies the comparison if this is not fair. The claim that other methods need additional batch norm params and cannot support fine-grained compression level is mostly correct, but the importance of this seems overstated. In practice, it seems more reasonable to chose a smaller subset of model configurations that can be fully tested before deployment, and the batch norm params should be nearly negligible compared to the weights. Also, quantization LCS does of course limit the number of compression levels, and structured pruning LCS limits the number of compression levels to the number of channels (which is similar to US). The existence of gamma and alpha together is confusing to follow. Since the parameterization in linear, it seems like only one of these should be necessary. There should be other works included for building robust compressible models, e.g. Robust Quantization (Neurips20). The writing is clear but repetitive in some areas. For example, I believe there are 5 sentences talking about being inspired by Wortsman in the first few pages. === Questions === In the Related Works, the neural subspace method is described as operating on simplices, but the description in Section 3.1 seems to be on lines. Is this deliberate? Please correct me if I'm wrong but isn't the unstructured compressible point method Dropout? There might be differently weighted probabilities and dynamic dropout probabilities, but they seem fundamentally the same. For quantization, what hardware supports dynamic fine-grained switching from 3-8 bits? How are the pruned channels or pruned individual weights chosen at runtime in an adaptive way? 
This paper is an interesting proposal that attempts to apply the ideas of neural subspaces to produce a set of compressed models at varying points on the accuracy / efficiency curve. Yet, these methods in the end seem more about learning robust compressible models and stray far from the original neural subspace idea, especially with compressible points. In my current understanding, these networks seem to have no demonstrated advantage over universally slimmable networks, which have a simpler validation process, runtime switching method, and more intuitive training procedure. The comparison against these networks and others needs to be better justified, since I currently do not understand why fine-tuning is not allowed. If I misunderstood the method significantly, I would be willing to increase my score, but currently I suggest rejecting the paper. <doc-sep>The paper presents a method for learning a compressible subspace of neural networks that contains a fine-grained spectrum of models ranging from highly efficient to highly accurate. The proposed method allows choosing the proper accuracy-efficiency trade-off point at inference time, according to the available resources. There are also efforts to reduce the runtime tweaking overhead, such as replacing BatchNorm with GroupNorm. Strengths: the paper is well motivated, as adapting to the resources available at runtime is important. Weaknesses: * The method is a direct extension of the learning-subspaces method. * There are important details missing from the paper. E.g., there is no information on how to generate alpha from the state of the hardware at runtime. This creates severe difficulty in understanding and reproducing the method. * There are no measurements of hardware performance. For example, only the accuracy of the classification models is given; memory bandwidth, latency, or FPS are not reported to quantitatively measure the advantage. * The paper is not properly compared against peer methods. For example, the work is not well compared with pruning and quantization methods. Questions for the Author(s): * Please elaborate on the definition of the compression function f and the intuition behind it. * How is the hyper-parameter alpha chosen on hardware at runtime? * What will happen to the architecture of a model if pruning is also performed? * If using this method on hardware, how are the quantization meta-parameters (scale and zero-point) changed accordingly? The paper proposes a method that reasonably extends the learning-subspaces method to allow trading off accuracy against efficiency according to the resources available at runtime. The method has been evaluated on several classification tasks and found to be useful. However, the paper does not clearly explain how the compression is performed, with important details like the choice of alpha missing. The measurement of speedup is not very quantitative, lacking real-world test statistics. It is very difficult to evaluate the contribution of this paper under these conditions.
The detailed forms of the compression level function $\gamma(\alpha)$ and the compression function $f(\omega, \gamma)$ are not provided. These are the core of the algorithm and the authors need to show them. Also, the authors do not explain what determines the dimension n of the stochastic function $\alpha$. If $\alpha$ controls the sparsity at the level of each weight, n will be extremely large, and the training overhead of the proposed algorithm will be extremely large because n forward and backward passes are needed for each batch. Even if $\alpha$ controls the sparsity at the layer level, the training overhead will still be formidable. I cannot find much information about n in this paper. The authors need to provide more details about the dimension n of $\alpha$. Many important details are missing in this paper. For example, the finer-grained compression level is a major selling point of this paper, but the authors did not even provide the compression level function $\gamma(\alpha)$. <doc-sep>The paper proposes to learn compressible subspaces which can adaptively compress the network during inference. It constructs either a linear subspace or a single endpoint for compression. It replaces BN with GroupNorm to avoid re-calibration during inference or after adjustment. The method is evaluated in three different scenarios: structured sparsity, unstructured sparsity, and quantization. The paper is well written and easy to follow. The ideas of constructing a linear subspace and using the function $f(w(\alpha), \gamma(\alpha))$ to perform compression during inference are novel. The analysis of BN parameters under adjustment provides a quantitative analysis in this area. The experiments are exhaustive and can well support their ideas. However, as the paper claims, the subspace is biased to contain high-accuracy solutions at one end and high-efficiency solutions at the other end. In my understanding, the two endpoints use the same network architecture. How is a network trained to obtain $w_1$ and $w_2$ in this case? I think the paper is well written; the method is novel and interesting; the experiments can well support the claims. Some details just need to be clarified. | This paper proposes a method for adaptive network compression at inference time. However, the paper contains various issues raised by the reviewers that need to be addressed.
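Since the reviewers above repeatedly ask what the compression-level function γ(α) and the compression function f(w(α), γ(α)) might actually look like, here is one hypothetical instantiation for the unstructured-sparsity case: weights are taken on a line between two trained endpoints and magnitude-pruned at a level that grows with α. The linear γ(α) and the pruning rule are assumptions written purely for illustration; they are not the paper's (unstated) definitions.

```python
import numpy as np

def weights_on_line(w1, w2, alpha):
    """Point on the linear subspace (line) between the two trained endpoints."""
    return (1.0 - alpha) * w1 + alpha * w2

def gamma(alpha, max_sparsity=0.9):
    """One possible compression-level function: sparsity grows linearly with
    alpha (purely an assumption for illustration)."""
    return alpha * max_sparsity

def compress_unstructured(w, sparsity):
    """Unstructured magnitude pruning: zero out the smallest-magnitude
    fraction `sparsity` of the weights."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) > thresh, w, 0.0)

# at inference time, alpha would be picked according to the available budget
rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=(256, 128)), rng.normal(size=(256, 128))
for alpha in (0.0, 0.5, 1.0):
    w = compress_unstructured(weights_on_line(w1, w2, alpha), gamma(alpha))
    print(alpha, float((w == 0).mean()))  # achieved sparsity at this alpha
```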
This paper introduces a method on semi-supervised graph classification. For each graph, the method first constructs another view based on the cosine similarity between nodes' features, and from the two views (topology and feature similarity), GCN and GAT are applied to extract representations. All node representations are further combined via two layers of attentions. A diversity loss that encourages dissimilarity between the learned representations of GCN and GAT is introduced to the cross-entropy loss for joint optimization. The whole framework makes sense in terms of learning meaningful node representations for classification. However, the method lacks novelty, it is an incremental development on the existing graph neural networks. The choice of GCN and GAT as the building blocks are not well justified. It is also possible to try other kinds of GNNs. The statement on GAT "ignores the inherent structure of the graph space" on page 4 is confusing since it learns weights based on the graph structures. The experimental results show the better performance of the proposed method, but are not well analyzed. It may be better to compare with other multi-graph methods such as Yu Shi, Fangqiu Han, Xinwei He, Xinran He, Carl Yang, Jie Luo, and Jiawei Han. "mvn2vec: Preservation and collaboration in multi-view network embedding." arXiv preprint arXiv:1801.06597 (2018). Also, it seems AM-GCN in the experiments also works on multi-view of graphs. The superiority of the proposed method compared to AM-GCN is not clearly described in the paper. <doc-sep>This paper presents a dual complementary network framework for graph representation learning. Two graphs representing topology and features respectively are first constructed. Then, two branches leveraging the two graphs are proposed to explore different aspects of the original graph. Finally, a diversity loss is presented to capture the rich information of node features. To me, the overall presentation is barely satisfactory, with many proposals not well-motivated. Also, the novelty is limited given the large body of existing work on exploring dual aspects of graphs. Moreover, the experiments are not convincing. Detailed comments: * The reason why two branches of GConv nets are used is not clear. I am especially not clear how the proposed DGCN differs from dual-channeled GAT, why the diversity loss is not employed on attention heads, and how does the embedding learnt by GCN supplement the information of that by GAT. More elaborations are needed. * Experiments are not convincing; the result analysis of this paper is rather superficial. * Since DGCN uses two branches of GConv nets, large-scale datasets are necessary to evaluate the performance and efficiency. * Inconsistency between GCN and kNN-GCN. It seems that on UAI2010, BlogCatelog, and Flickr kNN-GCN is significantly better than GCN, but the opposite holds for the other two datasets. It should be noted why the two methods show such different performance on different datasets. * Given the large amount of existing literature regarding dual networks, many related methods are missing. The authors should especially pay attention to network embedding techniques, e.g., [1]. Minor: * Mathematical expressions are in chaotic forms, which makes the readability poor. * Page 5: CDAN -> DGCN? [1] Z. Meng, S. Liang, H. Bao, and X. Zhang, Co-Embedding Attributed Networks, in WSDM, 2019, pp. 
393–401.<doc-sep>The paper presents a GNN model to jointly encode both topology and feature graphs to enhance the quality of node representations. In particular, the model DGCN uses two GCNs to learn and propagate two different types of node representations on the topology graph, and two GATs to learn and propagate two different types of node representations on the feature graph. Finally, the model leverages attention mechanisms over these four types of node representations to produce the final node embeddings. Pros: + The model obtains promising results. Cons: + The motivation in the second paragraph of the introduction is confusing. The quoted sentence - "Most of the traditional GNNs only consider the single connection between nodes and ignore other implicit information" - raises the question of why multiple connections between nodes are not considered, given that there are GNN works on hyper-graphs such as [2]. + The intuition in the third paragraph of the introduction is not clear, as the paper does not have any ablation study for this intuition. "Network performance is largely related to the quality of the graph, which usually emphasizes the relevance of an attribute of instances" - which references support this intuition? What are the attributes of instances? + Using v=1 for A_1 to denote the topology graph and v=2 for A_2 to denote the feature graph makes the paper harder to read. + The paper is not well written, as it does not include any description of the model parameters in either the paper or the supplementary material. So it is hard to understand how DGCN and the baselines are trained, and how to analyze the model and the ablation studies. + Most importantly, regarding the model architecture, DGCN is essentially the same as AM-GCN. In particular, DGCN changes from using two GCNs for the feature graph in AM-GCN [1] to using two GATs. Therefore, DGCN is straightforward and incremental (i.e., lacking novelty). [1] AM-GCN: Adaptive Multi-channel Graph Convolutional Networks. KDD 2020. [2] Hyper-SAGNN: a self-attention based graph neural network for hypergraphs. ICLR 2020. <doc-sep>This work proposes a new method combining GCN and GAT to perform the semi-supervised node classification task. The new model uses node features to build another graph, applies GCN and GAT on the original graph and the new graph, and also adds a loss term to reduce the similarity of the final node representations. 1. The idea is very heuristic, without much insight. The paper keeps arguing that traditional GNNs only use one-sided information, but they actually leverage both node features and graph structure by propagating node features over the graph structure. So the statement is not correct. The paper claims that different node attributes contribute in different ways that should be sufficiently leveraged, but this is a very confusing argument if not paired with empirical justification. In the proposed model, it seems the authors try to resolve this confusing issue by using another graph structure built based only on node attributes. The connection showing why this method resolves the problem they raise is very unclear. 2. The experiments are also not a fair comparison, as the paper does not use the standard way of splitting the datasets. For this new splitting, no hyperparameters are reported for either the model here or previous models. Some benchmark datasets for semi-supervised learning are also not used, e.g., Cora, Pubmed, ... 3.
There are quite a few grammatical errors. I suggest the authors perform a thorough grammar check. | All four reviewers expressed very significant and consistent concerns about this submission during review. No reviewer was willing to support this submission during discussion. It is clear this submission does not meet the bar of ICLR.
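As a concrete picture of the two ingredients debated in the reviews above — the feature-similarity view and the diversity loss — consider the small numpy sketch below. The kNN construction from cosine similarity and the mean-cosine-similarity penalty are plausible instantiations written for illustration only; the paper's exact construction and loss may differ.

```python
import numpy as np

def knn_feature_graph(X, k=5):
    """Build the 'feature view' adjacency: connect each node to its k most
    cosine-similar nodes."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)           # exclude self-similarity
    A = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        nbrs = np.argsort(sim[i])[-k:]        # indices of the k nearest nodes
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)                 # symmetrize the adjacency

def diversity_loss(Z1, Z2):
    """One way to encourage the two branches to learn different things:
    penalize the mean cosine similarity between their node embeddings."""
    Z1n = Z1 / (np.linalg.norm(Z1, axis=1, keepdims=True) + 1e-8)
    Z2n = Z2 / (np.linalg.norm(Z2, axis=1, keepdims=True) + 1e-8)
    return float(np.mean(np.sum(Z1n * Z2n, axis=1)))

X = np.random.default_rng(0).normal(size=(10, 16))   # toy node features
A_feat = knn_feature_graph(X, k=3)
print(A_feat.sum(), diversity_loss(X, X[:, ::-1]))
```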
The paper generalizes recent results (Martins et al. 2020, 2021) on continuous attention mechanisms to sparse multimodal density classes, which are referred to as kernel deformed exponential families. The authors theoretically analyze three main aspects of the density families: normalization, approximation ability, and evaluation of the context function. Then the authors apply the methods to real data experiments. The paper extends previous results on continuous attention mechanisms to kernel deformed exponential families. The authors make a great effort to apply their methods in practice. My major concerns are as follows: (1) There are many notations and terminologies used without reference or definition. (2) Given the previous results such as Martins et al. (2020, 2021, 2022) and Farinhas et al. (2021), the theoretical novelty is limited. (3) Since numerical integration has to be implemented in practice and no reliable theoretical analysis is given, the empirical performance of the framework is questionable. Yes. Also see the weaknesses section. <doc-sep>This paper proposes an extension of the continuous attention formalism presented by Martins et al. (2020). This extension consists in replacing the linear parameterization of the score function ($f$) in continuous attention by a kernel in an RKHS, yielding a kernel exponential family density ($\tilde{f}$). Using q-exponentials, the authors propose using _kernel deformed exponential families_, resembling the entmax formulation of Martins et al. (2022) for $1 < \alpha \leq 2$, but with a special form for $f$. The main challenge in this new formulation is that computing Equation 1 with $\tilde{f}$ is harder, and thus the authors resort to numerical integration for the forward pass and rely on automatic differentiation for the backward pass. The authors also show that numerical integration can be done efficiently with exponential convergence for kernel exponential attention. The paper presents three carefully chosen experiments to showcase multimodal continuous attention: Time Warping, ECG heartbeat classification, and Automotive symptom detection. The results indicate that kernel multimodal continuous attention is very effective, even when compared to standard continuous softmax and sparsemax attention, and it also performs better than the Gaussian mixture approach, which also yields multimodal densities. This paper has enough content to be appropriately divided into two main views: practical and theoretical. On the practical side, the paper does a great job at casting deformed exponential families within the framework of continuous attention of Martins et al. (2020). The experiments were carefully chosen to illustrate the main advantages of the new formulation, mainly due to the necessity of having multimodal densities. On top of that, the authors also compared their work with related works, including the Gaussian mixture approach of Farinhas et al. (2021). The results of the experiments are impressive as well as expected, given the importance of multimodality. The paper also presents results for text classification in Appendix G, but I believe this experiment could be in the main paper, as it was a key result reported by Martins et al. (2020). A clear downside of this work is that it requires numerical integration for the forward pass (even though the convergence is fast), which might preclude its use in certain applications.
Given this discrepancy with unimodal continuous softmax/sparsemax, the paper could be improved by also reporting runtime alongside test performance. On the theoretical side, I believe the paper is very dense. Despite being comprehensive, the appendix is overwhelming. Some proofs could point to other works, such as D.1 -> B.3 in Martins et al. (2020). For me, it is not very clear why Proposition 5.1 and Corollary 5.2 are needed in the paper. They seem like results orthogonal to the practical side. Other than this, to my knowledge, the theoretical aspect of the paper looks sharp. Overall, I believe the authors introduced the problem and the motivation for having multimodal continuous attention very well. The related works section is also well written. I believe that a plot from Figure 9 would further clarify the main point of the paper if it were presented in the introduction (alongside its counterpart plots in Figures 5, 6, 7, 8). The authors have addressed the limitations and potential negative societal impact of their work adequately. <doc-sep>In this paper, the authors proposed a multimodal attention density based on kernel exponential families and kernel deformed exponential families for continuous attention mechanisms. Furthermore, the authors performed theoretical analysis of normalization, approximation capabilities, and related properties. In addition, the authors conducted a set of experiments to evaluate the performance of the proposed multimodal attention density, and the experimental results show that kernel continuous attention often outperforms unimodal continuous attention. ## Strengths: - The concept of kernel deformed exponential families is proposed, which is a sparse multimodal density class. - The overall presentation of this paper is good. ## Weaknesses: 1) The motivation for using a multimodal attention density is not well organized or elaborated. 2) This paper tends to be incremental work. The authors applied kernel methods to the (deformed) exponential families to construct a multimodal attention density for continuous attention. However, kernel exponential families, deformed exponential families, and continuous attention are all existing work. 3) The experiments are not sufficient: why are only two performance metrics and a few datasets used in the experiments section? NO <doc-sep>The authors provide sufficient conditions under which the normalising constant of a kernel exponential family exists (proposition 5.1). Furthermore, they introduce so-called deformed kernel exponential families (sec 5.2) and provide a similar normalisation theory (corollary 5.2), and show conditions under which deformed kernel exponential families are dense in a class of deformed exponential families that uses the space of continuous functions vanishing at infinity in place of the RKHS (appendix D.4). The deformed variant allows for modelling (infinite dimensional) densities with finite support (Figure 1). These technical results, which are of broad interest, are then applied to a continuous version of kernel attention. Continuous kernel attention is framed as an expectation over a density that is the solution to a regularised strictly convex functional (2). In this case, the (deformed) kernel exponential families are used as the space from which the solution is drawn. The authors try their new versions of kernel attention on some toy datasets, observing better performance than other methods. Strengths: - I enjoyed this paper.
The authors seem to have properly handled normalisation constants for kernel and deformed kernel exponential families. I like that new theory that may be of independent interest is developed. The approach is thought-out and principled, unlike some other attempts to generalise attention. Weaknesses: - I did not like the way equation (1) was presented. It was simply stated "It is (1)". Can you provide a reference for this definition? Or is this your definition? Also see the question in the Questions section below. - In line 235, a finite span is used for the representation of the function in the RKHS. Is this invoking the RKHS representer theorem? It does not apply to the regularised problem (2), because that problem concerns the density itself, not the $f$ in the RKHS. So where does this finite-span representation come from? It seems like Algorithm 1 assumes the coefficients (parameters) of the representer are already known, so presumably one way to justify this is by wrapping Algorithm 1 in some regularised empirical risk minimisation problem? I am happy with either a mathematically rigorous justification or a handwavy justification, as long as the handwaving is clearly admitted in the paper. (Not that it affects my evaluation of the paper in any way, but) IMO there is no need to include the last sentence in section 6. Hate speech is one of many potential negative societal impacts, and anyone who has read this far into the paper will already appreciate that this is a largely theoretical work without direct societal application in mind. Hate speech is a non-trivial issue. Singling out hate speech without any context earlier in the paper looks lazier than if it had not been mentioned at all, and does not respect the issue. | This is a solid contribution overall. The paper is well written and the notation is easy to follow. The authors spent a lot of effort addressing the clarity issues raised by reviewer PeFu. Requiring numerical integration is a limitation, but it is clearly acknowledged in the paper. We recommend acceptance. Minor additional remarks: - The term "value function" was a bit confusing because it has a different meaning in game theory or optimal control. - Section 4 on time warping does not cite any work. It would be great to connect it better with the existing literature.
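To illustrate why numerical integration appears in the forward pass of kernel continuous attention (the main practical limitation noted above), here is a minimal numpy sketch: the score function is a finite kernel expansion, the density is its exponential normalized on a grid, and the context vector is a grid approximation of the expectation of a value function. The RBF kernel, the Riemann-sum quadrature, the grid size, and the toy value function are assumptions for illustration; the deformed (q-exponential) variant and the paper's actual quadrature scheme are not reproduced here.

```python
import numpy as np

def rbf(t, centers, bandwidth=0.05):
    """Gaussian RBF kernel evaluations k(t, t_i) on a grid of locations t."""
    return np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * bandwidth ** 2))

def kernel_exp_attention(coefs, centers, value_fn, grid_size=2000):
    """f(t) = sum_i coefs_i k(t, t_i); p(t) is proportional to exp(f(t)); the
    context is E_p[value_fn(t)], with the normalizer and the expectation both
    computed by a simple Riemann sum on a fixed grid over [0, 1]."""
    t = np.linspace(0.0, 1.0, grid_size)
    dt = t[1] - t[0]
    f = rbf(t, centers) @ coefs
    unnorm = np.exp(f - f.max())          # subtract max for numerical stability
    p = unnorm / (unnorm.sum() * dt)      # numerically normalized density
    context = (p[:, None] * value_fn(t)).sum(axis=0) * dt
    return p, context

# toy example: two kernel centers yield a bimodal attention density
coefs = np.array([3.0, 2.5])
centers = np.array([0.25, 0.75])
value_fn = lambda t: np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
p, c = kernel_exp_attention(coefs, centers, value_fn)
print(c)                                   # 2-dimensional context vector
```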
This paper proposes a new self-supervised learning (SSL) paradigm for sequential recommendation, based on contrastive learning between positive and negative views of sequences generated via model augmentation. The model augmentation methods include neuron masking, layer dropping, and encoder complementing. The proposed algorithm is evaluated on several real-world datasets, showing the efficacy of the proposed methods. ### Strong points 1. Two model augmentation methods for generating two views of a sequence. 2. The detailed experiments show the effectiveness of the proposed augmentation. 3. The paper is easy to read and follow. ### Weak points 1. Overclaimed contributions. The authors should reorganize the contributions in the paper. 2. Lack of technical contributions. Dropping the overclaimed contributions, the remaining contributions are marginal. 3. Lack of significance tests, since the improvements are very small, on the order of 0.001. ### Detailed comments 1. The neuron masking model augmentation has already been proposed in CL4SRec, but is claimed as a contribution in this paper. As such, this model augmentation cannot also be considered a contribution here. 2. Even if the overclaimed contributions are included, the contributions may not be sufficient. The proposed methods look somewhat straightforward, and the motivations are not supported by analysis. 3. The detailed settings of the baselines are not given. 4. Regarding encoder complementing, the authors should compare against the baselines with GRU encoders added as an additional module. This would illustrate the effectiveness of self-supervised learning compared to a simple ensemble. 5. "'SRMA w/o D' outperforms other baselines on the Sports dataset and has comparable performance to 'CL4S.', which indicates the model augmentation is of more impact in the SSL paradigm compared with data augmentation". The authors should discuss in more detail why the superiority of SRMA w/o D over CL4S indicates that model augmentation has more impact in the SSL paradigm than data augmentation. <doc-sep>This paper proposes three levels of model augmentation methods: neuron masking, layer dropping, and encoder complementing. This work opens up a novel direction in constructing views for contrastive SSL and conducts experiments to verify the efficacy of model augmentation for SSL in sequential recommendation. Strengths: 1. This paper puts forward three kinds of model augmentation methods: neuron masking, layer dropping, and encoder complementing. 2. This work proposes an idea for constructing views for contrastive SSL. Weaknesses: 1. What is the motivation for using neuron masking/layer dropping/encoder complementing for model augmentation? Is there any theoretical analysis or intuitive explanation? 2. Although the authors claim the method is model augmentation, I still consider the method a kind of data augmentation, because this paper only uses the three levels of model augmentation to construct view pairs for training the model. This paper proposes three levels of model augmentation methods: neuron masking, layer dropping, and encoder complementing. But the novelty and contributions are limited. Besides, it fails to explain the motivation of the proposed augmentation methods. The paper is trying to address the sequential recommendation problem, in which the goal is to predict the next items in user behavior. The proposed method consists of three levels of model augmentation: neuron masking, layer dropping, and encoder complementing. Strengths: 1.
The paper shows the effectiveness of model augmentation over data augmentation. 2. The paper shows that model augmentation can help the model achieve better performance. Weakness: 1. The paper only considers model augmentation, but model augmentation is closely related to model regularization. As shown in the paper "SSE-PT: Sequential Recommendation Via Personalized Transformer", regularization methods like SSE can help with better performance too. Have the authors given much thought to the differences between model augmentation and model regularization? Are they similar concepts? If so, can some comparisons be done in the experiments to show the differences in terms of effectiveness? The paper addresses an important research problem and shows that a few model augmentation techniques can help with sequential recommendation performance. My main concern about the paper is the lack of explanation of the differences between model augmentation and regularization techniques. Addressing this concern would help us better understand whether the proposed methods are general enough to be applied to other research problems. | This paper proposed a self-supervised learning view for sequential recommendation with different forms of model augmentation: neuron masking, layer dropping, and encoder complementing. Overall the scores are negative. The reviewers raised concerns mostly around the motivation of the proposed approach (which wasn't fully supported by the experimental results) as well as the limited contribution (especially considering some of the augmentation strategies have been proposed in the past). One reviewer also brought up an interesting connection between model augmentation and model regularization. The authors responded that they will keep improving the paper, and hopefully we will see a much improved version in the next submission.
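For readers who want a concrete (and purely hypothetical) picture of how "model-augmented" views of the same input could be produced and contrasted, here is a toy PyTorch sketch covering two of the three augmentations discussed above. The simple MLP encoder, the dropout-based neuron masking, the Bernoulli layer dropping, and the InfoNCE objective are all stand-in assumptions, not the paper's actual architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentableEncoder(nn.Module):
    """Toy encoder whose forward pass supports neuron masking (dropout on
    hidden units) and layer dropping (randomly skipping whole layers)."""
    def __init__(self, dim=64, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_layers)])

    def forward(self, x, neuron_mask_p=0.0, layer_drop_p=0.0):
        for layer in self.layers:
            if self.training and torch.rand(()) < layer_drop_p:
                continue                      # layer dropping: skip this block
            x = torch.relu(layer(x))
            x = F.dropout(x, p=neuron_mask_p, training=self.training)  # neuron masking
        return x

def info_nce(z1, z2, temperature=0.5):
    """Standard InfoNCE between two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

enc = AugmentableEncoder()
x = torch.randn(8, 64)  # a batch of 8 toy sequence representations
loss = info_nce(enc(x, neuron_mask_p=0.2), enc(x, layer_drop_p=0.5))
print(float(loss))
```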
This paper deals with a class of cooperative MARL problems with permutation invariance. It first shows that, for such problems, there exists an optimal policy that is permutation invariant, and the value function can be characterized as a function of the local state of one agent and the empirical state distribution over the rest of the agents. Based on these observations, the authors introduce the mean-field MDP as the limit of the MARL problem with infinitely many homogeneous agents and design a mean-field proximal policy optimization (MF-PPO) algorithm to solve it. It shows that, with permutation invariance, the search space of the actor/critic network depends only polynomially on the number of agents $N$, and establishes the global convergence of MF-PPO. Some numerical results show better performance compared with some existing algorithms. Multi-agent cooperative systems are important research topics, and using mean-field approximation to design algorithms is an interesting research direction. This paper gives some theoretical analysis on the motivation of the mean-field approximation and proposes the algorithm MF-PPO to solve the mean-field MDP, which is new in related fields. However, there are also some major issues, as listed below. 1. This paper provides some analysis of multi-agent cooperative systems with the permutation invariance property, to motivate the mean-field MDP. However, I have some concerns with respect to the motivation. * In Proposition 2.2, the function $g_\nu$ should have some conditions: for example, it is possible that it depends on the number of agents $N$ in the system (when $r=\sum_{i=1}^N s_i$, it is permutation invariant and $g_\nu$ depends on $N$). In such cases, it is not clear why the corresponding mean-field MDP exists when $N$ goes to infinity. There seem to be some additional assumptions needed other than homogeneity and permutation invariance. Besides, the proof of Proposition 2.2 is hard to follow. The authors should make it clearer why Theorem 11 in [Bloem-Reddy and Teh (2019)] can be adapted. * In the finite-agent MDP, the policies considered are randomized policies (which I infer from the notation $a_t\sim \nu(s_t)$). However, in the mean-field MDP, the policy for each agent $\bar a$ becomes deterministic (as it says $\bar a:\mathcal{S}\rightarrow\mathcal{A}$). The authors should explain here why the space of the agent's policies changes. In sum, it would be nice if the authors could provide more explanation of why Definition 2.1 can be viewed as the corresponding limit model, given that they claim "the mean-field MDP has a step-to-step correspondence with the finite-agent MDP with homogeneous agents". 2. Proposition 3.1 seems standard for any symmetric game. See for example [A]. This does not rely on mean-field approximation or the actor-critic framework. The benefit of using mean-field approximation is not clear, and the authors should elaborate more on that. In particular, the permutation invariance idea for MARL has already been explored in [Liu et al. (2019b)]. 3. Given that the authors already cited [Gu et al. (2019), Gu et al. (2020)] and they also proposed mean-field MARL algorithms, it is not clear why they are not compared in the numerical experiments. 4. Some notations in the paper are not rigorous. For example, the spaces of $\nu$ and $\pi$ are not specified. On top of page 5, $\sigma_k=\nu_k\pi_k$ seems problematic. [A] Computing equilibria in multi-player games. Christos H. Papadimitriou and Tim Roughgarden.
This paper proposes to combine mean-field approximation with the permutation invariance idea in [Liu et al., 2019b], which is an interesting direction. Both theoretical analyses and numerical experiments are provided. However, the theoretical justification of the mean-field approximation is very unclear. Also, some closely related algorithms are not compared. <doc-sep>The authors present a principled method of solving problems with multiple homogeneous agents. I have mixed feelings about this paper. It seems to be a valid theoretical contribution, but since it is beyond the scope of my expertise, it is hard for me to fully validate/appreciate. As to the experimental part, I find it to be relatively weak, which might still be OK if the theoretical part is strong enough. I raise some concrete questions, which hopefully will clarify some of my doubts during the rebuttal. 1. Why are energy models used for policy training? Is it a matter of convenience, or is there a deeper reason? 2. Why is there a restriction to $B(\theta_0, R_\theta)$? Shouldn't it be $B(\theta_{k-1}, R_\theta)$ in equation (3.1)? 3. Why does the mean-field MDP use $(s, \text{d}_\mathcal{S})$? It should be enough to just take $\text{d}_\mathcal{S}$ (i.e., none of the agents is special). [The same question applies to Prop 2.2.] 4. Is the proof a novel one? 5. What is the exact experimental setup? In particular: 1. What are the details about the environments used? What are the interactions between agents? How intensive are they? 2. What is the algorithmic setup? I am confused, as it suggests using DDPG, which is somewhat detached from the theoretical analysis. For the moment, I find the contribution below the standard of a top-tier conference like ICLR. It is not clear what the exact theoretical contributions are and how they are related to the experimental part. <doc-sep>This paper proposes a mean-field proximal policy optimization algorithm (MF-PPO) for MARL with a large population of homogeneous agents. The sample complexity is derived based on a two-layer neural network approximation, and the proposed algorithm is tested in several detailed experiments. The paper is nicely written and easy to follow. I like very much the way that the authors motivate and demonstrate how the permutation invariance property helps to reduce the complexity of the MARL problem with homogeneous agents. The techniques used to show the sample complexity are based on (1) the neural policy gradient paper (Wang, Cai, Yang, and Wang, 2019) for the single-agent case and (2) the law of large numbers used in the mean-field approximation. I am confident that the main results are correct. I have several minor suggestions that may help to further improve the exposition: (1) The paragraph starting with "To scale MARL algorithms ..." on page 2: The authors gave an overview of mean-field MARL in this paragraph. I view mean-field games (MFG) and mean-field control (MFC) as two different sets of problems, with MFG solving for Nash equilibria and MFC solving for socially optimal solutions. The authors did not distinguish these two when providing the overview. I suggest the authors divide the literature review into two parts and treat the references for MFG and MFC separately.
A reference that might be relevant to include: Mean-Field Multi-Agent Reinforcement Learning: A Decentralized Network Approach (Gu et al., 2021) (2) Permutation mapping $\kappa(\cdot)$: I understand what the authors mean by "permutation mapping" and it can be any function of the empirical distribution (e.g., the first and second moments). But I suggest the authors provide a more rigorous (mathematical) definition of permutation mapping. (3) Factorization $\nu(a|s)$ on page 3: The authors may add more discussion of $o_i$, which is the information observed by agent $i$. (4) Generality of the definition for mean-field MDP: Is it possible to include $(a,d_{\mathcal{A}})$ in the formulation (reward and transition) instead of only the average action ($\bar{a}$)? (5) Theorem 4.1: The authors may add a discussion on the comparison to the single-agent paper (Wang, Cai, Yang, and Wang, 2019) and highlight the additional technical differences (i.e., handling the mean-field part). How does the term $M$ depend on $N$? (6) Lemma 4.1: The authors may add a discussion on why $\sqrt{1/N}$ appears in the upper bound (I believe it's from the law of large numbers). | This paper proposes a new multi-agent RL algorithm, based on the PPO algorithm, that uses a mean-field approximation, which results in a permutation-invariant actor-critic neural architecture. The paper includes a detailed theoretical analysis that shows that the algorithm finds a globally optimal policy at a sub-linear rate of convergence, and that its sample complexity is independent of the number of agents. The paper includes some experiments that validate the proposed algorithm. The reviews of this paper are mixed. Most of the reviewers appreciate the theoretical analysis, but one reviewer does not find the theoretical justification of the mean-field approximation clear. The reviewer also points out the absence of comparisons to relevant competing algorithms. These concerns are addressed by the authors in their rebuttal. A key issue with this work is the weakness of the empirical evaluation. The proposed method is tested on only two simple tasks, and the results on the second task do not show a considerable advantage of the proposed algorithm. This paper can be strengthened by adding experiments that clearly indicate the advantage of the proposed technique.
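The permutation-invariance point at the heart of the reviews above can be illustrated with a very small sketch: if a critic only sees an agent's own state plus the empirical distribution of the other agents' states, its input size is fixed regardless of N and is unchanged by permuting the agents. The scalar states and the histogram representation below are simplifying assumptions made for illustration only.

```python
import numpy as np

def empirical_distribution(states, n_bins=10):
    """Histogram of the other agents' (scalar) states -- the d_S component of
    the mean-field critic input; permutation invariant by construction."""
    hist, _ = np.histogram(states, bins=n_bins, range=(0.0, 1.0))
    return hist / max(len(states), 1)

def critic_input(own_state, other_states):
    """Input (s_i, d_S): the agent's own state concatenated with the empirical
    distribution over everyone else. Its size is 1 + n_bins, independent of N."""
    return np.concatenate([[own_state], empirical_distribution(other_states)])

rng = np.random.default_rng(0)
others = rng.random(1000)                       # N - 1 = 1000 other agents
x1 = critic_input(0.3, others)
x2 = critic_input(0.3, rng.permutation(others))
print(x1.shape, np.allclose(x1, x2))            # invariant to permuting the others
```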
The authors study the problem of doing inference on a logistic regression model with an L1 penalty in high dimensions, where the number of features in the dataset is at least as large as the size of the dataset. For this problem, they develop a variant (that they call CRT-logit) of the distilled conditional randomization test (dCRT) with higher power than the latter. Their innovation is in introducing a decorrelation step that brings the null distribution of the test statistic closer to its assumed distribution, a standard normal. An asymptotic analysis of the performance of CRT-logit is given. I think this is a very nice paper. It focuses on an important and ubiquitous inference problem, and is generally of a high quality, largely due to its clarity and thoroughness. The central technical innovation, namely, the decorrelation procedure discussed in Eqs. 8-11, is well-motivated both analytically and in Figure 1. The new algorithm that is introduced and tested, CRT-logit, will be of interest to the broader machine learning community. Its relationship to existing algorithms is discussed; a number of experiments comparing it with existing algorithms are also presented. Yes. <doc-sep>This paper aims to extend the growing literature on identifying relevant features in a machine learning model. The authors focus on the setting of classification in a high-dimensional setting where the number of relevant features is sparse (i.e., less than n^{1/2}), where n is the number of measurements and p is the number of features. To do so, they extend the distilled conditional randomization test (dCRT) algorithm, which itself is an extension of the conditional randomization test (CRT) designed to make CRT more computationally feasible. A key step of the dCRT algorithm is to take residuals from two regressions (one regression to see if feature j lies in the linear span of the other features, and the second regression to see if the response variable lies in the linear space of the other features) and use them to compute the test statistic. In essence, the key innovation of this paper is to extend the way the residuals are taken to fit a logistic model rather than a linear model. This estimator is called CRT-logit. Given this new test statistic and an assumption of sparsity along with other regularity conditions, they prove pointwise Gaussian approximation and asymptotic validity of the CRT-logit estimator. They then corroborate their results with simulations and real-world data. Strengths - The paper tackles an important problem of understanding feature relevance in a high-dimensional classification setting with sparse features. - The empirical results support their claims. - The paper is relatively straightforward to parse through. Weaknesses - From a theoretical standpoint, more work can be done in explaining the technical novelty in extending the method of dCRT to the setting of classification. Right now the revised estimator, which uses the second derivative of the logistic function instead, comes with minimal intuition about how it is derived and what makes this problem a technical challenge. - It is unclear which parts of the estimator / results have to do with increasing computational efficiency vs. novel statistical results for the high-dimensional classification setting. This also makes it hard to tease out how to evaluate the estimator and the results. N/A <doc-sep>The authors study the case of high-dimensional logistic regression when the number of features p is much greater than the number of samples n.
They propose the CRT-logit algorithm that combines a variable-distillation step and a decorrelation step to handle the sparsity in l1-penalized logistic regression. The authors provide theoretical analysis of their approach and show its effectiveness in simulations and in experiments on real-world brain-imaging and genomics data. The main contribution is the proposal of the CRT-logit method, which is shown to be effective, with an inference cost that is not prohibitively high. Strengths - The authors propose CRT-logit and provide thorough theoretical and experimental results - Highly relevant problem, since logistic regression and the l1 penalty are still among the most commonly used methods in machine and deep learning - Valid and thorough theoretical results - Show empirical validation of theoretical results - Also show experiments on average inference runtime in addition to performance results - Tests on real-world brain and genomics data. - Very nice qualitative plots, e.g. Figure 1 showing the improvement on the theoretical quantiles of the proposed CRT-logit over dCRT. Weaknesses - Some of the derivations in Eqns. 8-11 can be moved to the appendix. - Limited to only looking at logistic regression - can this extend to other models? - The authors should emphasize their novelty and contribution as more than just an extension of CRT. Yes, the authors have addressed the limitations of their work. <doc-sep>This paper presents a method for performing hypothesis testing for lasso-penalized logistic regression in the high dimensional (but sparse) setting. The main result relies on the asymptotic normality of a test statistic that essentially measures how correlated a given feature is with the response once both are residualized on the remaining covariates. The proposed approach is interesting and theoretically well-motivated. That it gives (asymptotically) valid p-values is a compelling advantage over the knockoff framework that only allows for FDR control. The asymptotics are also nice because then one can avoid needing to do any resampling. One aspect that I found missing from the paper is a discussion of (the lack of) finite-sample guarantees. The lack of finite-sample guarantees is not a concern to me, but given that that's one of the compelling aspects of some of the related works (e.g., knockoffs), it would be good to at least discuss it. One minor weakness is that the clarity of the manuscript could be improved in some places. It is quite dense (which is understandable due to space constraints). While some parts of the paper have really nice intuitive explanations, other parts do not (e.g., equations 4 and 5). Furthermore, the notation seems to be shifting throughout (see some of my comments in the "Questions" section), which can make it somewhat difficult to follow. I believe the authors have adequately addressed the limitations of their work, and I am not aware of any potential negative societal impact. | The decision is to accept this paper. The paper presents a method for producing asymptotically valid p-values when testing the null hypothesis of conditional randomization tests in sparse logistic regression. The method builds on a previous distillation method that examines correlations between residuals for the label y and the focal covariate x_j when they are projected onto the remaining covariates. The method corrects a bias that arises in this distillation method due to the non-linearity in penalized logistic regression. The authors prove the asymptotic validity of the resulting p-values and study the power and FDR of the procedure.
The reviewers agreed that this is a strong method and a clearly written paper. The authors answered all major questions from the reviewers and made changes in response to reviewer feedback. |
The use of PAC-Bayes theory for NLP tasks is rare. Although I know little about NLP, the paper's proposal to leverage PAC-Bayes for evaluating the benefit of various incidental supervision signals seems promising. However, even if the empirical results are good, the connection between PAC-Bayes and the proposed informativeness measure (named PABI) is vague. The paper needs to better situate the proposed analysis relative to classical PAC-Bayesian generalization risk bounds. **Section 2 contains many assertions that are questionable.** 1. *"The training samples [are] generated i.i.d."*: This is the case for most PAC-Bayes analyses, but I wonder to what extent this assumption holds for the NLP problems studied as experiments. In a sentence, words are highly dependent on each other. 2. *"In the common supervised learning setting, we usually assume the concept that generates data comes from the concept class"*: This is a surprising claim, as the **PAC**-Bayes framework differs from the Bayesian one notably by the fact that we usually don't need to make assumptions about the data-generating distribution other than being i.i.d. In particular, the model does not need to be well specified. This makes me wonder if PABI would not better fit in the purely Bayesian framework (see other comments below). 3. *"the generalization bounds in both PAC-Bayesian and PAC frameworks have the square root function"*: There exist several forms of the PAC-Bayes theorem in the literature, not only the square root ones (e.g., Seeger 2002). In fact, the square root bounds are not the tightest, particularly when the model is well specified. **Is PABI really backed by PAC-Bayes theory?** As far as I understand, the PABI procedure is only remotely inspired by the PAC-Bayes bound, but is not truly justified by it. No PAC-Bayes bounds are fully optimized; PABI borrows from PAC-Bayes the sole idea of relying on the KL divergence between distributions. For this reason, I think that the introduction sentence "Previous attempts are either not practical or too heuristic" is harsh, because the proposed method turns out to be a heuristic too. **Is PABI a more Bayesian method than a PAC-Bayes one?** I wonder if one could not do the same analysis in a fully Bayesian setting, maximizing a Bayesian information criterion. This should be appropriate since PABI and the Bayesian setting assume that the model is well specified. Note that there is a direct link between the Bayesian marginal likelihood and the PAC-Bayes generalization bound (e.g., Germain, et al., 2016: "PAC-Bayesian Theory Meets Bayesian Inference.") Overall, I think that the paper explores new and exciting territory, but needs a deeper analysis to support the connection with PAC-Bayes theory. <doc-sep> #### Summary This paper proposes a unified measure for the informativeness of incidental signals (i.e., not standard ground-truth supervised labels) derived from the PAC-Bayesian theoretical framework. Instantiations of the score are derived for a variety of these signals, and experiments show good agreement between the measure and true performance improvements. #### Strong and weak points This problem setting is well-positioned as complementary to the growth in "alternative supervision" in both research and industry. Besides the directions identified in the paper, one could easily imagine using this kind of measure in ML applications as a tool to help guide economic decisions about what kinds of datasets or annotations to pursue.
The explanatory potential of this approach with respect to observed gains using incidental signals is exciting as well, especially the agreement with empirical findings from Mayhew 2019. I found Figure 1 to give helpful context, and I found the core technical content in Section 2 to be clear and precise. The experimental results were a bit intricate to follow. A key result is Figure 2f and the associated correlations, which show a strong correlation between the PABI scores and true performance improvements; this could perhaps be highlighted or emphasized more. Likewise, the meaning of Figure 3 is a bit obscured by the poor correlations of the baselines. One weakness of the evaluation was that, while the Related Work coverage seemed sufficient, only Gururangan 2020 is included in the experiments. Of course the other approaches have the limitations well-captured in Table 1, but it would have been nice to have some restricted experiments crafted in order to give direct comparisons. The supplemental appendix was comprehensive with respect to theoretical derivations and experimental details. #### Recommendation (accept or reject) with one or two key reasons for this choice. I would recommend acceptance: the work represents an advance across both theory and practice on an important problem. #### Supporting arguments The work leverages a well-studied framework to answer important questions about understanding the utility of non-standard supervision signals, enabling us to reason in a unified way about varied kinds of these signals as well as their combinations. Experimental results #### Questions to clarify / additional evidence required The approximation in Definition 2.2 was a little strange for me, and seemed kind of circular: for calculating our approximate PABI, we are approximating the target (gold) distribution with our approximately improved prior ($\\tilde{\\pi}_0$)? Is there anything we can say about how good/accurate this approximation is? Section 3.2: "much cheaper" - how or why would we say this is true, can we quantify it? Or are there cites to see? Is it possible to frame the PABI measures in terms of testable hypotheses about true generalization error, or are the bounds too loose in practice to say anything meaningful here? #### Additional feedback to improve Space permitting, a small diagram of the mappings between different domains and the restriction trick would make Section 3.2 much clearer. Another possibility is some symbol table to keep straight which versions of $c()$ correspond to gold vs silver, incidental, etc. The code was great to see as well but is missing dependencies: - seqeval - tqdm - transformers I might suggest adding a requirements.txt or similar.
However, this means most of the work is in understanding how to apply and approximate it. This paper provides several such methods, particularly focusing on “inductive” learning (from constraints or partial/noisy gold labels) or “transductive” learning (from complete gold labels on different input domains). Mathematical developments of PABI are given for these cases, and experiments show that PABI is nicely positively correlated with the relative improvement that comes with various methods for integrating incidental supervision signal (including one which is developed as a side note by the authors). Computing PABI may be challenging in some cases. In the case of transductive learning, it seems that a model needs to be trained on the incidental signal, although this is better than the combinatorial explosion of jointly trained models that would be required to test relative improvements directly. However, it’s not clear if efficient approximations for PABI will be feasible in all cases. This and other questions about the breadth of application of PABI are left for future work. ### Strengths I think this paper is very well-motivated, situates itself well with respect to previous work, and presents clear advantages. Having a unified framework for comparing the utility of different kinds of incidental supervision signals seems potentially very useful, especially these days when incidental supervision of various sorts is instrumental in state-of-the-art models. It is also extremely relevant for data annotation and task design, which often has to make tradeoffs between these factors (i.e., noise versus partial annotation or dataset size). There is a lot of content in this paper, including mathematical developments, algorithms, and experimental results. While I did not carefully check the proofs in the appendix, and I am not familiar with PAC-Bayesian theory or the associated literature, the paper seems technically sound to me. ### Weaknesses While the generality of the proposed PABI framework is great and improves over existing work, I think this paper could be scoped more carefully and the scope could be clarified better. As proposed, the PABI framework seems very general—which is good. But the paper only shows how to realize the framework in a couple specific cases, for “inductive” and “transductive” learning independently. This is still more general than previous work, but from the first few pages of the paper I was expecting something even more general. * It seems to me that the combination of inductive and transductive learning may be possible using something close to the paper's proposed methods , but this isn’t addressed by the paper except a glancing mention in Footnote 6. * It also is not clear to me from the paper’s text whether something close to the PABI framework can apply in broader settings like language modeling style pretraining, where the input-output format of the incidental supervision signal is different than that of the target task. In particular, it seems that in this case the approximation method proposed for transductive learning would indeed have to reduce to training a combined model. Related issues were finally mentioned briefly in the last paragraph of the paper, and something along these lines appears in appendix A.3, but I think a more up-front clarification of the limitations is warranted. 
More broadly, the question in the back of my head when I began reading the paper was whether this would help explain why and when language model pretraining (and other more flexible related-task pretraining) works well. The paper points to related work in this area, such as Gururangan et al 2020 ("Don't Stop Pretraining"), leading me to think this paper would shed light on the issue, but in the end the issue was not mentioned and seems perhaps out of scope. This is fine. All I would ask of the authors is to be more explicit about the limitations of PABI (or the proposed realizations of it) from the beginning, laying out the scope of this work and stating the limitations outright instead of only pointing to the appendix. It seems to me like PABI is more of a foundational framework which is ideal for future work to build on, rather than already being a general solution in itself. I think it would be best to pitch the paper this way. ### Recommendation Accept. Important problem, lots of solid content, clear benefits over previous work and directions for the future. Great work. ### More comments/questions I think the point of the formulation in Section 2.2 can be made a bit more explicit. It seems like the point is for applying PABI to partial labels. If that's true (or there's more to it) then might as well just say it there, or at least give this case as a motivating example. Regarding the cross-domain results: why are the incidental supervision sets so small? It seems that there is a ton more incidental supervision available for NER, and in both cases the incidental supervision data is even smaller than the test set. Why not use more? It seems to me that the use case here is when a large amount of incidental supervision is available anyway. It also seems like the low-data setting is not totally fair to the vocabulary overlap baseline. ### Typos, style, etc. When describing your experiments, I think it's worth mentioning that they are on English text. Figure 3: I don't understand which numbers correspond to which model in the caption. This would be much easier to read in a table. * P. 7: something's wrong with "twitter(Strauss et al., 2016)" * P. 7: The FitzGerald et al 2018 dataset is called "QA-SRL Bank 2.0". * P. 7: servers -> serves * P. 7: "the lower bound for is" <doc-sep>########################################################################## Summary: This paper proposes a unified PAC-Bayesian-based informativeness measure (PABI) to quantify the value of incidental signals. PABI can measure various types of incidental signals such as partial labels, noisy labels, constraints, auxiliary signals, cross-domain signals, and their combinations. On NER and QA tasks, they show a strong correlation between PABI and the relative improvements for various incidental signals. ########################################################################## Reasons for score: Overall, my score is marginally below the acceptance threshold. Pros: 1. I enjoyed reading the paper, and I like the idea of covering various types of supervision signals with one unified measure. 2. The definition and approximation of PABI and its generalization to different inductive signals look sound to me. Cons: 1. My biggest concern about this paper is the lack of clarity in the presentation. In the introduction, I do understand how conceptually PABI is different from others, but I do not know what it is.
It would be better to describe how PABI works in the introduction. Also, Sections 2 and 3 would be easier to understand if the authors provided high-level insight into why each part of PABI's description is important. Similarly, in the experiments, it was quite difficult to follow the text and capture the main claim. For instance, it would be easier to understand if the paper first explained how Figures 2 and 3 should look and which trends in the points support the main claim of PABI, etc. Similarly, visual interpretation without specific guidelines makes Figure 3 really difficult to understand. I guess some quantitative numbers would be very helpful, like the linear regression slope, etc. 2. Besides the presentation, I don't quite understand how PABI can be used as a practical measure for other applications. Does the strong correlation with relative improvement mean that it can be used as an alternative measure of mutual information and further applied to other applications using such information measures in their optimization? If so, it would be nice to describe potential applications of these measures and other benefits of PABI in general. This also requires additional experiments that show its effectiveness in other applications. ######################################################################### Some typos: a widely used measure for for noisy signals -> a widely used measure for noisy signals the SQuAD dataset servers as the main dataset -> the SQuAD dataset serves as the main dataset | This paper first makes the observation that incidental supervisory data can be used to define a new prior from which to calculate a PAC-Bayes generalization guarantee. This observation can be applied to any setting where there is unsupervised or semi-supervised pre-training followed by fine-tuning on labeled data. The PAC-Bayes bound is valid when applied to the fine-tuning. For example, one could use an L2 bound (derived from PAC-Bayes) on the difference between the fine-tuned parameters and the pre-trained parameters. But the paper proposes evaluating the value of pre-training before looking at any labeled data. Let $\\pi_0$ be the prior before unsupervised or semi-supervised training and let $\\tilde{\\pi}$ be the prior after pre-training. The paper proposes using the entropy ratio $H(\\pi_0)/H(\\tilde{\\pi})$ as a measure of the value of the pre-training. As the reviewers note, this is not really related to PAC-Bayes bounds. Furthermore, it is clearly possible that the pre-training greatly focuses the prior but in a way that is detrimental to learning the task at hand. I have to side with the reviewers who feel that this is below threshold. |
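For concreteness, the entropy ratio $H(\\pi_0)/H(\\tilde{\\pi})$ discussed above is straightforward to compute when both priors are discrete distributions over a finite concept class; the sketch below uses made-up toy distributions purely for illustration.

```python
# Minimal sketch of the entropy-ratio measure H(pi_0) / H(pi_tilde) for discrete
# priors over a finite concept class. The two distributions are made up for
# illustration; note that the ratio says nothing about whether the focusing is
# helpful for the task, which is exactly the concern raised above.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]            # treat 0 * log 0 as 0
    return -np.sum(p * np.log(p))

pi_0 = np.full(8, 1 / 8)                                               # uniform prior before pre-training
pi_tilde = np.array([0.6, 0.2, 0.1, 0.05, 0.03, 0.01, 0.005, 0.005])   # sharper prior after pre-training

print(entropy(pi_0) / entropy(pi_tilde))   # > 1 whenever pre-training focuses the prior
```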
- Despite the limited novelty of the work, it addresses a relevant problem for the medical community - A novel dataset of about 1000 labelled images has been proposed - The paper is well written and easy to read - An ablation study helps validate the efficacy of the proposed method - Although a dataset is presented, the number of frames is limited and the claim of the authors is misleading - The improvements to the space-time network are not clearly presented - The literature review on spatio-temporal methods is limited - The purpose of the domain adaptation is not clearly introduced - User evaluation is very limited and does not provide enough evidence to validate the results <doc-sep>- the paper is reasonably well written and clearly presented - the combination of known approaches is sound and the results show good improvements - a new dataset is introduced comprising 60 hours of videos including about 1000 labelled images - a user evaluation is provided - dataset probably not public - methodological insights are limited, not going far beyond the addition of more input dimensions and more data with adversarial domain adaptation - no ablation study provided <doc-sep>1. The idea of adding the Spatio-temporal positional encoding and unsupervised Adversarial Domain Adaptation to STM seems to be novel. 2. Besides metric-based evaluation, the authors also conduct user evaluations for their system. 3. The authors conduct a good literature review of prior work from the methodological development aspect. 1. Not enough details are provided for the dataset. This has a negative effect on the evaluation part of the paper. Please check my detailed comments on this. 2. One big challenge for a Video-based Computer-aided Laparoscopic Bleeding Management system is to distinguish bleeding from blood. However, the paper did not address this issue sufficiently. Please check my detailed comments on this. 3. The current method seems to be evaluated in an offline setting instead of an online setting. Please check my detailed comments on this. | According to the reviewers, the quality of the paper improved a lot during the rebuttal phase (2x strong accept, 1x weak accept). I also recommend acceptance of the paper. The authors explained the data set split in more detail and provided other very important information that helps in reproducing the work. The method was evaluated against SOTA methods and performance was improved by a reasonable margin. |
Summary Learning disentangled representations is often considered an important step to achieve human-like generalization. This paper studies how the degree of disentanglement affects various forms of generalization. Variational autoencoders (VAEs) are trained with different levels of disentanglement on an unsupervised task by excluding combinations of generative factors during training. At test time the models are used to reconstruct the missing combinations in order to measure generalization performance. The paper shows that the models support only weak combinatorial generalization. The paper also tests the models in a more complex task which explicitly requires independent generative factors to be controlled. The paper concludes that learning disentangled representations is not sufficient for supporting more difficult forms of generalization. Strengths The paper studies 4 types of generalization: interpolation, recombination to element, recombination to range, and extrapolation. It shows that beta-VAE can achieve reasonable generalization by interpolation, but not the other three types. Weaknesses The paper's study is limited to beta-VAE and the dSprites dataset. However, it makes broad claims on the role of disentanglement in generalization. Beta-VAE has limitations in disentanglement. It is not clear that other disentanglement approaches, such as the Wasserstein auto-encoder or InfoGAN-CR (ICML'20), would not generalize much better. The study is on unsupervised disentanglement. Unsupervised disentanglement has inherent limitations, see "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations" ICML'19. The paper should conduct experimental studies on other datasets, e.g. those in the above reference. For image composition tasks, it states "concatenating input representations with the actions and linearly combining the resultant vectors". It would be great to explain the insight behind this. Decision The paper has some interesting results on the role of disentanglement in generalization. However, the paper's study is limited to a specific model and a single dataset. Therefore, it is below the acceptance threshold. ----Post-revision update--- The authors have provided results on a second dataset 3DShapes and two more models – Factor-VAE and a perfectly disentangled model. However, the reconstruction results of the GT decoder are much worse than those of the other models, see Figure 2; it does not reconstruct the details of the "heart" shape even for training and the edges of the "square" are not straight. This raises the question of how good the GT decoder is. The open question is, what is the generalization capability of a GT decoder that can both reconstruct and disentangle perfectly? The Wasserstein auto-encoder has been shown to disentangle better, and its regularization term is on the aggregate posterior instead of individual samples. Without results on WAE, the paper should refrain from making broad claims on disentanglement. Furthermore, it would be interesting to investigate GAN-based approaches such as InfoGAN-CR as well. For an experimentation paper, it should be more thorough and go beyond just two shape datasets. I applaud the additional results the authors provided. I still think the paper is borderline (more toward 6 now). If it fixes the aforementioned weaknesses, I would recommend acceptance. <doc-sep> Summary --- A large body of work creates disentangled representations to improve combinatorial generalization.
This paper distinguishes between 4 types of generalization and shows that existing unsupervised disentanglement approaches generalize worse to some and better to others. (introduction) There are 3 types of combinatorial generalization. Each requires a learner to generalize to a set of instances where 0, 1, or 2 dimensions have been completely held out. Previous work has not distinguished between these kinds of generalization when testing how disentangled representations generalize. This work does that to understand the relationship between disentanglement and combinatorial generalization in a more fine-grained manner. (approach) Throughout this paper, beta-VAE and a recent variant are trained with varying levels of disentanglement (controlled by beta) to reconstruct dSprites images. These images contain simple shapes and are generated using 5 ground truth latent factors. The ground truth latent factors allow disentanglement to be measured (using Eastwood and Williams 2018), essentially by checking whether the ground truth latent factors are linearly separable in the learned latent space. (experiment - plain reconstruction) * reconstruction error differs for different types of combinatorial generalization (holding out fewer dimensions is easier) * reconstruction error is not highly correlated with disentanglement (experiment - compositional reconstruction) Instead of reconstructing the input, a version of the input with one attribute changed is generated. * generation error differs for different types of combinatorial generalization (holding out fewer dimensions is easier) (conclusion) Usually disentanglement is encouraged to achieve combinatorial generalization, but this paper presents a simple experiment where it doesn't do that. Strengths --- The central claim of the paper may help clarify the disentanglement literature. It seems very useful to taxonomize generalization in this way. The writing and motivation are generally very clear. The figures are easy to understand and help demonstrate the narrative. This paper aims to characterize an existing line of work in detail rather than proposing a new approach/dataset/etc. I like work of this nature and would like to see more like it. Weaknesses --- 1. The relationship between disentanglement and generalization is not clearly or quantitatively demonstrated: The most interesting claim in this paper is that disentanglement is not necessarily correlated with combinatorial generalization, but this claim is not clearly supported by the data. * The main support comes from table 1. Here higher D-score does not necessarily mean lower test NLL. This observation should be made quantitative, probably just by measuring correlation between D-score and test NLL (a short sketch of such a check is included after this paper's reviews). * Table 2 seems to contradict this claim. In that case higher D-score does mean lower test NLL. 2. The taxonomy of generalization is a bit too specific to be useful and a bit incoherent: The difference between "Interpolation" and "Recombination to element" generalization is not clear to me. Each of the purple and red cubes in figure 1a represents a combination of rotation, shape, and translation factors. It may be that it makes a difference when some dimensions are categorical and others are continuous, as in the Interpolation example, but this doesn't seem to really resolve the issue because continuous latent variables are still latent variables. I see some vague intuition behind this distinction, but the paper does not clearly identify the precise distinction.
Furthermore, this taxonomy of generalization seems limited to me. It seems like "Recombination to element", "Recombination to range", and "Extrapolation" just hold out a different number of dimensions (e.g., "none", "rotation", and "shape and rotation", respectively). This begs the question of what happens when there are 4 generative dimensions? Is generalization when 3 of those are held out also called "Extrapolation"? I think more work needs to be done to create a taxonomy which precisely and clearly generalizes to N latent factors and creates a more coherent distinction between combinatorial and non-combinatorial generalization. However, I think it's possible to create a better taxonomy and that it will probably be very useful to do so. 3. The paper should test the idea more thoroughly, on more datasets and on more disentanglement approaches. For example, it could include other datasets or tasks with different ground truth factors of variation (e.g., 3D chairs [1]). It could also include more disentanglement approaches like [2]. [1]: M. Aubry, D. Maturana, A. Efros, B. Russell, and J. Sivic. Seeing 3d chairs: exemplar part-based 2d-3d alignment using a large dataset of cad models. In CVPR, 2014. [2]: Esmaeili, B. et al. “Structured Disentangled Representations.” AISTATS (2019). Comments / Suggestions --- Describe the disentanglement metric in more detail. From the beginning disentanglement is treated differently from combinatorial generalization. It's not immediately clear what disentanglement is that makes it different and why that's interesting to study. For example, initially one might think that beta-VAE is inherently disentangled. Can this taxonomy of generalization be generalized to continuous domains? For example, can it be generalized to any (typically continuous) hidden layer a neural net learns? Preliminary Evaluation --- Clarity - The presentation is quite clear. Quality - The claims are not quite well enough supported. The experiments that were run don't support a clear conclusion and more experiments should have been run to support a more general conclusion. Novelty - I don't think anyone has catalogued the performance of disentanglement methods in terms of a generalization taxonomy. Significance - This paper might help clarify the disentanglement literature and more broadly help people think about combinatorial generalization. I like this paper because of its clarity, novelty, and significance. However, I think the quality concerns are significant enough that it shouldn't be accepted at this stage. Final Evaluation (Post Rebuttal) --- The author response and accompanying paper revision clearly and effectively addressed each of the 3 main weaknesses I pointed out, so I raised my rating.<doc-sep>Summary: This paper studies the performance of models producing disentangled representations in the downstream task of combinatorial generalization. The experiments suggest that models producing disentangled representations do not generalize well enough. Pros: - The paper is well-written and easy to follow. - The authors propose four novel benchmarks to systematically study the ability of a model to generalize. Concerns: - The key concern is that the paper does not present enough experiments to support the authors' claims. The study was conducted only for one dataset; I would suggest to include several other datasets in your study, e.g., MPI 3D, Shapes 3D, Cars 3D datasets. 
Also, the results would be stronger if the paper presented an assessment of other disentanglement-specific metrics; see, for example, MIG [1], Modularity [2], etc. Comments/questions: - The combinatorial generalization task looks similar to the abstract reasoning task; it was shown that disentangled representations help in this downstream task [3]. Why do you think it does not hold as well for combinatorial generalization? - Perhaps it would be interesting to vary random seeds in addition to $\\beta$ values; it was shown in Locatello [4] that random seeds sometimes have a stronger influence on disentanglement scores than model hyperparameters. Minor comments: - In some places, you write "generalization", in others -- "generalisation". UPD: The authors addressed my concerns and added additional experiments. The paper is improved; therefore, I increase the rating. References: [1] Ricky TQ Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentanglement in variational autoencoders. In Advances in Neural Information Processing Systems, pp. 2610–2620, 2018. [2] Karl Ridgeway and Michael C Mozer. Learning deep disentangled embeddings with the f-statistic loss. In Advances in Neural Information Processing Systems, pp. 185–194, 2018. [3] Sjoerd van Steenkiste, et al. "Are Disentangled Representations Helpful for Abstract Visual Reasoning?" In Advances in Neural Information Processing Systems, 2019. [4] Francesco Locatello, et al. "Challenging common assumptions in the unsupervised learning of disentangled representations." In International Conference on Machine Learning, 2019. <doc-sep>Post-revision update ------------------------ Thanks to the authors; I think that the revision makes the paper substantially stronger. The inclusion of the more complex Shapes3D dataset substantially improves the experiments, and I think the discussion has improved. I have revised my rating to a clear accept accordingly. Original Review ---------------------- This paper evaluates the role of disentanglement in generalization. The authors begin by articulating a useful distinction between different kinds of generalization by interpolation, recombination or extrapolation. They then train VAEs (and variations thereof) on a controlled, synthetic dataset, using two different training paradigms. They show that the models only generalize well to one of the most elementary types (recombination to element), but do not extrapolate well to the more difficult kinds of generalization. They also show that disentanglement does not seem to correlate with better generalization. I find this paper to be marginally above the acceptance threshold, but it has room for improvement (see below). Strengths: * Generalization and the role of compositionality and disentanglement therein are very important issues. * I like the articulation of the different kinds of generalization, and the clear illustration thereof. Areas for improvement: * The relationship of these results to Locatello et al. (2019) would be worth discussing further. They also showed that disentanglement did not necessarily lead to better generalization, with a wider range of experiments (though therefore perhaps less deep in evaluating different types of generalization). * The experiments are very narrow. The paper uses a single dataset (though with two different tasks), and it is very toy. As noted by Locatello et al. (2019), the inferences drawn from a single dataset may be very biased.
While simple synthetic datasets can be useful for allowing more carefully controlled experiments, it would be useful to explore the same experiments using different datasets with different features (e.g. color and texture). It would also be useful to understand generalization in richer, more realistic datasets (see below). * It would be useful to show some of the major claims in a less opaque way. For example, to show the (non)relationship between disentanglement and generalization, the authors could make a plot with D-score on the x-axis, and generalization performance on the y-axis (with different plot panels for the different types of generalization, perhaps). * One reason that exploring other datasets is important is that the particular inductive biases of the models may facilitate generalization along certain feature dimensions. For example, the fact that convolutions are generally (relatively) translation-invariant but not rotation-invariant means that the model might more easily extrapolate to unseen translations. In order to draw broad conclusions, it would be useful to both explore more diverse datasets and quantitatively analyze in more detail the generalization along different dimensions. * Increasing realism can produce qualitative improvements in compositional generalization in some settings. For example, Hill et al. (2020) showed that e.g. generalization was better in a 3D setting than in a 2D setting, and that an RL agent showed 100% compositional generalization in a setting where a classifier only showed ~80%, for instance (although this generalization was recombination, not extrapolation). Thus, the poverty of the stimuli may alter the paper's conclusions. Even if your study is useful, it's worth discussing this limitation more explicitly. * The paper raises the question of why disentangled representations are not more effective in supporting compositional generalization, but it's worth asking why we assume that they would be. Disentanglement $\\neq$ compositional representations $\\neq$ systematic generalization. Fodor & Pylyshyn suggest that compositional representations are necessary for systematic generalization, but they certainly don't provide an empirical definition of how to evaluate compositionality of representations. Indeed, it's hard to define such a notion: "The question of whether a model [generalizes] according to compositional task structure is distinct from the question of whether the model's representations exhibit compositional structure. Because the mapping from [...] representations to behavior is highly non-linear, it is difficult to craft a definition of compositional representations that is either necessary or sufficient for generalization" (Lampinen & McClelland, 2020). Your results, along with those of Locatello et al (2019) and others, lend support to this argument. Disentanglement in some middle layer of the network does not seem to show a causal role in generalization, presumably in part because the processes intervening between that representation and the output are nonlinear, because that nonlinear decoder is also capable of failing for some combinations of latent representations even if the representations themselves are compositional, and/or because disentanglement is not a sufficient notion of compositionality.
Given these difficulties, I think you could potentially extrapolate further than you do, to ask whether we should be worrying about representations at all, rather than just evaluating (and improving) behavioral performance on the different types of generalization you articulate. * However, even empirical evaluation is challenging in more naturalistic settings. The notion of "disentanglement" or "composition" may be harder to define in realistic datasets. It's not clear what the appropriate decomposition of a complex naturalistic image is — objects are a natural place to start, which is why Higgins and others have focused on this type of decomposition. But what counts as disentangled in a visual scene of e.g. a forest? Is each leaf an object that must be disentangled in its own right? Should the color of each piece of bark on each tree be represented by its own dimension, since *in principle* it could vary independently? This seems unreasonable, which is perhaps why disentanglement is usually demonstrated on very simplistic datasets. Yet a human can of course *attend* to any particular aspect of the scene to disentangle that dimension as needed. In the real world, human-like performance might require the ability to construct *new decompositions on the fly,* because the appropriate decompositions may change as the task or data shifts. That is, the idea of seeking a priori disentanglement with respect to fixed dimensions might not be the right way to go about achieving human-like generalization, especially if we want that generalization to extrapolate to new data and new tasks. (C.f. Lampinen & McClelland, 2020 for some other related discussion.) * It also seems likely that the processes that allow humans to exhibit strong generalization may require extended or additional processing, rather than a single feed-forward pass as in a VAE. This would be necessary to allow the sort of attentive disentanglement described in the previous point. For example, the Stroop effect in cognition seems to me to illustrate feature entanglement, which requires higher level control processes to resolve the appropriate response. The paper does discuss the idea that other mechanisms or architectures might be involved in the discussion, but it seems it bears more elaboration, especially w.r.t. the above points about the definability of disentanglement with real world data. References ----------- Hill, Felix, et al. "Environmental drivers of systematicity and generalization in a situated agent." International Conference on Learning Representations. 2020. Lampinen, Andrew K., and James L. McClelland. "Transforming task representations to allow deep learning models to perform novel tasks." arXiv preprint arXiv:2005.04318 2020. Locatello, Francesco, et al. "Challenging common assumptions in the unsupervised learning of disentangled representations." international conference on machine learning. 2019. | The paper seeks to empirically study and highlight how disentanglement of latent representations relates to combinatorial generalization. In particular, the main argument is to show that models fail to perform combinatorial generalization or extrapolation while succeeding in other ways. This is a borderline paper. For empirical studies it is also less agreed upon in general where one should draw the line about sufficient coverage of experiments, i.e., the burden of proof for primarily empirically derived insights. 
The initial submission clearly did not meet the necessary standard, as the analysis was based on a single dataset and studied only two methods (VAE and beta-VAE). The revised version of the manuscript now includes additional experiments (an additional dataset and two new methods), still offering a largely consistent pattern of observations, raising the paper to its current borderline status. Some questions remain about the new results (especially the decoder). |
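As an aside, the quantitative check suggested in the reviews above (correlating the disentanglement D-score with test NLL across trained models) takes only a couple of lines once both quantities are logged per model; the numbers below are placeholders, not values from the paper.

```python
# Sketch of the D-score vs. test-NLL correlation check suggested in the reviews.
# The arrays are placeholders (one entry per trained model), not reported numbers.
from scipy.stats import spearmanr

d_scores  = [0.31, 0.45, 0.52, 0.60, 0.71]
test_nlls = [142.0, 138.5, 140.2, 139.9, 141.3]

rho, p_value = spearmanr(d_scores, test_nlls)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```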
This paper presents a new approach for video action recognition by casting the problem as an image recognition task. The video clips are rearranged into a super image according to a pre-defined spatial layout. In this paper, the authors propose a simple but effective approach for video action recognition by casting the problem as an image recognition task. Rather than modeling temporal information explicitly, it provides a different perspective on the action recognition task. It provides solid experiments and ablation studies to evaluate the effectiveness of the proposed approach. Transformer-based and CNN-based models are both tested on public benchmarks and good experimental results are reported. This paper provides a novel idea for video action recognition. Their claims are well supported by solid experiments and ablation studies. I believe it would inspire others in this research field. <doc-sep>The paper deals with action recognition in videos, i.e. detecting to which class a given sequence of frames belongs. However, the paper proposes to explore whether an image classifier (instead of a video or spatiotemporal-based classifier) would already be enough to accomplish this task. In order to do so, the authors arrange the frames from a video into a single image by organizing them into a grid, then proceed to learn from them using Swin Transformer (Swin-B) image classification models. The authors report surprising results which are indeed on par with or higher than the SotA on the Kinetics400, MiT, Jester and Diving48 datasets. # Strengths The paper reads well and has almost no typos. The method described is simple and achieves surprising results. The authors have provided extra ablation experiments to evaluate the importance of the grid layout, APE and temporal order of the frames in the grid, as well as activation visualizations using CAM. The experimental setup is clear and results seem convincing. # Weaknesses a) It is not entirely clear how the frames are sampled before they are organized in a grid. For example, in other methods such as I3D, consecutive frames are sampled 10 frames apart [A, sections 2.3 and 2.5]. However, the current paper states uniform sampling is used to generate the video input for the models [p.5]. Does it mean the frames are uniformly sampled from the start and ending frames of the entire video, or are they sampled considering a fixed skip with a random starting frame as in I3D? b) It would seem to me that both the super image representation and the use of a transformer-based architecture are essential in order to achieve such good results due to the self-attention mechanism. Have the authors experimented with non-transformer architectures to evaluate the efficacy of the super image representation by itself? minor) "Our" should be capitalized in the second paragraph of page 5, or merged with the previous paragraph. [A]: Carreira et al., Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset, CVPR 2017. The paper presents a simple yet effective idea to transform simpler image classification models into video classification models. Despite being simple, the approach manages to achieve surprisingly good results when compared to SotA video-classification models which explicitly handle the temporal dimension. The paper may thus contain findings that should be of interest to the ICLR community. <doc-sep>This paper includes two parts.
1) A video frame re-arrangement strategy that transforms a video clip into a super image such that the video can be processed by an image model like a 2D-CNN; 2) A slightly modified Swin-Transformer that is more suitable for the proposed super image; The proposed method is evaluated on five benchmark datasets to show its effectiveness and efficiency. Strength: 1) Transforming a video to an image for video recognition is a good direction to explore and has potential applications in the future; 2) This paper is well written and the experiments are extensive; 3) The visualization provides interesting insight into this task; Weakness: 1) The novelty of the proposed method is limited. The main contribution of this paper is the frame re-arrangement strategy, which provides little inspiration to the community; 2) The size of the introduced super image is larger than the video frame. There are already some works focusing on transforming a video to an image, like [1]. Their generated images bear the same resolution as the video frame; 3) The involvement of the Swin-Transformer seems unreasonable. The local operation can only extract the boundary information of the consecutive frames in the super image. For example, the information from the right boundary of frame 1 and the left boundary of frame 2, which cannot be regarded as temporal dependency. In my understanding, modeling temporal dependency is to capture the variation of a similar region (for example the same object or human) across frames. The only part in the modified Swin-Transformer that is able to model temporal dependency is the kernel with the same size as the image in the last layer; 4) The references are not complete. Works like AWSD [2] and AVD [3], which also consider treating a video clip as an image, are missing; Reference: 1) Qiu, Zhaofan, et al. "Condensing a Sequence to One Informative Frame for Video Recognition." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. 2) Tavakolian, Mohammad, Hamed R. Tavakoli, and Abdenour Hadid. "AWSD: Adaptive Weighted Spatiotemporal Distillation for Video Representation." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019. 3) Tavakolian, Mohammad, Mohammad Sabokrou, and Abdenour Hadid. "AVD: Adversarial Video Distillation." arXiv preprint arXiv:1907.05640 (2019). Based on the comments in the Main Review part, I tend to reject this paper. The main reasons are: 1) Limited novelty; 2) Lack of consideration of the efficiency of the proposed method, i.e., the input image size is too large; 3) Unreasonable model design; <doc-sep>The paper proposes to perform action recognition by first rearranging the frames from a video into a 3x3 or 4x4 grid to form a "super image", and then giving the super image to a standard image classifier to perform action recognition. Given that this super image will be a larger image, the paper leverages the more memory-efficient Swin Transformer [1] as an image classifier to perform action recognition. Experiments on Kinetics400, Moments In Time, Something-Something V2 (SSV2), Jester and Diving48 show that the proposed method is on par with or exceeds SOTA in terms of accuracy. On Kinetics400, the method not only is SOTA in terms of accuracy, but also is the most FLOPs-efficient method given a specific accuracy.
The strong performance suggests that a deep network's ability to model spatial relationships could also be applied to model temporal relationships across frames in a video, which is an orthogonal direction to having explicit components in the network modeling temporal relationships. Furthermore, being able to connect action recognition with image classification enables existing image classification techniques to be applied to action recognition, which could potentially accelerate the field. [1]: Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. arXiv.org, March 2021. ### Strengths - The proposed method (Super Image for Action Recognition, SIFAR) is extremely simple and easy to implement (one liner in pytorch). Its accuracy is on par with SOTA, and its speed is also SOTA given a fixed accuracy threshold. - Taking deep networks' ability to model spatial relationships and then using it to model temporal relationships is a very interesting direction. This is an orthogonal approach than having explicit components for modeling temporal relationships. I was surprised that it worked so well. - Being able to connect image classification and action recognition can enable image classification techniques to be applied to action recognition, which could potentially accelerate the field. - The ablation studies on layout and ordering are interesting. - The writing is very clear. The writing also clearly mentions the limitations, such as "*Training SIFAR models with more than 16 frames still remains computationally challenging, especially for models like SIFAR-B-14 and SIFAR-L-14†, which need a larger sliding window size.*" ### Weakness - From the analysis done on SSV2, it seems like the bigger limitations of SIFAR are (1) it has difficulty taking in more than 16 images at once, and (2) it is unclear what is the limit of SIFAR in terms of doing fine-grained temporal modeling. However, I don't think these limitations prevent SIFAR from being an interesting idea. - "*Note that the joint-space-time attention in TimeSformer (Bertasius et al., 2021a) is a special case of our approach since their method can be considered as flattening all tokens into one plane and then performing self-attention over all tokens. However, the memory complexity of such an approach is prohibitively high...*" I am not sure I agree with this claim. In figure 3 of (Bertasius et al., 2021a, https://arxiv.org/pdf/2102.05095.pdf), the method can use up to 96 frames with the "Divided Space-time" approach, which is less memory intensive than SIFAR, which has difficulty going more than 16 frames at once. - In Table 9, SSV2 in reverse order leads to the worst performance, which is surprising. I would expect that normal and reverse ordering should lead to the same performance, as whether the first image is at the top left or bottom right should not affect the network’s ability to learn as long as if the network is given consistently ordered input. Can the authors elaborate on this? Thank you! ### Typos - P5: “o”ur approach has linear computational complexity… (capitalize) - P5: We demonstrate later in Sec. 4 “thata” larger window Overall, I think the proposed method, though simple, leads to surprisingly good results. It not only provides a new way of thinking about modeling temporal relationships, but also better connects action recognition and image classification. 
Therefore, even though there may not be as much technical novelty in the paper, I still vote for acceptance of the paper. | This paper regards video understanding as an image classification task, and reports promising performance against the state of the art on several standard benchmarks. Though the method is quite simple, it achieves good results. The visualization in this paper also provides good insight. All reviewers give positive recommendations for this paper. |
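For reference, the frame-to-grid rearrangement that one of the reviews above calls a "one liner in pytorch" could plausibly look like the sketch below; the 3x3 layout is mentioned in the reviews, while the einops dependency, the 224x224 frame size, and the row-major ordering are illustrative assumptions rather than details confirmed by the paper.

```python
# Plausible sketch of building a "super image" by tiling T = 9 sampled frames
# into a 3x3 grid. Frame size, ordering, and the use of einops are assumptions.
import torch
from einops import rearrange

clip = torch.randn(9, 3, 224, 224)   # (T, C, H, W) uniformly sampled frames
super_image = rearrange(clip, "(gh gw) c h w -> c (gh h) (gw w)", gh=3, gw=3)
print(super_image.shape)             # torch.Size([3, 672, 672])

# Equivalent pure-PyTorch rearrangement, for readers not using einops:
alt = clip.view(3, 3, 3, 224, 224).permute(2, 0, 3, 1, 4).reshape(3, 672, 672)
assert torch.equal(super_image, alt)
```

The resulting single image would then be passed to an ordinary image classifier (Swin-B in the paper) to produce the action prediction.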
This paper studies predicting multi-agent behavior using a proposed neural network architecture. The architecture, called a relational forward model (RFM), is the same graph network proposed by Battaglia et al., 2018, but adds a recurrent component. Two tasks are defined: predict the next action of each agent, and predict the sum of future rewards. The paper demonstrates that RFMs outperform two baselines and two ablations. The authors also show that edge activation magnitudes are correlated with certain phenomena (e.g. an agent walking towards an entity, or an entity being "on" or "off"). The authors also show that appending the output of a pre-trained RFM to the state of a policy can help it learn faster. Overall, this paper presents some interesting ideas and is easy to follow, but the significance of the paper is not clear. The architecture is a rather straightforward extension of previous work, and using graph networks for predictive modeling in multi-agent settings has been examined in the past, making the technical contributions not particularly novel. Examining the correlation between edge activation magnitudes and certain events is intriguing and perhaps the most novel aspect of this paper, but it is not clear how or why this information would be useful. There are a few unsubstantiated claims that are concerning. There are also some odd experimental decisions and results that should be addressed. Specific comments: 1. Why would using a recurrent network help (i.e. RFM vs Feedforward)? Unless the policies are non-Markovian, the entire prediction problem should be Markovian. I suspect that most of the gains are coming from the fact that the RFM method simply has more parameters than the Feedforward method (e.g. it can amortize some of the computation into the recurrent part of the network). Suggestion: train a Feedforward model that has more parameters (with appropriate hyperparameter sweeps) to see if this is the cause. If not, provide some analysis for why "memories of the relations between entities" would be any more beneficial than simply recomputing those relations. 2. The other potential reason that the recurrent method did better is that policy actions are highly correlated (e.g. because agents move in straight lines to locations). If so, then recurrent methods can outperform feedforward methods without having to learn anything about what actually causes policies to move in certain directions. Suggestion: measure the correlation between consecutive actions. If there is non-trivial correlation, then this suggests that RFM doing better than Feedforward (which is basically the prior work of Battaglia et al.) is for the wrong reasons. 3. If I understand the evaluation metric correctly, for each rollout, it counts how many steps from the beginning of the rollout match perfectly before the first error occurs. Then it averages this "minimum time to failure" across all evaluation rollouts. If this is correct, why was this evaluation metric chosen? A much more natural metric would be to just compute the average number of errors on a test dataset (and if this is what is actually reported, please update the description to disambiguate the two). The current metric could be very deceptive: methods that do very well on states around the initial-state distribution but poorly near the end of trajectories (e.g. perfectly predicting the actions in the first 10 steps, but then resorting to random guessing for the last 99999 time steps) will outperform methods that have a lower average error rate (e.g.
a model that is correct 50% of the time). Suggestion: change the metric to the average number of errors, or report both, or provide a convincing argument for why this metric is meaningful. 4. Unless I misunderstood, the results in Section 2.2.3 seem spurious and the claims seem unsubstantiated. For one, if we look at Equations (1) and (2), when we average over s_a1 and s_a2, they should both give the same average for R_a1. Put another way: the pruned graph should (in theory) marginalize out s_a2. On average, its expected output should be the same as the output of the full graph (after marginalizing out s_a1 and s_a2). Obviously, it is possible to find specific rollouts where the full graph has higher value than the pruned graph (and it seems Figure 4 does this), but it should equally be possible to find rollouts where the opposite is true. I'm hoping I misunderstood this section, but otherwise this seems to invalidate all the claims made in this section. 5. Even if concern #4 is addressed, the following sentence would still seem false: "This figure shows that teammates' influence on each other during this time is beneficial to their return." The figure simply shows predictions of the RFM, and not of the ground truth. Moreover, it's not clear what "teammates' influence" actually means. 6. The comparison to NRI seems rather odd, since that method uses strictly less information than RFM. 7. For Section 3, is the RFM module pretrained and then fine-tuned with the new policy? If so, this gives the "RFM + A2C" agent extra information indirectly via the pretrained weights of the RFM module. 8. I'm not sure what to make of the correlation analysis. It is not too surprising that there is some correlation (in fact, it'd be quite an interesting paper if the findings were that there wasn't a correlation!), and it's not clear to me how this could be used for debugging, visualizations, etc. If someone wanted to analyze the correlation between two entities and a policy's action, it seems like they could directly model this correlation. Some minor comments: - In Figure 3C, right, why isn't the magnitude 0 at time=1? Based on the other plots in Figure 3c, it seems like it should be 0. - The month/year in many of the citations seems odd. - The use of the word "valence" seems unnecessarily flowery and distracting. My main concern with this paper is that it is not particularly novel and the contribution seems questionable. I have some concerns over the experimental metric and Section 2.2.3, but even if that is clarified, it is not clear how impactful this paper would be. The use of a recurrent network seems unnecessary, unjustified, and not analyzed. The analysis of correlations is interesting, but not particularly compelling or surprising. And lastly, the RFM-augmented results are not very strong. -- Edit: After discussing with the authors, I have changed my rating. The authors have adjusted some of the language, which I previously thought overstated the contributions and was misleading. They have added a number of experiments which validate the claim that their method is proposing a reasonable way of measuring collaboration. I also realized that I misunderstood one of the sections, and I encourage the authors to improve the presentation to (1) present the significance of the experiments more clearly, (2) not overstate the results, and (3) emphasize the contribution more clearly. Overall, the paper presents convincing evidence that factors in a graph neural network do capture some notion of collaboration.
I do not feel that the paper is particularly novel, but the experiments are thorough. Furthermore, their experiments show that adding an RFM module to an agent consistently helps (albeit not by much). Given that the multi-agent community is still trying to decide how to best quantify and use metrics for collaboration, I find it difficult to assess the long-term impact of this paper. However, given the thoroughness of the experiments and analysis, I suspect that this will be valuable for the community and deserves some visibility.<doc-sep>This paper proposes to use graph neural networks in the scenario of multi-agent reinforcement learning (MARL). It tackles two current challenges, learning coordinated behaviours and measuring such coordination. At the core of the approach are graph neural networks (a citation to Scarselli et al., 2009 would be reasonable): acting and non-acting entities are represented by a graph (with (binary) edges between acting-acting and acting-nonacting entities) and the graph network produces a graph where these edges are transformed into a vectorial representation, which can then be used by a downstream task, e.g. a policy algorithm (as in this paper) that uses it to coordinate behaviour. Because the output of the graph network is a graph structurally identical to the input, it is possible to interpret this output. The paper is well written, and the main ideas are clearly described. I'm uncertain about the novelty of the approach; at least the way the RFM is utilized in the policy is a nice idea (albeit one that, a posteriori, sounds straightforward in the context of MARL). Similarly, using the graph output for interpretations is an obvious choice. Nevertheless, showing empirically that the ideas actually work gives the paper a lot of credibility for being a stepping stone in the area of MARL.<doc-sep>This paper uses graph neural networks to do relational reasoning in multi-agent systems to predict the actions and returns of MARL agents, an approach they call Relational Forward Modeling. They use RFM to analyze and assess the coordination between agents in three different multi-agent environments. They then construct an RFM-augmented RL agent and show improved training speeds over baselines without relational reasoning. I think the overall approach is interesting and a novel way to address the growing concern of how to assess coordination between agents in multi-agent systems. I also like how the authors immediately incorporated the relational reasoning approach to improve the training of the MARL agents. I wonder how dependent this approach is on the semantic representation of the environment. These semantic descriptions are similar to hand-crafted features and thus will require some prior knowledge about the environment or task and will be harder to obtain in more difficult environments and tasks. Will this approach work on continuous tasks? For example, the continuous state and action space of the predator-prey tasks that use the multi-agent particle environment from OpenAI. I think one of the biggest selling points of this paper is using this method to assess the coordination/collaboration between agents (i.e. the social influence amongst agents). I would have liked to see more visualizations or analysis of these learned representations. The bottom row of Figure 3 shows that "when stags become available, agents care about each other more than just before that happens".
While this is very interesting and an important result, I think that this allows one to see what features of the environment (including other agents) are important to a particular agent's decision-making, but it doesn't really answer whether the agents are truly coordinated, i.e. whether there are any causal dependencies between agents. For the RFM-augmented agents, I like that you are able to train the policy as well as the RFM simultaneously from scratch; however, it seems that this requires you to only train a single agent in the multi-agent environment. If I understand correctly, for a given multi-agent environment, you first pre-trained A2C agents to play the three MARL games and then you paired one of the pre-trained (expert) agents with the RFM-augmented learning agents during training. This seems to limit the practicality and usability of this method as it requires you to have pre-trained agents that have already solved the task. I would like to know why the authors didn't try to train two (or four) RFM-augmented agents from scratch together. When you use one of the agents as a pre-trained agent, this might make the training of the RFM module a bit easier since you have at least one agent with a fixed policy to predict actions from. It could be challenging when trying to train both RFM modules on two learning agents, as the behaviors of learning agents are changing over time and thus the learning might be unstable. Overall, I think this is an interesting approach, especially for probing what information drives agents' behaviors. However, I don't see the benefit that the RFM-augmented agent provides. It's clearly shown to learn faster than non-RFM-augmented agents (which is good); however, unless I'm mistaken, the RFM-augmented agent requires a pre-trained agent to be able to learn in the first place.

--edit: The authors have sufficiently addressed my questions and concerns and have performed additional analysis. My biggest concern of whether or not the RFM-augmented agent was capable of learning without a pre-trained agent has been addressed with additional experiments and analysis (Figure 8). Based on this, I have adjusted my rating to a 7. <doc-sep>RELATIONAL FORWARD MODELS FOR MULTI-AGENT LEARNING

Summary: Model-free learning is hard, especially in multi-agent systems. The authors consider a way of reducing variance, which is to have an explicit model of the actions that other agents will take. The model uses a graphical structure and the authors argue it is a) interpretable, b) predicts actions better and further forward than competing models, and c) can increase learning speed.

Strong Points:
- The main innovation here is that the model uses a graph conv net-like architecture which also allows for interpretable outputs of "what is going on" in a game.
- The authors show that the RFM increases learning speed in several games.
- The authors show that the RFM does somewhat better at forward action prediction than a naïve LSTM+MLP setup and other competing models.

Weak Points:
- The RFM is compared to other models in predicting forward actions but is not compared to other models in Figure 5, so it is not clear that the graphical structure is actually required to speed up learning. I would like to see these experiments added before we can say that the RFM is adding to performance.
- Related: The authors argue that an advantage of the RFM is that it is interpretable, but I thought a main argument of Rabinowitz et al.
was that simple forward models similar to the LSTM+MLP here were also interpretable? If the RFM does not improve learning above and beyond the LSTM+MLP, then the argument comes down to more accurate action prediction (ok) and more interpretability (maybe), which is less compelling.

Clarifying Questions:
- How does the 4-player Stag Hunt work? Do all 4 agents have to step on the Stag together or just 2 of them? How are rewards distributed? Is there a negative payoff for Hunting the stag alone as in the Peysakhovich & Lerer paper?
- Related: In the Stag Hunt there are multiple equilibria; either agents learn to get plants (which is safe but low payoff) or they learn to Hunt (which is risky but high payoff). Is the RFM leading to more convergence to the Hunting state or is it simply leading agents to learn the safe but low-payoff strategies faster?
- The choice of metric in Figure 2 (# exactly correct predictions) is non-standard (not saying it is wrong). I think it would be good to also see a plot of a more standard metric such as the log-likelihood of the model's predictions for each of X possible steps ahead. It would help to clarify where the RFM is doing better (is it better at any horizon or is it just able to look further forward more accurately than the competitors?) | pros:
- interesting application of graph networks for relational inference in MARL, allowing interpretability and, as the results show, increasing performance
- better learning curves in several games
- somewhat better forward prediction than baselines

cons:
- perhaps some lingering confusion about the amount of improvement over the LSTM+MLP baseline

Many of the reviewer's other issues have been addressed in revision and I recommend acceptance.
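To make the evaluation-metric concern above concrete (comment 3 of the first review and the metric question in the last one), here is a small self-contained sketch contrasting the two metrics on synthetic action sequences; the data and the two toy "models" are illustrative assumptions, not anything from the paper:

```python
# Toy comparison of "steps until first error" vs. average per-step error rate.
import random

def steps_until_first_error(pred, true):
    """Number of steps from the start that match exactly before the first mistake."""
    n = 0
    for p, t in zip(pred, true):
        if p != t:
            break
        n += 1
    return n

def average_error_rate(pred, true):
    """Fraction of time steps on which the predicted action is wrong."""
    return sum(p != t for p, t in zip(pred, true)) / len(true)

random.seed(0)
T, n_actions = 1000, 5
true = [random.randrange(n_actions) for _ in range(T)]

# Model A: perfect for the first 10 steps, random guessing afterwards.
model_a = true[:10] + [random.randrange(n_actions) for _ in range(T - 10)]
# Model B: correct on every other step (50% average accuracy).
model_b = [t if i % 2 == 0 else (t + 1) % n_actions for i, t in enumerate(true)]

for name, pred in [("A", model_a), ("B", model_b)]:
    print(name, steps_until_first_error(pred, true), round(average_error_rate(pred, true), 3))
# Model A wins by a large margin on "steps until first error" while being much worse
# on average error rate -- exactly the failure mode the reviewer describes.
```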
This work focuses on the worst-group optimization (distributionally robust optimization) problem where only a few samples with spurious attribute annotations are available as a validation set. They aim to achieve performance similar to methods that use spurious attributes. The main contribution is an adaptive thresholding method to ensure balanced samples from each group. Experiments show the proposed pseudo labeling and thresholding are effective in various tasks.

## Long summary
Previous methods try to identify and upweight samples from minority groups (spurious attribute joint with class label). Then a validation set of annotated samples is used to tune hyperparameters. The performance of such models is very sensitive to these hyperparameters/validation set, and they therefore fail to perform comparably with methods that use the spurious attribute. This work aims to resolve this issue to achieve performance similar to methods with annotations. SSA first does pseudo labeling and then does robust training. It trains a domain index predictor and uses a threshold to solve the confirmation bias issue by balancing each group. Then pseudo labels are used on the training set. Then group DRO is used for robust training. The confidence of the majority group increases faster than the minority group, which is even more severe when the label is a pseudo label. This is the so-called confirmation bias. In experiments, it uses the datasets widely used in this field: Waterbirds, CelebA, MultiNLI and CivilComments-WILDS. They show SSA improves the worst-group accuracy significantly over existing methods without spurious attributes, even with much less validation data. They also analyze the influence of the validation set size. Finally, they analyze the pseudo labeling process of SSA to verify that the adaptive thresholding is useful. They also show SSA works with a supervised contrastive learning loss and can be useful in semi-supervised learning with imbalanced classes.

## Strength:
1. Very comprehensive experiments.
2. Relatively simple and reproducible method.

## Weakness:
1. Lack of very strong theoretical justification. It is hard to have a straightforward interpretation of why this works, especially based on pseudo labels.
2. Some experiments seem not relevant to the problem this paper aims to solve.

## Details and questions
1. Table 6 is kind of redundant since most of it overlaps with Table 1.
2. It is not clear why the supervised contrastive learning part is needed in this work. The same applies to the semi-supervised learning experiments.
3. How are the supervised and unsupervised losses combined (Eq. 3)?
4. Why do we need to split the training data in 4.1? They don't have spurious attributes anyway and the method only infers them on training data.
5. I believe it can also be compared with EIIL [1].

[1] Creager, Elliot, Jörn-Henrik Jacobsen, and Richard Zemel. "Environment inference for invariant learning." International Conference on Machine Learning. PMLR, 2021. <doc-sep>This paper presents a technique, spread spurious attribute, for inferring the group annotation for training samples in a dataset. The inferred group information is then used as part of a group DRO minimization scheme or some other worst-case scheme.
The key insight in this work is to use a small validation set (1-5 percent) of data that has annotations to learn annotations for the entire training set via pseudo-labelling. The labelling scheme presented uses different thresholds for the different spurious groups. This new dataset is then used as part of a new DRO pipeline. This paper tackles an important problem and provides a convincing demonstration that current SOTA models are susceptible to reliance on spurious correlation in the training set. I detail key strengths, weaknesses, and additional questions below.

### Strengths
- Reducing the amount of annotations required for worst-group loss minimization is an important problem, and this paper presents a simple procedure to help with that problem.
- The empirical portion of the paper seems quite thorough, and the work compares against several recent works in this line.
- This paper includes ablations of different components of the proposed formulation to help understand what each one contributes.
- It seems like the SSA approach might be useful when applied with robust training approaches in general.

### Weaknesses, Concerns, & Additional Questions
- My biggest issue is that the paper has several moving parts in Sec. 4.1 and 4.2, which are really the key contributions of this work. I am not sure how to improve the writing here, but there are several pieces like the different validation and training sets, the two different losses, and then the different group thresholds. Even though it seems clear, it was challenging to be sure that all the pieces fit together.
- I am surprised that the approach works so well given the data size disparity between D_train and D_val. I am not that familiar with the performance of standard unsupervised methods. Is such size disparity usually the case?
- In section 4.2, what does "pseudo-group population ratio of the highly-confident samples" mean?
- It seems like the whole point of the group-specific thresholding is to help make sure that the model for learning the spurious/group attributes performs well on the small sample groups. First, I don't think I would call this 'confirmation bias' as this paper does, unless this is a standard term in the unsupervised learning literature. Second, it is unclear to me why the approach used for setting this threshold should be effective. The minimum threshold is initially set on the basis of the size of the smallest group; however, consider the unlikely case where that small group is just n copies of the same sample or somehow homogeneous and low variance. This small group would be easier to learn than a larger group with larger 'variance' or more 'diversity'. Essentially, I am hoping the authors can clarify why this scheme should be effective on the basis of intuition or a toy example. To be more specific, equation 6 seems critical here. I don't understand why this requirement was chosen, and I worry that the dependence on the size of the groups is somewhat unusual.
- To clarify, pseudo-group is $\\hat{a}(x)$ for the points in the training set?
- There is an important yet simple experiment that I think is missing. Why can't the authors check the accuracy of their SSA scheme for a dataset where we have the ground-truth spurious attributes, like the waterbirds and the other datasets that they use? Is this what Table 4 is showing? If yes, then it might help to clarify the caption and add more clarification.

Post Rebuttal: Satisfied with the author response, so recommending acceptance.
Overall, this paper conducts thorough empirical analysis of the SSA approach, which can be incorporated with robust training methods. I recommend a weak accept because there are portions of the scheme that I think needs better justification. <doc-sep>This paper studies the group robustness problem in the setting of spurious correlations, when only a small number of samples have spurious attribute annotations. They present a pseudolabeling algorithm based on FixMatch to pseudolabel the remaining examples, and then use worst-case loss minimization algorithms such as Group DRO to train a more robust model. Strengths: - The empirical results on worst-group accuracy are strong, even with a small number of attribute-annotated examples. - The proposed semi-supervised approach seems promising, both for the spurious attribute setting and even the general class-imbalanced SSL setting, as shown in Section 5.4. Weaknesses: - The language of training and validation is confusing and imprecise, as the "validation" samples are actually used for training as well. The authors should instead clarify that some training examples have attribute annotations and some do not, and similarly for the validation set. - The methods description is hard to understand in some parts. Specifically, why are predictions not made in D_{train}^{circ} (as mentioned at the end of Section 4.1)? In principle, the model could make predictions for these, but no clear reason is given why this is not done. - Related to the previous point, one would expect the proposed algorithm to be very computationally intensive. First, the semi-supervised learning stage will likely be expensive. Next, since this is repeated K=3 times, that makes it even more expensive. (And finally, the Group DRO model still needs to be trained after that.) Computational cost / runtime results are not provided. - The paper is unclear about some hyperparameter details such as the number of epochs for semi-supervised learning. - It would be great to better understand why the proposed approach performs so well. Based on Table 3 there is almost no drop in performance until we reach 5% of the original val set size, and even then the drop is small. In this case, on Waterbirds, we would only have 3 annotated examples for worst group for training the spurious attribute predictor (and perhaps fewer if any of these are actually used for validation, rather than training itself?). What is the accuracy of the spurious attribute predictor on the worst group - if it is high, how and why might this be the case (despite the extremely low number of annotations)? If it is low, why does the robust model still attain good worst-group performance? Questions: In Table 6, are the SupCon results actually vanilla SupCon or the procedure from Zhang et al. (2021)? Based on the writing, it seems the latter, but the table suggests the former. Overall, this paper has strong empirical results. The paper could be improved with more precise / clear descriptions of the methods and design decisions, and additional ablation experiments or metrics that help explain how the methods actually perform so well even in the extremely low-annotation setting (in lieu of theoretical analysis, which the paper does not have). Thus, my current score is a weak acceptance. <doc-sep>This paper proposes a pseudo-attribute-based algorithm, coined Spread Spurious Attribute (SSA), for improving the worst-group accuracy. 
The proposed method leverages samples both with and without spurious attribute annotations to train a model to predict the spurious attribute, then uses the pseudo-attribute predicted by the trained model as supervision on the spurious attribute to train a new robust model having minimal worst-group loss. Experimental results on various benchmark datasets show that the algorithm consistently outperforms the baseline methods using the same number of validation samples with spurious attribute annotations. Strengths: 1. This paper is well-written and well-motivated. The proposed spread spurious attribute method is novel. 2. Comprehensive experimental results demonstrate the effectiveness of the proposed approach. 3. The proposed approach is a general yet effective framework that can be applied to lots of tasks. Overall, this paper proposes a novel and general yet effective framework. The paper is well-motivated and well-written. Moreover, comprehensive experiments have been conducted to demonstrate the effectiveness of the proposed approach. | This paper presents a new method to decrease the supervision cost for learning spurious attributes using worst-group loss minimization. Their method uses samples both with and without spurious attribute annotations to train a model to predict the spurious attribute, then use the pseudo-attribute predicted by the trained model as supervision on the spurious attribute to train a new robust model having minimal worst-group loss. The experiments show promising results in this domain for reducing annotation cost. The reviewers vote to accept the paper, and some of them increased their scores during the discussions since the authors have addressed their concerns. |
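As an editorial illustration of the pipeline the reviews describe (pseudo-labeling the spurious attribute with per-group confidence thresholds, then running group DRO on the pseudo-groups), here is a minimal sketch; the quota rule below is one plausible reading of the "adaptive thresholding" idea and is not the paper's exact Equation 6:

```python
# Hedged sketch: keep roughly the same number of high-confidence pseudo-labeled
# samples per spurious group, so the pseudo-groups stay balanced before robust training.
import numpy as np

def pseudo_label_balanced(probs, per_group_quota):
    """probs: (n, g) predicted probabilities over g spurious groups.
    Returns pseudo-group ids (argmax) and a boolean mask of samples kept,
    keeping at most `per_group_quota` most-confident samples per pseudo-group."""
    pseudo = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    keep = np.zeros(len(probs), dtype=bool)
    for g in range(probs.shape[1]):
        idx = np.where(pseudo == g)[0]
        top = idx[np.argsort(-conf[idx])][:per_group_quota]
        keep[top] = True
    return pseudo, keep

rng = np.random.default_rng(0)
n, g = 1000, 2
logits = rng.normal(size=(n, g)) + np.array([1.5, 0.0])   # group 0 plays the "majority" role
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
quota = 100                                               # e.g. tied to the smallest annotated group
pseudo, keep = pseudo_label_balanced(probs, quota)
print("kept per pseudo-group:", [int(((pseudo == k) & keep).sum()) for k in range(g)])
# The kept pseudo-groups are (approximately) balanced, which is the property the
# adaptive thresholding is meant to enforce before group DRO is run on the pseudo-groups.
```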
#### Goal - This work presents an experimental investigation that shows the impact of training selection bias in GNNs (bias with respect to the test data). It also proposes a decorrelation approach to eliminate the spurious correlation in the node representations that come from this training bias. - I like the experimental investigation (since I think the problem is very relevant). I am skeptical about the method and the results. #### Quality - I have many doubts about the validity of the claims. - Assumption 1 gives us a statistical model not a causal model. In order to claim “causal effects”, one needs to give a structural causal model. The linear models presented later are not linear on the observed variables. “Specifically, for both training and test environment, E(Y∣S = s, V = v) = E(Y∣S = s).” is a covariate shift question, not exactly a counterfactual question if not given with a specific structural causal model. - “Assumption 2. The true generation process of target variable Y contains not only the linear combination of stable variables S, but also the nonlinear transformation of stable variables.” it is unclear what the authors mean by this statement. That the transformation over S is arbitrary? - Equation (1): Since the node embeddings can be arbitrary, why do we need an extra function g()? Could that be incorporated into \\mathcal{G}(X, A; θg)_S β_S? - “However, limited by the nonlinear power of GNNs (Xu et al., 2019), it is reasonable to assume that there is a nonlinear term g(G (X, A; θg)_S) ≠ 0 that cannot be fitted by the GNNs.” This is not at all what (Xu et al., 2019) says. It says that there are some topologies that cannot be represented exactly. While other topologies can be represented exactly. It is entirely dependent on the input data, not a broad general statement for any graph dataset. - “Hence the parameters of both stable variables and unstable variables would be biased.” This is a strong claim that requires a formal proof. - The entire procedure is predicated on the linear models of Kuang et al., 2020, since the X in Kuang et al. (in, say, Eq (8) of Kuang et al.) is the observed data (not some learned representation). In this paper, the corresponding variables are H representations obtained by a GNN. The model is no longer linear on the input. The distinction between H_S and H_V is, hence, hypothetical, non-existent, and changes during training, since it depends on the GNN parameters. - How do we know that the decorrelation of H is not restricted to the training data? How do we know that in the test data, the same decorrelation holds? Any statement of decorrelation carrying over to the test data must be formally shown. - In section 3.2, all results are for linear models. Then, at some point, the observables X suddenly become hidden H and, it is stated (without proof) that the results carry over?!?!? If this were true, why are most works in the literature limiting their counterfactual evaluations to linear models? I highly doubt one can prove this is true for this scenario. #### Clarity - The paper uses very convoluted reasoning to arrive at conclusions that are not at all supported by theory. It uses a lot of results from linear models into a proposed nonlinear model. How can the results in the literature possibly carry over? It is hard to believe any of the claims. - How can the distinction between S and V be clear in the model, since they are the outputs of a GNN we have not been trained yet? 
- Can we precisely define the assumed covariate shift between train and test? No method can work for any covariate shift. Typos & Overall fixes: - “Even transfer learning is able to solve the distribution shift problem, however, it still needs the prior of test distribution, which actually cannot be obtained beforehand.” => “Even tough transfer learning is able to solve the distribution shift problem, it still needs the prior of test distribution, which cannot be obtained beforehand” #### Originality - Removing GNN training sampling bias with counterfactual inference would be new. #### Significance - The task of removing GNN training sampling bias is very important. #### Pros - Important task. - Nice demonstration of the issues with biased training data. #### Cons - See “Quality”. I am unconvinced by the method. ---- After rebuttal: My main concerns about (a) no counterfactual model and (b) the linear / nonlinear requirements of the method remain. Generally, bias assumptions are made about the data, not the output of a representation learning procedure. "The nonlinear relationship between raw input with the outcome can be encoded into the learned embedding" yes, but it does not mean H_S and H_V will meaningfully encode anything related to the input bias in any meaningful way. The method needs to precisely describe the structural causal model to be properly evaluated. <doc-sep>Summary: The authors propose two different regularization terms to help mitigate the effect of label selection bias. The regularizers are well motivated and can be applied to different GNN models. Reasons for score: Overall, I recommend a weak reject. While the theoretical analysis is interesting, the experimental evaluation is inconclusive and the performance improvement is marginal (see weak points). If the authors show stronger empirical evidence (see questions) I will consider increasing the score. Moreover, it is not clear whether the type of selection bias studied in the paper is actually relevant in practice. Strong points: * The proposed regularizers are well motivated, theoretically supported and at the same time simple and easy to implement. * The causal view analysis of the proposed regularizers is insightful. * The regularized have a reasonably small computational complexity. Weak points: * There are no results in the paper which show how the proposed method performs for a standard (non biased) labeling scenario. It is not clear whether the performance of GCN/GAT-VD/DVD in the standard setting is worse, roughly the same or better, and whether there are any trade-off which are incurred by the proposed regularizers. In other words, while the proposed method helps when there is a difference in the distribution of labels between the train and validation/test nodes it is not clear how it performs when there is no difference. * It is not clear whether the highlighted label selection bias is actually present in practice. From the definition of r_i we see that this captures the notion of heterophily, i.e. neighboring nodes have dissimilar labels. In most real-world graphs however, we tend to observe homophily (opposite of heterophily), i.e. neighboring nodes tend to have the same labels. Homophily is often either explicitly or implicitly assumed in many GNN models, so it is not surprising that the performance drops when it is not present. 
Moreover, it is reasonable to assume that in practice the nodes for labeling are selected uniformly at random (or using active learning), in which case we would likely not observe heavy bias (due to underlying homophily).
* The performance improvement in most cases is marginal and does not seem to effectively mitigate the highlighted issue. In most cases the improvement is between 1% and 2%, and the results for heavy bias are still significantly worse (e.g. >5%) compared to the results for light bias (or no bias, not shown). The two outliers corresponding to 14% and 17% gain might be due to using a fixed data split (see next point).
* The paper uses the Planetoid data splits from [1] to form the validation/test set. Previous work [2] strongly argues against using a fixed split to evaluate the performance of GNNs since considering different splits of the data leads to dramatically different rankings of models. As far as I understood, the results in the paper are averaged over 10 random seeds for selecting the training set, but the validation/test set is kept fixed. For a robust evaluation, results should be reported as averages over several different random validation/test splits.

Questions for the authors:
1. How do the results change if we consider the average over a larger number (e.g. 10) of random validation/test splits?
2. What is the performance of GCN/GAT-VD/DVD using uniformly sampled training nodes? Are there any trade-offs?
3. What is the empirical selection bias for standard (uniform sampling) train/validation/test splits, i.e. how large is the difference between the distributions of r_i scores? (For Cora, Citeseer, Pubmed we expect the difference to be small.)
4. How well do the proposed methods perform in the transductive setting?
5. How does the performance gain of GCN/GAT-VD/DVD over GCN/GAT on the NELL dataset change as we increase the number of labeled nodes from 1 to some large number?

Additional feedback that did not affect the decision:
* The paper could benefit from a discussion of how the specified notion of label selection bias is similar to or different from the notion of homophily/heterophily (see also weak points).
* Another potential selection bias is related to the degree of labeled nodes. For simpler models such as Label Propagation, previous work has shown that different variants perform better depending on whether we label high- or low-degree nodes (see e.g. [3]). It would be interesting to discuss whether this also affects GNNs and whether the proposed approach can help mitigate such bias.
* It would be interesting to investigate whether recent GNN models which can handle heterophily [4, 5] can deal with the label selection bias studied in the paper. The reviewer acknowledges that these papers were made public after the ICLR submission deadline.
* Evaluating the effect of small sample selection bias on massive graphs from the Open Graph Benchmark (https://ogb.stanford.edu/) would be insightful.
* It would be interesting to evaluate whether we still observe a strong drop in performance if the training set is chosen in a standard fashion (i.e. the training nodes have high homophily) but the test nodes are selected using e.g. the heavy bias sampling.

## After Rebuttal
Thank you for addressing my questions. Since the performance improvement is still marginal, and based on the other reviews, I have decided to keep the same score.

References:
1. Yang, Zhilin, William Cohen, and Ruslan Salakhutdinov. "Revisiting semi-supervised learning with graph embeddings."
2.
Shchur, Oleksandr, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. "Pitfalls of graph neural network evaluation."
3. Avrachenkov, Konstantin, Alexey Mishenin, Paulo Gonçalves, and Marina Sokol. "Generalized optimization framework for graph-based semi-supervised learning."
4. Zhu, Jiong, Ryan A. Rossi, Anup Rao, Tung Mai, Nedim Lipka, Nesreen K. Ahmed, and Danai Koutra. "Graph Neural Networks with Heterophily."
5. Zhu, Jiong, Yujun Yan, Lingxiao Zhao, Mark Heimann, Leman Akoglu, and Danai Koutra. "Generalizing graph neural networks beyond homophily." <doc-sep>This paper presents a novel method to remove the selection bias of graph data, which is neglected by previous methods. Specifically, the authors suspect that all variables observed by GNNs can be decomposed into two parts, stable variables and unstable variables. Then DGNN, a differentiable decorrelation regularization, is proposed to reweight each variable pair to eliminate estimation bias. Experiments on three datasets confirm its effectiveness.

Pros:
+ The studied problem is general and also practical for real-world applications.

Cons:
- The novelty of this work is limited. Although the authors claim it is the first work to solve the agnostic label selection bias problem, I personally believe this work can be regarded as a special case of DWR [1]. Therefore, on the basis of DWR, this paper offers not much additional theoretical contribution to this problem.
- The presentation of this paper is somewhat confusing and not well motivated. For example, it is hard to understand the connection between the example presented in Section 2.1 and the proposed method. Also, why does this paper consider the Newton-Raphson update rule in Equation (9)? Besides, how do you efficiently compute the inversion of matrices?
- The studied datasets are known to have unstable performance and are also of small scale. Even so, the performance improvement seems to be marginal, with new baselines missing. Larger datasets such as OGB are strongly encouraged.

Reference:
[1] Stable Prediction with Model Misspecification and Agnostic Distribution Shift, AAAI 2020.<doc-sep>The paper studies an important and unexplored problem in GNNs, i.e., the inconsistent distribution between the training set and the test set caused by agnostic label selection bias. I believe that studying this problem is very important for generalizing GNNs to unseen test nodes. The paper first conducts an investigative experiment to show the great impact of agnostic selection bias on test performance. Moreover, a theoretical analysis is provided to identify how the label selection bias leads to estimation bias in the GNN parameters. To remove the estimation bias in parameter estimation, the paper proposes a novel DGNN framework by jointly optimizing a differentiated decorrelation regularizer (DVD) and a weighted GNN model. The DVD regularizer is designed based on the causal view of variable decorrelation terms. I personally like the idea of analyzing variable decorrelation from the causal view. Furthermore, the paper theoretically shows how combining variable decorrelation terms with GNNs yields a more flexible framework for most GNNs and how to extend the theory to the multi-classification scenario. Overall, the proposed method is theoretically sound, with the basic claims all supported by clear and sound theoretical analysis.
The paper conducts extensive experiments on four benchmark datasets with two kinds of selection bias, well showing the effectiveness of the proposed model. Basically, the paper is well motivated and well-organized. Strong points: 1. The agnostic label selection bias problem in GNNs proposed by this paper is very important but seldom studied. And the paper shows the effect of label selection bias on the generalization of GNN in both experimental and theoretical way. In practice, the selection bias widely exists, I think this work may attract more attention in this direction, which makes GNNs more robust and stable in unseen environments. 2. The technique of the proposed method is sound. The differentiated variable decorrelation is well motivated. This is a general framework for enhancing most existing GNNs under label selection bias setting. The idea of analysis and design model is novel, for example, analyze the estimation bias with stable learning theory, differentiated variable decorrelation in causal view, prove how to combine DVD with GNNs is more flexible, and extend the method to the multi-classification setting. I think these ideas are instructive. 3. The experiment part is comprehensive and convincing. The experiments are conducted on two kinds of selection bias data, i.e., label selection bias and small sample selection bias. These two kinds of selection bias usually happen in real-world scenarios. And the results clearly show that the proposed methods make larger improvements with heavier bias. Question for rebuttal: 1. In section 3.3, the variable weight \\alpha is computed from Var(W^(K−1), axis = 1), and \\alpha_i can only be a positive value, however, in linear regression, the coefficients could also be a negative value. Hence, how to keep the \\alpha computed from Var(W^(K−1), axis = 1) has the same meaning of linear regression coefficients? 2. Although we can find the hyperparameters for each method from the experiment part and the corresponding paper of baselines, it is better to list all the hyperparameters used in the paper in the Appendix to improve reproducibility. | This paper explores a very challenging problem of biased label selection and its effect on graph neural networks. It highlights that GNNs are indeed vulnerable to this issue, and then proposes a regularizer to reduce the learning of spurious correlations from the node embeddings. All of the reviews agree that the problem is relevant and important, but that there are still some outstanding issues. It’s unclear the degree to which this problem occurs in the real world. It is also important to establish the effectiveness of the method across a range of datasets. The four datasets presented in the paper (and the rebuttal) are a good start, but the reviewers feel that more is still needed to present a convincing argument. On the theory side, the reviewers are concerned about the linearity assumptions in the theory, and how this will translate into the more realistic nonlinear setting. Even though the authors state that they do not rely on a causal model, the paper and their responses really do seem to point in that direction. This could simply be a clarity issue, in which case I would encourage the authors to revisit this framing this to avoid confusion. Overall, the paper is promising, but the reviewers feel that more work is needed to provide a comprehensive and convincing case. |
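To make the decorrelation idea discussed above more tangible, here is a minimal sketch of a sample-reweighted decorrelation penalty on learned embeddings; the weight parameterization and the optimization loop are illustrative assumptions rather than the paper's DVD implementation:

```python
# Illustrative sketch: learn sample weights that shrink the off-diagonal entries of
# the weighted covariance matrix of the embeddings (i.e., decorrelate the "variables").
import torch

def weighted_decorrelation(H, w):
    """H: (n, d) embeddings; w: (n,) non-negative sample weights.
    Returns the squared Frobenius norm of the off-diagonal weighted covariance."""
    w = w / w.sum()
    mean = (w.unsqueeze(1) * H).sum(dim=0, keepdim=True)      # weighted mean per variable
    Hc = H - mean
    cov = (w.unsqueeze(1) * Hc).t() @ Hc                      # (d, d) weighted covariance
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()

torch.manual_seed(0)
H = torch.randn(256, 8)
H[:, 1] = 0.9 * H[:, 0] + 0.1 * torch.randn(256)              # inject a spurious correlation
log_w = torch.zeros(256, requires_grad=True)                  # weights via softmax reparameterization
opt = torch.optim.Adam([log_w], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss = weighted_decorrelation(H, torch.softmax(log_w, dim=0))
    loss.backward()
    opt.step()
print("decorrelation penalty after reweighting:", loss.item())
# In the setting discussed above, such weights would additionally reweight the GNN's
# classification loss, and H would be the (trainable) GNN embeddings rather than fixed data.
```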
This paper proposes Confident Adaptive Language Modeling (CALM) to early-exit dynamically during the decoding process. The authors provide a theoretical guarantee that local confidence measures can lead to confident generation of global outputs. CALM is practically useful for speeding up inference, where autoregressive decoding with large language models is quite expensive. The method is principled, based on the Learn-then-Test framework. N/A <doc-sep>This paper introduces the "Confident Adaptive Language Modeling" (CALM) framework, which contains a very detailed study of the sources of error when doing early exiting during the decoding process of a Transformer-based language model, and proposes a method to calibrate local and global predictions well so as to overcome these errors. Several empirical results on text summarization, machine translation and SQuAD demonstrate the usefulness of the proposed method.

Strengths:
- This paper is technically very rigorous. It tackles the problem in a step-by-step manner with a clear and very sound logic chain. Also, it is not common practice to pay attention to statistical testing criteria, and this paper does a good job here by adopting a principled way to address that. The writing, including the mathematical notation system, is also very accurate.
- The technical work, especially in Section 3.3, is very detailed and (hence) insightful. For example, the headroom analysis of state propagation and local errors is comprehensive and convincing.

Weakness:
- See question 2 below in the "Questions" part: it seems good to discuss whether the technical proposal in Section 4, to solve this problem, really needs to be so complicated.

See above. N/A for the "negative societal impact of their work" part. <doc-sep>UPDATE: The authors have answered my concerns so I increased my score to an 8.

Transformer language models such as T5 are the most important tool in the current NLP world. Typically the decoder has a set number of layers and all of these layers are executed for each token. In the past few years we've seen many papers that do 'early exit' for tokens for which the output is easy to compute: during these timesteps, the model exits early and does not compute all the layers. This paper first analyzes these types of models with an oracle to show what the best-case speedups could be, and then proposes a few new ways to do early exit for the T5 model, evaluated on a group of interesting downstream tasks such as summarization and translation.

I felt like the paper wasn't very clear in comparing the final performance to the many prior works such as Elbayad et al. or Schwartz et al. In fact, Table 2, which is the main table that presents results, only contains the 3 proposed models; it does not present the full-model (no early exit) baseline and it does not present prior models such as Elbayad et al. or Schwartz et al. In addition, while these models have been proposed for the past couple of years, none of them have been adopted or used in production. I feel like this might be because of many issues with these methods that none of these papers really discuss. For example, you point out the "potential FLOP speedup", but what are the *actual* wall-clock speedups of your methods? When doing batched inference, these methods can't really be used because every token in the batch might require a different amount of compute. It would be good to discuss these potential issues in the next version of the paper.
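A toy sketch of confidence-based early exiting may help make the batching concern concrete; the stub below stands in for a real decoder, and the margin-based confidence and the fixed threshold are illustrative assumptions rather than CALM's exact measures:

```python
# Toy early-exit decoding loop: each token runs layers until a confidence threshold
# is reached, so different tokens exit at different depths -- the property that makes
# batched inference awkward.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, LAYERS, STEPS = 50, 12, 20

def layer_logits(step, layer):
    """Stub for a decoder layer: logits get sharper with depth; every third token is 'easy'."""
    sharpness = (1.0 + 0.5 * layer) * (3.0 if step % 3 == 0 else 1.0)
    return rng.normal(size=VOCAB) * sharpness

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def decode_with_early_exit(threshold=0.6):
    exit_layers = []
    for t in range(STEPS):
        for l in range(LAYERS):
            p = softmax(layer_logits(t, l))
            top2 = np.sort(p)[-2:]
            confidence = top2[1] - top2[0]          # margin between top-1 and top-2 probability
            if confidence >= threshold or l == LAYERS - 1:
                exit_layers.append(l + 1)
                break
    return exit_layers

layers_used = decode_with_early_exit()
print("per-token exit layers:", layers_used)
print("average layers per token:", sum(layers_used) / len(layers_used))
# Wall-clock gains depend on whether this per-token, per-layer control flow can be
# realized efficiently, especially when tokens in the same batch exit at different depths.
```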
Strengths: 1) The analysis in this paper is super interesting, the oracle analysis shows that these methods might have big potential. Weaknesses: 1) No wallclock time. Sure- you might save some compute by not computing some layers. But you are also using more compute to decide when to stop running the layers. So what is the total running time? Providing wallclock times is the only way to show if these methods are actually efficient or not. 2) Can this method be used in batched inference? Is there any downside to that? 3) Table 2 should have comparisons with the full (no early-exit) model baseline and with models from previous papers. It's not clear where you stand in comparison to the prior literature. The authors adequately addressed the limitations and potential negative societal impact of their work. <doc-sep>**The research problem:** The authors study the problem of accelerating transformer-based autoregressive inference from the perspective of early-exit strategies - approaches that terminate the forward pass before the final layer is reached based on whether a confidence criterion is satisfied. The primary goal of the proposed framework, CALM, is to find a method that allows for exiting the forward passes as early as possible while guaranteeing performance bounds on the consistency between the full model and early-exit model. **Analysis and proposed method:** After analyzing the sources of error that arise when running early-exit models, the authors propose a decaying confidence threshold for early exit that allow for earlier exits closer to the end of sequences. **Theoretical guarantees on performance:** The authors use tools from multiple hypothesis testing literature to pick confidence thresholds that provably satisfy performance bounds. **Empirical Justification:** The proposed method is shown to (with some caveats) result in potential speedup of up to 3x. **POST REBUTTAL UPDATE:** I thank the authors for their responses. I've raised my score. **STRENGTHS** * Provable performance bounds is appealing: The distribution free approach to proving performance bounds is appealing. The fact that this approach doesn't yield too-conservative bounds is promising. * Approach considers local and global performance metrics: The proposed method considers the deterioration of quality both locally and globally when one employs early-exit strategies. * Error analysis provides insight about transformer inner-workings: Experiments in Section 3.3 are quite interesting - that one can get away with using only 1.53 layers per forward pass on average in a 8-layer transformer while (almost) not incurring a significant performance deterioration is surprising. * No labelled data is needed: The proposed method, when using textual consistency, doesn't require any labelled data. * Comparison of different confidence measures provides insight into early-exit strategies: The comparisons between softmax, state and classifier based early-exit confidence measures is quite interesting * Clear writing: The paper is easy to follow and well written, with very few typos. **WEAKNESSES** * Comparison to baselines: While the contribution of this work is to facilitate early-exit while providing performance guarantees, it'd still be interesting to see comparisons with baselines to see how much (if any) cost one has to pay to obtain the provable guarantees. 
* Computational cost of some confidence measures: The best performing confidence measure (softmax-based one) is computationally heavy, potentially erasing the efficiency gains of early exit. (The authors do note that further parallelization can alleviate this issue, and other confidence measures also bring nontrivial improvements in efficiency) * Practical benefit of performance guarantees in early-exit strategies: I believe the authors could spend some time justifying why one would be interested in provable guarantees on early-exit performance deterioration. What are some applications where this type of guarantee is desired/required? We already don't expect to have performance guarantees on the full model - what's the marginal benefit of using an early exit strategy that have theoretical guarantees instead of simply doing empirical validation to pick $\\lambda$? This will probably be especially true in cases where full model (i.e. model without early-exit) performance is sub-optimal which will be the case for many difficult NLP tasks with even today's strongest models). (It's possible I'm missing something here). The authors are upfront about the computational cost of using the softmax-based confidence measure (the best performing metric explored in the paper), which could potentially surpass the gains of the proposed method. | This paper studies the error of early exit in decoding Transformer Language models, and proposes a method CALM to calibrate and accelerate model inference. Experiments on a variety of tasks (summarization, MT, QA) show effectiveness of the proposed method. All reviewers find the paper solid and the author feedback convincing. |
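For readers unfamiliar with the kind of guarantee discussed above, here is a deliberately simplified stand-in for threshold calibration; it uses a crude Hoeffding-style bound with a fixed-sequence sweep over candidate thresholds and is not the paper's Learn-then-Test procedure:

```python
# Simplified illustration (NOT the paper's exact procedure): sweep exit thresholds from
# most to least conservative and keep the most aggressive one whose calibration-set
# consistency is still provably above 1 - delta under a Hoeffding-style bound.
import numpy as np

def calibrate_threshold(consistency_fn, lambdas, n_cal, delta=0.05, eps=0.05):
    """consistency_fn(lmbda) -> array of n_cal per-example consistency scores in [0, 1].
    Returns the smallest valid lambda found, or None if even the largest one fails."""
    slack = np.sqrt(np.log(1.0 / eps) / (2.0 * n_cal))        # Hoeffding slack at level eps
    best = None
    for lmbda in sorted(lambdas, reverse=True):               # start from the most conservative
        scores = consistency_fn(lmbda)
        if scores.mean() - slack >= 1.0 - delta:
            best = lmbda                                      # still valid; try a more aggressive one
        else:
            break                                             # stop at the first failure
    return best

rng = np.random.default_rng(0)
n_cal = 2000
# Toy stand-in: consistency with the full model improves as the threshold grows.
toy = lambda lmbda: (rng.random(n_cal) < (0.80 + 0.19 * lmbda)).astype(float)
print("chosen lambda:", calibrate_threshold(toy, np.linspace(0.1, 1.0, 10), n_cal))
```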
The paper studies a fundamental and important research approach in network pruning: iterative magnitude pruning (IMP). Previously, IMP has been criticized for being time-consuming, layer-independent and sub-optimal in performance. In this paper, extensive empirical studies are conducted to show that under proper learning rates, IMP can perform close to more advanced pruning approaches with little additional training time.

Strength:
1. The paper revisits IMP, a basic yet important pruning approach. The empirical findings are promising: IMP can be trained on par with stable pruning approaches using reasonable training time under proper learning rate schemes.
2. Extensive studies with detailed experimental setups are provided. Code is provided for reproducibility.

Weakness:
1. It is not explained why SLR in particular is combined with IMP. I wonder if IMP can be improved when combined with other training tricks (e.g., CLR) w.r.t. the discussed aspects. And will there be any improvement if other baselines (e.g., Uniform/ERK/LAMP in Figure 2) are combined with SLR?
2. The writing is not very focused, making it somewhat hard to follow for the general audience, especially for Section 2. More technical details could be introduced for IMP and SLR, both of which are studied throughout the experiments.
3. While the paper provides empirical studies for IMP, the technical contribution can be minor, as most components are existing methodologies.

Detailed comments:
- The title can be misleading. The empirical results show that only under proper learning rate schemes (SLR) can IMP achieve better performance with less training time. However, SLR is not a basic technique.
- While the authors draw the conclusions based on the improved accuracy with SLR, the reasons for the improvement are still neither clear nor well explained. It would be more inspiring to explicitly study why previous IMP is sub-optimal, and how SLR can improve IMP.
- Are the remaining baselines in Figures 1-3 trained with the same learning rate scheme (FT or SLR)? Otherwise, it can be unfair to compare with IMP+SLR.
- Reference formats are not formal and inconsistent.
- It can be restrictive to study unstructured pruning as well as theoretical speed-up with FLOPs reduction in practice.

The empirical findings are promising. However, it is still not clear in what way SLR can improve IMP, and the reasons behind the improved accuracy / reduced training time.

=== Post rebuttal ===
Thanks for the authors' response and paper revision, which address some of my concerns. While the paper introduces inspiring findings on how SLR (or CLR) help IMP, most components are from existing techniques. I choose to keep my score, and encourage the authors to provide more in-depth analysis behind the improvement by SLR, via either new methodologies or perspectives. <doc-sep>This paper investigates several recently proposed stable pruning approaches and compares them with basic iterative magnitude-based pruning (IMP), and observes that IMP actually performs on par or even better in the experiments. The authors investigate the retraining approaches, the computation cost, and the importance of sparsity allocation (including layer-collapse), and compare IMP (+retraining) to different pruning methods. This paper is motivated by a simple question: is the basic IMP method enough for weight pruning? Throughout the paper, the authors conduct lots of experiments to show that IMP with good retraining is as good as many recently proposed pruning methods.
The authors also show that using IMP we can directly determine the layerwise sparsity through global ranking, instead of using a more complex method to determine the sparsity for each layer. Although the authors did lots of experiments validating the claim, the experiments are somewhat limited. All the experiments are on image classification, with larger models. I am wondering how the results look on compact models like MobileNet. Besides CNNs, weight pruning is actually used for other architectures too, e.g., LSTMs and transformers. How do the comparisons look in those cases? I admit that this paper already conducts a comprehensive comparison, but it is hard to say IMP is enough for pruning with only a few image classification models showing that. In addition, many new pruning algorithms are actually built for NLP and speech tasks. I like the authors' view and the motivation to make a realizable and unified baseline for weight pruning. However, I think it's better to have more evidence on different models and tasks to validate this conclusion. If the proposed claim can be further validated, I think it will be a significant contribution to the community. <doc-sep>This work focuses on highlighting the strengths of Iterative Magnitude Pruning (IMP), specifically that it is capable of achieving strong performance when compared to more complex pruning approaches. The work explores the common arguments against IMP, like a) it reaches sub-optimal states since training doesn't compensate for sparse structures, b) it fails to identify optimal layer-wise pruning ratios, and c) it is expensive, slow and non-competitive. The critical outcome shown is that IMP, with a global selection criterion and extremely small overhead, remains highly competitive with common state-of-the-art pruning approaches in sparsity, performance and theoretical speedup.

Strengths
- The clarity in writing throughout most of the paper helps the reader quickly assimilate the contents and understand the intent of the paper.
- The level of detail provided, across the experiments as well as the figures, is of high quality and much appreciated.
- The explanation of the different approaches to pruning and their background, and clearly advocating for the missing link, is commendable.

Weaknesses
- The first paragraph on the arguments against IMP (Pg. 2, bullet point 1) reads slightly incoherently, like a mix of multiple ideas. A clearer, point-by-point description of sub-optimal states, re-training epochs and others would be extremely helpful.
- Additionally, "Many proposed improvements..." is followed by a single citation in the same bullet point 1. Adding multiple citations would be more appropriate with that statement.
- Pg. 4, paragraph on Pruning Approaches: The nomenclature of "pruning approaches" is slightly counterintuitive when discussing ideas related to how pruning methods limit the overall compression of different layers. An alternative heading would summarize the contents more appropriately.
- While the intention of Section 3.1 is to highlight the low computational overhead of IMP in matching baseline performance, the intermediate outcome of SLR being better than FT traces the takeaways from Renda et al. (2020) and slightly weakens the contribution. Further, it isn't exactly clear what the overhead is when comparing the final set of results for IMP vs. SOTA pruning approaches.
Comparison to a potentially weak baseline isn't quite enough, and I encourage the authors to provide those results in a bid to strengthen their argument.
- Some of the key takeaways from Section 3.2 are very similar to those of Tanaka et al. (2020) and Lee et al. (2020), specifically that of layer-collapse and the SLR vs. FT performance comparison. Leaning on these outcomes, in text and across image captions, weakens the novelty of the content. I encourage the authors to emphasize IMP-specific behavior and insights.
- In Section 3.3, the last paragraph, the statement that IMP can be applied to already-trained models while other methods in the experiment cannot isn't fair, since the experiment uses pruning-during-training approaches and not the general set of pruning methods.

After Rebuttal
I would like to thank the authors for their timely and pertinent discussions and revisions to the manuscript. As I described previously the consistent references and experimental structure borrowed from existing work hinder the novelty of the work. The authors have consistently clarified the difference in setup and mentioned that the exact outcomes differ from already existing work. While the implementation and proposed work are focused on resurrecting IMP as a valid SOTA baseline, more in-depth work in alternative settings (unseen in existing literature) would be one possible way to help further highlight the novelty aspect of IMP. The proposed work does an extremely good job of explaining the landscape of pruning and setting up the missing link of IMP not being explored sufficiently. However, the experimental takeaways follow similar patterns to Tanaka et al. (2020) and Renda et al. (2020), which weakens the novelty of the work. The work sounds more like an extension of IMP into existing experimental frameworks and results than a purely novel instance. | The paper provides an analysis of the well-known method of Iterative Magnitude Pruning (IMP) for DNN compression. The problem tackled is undoubtedly an important one, and IMP is likely one of the best-known solutions for DNN compression. As such, there is no doubt that the paper is well motivated. In addition to the motivated task, the reviews indicate that the paper is well written and provides a thorough review of the related literature, making the paper easy to read and follow. The main weakness of the paper seems to be its novelty, as it seems that similar analyses have been done in the past. This issue was raised by the reviews and remained after the correspondence with the authors: WMeJ: "As I described previously the consistent references and experimental structure borrowed from existing work hinder the novelty of the work", dL1d: "While the paper introduces inspiring findings on how SLR (or CLR) help IMP, most components are from existing techniques". Given the discussion and concerns related to the novelty of the paper, I feel that the paper requires too major a revision to be accepted, either improving its core analysis, or presenting it in a better way that clearly distinguishes it from previous art.
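Since several of the reviews above refer to IMP with a global magnitude ranking across layers, a minimal sketch of that loop may be useful; the retraining step is stubbed out and the layer scales are arbitrary toy choices:

```python
# Minimal sketch of iterative magnitude pruning with a *global* ranking across layers.
# Each round removes the fraction `p` of the remaining weights with the smallest
# absolute value, wherever they live, so per-layer sparsities emerge implicitly.
import numpy as np

def imp_round(weights, masks, p=0.2):
    alive = np.concatenate([np.abs(w[m]) for w, m in zip(weights, masks)])
    threshold = np.quantile(alive, p)                  # global magnitude threshold
    return [m & (np.abs(w) > threshold) for w, m in zip(weights, masks)]

rng = np.random.default_rng(0)
weights = [rng.normal(scale=s, size=(64, 64)) for s in (1.0, 0.5, 0.1)]  # toy "layers"
masks = [np.ones_like(w, dtype=bool) for w in weights]

for rnd in range(5):
    masks = imp_round(weights, masks, p=0.2)
    # ... here the surviving weights would be retrained / fine-tuned
    # (e.g. with the FT, SLR or CLR retraining schemes discussed in the reviews)
    sparsity = [1 - m.mean() for m in masks]
    print(f"round {rnd + 1}: per-layer sparsity = {[round(s, 2) for s in sparsity]}")
# Layers with smaller typical magnitudes (here the third toy layer) end up pruned
# more aggressively by the global ranking, without any explicit per-layer budget.
```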
The paper empirically evaluates a distillation approach for privileged features. The teacher is trained with privileged features that are not available during inference, and the student then aims to replicate the performance of the teacher without these features. Empirical results on public and proprietary datasets show that this approach achieves better performance than baselines. The authors further analyse theoretical properties of this distillation approach in the case of linear models and show that it has desirable properties.

Strengths
The paper is well written and easy to follow. The empirical evaluation is quite thorough, and I particularly enjoyed the section on the Amazon dataset, although these results are not reproducible. The theoretical results provide some insight into properties of PFD and could be used as a stepping stone for more analysis.

Weaknesses
I find that the paper has limited novelty. Section 4 is all about empirical evaluation, and while it has useful insights, the novelty is limited. The theoretical analysis in Section 5 is probably the most novel part, but it only analyses linear models and is of limited utility for the complex gradient boosting or deep learning models that are typically used for ranking. Moreover, in most cases it should be possible to use privileged features directly as additional targets. This is cheaper than distillation since it doesn't require training teacher models. Pretrain + finetune results in Table 3 perform similarly to distillation with one teacher, and call into question whether distillation is necessary here at all. I suspect that by carefully tuning weights in a multi-target loss it should be possible to recover multi-teacher performance with just one model.

NA <doc-sep>In this work, the authors study Privileged Features Distillation (PFD), where there are indicative features available in training but missing in serving; to leverage these privileged features, distillation of a teacher model trained with privileged features is deployed. Specifically, in the PFD setting proposed in this work, the teacher model leverages both the privileged features and the regular features available in serving. The proposed setting is shown to be better than all baselines on the public and industrial datasets. The authors also provide an empirical explanation of why and when PFD works, via an ablation study and theory on linear models. The main contributions include a practically applicable method, PFD, and the theoretical understanding of the method.

Strengths
1. The paper is overall well written, with all main contributions listed and a sufficient ablation study.
2. First work to give a reasonable understanding of why using privileged feature distillation works.

Weaknesses
1. Figure layouts look a bit weird. The authors may have used some special template. Usually, a figure will take a full column instead of just a floating panel.
2. Some notation might be better explained in the main text; for example, RankBCE is short for the binary cross-entropy loss. It is not very interpretable without checking the appendix.

N.A. <doc-sep>The authors provide an empirical study of Privileged Features Distillation (PFD) for Learning-to-Rank problems (LTR), applied on 3 public datasets and one private industrial dataset (Amazon search logs).
The principle of PFD is based on two models: 1) one, which learns with all the features available (including the privileged ones) and will play the role of a « teacher » to a 2) second model, the « student » model which is trained using only the regular features and into which teacher information is transferred via distillation. PFD is compared against 4 other baselines: - no distillation (training only on regular features, no teacher, only one model) - pre-training on privileged features followed by fine-tuning with only regular features - self-distillation (the teacher model is trained only on non-privileged features) - and generalized distillation (GenD, the teacher model is trained only on privileged features) Experiments show PFD performs better or as good as the baselines. An ablation study and theoretical analysis focused on linear models finally help to understand when and why PFD works. Strengths: S1 (clarity, quality): The paper is well-written and its clarity makes it easy to follow and enjoyable to read. S2 (significance): This paper is a first attempt to bridge the gap of lack of performance understanding of PFD. The extensive experiments and theoretical analysis conducted in this paper help to better understand PFD and support some intuitions (ex: PFD cannot do miracles if the most discriminative features for the task at hand are privileged - this shows the superiority of PFD over GenD) and less intuitive results (teacher loss should dominate distillation loss, PFD works better with sparser labels, PFD reduces estimation variance). Supplemental work on the use of a teacher model with imputed privileged features during inference is also interesting. Weaknesses: W1 (clarity, minor): This is minor but as the concept of privileged feature is not limited to learning-to-rank problems, the first sentence of the abstract can be misleading. Typos: -row 54: « PDF » -row 178: indicator function mentioned for the first time, always helpful for the reader to name it (and define it)! Yes, limitations are enunciated. This is even the purpose of the paper to determine in which cases PFD performs or does not perform. Potential negative societal impact of the work is not mentioned as this is more a generic LTR problem. <doc-sep>This paper studies the privileged feature distillation (PFD) problem. The paper consists of two parts - empirical evaluation on public datasets and an industry dataset, followed by some theoretical analysis on linear models. The paper focuses on understanding an existing method instead of proposing new methods. The empirical part confirms that PFD is effective on several datasets. Some ablations are provided in terms of label sparsity, etc. On the three public datasets, the setting is controlled - binary labels are generated and privileged features are manually selected. The evaluation on the industry dataset looks more standard. On the theoretical part, the analysis is done on linear models. The insights found include 1) PFD works by reducing estimation variance. 2) Why too discriminative privileged features can hurt. Overall the reviewer finds this paper well written in general. The reviewer feels the empirical study meets the bar by performing on multiple datasets and comparing with sensible baselines. The theoretical part looks reasonable but not surprising. Strength The reviewer has personal interest in the topic (though not sure about the interest from a wider group). The paper is generally well written. 
The reviewer feels the empirical evaluation meets the bar by evaluating on multiple datasets and comparing with sensible baselines. Some ablations look interesting. The theoretical part is clear and focuses on important aspects. Weakness The theoretical analysis is not particularly deep. The conclusions are intuitive and nothing is surprising. Focusing on linear models is ok but may not be very impressive. The controlled setting on the public datasets seems a bit artificial and may bias towards the concerned methods. For example, the most correlated features are used as privileged features. Considering other options could be more comprehensive. The paper does not show any online experiments, which is the major motivation of PFD. NA | There is a consensus that the insights on the distillation of privileged information presented in the paper are interesting (e.g., the possibility of distillation even if the privileged information is independent from x, the non-monotonicity of the impact of privileged information vs. correlation with the target feature), which is why the paper is recommended for acceptance. Note that even after the rebuttal, several of the main weaknesses remain: - it is not clear why the paper focuses on "learning to rank" (apart from the original motivation of the authors), since the claims seem to hold as well in classification or regression - the value of the theoretical analysis is limited, because it seems the authors considered the easiest setup where the phenomena illustrated in the experiments could be proved. In particular, they study linear least-squares regression, which doesn't match any of their experiments. - no novelty in terms of methods In the end, the paper is borderline on the side of acceptance because the insights are significant enough.
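To make the privileged-feature distillation setup described in the reviews above concrete, here is a minimal sketch under stated assumptions: a teacher scores items from regular plus privileged features, and the student, which sees only the regular features, is trained on a weighted sum of a label loss and a distillation loss toward the teacher's scores. The MLP scorer, the pointwise binary cross-entropy label loss, the MSE distillation term, and the weight `lam` are illustrative choices, not the paper's exact formulation.

```python
# Hypothetical sketch of privileged features distillation (PFD) for a pointwise
# ranking objective; names and loss choices are assumptions for illustration.
import torch
import torch.nn as nn

class Scorer(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)  # one relevance logit per item

def pfd_student_loss(student, teacher, x_reg, x_priv, y, lam=0.5):
    """Student sees only regular features; teacher was trained on regular + privileged."""
    with torch.no_grad():
        t_logit = teacher(torch.cat([x_reg, x_priv], dim=-1))  # frozen teacher scores
    s_logit = student(x_reg)
    label_loss = nn.functional.binary_cross_entropy_with_logits(s_logit, y.float())
    distill_loss = nn.functional.mse_loss(s_logit, t_logit)
    return lam * label_loss + (1.0 - lam) * distill_loss

# Usage: first train teacher = Scorer(reg_dim + priv_dim) on labels, then train
# student = Scorer(reg_dim) with pfd_student_loss on the same training data.
```

The multi-target alternative mentioned by the first reviewer would instead drop the teacher and predict the privileged features (or targets derived from them) as auxiliary heads on the student, trading the distillation term for extra regression targets.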
Post rebuttal and discussion ======================== Several reviewers have pointed out that the paper needs more comparisons/ablations with existing models (e.g. Paranet/Fastnet). To this end, I think we at least need a comparison with Paranet, which is a 'comparable' non-autoregressive CNN based VAE based model with a few other components such as attention distillation. There are components in the paper that could do with more ablation studies - argmax with straight through estimator - some guidelines on BVAE blocks and tuning In light of these points, together with the fact that we don't have any theoretical novelties in this paper, I reduce my score to 6. Even so, I feel that the paper would be a valuable contribution because a) A generative model (GAN/VAE/VQVAE/Flow based models/score matching based models) might add extra benefit in the synthesis problem, as compared with a supervised model without a similar generative component such as Tacotron. The NVAE has been shown to significantly outperform the regular VAE in image generation tasks. It stands to reason that it would do well in speech generation also. b) Speed, robustness and ease of implementation (although this remains to be demonstrated). Initial Review =========== This paper proposes a non-autoregressive (non AR) way to perform text to speech synthesis. It uses a VAE based setup - adapted from the recent image paper NVAE to build two stacks of hierarchical VAE blocks (in priors), one going bottom up and the other, top down. The key claims are that it results in improved speed, and reduced model footprint from using a non AR architecture, with excellent quality comparable to the best autoregressive/recurrent methods in Tacotron2 [2] and non AR glow-TTS[3]. The work contains many interesting ideas for TTS, and I am very interested in seeing how this work pans out in practical speech synthesis applications. Key ideas: 1. The bidirectional stack, which they call BVAE is adapted from the recent NVAE work which has produced stellar image generations. The model uses 1D convolutions under the hood, in contrast with the fashionable, but slow autoregressive flows or recurrent models. If one can get such a model to work, it could be advantageous in effecting savings in computational time and model size. During training, at the top of the bottom-up stack, text features are inflated to the size of the mel spectrogram features, and reconstructed with the top down BVAE stack. For inference, text is inflated to an expanded text matching audio mels, and then sent down the top-down stack to give a mel sample. 2. Attention modeling: An important consideration here is to align text and mel, commonly done with an attention mechanism. In this work, the attention alignment shows up as a duration model, which is rather interesting, and seemingly gives additional flexibility. After aligning text and mel (using dot product), the alignment can be reinterpreted as a duration model by comparing phoneme and mel frame alignments. Furthermore, they use a discrete match with argmax rather than a sum over all attention alignments as is generally done. This also necessitates the use of the straight-through estimator while backpropagating since the durations are rounded entities. This type of modeling seems also to be used in the Glow-TTS work but with alignments determined through dynamic programming. I found the result that the model is not very sensitive to alignment mismatches to be quite remarkable. 3. 
Fittings for robustness during inference: They use several instructive ideas - jittering text, adding positional embeddings, diagonal penalty (since alignment is mostly diagonal) and KLD annealing. 4. Analyses - ablations to see which of the VAE blocks affect the result by varying temperature (from Glow [3]). My thoughts: Generally, the paper made for fascinating reading. Having worked with Tacotron, I have always felt that adding a VAE to that (RNN based) setup would improve its generative capabilities by giving it additional regularization qualities, among other things. That we can see the model perform better when we add jitter and can also respond to the duration specified seems to corroborate that in a loose way (figure 10). - Could the authors clarify how the duration modeling results in 'monotonic' alignments? As far as I can see, the argmax guarantees a unique match, but is monotonicity necessary? From section 5.3.2: "Since the text is forced to be used monotonically in the duration-based generation, it makes the model more robust to the attentionerrors while making fewer pronouncing mistakes." - A comparison with an equivalent soft attention implementation might be insightful. - Multi Speaker TTS: I am wondering how this model would perform in a multispeaker dataset, say libritts. One aspect that the paper does not touch in detail is in its capabilities as a generative model. It would be interesting, for instance, to see if this model can in any way separate speaker style from content with a multispeaker model. Overall, I think this paper would be a good addition to the body of speech synthesis work, and recommend that it is accepted. [1] NVAE: https://arxiv.org/pdf/2007.03898.pdf [2]: Tacotron2: https://arxiv.org/pdf/1712.05884.pdf [3] Glow-TTS: https://arxiv.org/pdf/2005.11129.pdf [4]: Glow: https://arxiv.org/pdf/1807.03039.pdf <doc-sep>Summary: This paper presents BVAE-TTS, which applies hierarchical VAEs (using an approach motivated by NVAE and Ladder VAEs) to the problem of parallel TTS. The main components of the system are a dot product-based attention mechanism that is used during training to produce phoneme duration targets for the parallel duration predictor (that is used during synthesis) and the hierarchical VAE that converts duration-replicated phoneme features into mel spectrogram frames (which are converted to waveform samples using a pre-trained WaveGlow vocoder). The system is compared to Glow-TTS (a similar parallel system that uses flows instead of VAEs) and Tacotron 2 (a non-parallel autoregressive system) in terms of MOS naturalness, synthesis speed, and parameter efficiency. Reasons for score: Overall, I think the system presented in this paper could be a valuable contribution to the field of end-to-end TTS; however, from a machine learning perspective, the contributions are incremental and quite specific to TTS. In addition, I have some slight concerns about the clarity of the presentation that made it harder to understand the (fairly simple) approach and its motivation than I’d expect from an ICLR paper. Finally, the quality of the speech produced by the system is only evaluated on a single dataset and uses only 50 synthesized examples in the subjective ratings. For these reasons, I feel this paper would be a better fit for a speech conference or journal after addressing the evaluation and presentation issues, but I would still support acceptance if other reviewers push for it and my concerns are addressed. 
High-level Comments: * The speed, parameter efficiency, and MOS results are quite promising. However, when considering the Glow-TTS paper (which this seems like a direct followup to), the system improvements seem quite incremental (replace flows with HVAEs and replace the monotonic alignment search with soft attention plus argmax). * Incremental system improvements are great if they result in significant improvements that are demonstrated through rigorous experiments, however, compared to Glow-TTS, the experiments are not nearly as comprehensive and convincing. Listening to a few of the audio examples provided in the supplemental materials, I don’t get the sense that the audio quality is significantly better than that of Glow-TTS as is suggested by the MOS numbers (BVAE-TTS sounds a bit muffled to my ears relative to Glow-TTS). * Since this system uses the same deterministic duration prediction paradigm as Glow-TTS (and other parallel TTS systems), it suffers from the same duration averaging effects and inability to sample from the full distribution of prosodic realizations. * The motivation would be made clearer if you were more specific early on about the potential advantage of VAE's relative to flows however you want to describe it (parameter efficiency, more flexible layer architectures, more powerful transformations per layer, etc.). * I'd recommend providing similar motivation for using dot-product soft attention plus straight-through argmax instead of Glow-TTS's alignment search or other competing approaches. Is it because it's a superior approach or just because it's different from existing approaches? Detailed Comments: * Section 2: I don’t believe Tacotron is actually the *first* end-to-end TTS system. Maybe it was the first to gain widespread attention, but I know that char2wav (if you count that as e2e TTS) preceded it chronologically in terms of first arxiv submission date. * Section 2: The Related Work section is fairly redundant with information that is already presented in the introduction. It might be worth combining the two sections. This should free up space for additional experiments, explanations, or analysis. * Section 4.1: The first paragraph here was quite confusing upon a first reading. I had to read the second sentence (“Via the attention network…”) many times to understand what was being described. * Section 5.2: I’m curious how you arrived at a sample temperature of 0.333. Was this empirically tuned for BVAE-TTS or in response to Glow-TTS’s findings? * Section 5.2, “Inference Time”: It seems important to include details about the hardware platform used to gather the speed results. * There are minor English style and grammar issues throughout the paper that make the paper slightly more difficult to read. Please have the paper proofread to improve readability. Update (Nov 24, 2020): After reading through the author responses and the updated version of the paper, I feel like a sufficient number of my concerns have been addressed to increase my score to 6. Specifically, the motivation has been made clearer, the related work section is no longer redundant with the intro, and the authors gave an adequate explanation about the necessity of their attention-based alignment method. <doc-sep>This paper combined fastspeech with a hierarchical VAE (or ladder VAE? in their paper it called bidirectional VAE) to achieve parallel and high quality text-to-mel syntheisis. 
The paper claims these contributions: (1) Introducing an online fashion for duration prediction, instead of distillation in FastSpeech and ParaNet. So the model is more e2e. (2) Introducing an BVAE, which extract features hierarchically to better capture prosody (overcome one-to-many) problem. During inference, can use the prior directly. This is directly than previous VAE application in TTS, which is only use to capture residual information. (3) it's faster and with same quality as autoregressive Tacotron and with better quality than other published non-autoregressive model. The key strength of this paper is the architecture is new. I think using a hierarchical VAE here is reasonable. My concerns mostly from the conclusion and experiments. (1) The paper claims compare to previous non-autoregressive model, they are more e2e, since both FastSpeech (also use duration predictor) and ParaNet (without VAE) rely on distillation. However, there is another paper called FastSpeech 2 (https://arxiv.org/abs/2006.04558, published on June 8th), the model also claim " 1) removing the teacher-student distillation to simplify the training pipeline". Can the author explain the difference? Also i think need to cite that paper because it published in June and very related. (2) As mentioned in (1), ParaNet and FastSpeech1/2 are very related to this paper. But why only compare with waveglow? (3) The paper has an ablation study section, but it missing couple very simple baseline. 1) remove VAE, purely predict mel-features based on duration and phoneme embeddings. 2) using a simple VAE instead of hierachical one. How it affect the performance. (4) One key claim of this paper is that it is as good as Tacotron 2. However, for the in-domain test, the 0.2 behind. By listening the audio samples provided by the author, it indeed significantly worse. The out of domain looks better, I suspect the reason is Tacotron 2 has some attention failures due to it not robust as duration based model. A proper baseline here, is a FastSpeech model. Could you also provide OOD samples? It's really hard to believe such prosody gap can be filled by switch domain. (5) Back to the original motivation, why we need non-autoregressive model for TTS? For neural based TTS system, most of time is in vocoder. Even we assume the speed for mel-to-spec is important, I don't think measure speed with batch size = 1 is important, because non-autoregressive model can not be streaming. A proper comparison is measure FLOPS and throughput. This might make more sense for offline TTS. This is a minor concern, as long as the quality are good enough. (6) The paper claims their model is more compact, but there is no comparison for a smaller Taco2 model or other non-autoregressive model. In summary, based on my understanding, this paper proposed a new non-autoregressive based text-to-mel model with quality regression but possible better robustness. My opinion is that it's a borderline for ICLR, since the importance of the proposed VAE was not well justified, and the quality was not as good as autoregressive model. <doc-sep>Summary: Neural models that autoregressively generate mel spectrograms from text (or phonemes), such as Tacotron, have been used to generate high quality synthetic speech. However, they suffer from slow inference speed due to their autoregressive nature. To alleviate this, non-autoregressive models have been proposed, such as FastSpeech and Glow-TTS. 
The proposed model, BVAE-TTS, is yet another non-autoregressive speech synthesis model (outputting spectrograms), with two key advantages over the aforementioned models: (a) no autoregressive teacher model is required, as in FastSpeech, which simplifies training, and (b) fewer parameters are needed than in Glow-TTS, since there is no bijectivity constraint (allowing a more expressive architecture to be used). Models are compared with inference speed and MOS, and BVAE-TTS compares favorably on both both metrics when compared to Glow-TTS. Pros: 1. The evaluation of the model is done well, in a clear way. LJSpeech is used, a dataset which is commonly used and easily accessible. MOS and inference speech are provided, and error bars are provided for MOS values. BVAE-TTS is compared to Glow-TTS and Tacotron 2 (one other non-autoregressive model, and one well-known AR baseline), and hyperparameters are provided. A single vocoder (pretrained WaveGlow) is used on all models, isolating the effect of the spectrogram prediction model used. 2. Section 4.3, pertaining to using attention distributions to learn a duration predictor, is interesting and novel. Using positional encodings is standard and using a loss guide is unsurprising. However, while jitter and straight-through estimators are not uncommon, all of these things together make a compelling and novel approach to using attention to infer discretized durations and compensate for that train-test mismatch well. I believe that a similar technique could be used in other models as well. 3. The model is an application of similar ideas from image synthesis, which is interesting, in that it demonstrates that some of those techniques work equally well for spectrogram synthesis. This sort of cross-modal result points to the strength of the method being used, which is a valuable data point for the research community. Cons: 1. The biggest weakness of this paper, in my view, is that deciphering the model itself is quite difficult. Although the model bears resemblance to NVAE (for which code is released), understanding the fine details is tricky, and the paper does little to aid in that effort. In particular, understanding the exact layer inputs and outputs and parameters of the normal distributions being used is difficult, and I believe the paper would benefit significantly from a pseudocode explanation of the network. For example, I did not understand why the generative model produced both $\\mu_l$ and $\\Delta \\mu_l$, and whether $\\mu_l$ was predicted with a dense layer or was the accumulation of the prior BVAE stacks' $\\Delta \\mu_l$ values (and similar for $\\Sigma$). I also wonder why the output of the attention layer is not provided to the encoder; perhaps there is a fundamental reason for this which I am missing, or perhaps this is simply an architecture choice. A very clear explanation of the method itself, perhaps as psuedocode for where the means and variances come from and which features they interact with and what it sampled when, would in my view make this among the top papers. Recommendation: Accept. The paper is well written and results are strong, although I would prefer if the method itself were explained more clearly. | Non autoregressive modelling for text to speech (TTS) is an important and challenging problem. This paper proposes a deep VAE approach and show promising results. Both the reviewers and the authors have engaged in a constructive discussion on the merits and claims of the paper. 
This paper will not be the final VAE contribution to TTS but represents a significant enough contribution to the field to warrant publication. It is highly recommended that the authors take into account the reviewers' comments. |
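As a rough illustration of the duration-based generation the reviews above discuss (rounded durations trained with a straight-through estimator, then phoneme features replicated to the mel-frame time axis), here is a minimal sketch; the function names and the way durations are produced are assumptions, not the paper's implementation.

```python
# Assumed sketch: straight-through rounding of predicted durations and a
# length-regulator-style expansion of phoneme features to mel frames.
import torch

def straight_through_round(d_pred):
    # Forward: rounded (integer-valued) durations; backward: identity gradient.
    return d_pred + (torch.round(d_pred) - d_pred).detach()

def expand_by_duration(phoneme_feats, durations):
    """Replicate each phoneme feature vector for its (integer) number of frames."""
    reps = durations.detach().long().clamp(min=0)
    return torch.repeat_interleave(phoneme_feats, reps, dim=0)

phonemes = torch.randn(5, 8)                          # 5 phonemes, 8-dim features
d_pred = torch.tensor([1.2, 3.7, 0.9, 2.1, 1.5], requires_grad=True)
d_hard = straight_through_round(d_pred)               # differentiable w.r.t. d_pred
frames = expand_by_duration(phonemes, d_hard)         # (total_frames, 8) mel-rate features
print(frames.shape)                                   # torch.Size([10, 8])
```

The straight-through trick is what lets a loss computed on the discrete durations still send gradients back to the duration predictor.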
The contribution of the paper is to set up an automaton from scTLTL formulas; the corresponding MDP that satisfies the formulas is then obtained by augmenting the state space with the automaton state and zeroing out transitions that do not satisfy the formula. This approach seems really useful for establishing safety properties or ensuring that constraints are satisfied, and it is a really nice algorithmic framework. The RL algorithm for solving the problem is based on entropy-regularized MDPs. The approach "stitches" policies using AND and OR operators, obtaining the overall optimal policy over the aggregate. Proofs just follow the definitions, so they are straightforward, but I think this is a strength. The approach is quite appealing because it provides composition automatically. The paper is very well written. The main problem I see with the work is that composition can explode the number of states in the new automaton and hence the new MDP. It would be interesting in future work to do "soft" ruling out of transitions rather than the "hard" approach used in the paper. The manipulation task provided is quite appealing, as the robot arm is of high dimensionality but the FSAs obtained are discrete. Overall, the paper provides a very good contribution. Small comments: "Equation equation" appears in Def. 3 and also in the proof of Theorem 2; "In section," -> "In this section"; "are it has" -> "and it has".
+ To my understanding, the benefit here is the reusability of learned skills via the automata methods described here. It would have made sense to compare against other HRL or multi-task learning methods in addition to just SQL or learning from scratch. For example, how would MAML compare to this? + It is also unclear whether the presented results in Table 1 and Figure 5 are on the real robot or in simulation. The main text says, "All of our training is performed in simulation and the policy is able to transfer to the real robot without further fine-tuning." So does this mean that Figure 5 shows simulated results and Table 1 is on the real robot? Citations that should likely be made: + De Giacomo, Giuseppe, Luca Iocchi, Marco Favorito, and Fabio Patrizi. "Reinforcement Learning for LTLf/LDLf Goals." arXiv preprint arXiv:1807.06333 (2018). + Camacho, Alberto, Oscar Chen, Scott Sanner, and Sheila A. McIlraith. "Decision-Making with Non-Markovian Rewards: From LTL to Automata-Based Reward Shaping." In Proceedings of the Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM), pp. 279-283. 2017. + Camacho, Alberto, Oscar Chen, Scott Sanner, and Sheila A. McIlraith. "Non-Markovian Rewards Expressed in LTL: Guiding Search Via Reward Shaping." In Proceedings of the Tenth International Symposium on Combinatorial Search (SoCS), pp. 159-160. 2017. Typos/Suggested grammar edits: "Skills learned through (deep) reinforcement learning often generalizes poorly across tasks and re-training is necessary when presented with a new task." —> often generalize poorly "We present a framework that combines techniques in formal methods with reinforcement learning (RL) that allows for convenient specification of complex temporal dependent tasks with logical expressions and construction of new skills from existing ones with no additional exploration." —> this sentence is difficult to parse and is a run-on "Policies learned using reinforcement learning aim to maximize the given reward function and is often difficult to transfer to other problem domains." —> ..and are often.. "by authors of (Todorov, 2009) and (Da Silva et al., 2009)" —> by Todorov (2009) and Da Silva et al. (2009) Also, several other places where you can use \\citet instead of \\cite <doc-sep>This paper mainly focuses on combining RL tasks with linear temporal logic formulas and proposes a method that helps to construct policies from learned subtasks. This method provides a structured solution for reusing learned skills (specified with scTLTL formulas), and can also help when new skills need to be incorporated into the original tasks. The topic of skill composition is interesting. However, the combination of LTL and RL has been developed previously, and the main contribution of this work is limited to the application of these previous techniques. The proposed approach also has some limitations. Will this method work on composing scTLTL formulas with temporal operators other than disjunction and conjunction? Can this approach deal with continuous state spaces and actions? The paper describes a discretization approach, which, however, can introduce inaccuracies. The design of the skills is done by hand, which badly restricts its usability. The experimental results show that the composition method does better than soft Q-learning at composing learned policies, but how does it perform compared to earlier hierarchical reinforcement learning algorithms? <doc-sep>This paper presents a way of using FSA-augmented MDPs to perform AND and OR of learned policies.
This idea is motivated by the desirability of compositional policies. I find the idea compelling, but I am not sure the proposed method is a useful solution. Overall, the description of the method is difficult to follow. With more explanations (perhaps an algorithm box?), I would consider increasing my score. The experiments demonstrate that this method can outperform SQL at skill composition. However, it is unclear how much prior knowledge is used to define the automaton. If prior knowledge is used to construct the FSA, then a missing comparison would be to first find the optimal path through the FSA and then optimize a controller to accomplish it. As the paper is not very clear, that might be the method in the paper. Questions: - How do you obtain the number of automaton states? - In Figure 1, are the state transitions learned or hand-coded? Are they part of the policy's action space? - In section 3.2, you state s_{t:t+k} |= f(s) < c ⇔ f(s_t) < c. What does s without a timestep subscript refer to? Why does this statement hold? Can you specify more clearly what you assume known in the experiments? What is learned in the automata? In Figure 5, does SQL have access to the same information as Automata Guided Composition? | The authors present an interesting approach for combining finite state automata to compose new policies using temporal logic. The reviewers found this contribution interesting but had several questions that suggest that the current paper presentation could be significantly clarified and situated with respect to other literature. Given the strong pool of papers, this paper was borderline and the authors are encouraged to revise their paper to address the reviewers' feedback.
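To make the FSA-augmented MDP construction discussed in the reviews above concrete, here is a minimal sketch for a finite toy case: the environment state is augmented with the automaton state, and transitions for which the automaton has no valid move are dropped (zero probability). The dictionary-based encoding and the toy automaton are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical sketch of a product MDP between a finite environment MDP and a
# finite-state automaton derived from a temporal-logic formula. Transitions the
# automaton rejects (no valid edge) simply get zero probability.
from itertools import product

def product_transition(P_env, automaton_step, states, q_states, actions):
    """P_env[(s, a, s_next)] -> prob; automaton_step(q, s_next) -> q_next or None."""
    P = {}
    for (s, q), a in product(product(states, q_states), actions):
        for s_next in states:
            p = P_env.get((s, a, s_next), 0.0)
            q_next = automaton_step(q, s_next)
            if p > 0.0 and q_next is not None:      # keep only satisfying transitions
                P[((s, q), a, (s_next, q_next))] = p
    return P

# Toy usage: two env states and an automaton that accepts once state 'g' is reached.
states, q_states, actions = ["s0", "g"], ["q0", "q_acc"], ["stay", "go"]
P_env = {("s0", "go", "g"): 1.0, ("s0", "stay", "s0"): 1.0, ("g", "stay", "g"): 1.0}
def automaton_step(q, s_next):
    return "q_acc" if (q == "q_acc" or s_next == "g") else "q0"
print(len(product_transition(P_env, automaton_step, states, q_states, actions)))
```

The state-explosion concern raised in the first review shows up directly here: the product ranges over |S| x |Q| states, so conjoining many formulas multiplies the automaton part.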
This paper proposes a new data distillation approach based on neural feature regression that is similar to truncated backpropagation through time using a pool of models. The approach sets new state-of-the-art results both in terms of accuracy and training efficiency. Ample experiments showcase the advantages of the proposed approach. Strengths: - State-of-the-art results while being significantly more efficient - Well written and structured, with in-depth ablation studies Weaknesses & Q: - L173: why is no augmentation applied during training? Shouldn't this prevent overfitting too? - How does this approach fare in comparison with the state of the art when combined with few-shot learning approaches (that were targeted at real data)? - Since the paper mentions the computational advantages, perhaps this information could be added to the tables too, though Figure 3 addresses this in part. - Given the difficulty of modeling similar classes, it would be interesting to see how such a method fares on fine-grained classification (e.g. on CUB-200) It would be helpful if the authors had specified in the form where each item is addressed instead of simply stating Yes/No. <doc-sep>This paper proposes a new method of dataset distillation called "Neural Feature Regression with Pooling." This work avoids using a surrogate objective like previous works (DSA, DM, MTT) while also circumventing the computational constraints of other methods that optimize the true objective (DD, KIP). Like other methods, FRePo trains teacher networks from which to obtain "meta-gradients." Unlike other methods, FRePo obtains the meta-gradient by finding the closed-form solution of the Kernel Ridge Regression problem posed by the final layer (linear classifier) of the teacher network. This method shows state-of-the-art results, outperforming previous methods in nearly every setting. Two other applications, continual learning and membership inference defense, are also explored. Strengths The paper is very well written with very few grammatical errors (e.g., subject-verb agreement). All figures and tables are formatted well and clearly communicate the authors' ideas. As for the method itself, FRePo clearly outperforms the other methods in the explored settings. Another dataset (ImageNet1k resized) is also introduced and will likely become a new evaluation benchmark for future dataset distillation works. Extensive visualizations are also included in the appendix. -------- Weaknesses I was disappointed to see that the fact that a new model was used (with respect to previous dataset distillation works) was not mentioned until deep into the appendix. While advocating for the adoption of a new backbone model is fine, this should be made very clear in the body of the paper. I understand that the authors re-evaluated the previous methods using this new architecture in Table 1, but the fact that a new architecture was used was not clear at all from the body alone. The authors should have also included results from their method on the original architecture. If this yields poor results, then this is a significant limitation of the method and should be addressed as such (but by no means detracts from the merit of this new contribution). Furthermore, even given the appendix, it still remains unclear if these re-evaluations of previous methods use an architecture identical to the new one used by FRePo. The methods should have been re-evaluated with batch norm, and results for FRePo using instance norm should have also been included.
It is unclear how much of the improvement over the previous state of the art is due to the algorithm, the normalization type, or the number of channels. Similarly, Table 2 should also \\textit{at least} include an additional row for FRePo trained using Conv-IN. Also, if FRePo is faster and uses less memory than MTT, DSA, and DM, why were results for CIFAR100 and T-ImageNet not included here when they were included in previous works? One other tiny thing: your labels for "Samoyed" and "Golden Retriever" seem to be swapped in all of your ImageWoof figures :P The limitations on the size of the teacher model seem unique to this method among dataset distillation works and should be clearly addressed. <doc-sep>This paper proposes to use Kernel Ridge Regression on the last layer of a slowly updated feature extractor to perform Dataset Distillation. There are several key contributions in this paper, including demonstrating that KRR can work well in DD, the use of model pools, fast training speed, and state-of-the-art results on DD benchmarks. The experiment section shows results from both DD and other applications, including continual learning and membership inference attacks. Overall, this paper presents a practical method that performs well in terms of both accuracy and training time. Strengths: + The paper contributes a new algorithm that uses KRR on the last layer and slowly updates the feature backbone to perform dataset distillation. + The paper presents the algorithm and the related training techniques and tricks well. + More stable performance on cross-architecture transfer is shown in Table 2. + The authors show that the algorithm can be used for higher-resolution training, such as 128x128 images and the 64x64 ImageNet dataset. Weakness - I'm curious how the authors handled the kernel computation in equation 2. If I am not mistaken, it seems the algorithm needs to compute the kernel for the full real dataset? Do you use minibatches instead? Minor (no need to compare): There are two recent works on the same task; it might be worth adding them to the discussion/related work part. 1. Dataset Condensation via Efficient Synthetic-Data Parameterization 2. Remember the Past: Distilling Datasets into Addressable Memories for Neural Networks This work has potential benefits in improving privacy and understanding how neural networks perceive a task. <doc-sep>This paper presents an efficient method for dataset distillation -- the goal is to extract/synthesize a small set of images that represent a dataset, so that deep learning models trained on the small set can achieve high (as high as possible) classification accuracy. The core contribution of this paper is to simplify and accelerate the conventional two-stage optimization procedure with TBPTT, where the gradients need to be propagated through a long chain. Instead, a kernel is used for computing the loss function, minimizing which leads to a single-step optimization approach that is highly efficient. The method is evaluated over a series of popular datasets including ImageNet, which was never studied before. Strengths 1. The studied problem is important and interesting. 2. The proposed approach is sound and effective. The idea is reasonable and the experimental results are good. 3. The paper is well presented. Weaknesses I did not see significant weaknesses in this paper. Although the solution is still preliminary (in the context of the large CV community) and difficult to scale up to larger datasets, the proposed method is indeed very good progress in this direction.
There are some limitations that I hope the authors will address or explain, which I will write down in the following part. Please see the above comments. | The paper proposes a new algorithm for dataset distillation, based on two key ideas: (1) train a linear layer given the fixed feature extractor, and (2) use a diverse set of models as feature extractors. The paper has received overwhelmingly positive reviews. Many reviewers find the algorithm effective, the paper well written, and the results compelling. The rebuttal further addressed the concerns regarding the backbone models and missing experiments and provided additional clarifications. The AC agreed with the reviewers' consensus and recommended accepting the paper.
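The kernel-ridge-regression readout described in the FRePo reviews above, and the third reviewer's question about the kernel in Eq. (2), can be made concrete with a small sketch: the closed form only needs the kernel among the small distilled (support) set plus a cross-kernel against whatever real examples serve as targets, so those real examples can plausibly be minibatched. The linear kernel on fixed last-layer features and the shapes below are assumptions for illustration, not the paper's exact procedure.

```python
# Assumed sketch of a kernel-ridge-regression readout on fixed last-layer
# features: solve for weights on the small distilled set, evaluate on a batch
# of real data. Shapes and the linear kernel are illustrative.
import numpy as np

def krr_predict(feat_support, y_support, feat_query, reg=1e-3):
    """feat_support: (n_s, d) distilled-set features; y_support: (n_s, c) one-hot;
    feat_query: (n_q, d) features of a real-data batch."""
    K_ss = feat_support @ feat_support.T                       # (n_s, n_s) kernel
    K_qs = feat_query @ feat_support.T                         # (n_q, n_s) cross-kernel
    alpha = np.linalg.solve(K_ss + reg * np.eye(len(K_ss)), y_support)
    return K_qs @ alpha                                        # (n_q, c) predictions

rng = np.random.default_rng(0)
feat_s = rng.normal(size=(10, 16))                 # 10 distilled examples, 16-dim features
y_s = np.eye(2)[rng.integers(0, 2, 10)]            # one-hot labels for 2 classes
feat_q = rng.normal(size=(32, 16))                 # a minibatch of real data
print(krr_predict(feat_s, y_s, feat_q).shape)      # (32, 2)
```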
The paper proposed a post-training pruning framework for Transformers, without retraining. It prunes both heads in MHA and filters in FFN layers in a structured way. The process is done by applying a lightweight Fisher-based mask search along with Fisher-based mask rearrangement and mask tuning. The results show a comparable or even better FLOPs-accuracy trade-off than prior methods. Strengths: The paper is well written, and the authors present the method in quite some detail. The post-training pruning framework does not require retraining, which is very good. Taking latency constraints into consideration is also good. The experiments are quite sufficient and are able to support the claims and conclusions. The results show the effectiveness of the proposed methods. The paper also compares against existing structured pruning works on GLUE. Please see the above comments. I will consider changing my ratings based on the authors' rebuttal. <doc-sep>This paper proposes three techniques to obtain a high-accuracy transformer without retraining: a search algorithm, based on Fisher information, to find which heads and filters to prune; an algorithm which rearranges the mask and complements the search algorithm; and mask tuning, which reconstructs the output activations for each layer. The experiments show the authors can get better results than the compared methods. Strengths: This paper is well written and well motivated, with a clear presentation. The proposed algorithm improves transformer throughput efficiency with competitive accuracy and small latency, outperforming prior pruning and distillation approaches. Formulating hardware-aware structural pruning as a knapsack problem is interesting. Weaknesses: This paper should compare its algorithm with some SOTA pruning methods on transformers, such as CoFi [1]. [1] Xia, M., Zhong, Z., and Chen, D. Structured pruning learns compact and accurate models. arXiv preprint arXiv:2204.00408, 2022. Please see my weaknesses. <doc-sep>In this paper, the authors propose a fast post-training pruning framework for transformer-based language models which does not require retraining. The algorithm takes a model, a sample dataset, and a compression constraint to generate the compressed model. It introduces three techniques to retain high accuracy: 1. mask search; 2. mask rearrangement; 3. mask tuning. Experiments show that the proposed method achieves a 2x FLOPs reduction and a 1.6x speedup within a 1% accuracy drop. Strengths: 1. Firstly, the paper is well written and easy to follow. 2. The proposed method solves a complex optimization problem by introducing several approximations and using a multi-step approach. The process is introduced clearly. The effectiveness of each step is demonstrated by ablation studies. 3. The experimental results are solid. The proposed method achieves similar performance compared to existing work without a large training cost, which could be valuable for real-life applications. 4. I like the discussion of latency-aware compression, where the authors used a piece-wise linear function to approximate the latency LUT, which is integrated into the optimization objective. It is a smart design to fit both settings under the same optimization framework. Weakness: 1. Firstly, it seems that the proposed method is not limited to pruning transformers; it can also be applied to other models like CNNs by just using a channel mask. Is there any reason why the work is limited to transformers? Will the algorithm perform well on other models like CNNs? 2.
There are some other works on post-training channel pruning (e.g., [a]). It would be better if the authors could show that the proposed method also outperforms general data-free pruning methods when applied to transformers. But I also agree the current experimental results are already solid. [a] Lazarevich et al., Post-training deep neural network pruning via layer-wise calibration, ICCVW. There is no discussion of potential negative societal impacts. It might be acceptable due to the nature of the work, but a discussion is still encouraged. <doc-sep>This paper proposes a three-stage post-training pruning framework for Transformers. It first uses Fisher information to search for the binary mask, i.e., the layer-wise pruning rate. Then, the framework modifies the binary mask in a layer-wise manner. Lastly, it tunes the non-zero-valued mask to minimize the layer-wise reconstruction error. Such a framework can retain the performance of the model without retraining; thus it can finish the pruning in less than 3 minutes on a single GPU and obtain an actual inference speedup because of structured pruning. Extensive experiments show that the proposed post-training pruning framework has comparable performance to prior methods. 1. To my knowledge, this is the first work to use post-training pruning in Transformers. I recognize the contribution of applying existing techniques to new areas. 2. It proposes simple mask-search solutions based on FLOPs and latency, which avoids user intervention. 3. Experiments show the efficacy of the proposed framework, which retains high accuracy without retraining. No societal impact discussion needed, in my opinion. | The authors deliver on what they promise: a fast post-training pruning framework for transformers. It reduces the inference costs of deploying transformers while preserving much or all of their accuracy on the standard range of academic downstream tasks. Moreover, it does so without the hefty costs that typically come with prune-and-retrain cycles. The paper is clearly written and well presented, and the technique seems to work quite well. The authors seemed to satisfactorily address all reviewer concerns, and those concerns were minor at best. What more can you ask for? I look forward to visiting the poster at NeurIPS and trying this technique myself. The authors are to be especially commended for focusing on real-world speedup on real hardware. That's (sadly) still a rarity in pruning papers. This is something that appears genuinely useful to practitioners, today.
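A minimal sketch of the Fisher-based importance scoring and constrained mask selection that the reviews above describe is given below; the diagonal-Fisher proxy (squared mask gradients) and the greedy importance-per-cost selection under a FLOPs budget are simplifying assumptions, not the authors' exact mask-search or latency-aware algorithm.

```python
# Assumed illustration: score each prunable unit (head/filter) by a diagonal
# Fisher proxy (mean squared gradient of its mask variable), then greedily keep
# the highest-scoring units under a cost budget (a knapsack-style relaxation).
import numpy as np

def select_mask(scores, costs, budget):
    """scores, costs: arrays over prunable units; budget: total allowed cost."""
    order = np.argsort(scores / costs)[::-1]        # best importance-per-cost first
    keep, used = np.zeros_like(scores, dtype=bool), 0.0
    for i in order:
        if used + costs[i] <= budget:
            keep[i] = True
            used += costs[i]
    return keep

rng = np.random.default_rng(0)
fisher_scores = rng.random(12) ** 2                 # e.g. 12 attention heads
head_flops = np.full(12, 1.0)                       # equal cost per head in this toy case
mask = select_mask(fisher_scores, head_flops, budget=6.0)   # keep roughly half the heads
print(mask.astype(int))
```

For the latency-constrained setting the reviews mention, the per-unit cost would be replaced by an (approximated, e.g. piecewise-linear) latency contribution rather than FLOPs.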
This paper addresses the problem of doing unsupervised learning of visual representations, including clustering these representations to infer the existence of new categories. The paper focuses on more "un-curated" settings in which the data comes from samples of an environment that a real agent is passing through. This gives object distributions that are very different from typical, curated data sets, like ImageNet without labels. The authors compare their method with other popular self-supervised learning techniques like SimCLR. First of all, I am very enthusiastic about the general topic that is being addressed: unsupervised (or self-supervised) learning in more realistic environments--specifically those in which we may see a large number of examples of a small set of objects, and perhaps a few examples of some other classes. Thus, the topic is highly relevant and important at this time. Despite trying my best to understand the specifics of the experimental settings, I found it very difficult to tell exactly what was going on. I found several aspects of the paper difficult to follow. One major issue was trying to understand the exact setting of the RoamingRooms data set task. Referring to the experimental results from figure 4, here are a few comments: - Only when I finally looked at the title of the figure in figure 4 was I able to discern some of the parameters of the experiments on the RoamingRoom data set such as the 5000 images that retrieval was done across. - I could not ascertain whether the bounding box of the object was given at test time, or only at training time. If it is provided at test time, this makes the task much much easier. Either way, it is a critical detail, and should be described clearly and unambiguously. - How is the query image related to the “5000 images” which I assume you are searching? How close can they be in time? Could it actually be at the exact same time? If it is just a frame or two away from the current frame, we would expect the similarity of the same object across these close views in time to be very very similar. If this is the case, then I would expect to see other types of baselines that leverage this kind of information, such as simple nearest neighbor methods. In other words, if I give a query, just return the top-9 nearest neighbors using some pre-trained representation, centered on the provided bounding box. If such an approach is not reasonable, it is hard to tell from the current text. If it is reasonable, then it would be useful to compare against it to get a sense of what exactly is being learned. - I couldn't even tell if SimCLR was trained on the same data, or trained on some other data. If it is trained on the current data, could you give details about how this training is done? It is possible that if I had read all of the referenced papers I would understand these details, but a reviewer should not have to wade back through papers to understand these details. - "we use our online clustering procedure to readout from these learned representations". Why not just do nearest neighbor retrieval using the learned representations, and see how that works? SimCLR and other such “generic” representation-learning approaches are solving a significantly different problem: they are typically trying to learn representations which have a high degree of generalizability so that they can be the backbone for a system trained to recognize general categories, with very high variability. 
Here, since the goal is to learn to recognize the exact same object, it makes more sense to take a prototype style approach, since this is very similar to say, k-nearest neighbors. What I’m trying to say is perhaps that SimCLR and other such methods are not the right baselines. A more reasonable baseline is one that just looks at distances to the training sets. The authors may feel that I have completely misunderstood the paper. That may be true, but it is not for lack of trying. I have worked with many of the cited self-supervised methods, and have published in self-supervised learning as well. I found a general lack of detail about the exact settings made it very hard to evaluate what was going on. Other issues: 1) “we make two changes on the model inference procedure defined above.” Why did the authors describe the changes to the model in the Experiments section? Isn’t the right place to discuss this in Section 3, the section where the model and methods are described? This makes it sound like a last second change….Or, perhaps like it was a difficult-to-justify method that would be difficult to explain. What is the motivation for putting it in experiments? 2) adjusted MI. While I follow the argument for using a max_alpha in the adjusted mutual information, I’m not sure I agree with it. For one, this is not something one can do in practice; therefore it gives an over-optimistic view of the output. Second, it is not clear if this procedure favors the current algorithm. By knowing that you are going to optimize over the number of clusters, one can adjust one’s algorithm to take advantage of this. I do not claim to understand whether this procedure favors your algorithm, but in general, such a procedure is not “neutral” to two different algorithms. 3) The function g() is introduced with no definition. 4) How is K the number of classes chosen? Despite being enthusiastic about this general area, I found it too difficult to follow the experiments. Not enough detail was given about: - data sets used and how they were sampled to produce "training" and test data - exactly what the experimental setting was. What problem, exactly, are these systems trying to solve? I think, but am not sure, that given many instances of objects with unique ids at "training time", they are simply trying to retrieve objects with the same ID as a query image at test time. However, not enough details are given about the details of this problem. - Given the nature of the problem, it appears as though a variety of nearest-neighbor style baselines would be appropriate. In my assessment about "confidence", I have said that I am fairly confident of my review. Ironically, I didn't understand the paper well, but this is the problem with the paper. It should be easier to understand what was done. Thus, I am confident in my assessment that the paper should have been more understandable. In particular, I am not someone coming from far outside the area; thus, the paper should have laid out the details more clearly so that someone with my background would understand it. <doc-sep>This work studies an online version of self-supervised representation learning. It uses Gaussian mixture of constant isotropic variance as a prototype memory as well as a uniform distribution to handle new unknown classes. The overall distribution evolves using an online EM algorithm that supports adding and removing prototypes. 
Using this memory, representation learning follows standard unsupervised losses, primarily a distillation loss over predicted prototype assignment probabilities. Why is the standard deviation $\\sigma$ constant and not learned per component (cluster) of the mixture? Since $\\sigma$ is a scalar (isotropic), this wouldn't add much complexity to the model and it would help in deciding when to add or remove prototypes, potentially getting rid of (or learning) the hyperparameters $u_0$ and $\\mu$. The uniform prior distribution for a new cluster is an improper distribution over a continuous space. In fact, the overall distribution is a Gaussian-uniform mixture (GUM) (Lathuilière et al. 2018, "DeepGUM: Learning Deep Robust Regression with a Gaussian-Uniform Mixture Model"). In the original GUM model, the prior probability of an inlier is learned like all mixing coefficients (weights), whereas here $z_0$ is a fixed hyperparameter and $\\alpha$ is yet another unnecessary threshold hyperparameter (the new-cluster probability could simply be compared to the existing cluster probabilities). The RHS of (2) is incorrect: softmax maps a vector to a vector, while here only a scalar is shown (the numerator of the LHS). One should map the whole vector through the softmax, then take the $k$-th element. Eq. (5) is unclear unless one reads the Appendices. Overall, section 3.1 is written as a summary of sections A.1 and A.2, which is hard to follow. For example, $y_{t,k}$ and $u_{t,k}$ in (9,10) are not defined. One should provide a clear motivation for the model and all choices (which is missing), definitions of all quantities (some of which are missing), and clear pointers to the Appendices for all missing steps in the derivation. The formulation is overall very similar to (Ren et al., 2021), so the derivation is not new (e.g. compare (3) of Ren et al. with (5) of this work). The connection is missing, both in section 3.1 and in A.1, A.2. What does inverse variance have to do with the count of examples per cluster? Why estimate the variance, since the variance is assumed constant? This remains unclear even after reading section A.2. Shouldn't the symbol $\\sigma_{t,d}$ (e.g. in (7,8)) be $\\sigma_{t,k}$? What is really missing is citing and discussing Bottou and Bengio 1995, "Convergence Properties of the K-Means Algorithms," which defines an online $k$-means algorithm by means of gradient descent on an error function. One would expect a generalization to a Gaussian mixture basically by preserving the same formulation and adding mixing coefficients and variances per component (cluster), but this is not done. Bottou and Bengio have a similar definition of the count of examples per cluster (Eq. (7)), which, however, has nothing to do with variance. Why use decay (hyperparameter $\\rho$)? There is the argument of nonstationary environments, but how about forgetting? The entire problem of incremental and online learning is to learn without forgetting. Here, not only is nothing done to avoid forgetting; decay is actually explicit forgetting. Is this justified by the unsupervised setting, or is there another reason? "Popping out" the prototype with the least weight is too simplistic, because it does not consider interaction or redundancy among clusters. A sparsely populated, isolated cluster may be more important than a highly populated cluster with many neighboring/overlapping clusters.
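For reference, the online $k$-means update of Bottou and Bengio mentioned above can be sketched in a few lines: each prototype keeps a count of assigned examples and moves toward a new example with step size 1/count, and this count is the quantity that has nothing to do with variance; multiplying the counts by a factor rho < 1 at each step gives the decayed, explicitly forgetting variant criticized here. This is a generic sketch of that classic update, not the paper's rule.

```python
# Sketch of the classic online k-means update (count-based step size), plus an
# optional decay factor rho that turns the counts into explicit forgetting.
import numpy as np

def online_kmeans_step(prototypes, counts, x, rho=1.0):
    counts *= rho                                              # rho < 1.0 = decay/forgetting
    k = int(np.argmin(np.linalg.norm(prototypes - x, axis=1))) # nearest prototype
    counts[k] += 1.0
    prototypes[k] += (x - prototypes[k]) / counts[k]           # step size 1/count
    return k

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 2))
counts = np.zeros(3)
for x in rng.normal(size=(100, 2)):
    online_kmeans_step(prototypes, counts, x, rho=0.99)
print(counts)
```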
Avrithis and Kalantidis 2012, "Approximate Gaussian Mixtures for Large Scale Vocabularies" develop an offline but dynamic (in terms of components) EM algorithm that removes mixture components based on pairwise interaction. Tao et al. 2020, "Few-Shot Class-Incremental Learning" also uses a graph over prototypes and has a similar online update of prototypes (Eq. (3)). In Eq. (13), it is not clear if target labels are soft, as is standard in distillation. They should be, since there is a temperature parameter, but (13) should have a sum over $k$ that is missing. Similarly, (14) should have a sum over $k$ if this expression is meant to be an entropy. Eq. (15) is too abstract and unclear. In computing AMI, is the number of classes known? What is $T$? Sweeping over hyperparameter $\\alpha$ is not realistic, especially if $\\alpha$ is unnecessary (see above). "Since none of [SimCLR, SwaV] are designed to output classes with a few examples, we use our online clustering procedure to readout from these learned representations:" This is unclear. As a result, the protocol of comparisons to offline methods it is unclear. A formal description is in order. The entire argument against methods assuming iid batch sampling is reliance on iid for contrastive learning (SimCLR) or prototypes (SwaV). How about self-supervised representation learning without prototypes (e.g. BYOL: Grill et al., 2020) and without negatives (e.g. SimSiam: Chen & He, 2021)? Whouldn't such simple alternatives be much easier to adapt to online? Wouldn't those online versions be more suitable as baselines to compare with? The following works should be discussed. Although I believe they are not directly comparable, one may still consider additional baselines, e.g. self-supervised offline pretraining followed by online learning (supervised or not): 1. Zhu et al. 2021, "Prototype Augmentation and Self-Supervision for Incremental Learning" 2. Gallardo et al. 2021, "Self-Supervised Training Enhances Online Continual Learning" 3. Zhang et al. 2021, "Self-Supervised Learning Aided Class-Incremental Lifelong Learning" 4. Cha et al. 2021, "Co$^2$L: Contrastive Continual Learning" The list of hyperparameters (e.g. Tables 4-6) is daunting. One should at least separate the method-specific hyperparameters (e.g. $K, \\rho, \\alpha$) from standard hyperparameters (like backbone and learning rate). From Tables 8-14, there are at least 7 hyperparameters in the ablation. What are the default values of remaining hyperparameters when studying each one? Is this search optimal, what is the cost, and to what extent could same values be used across datasets? Merging Tables 4-6 into one would help compare. The networks used are too small, more inline with few-shot learning and unlike most work on either self=supervised representation learning or continual learning. Online self-supervised learning is a very interesting and realistic setting. The online EM algorithm that maintains a set of prototypes makes a lot of sense. There are a number of papers that are very related, most notably Bottou and Bengio 1995 (online $k$-means) and Lathuilière et al. 2018 (Gaussian-uniform mixture). Not only one would expect them to be cited and discussed, but rather the method formulation should be based on the formulation of this prior work. This would improve positioning of this work, make the formulation more elegant/better motivated and dispense the need for certain hyperparameters. The related work section needs improvement in general. 
Writing is good in general, but the formulation is unclear at several points, relying on the Appendices. Pointers to prior work are also missing in the formulation, primarily (Ren et al., 2021). The protocol of comparisons to non-online methods is unclear. The choice of competitors is not well justified: there are simpler methods that are easier to adapt to the online setting. There may be more baselines to consider, e.g. based on pretraining. There are too many hyperparameters. The networks used are too small. <doc-sep>This paper claims that in the real world the data distribution is nonstationary, which is different from the current standard machine learning formulation. To solve this problem, the authors design an online unsupervised learning algorithm with a Gaussian mixture model and an EM algorithm. The experiments show that the method can learn from an online stream of visual input data. Part of the success of deep learning comes from well-collected datasets, e.g., ImageNet10K. To study some area, one should first collect enough high-quality data. However, this is different from the cognitive process of human beings. This paper questions the standard machine learning formulation of drawing i.i.d. samples and claims that the real-world scenario is nonstationary. This is very interesting and encouraging. However, my main concern is: should we train our models in an online streaming scenario just because the real world is nonstationary? The traditional learning formulation assumes that all samples are drawn in an i.i.d. way, but this does not hurt its eventual performance; it works quite well. I understand that the authors try to mimic the learning process of humankind, much like lifelong learning, but the authors are encouraged to focus on the necessity of this setting. Besides, the language and the equations are all above the threshold for acceptance. In this paper, the authors assume a new scenario with a nonstationary distribution. To solve the unsupervised and online setting, they design novel online unsupervised prototypical networks (OUPN). By introducing a Gaussian mixture model and EM, the method can add or drop clusters in an online way. The idea is interesting and encouraging, but the necessity is not well argued. A discussion of the difference between this method and lifelong/incremental learning is encouraged. Also, the hyperparameter for the maximum number of clusters K deserves much more attention. <doc-sep>The authors propose a method for unsupervised learning of instance-based clustering which is more robust to data imbalance and non-iid distributions than existing approaches, such as SwAV. In particular, they propose an Expectation Maximization algorithm that operates in temporal episodes (supposed to correspond to an agent moving through an environment). At every step an observation (an image with an overlaid instance segmentation mask) is first encoded with a CNN, and the resulting vector is assigned to one of the existing clusters based on the distance in the feature space, or a new cluster is created (E-step). This is formulated in a probabilistic framework. The cluster prototypes are then updated accordingly (M-step). The loss, computed after a sequence of such steps, consists of a standard contrastive loss (encouraging the assignment of similar images to the same clusters) + entropy (encouraging confident assignments) + a term encouraging the creation of new clusters.
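As I read the procedure (the sketch below is my own reconstruction from the text, with simplified hard assignments and invented names, so details will differ from the actual implementation), the per-episode loop is roughly:

```python
import numpy as np

def run_episode(frames, encoder, threshold):
    # Rough reconstruction of the online E/M loop described above; not the authors' code.
    prototypes, counts = [], []
    for x in frames:
        z = encoder(x)                                        # CNN embedding of the observation
        sims = [float(z @ p) for p in prototypes]             # E-step: score existing clusters
        if not sims or max(sims) < threshold:
            prototypes.append(z.copy())                       # open a new cluster
            counts.append(1.0)
        else:
            k = int(np.argmax(sims))                          # assign to the best cluster
            counts[k] += 1.0
            prototypes[k] += (z - prototypes[k]) / counts[k]  # M-step: running-mean prototype update
    return prototypes  # the episode loss (contrastive + entropy + new-cluster terms) is computed afterwards
```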
The method is mainly evaluated on RoamingRooms - a dataset with an agent moving through indoor environments, with instance labels assigned to objects. The task is then to cluster different viewpoints of the same object together. The main challenge is that the distribution of examples is non-iid, since in every episode mostly the same objects are seen. Additional evaluation on online variants of Omniglot and ImageNet is also provided. When compared to offline and online contrastive learning algorithms, the proposed approach demonstrates stronger performance, especially with low batch sizes and a non-iid distribution of examples. The proposed approach is novel to the best of my knowledge. It also seems sound. The experimental evaluation on the task of unsupervised, instance-based clustering in the online, non-iid setting clearly demonstrates the superiority of the proposed method compared to the baselines. However, I have two main concerns with this paper. Firstly, although it is well written, the presentation is quite dense and many details have to be inferred from the context. I understand that it's hard to pack so much content into 9 pages, but I would argue that the authors could make significantly more effort to better explain the results. The content could be condensed by putting some equations inline in the text, for example, and the whole Omniglot evaluation can be moved to the supplementary material, since it's not that important for the story. In fact, the main issue with the presentation is that no ablation analysis is provided. There are several tables in the supplementary material which compare the values of the many hyperparameters in the method, but the accompanying text literally just takes 9 lines. This is not sufficient to understand what actually makes this relatively complex method work better than the baselines. After all, the goal of a research paper is not to report strong numbers on a benchmark, but to provide some knowledge about the problem being studied. The ablation analysis needs to be moved to the main text and the discussion needs to be expanded to 1-2 pages with clear conclusions about the importance of various components of the approach. Secondly, and more importantly, the claims in the introduction are not supported by the experimental results. The proposed approach learns a representation that clusters different views of the same instance in an online manner (character classes are also more akin to instances than semantic categories), and this is the task on which it is evaluated (aside from Table 3). But is this an actual task that many people care about? I would argue that it's not, and the authors agree with me, since all the claims are made in terms of semantic classification/categorization. But instance IDs are not categories. In fact, it's natural that a method trained with a contrastive learning objective will cluster different views of the same object together, but does it learn a semantic representation? Or is the learned representation at least useful for downstream semantic tasks? The only evidence for that provided in the paper is in Table 3, with experiments on an online version of ImageNet, but the explanation of these results is not sufficient and no qualitative examples of discovered clusters are provided.
To address these issues, the authors should either demonstrate that their method is capable of discovering categories, or show that the learned representation is superior to the baselines on downstream semantic tasks (supervised image classification, object detection, semantic segmentation) or remove any claims on learning category information, and focus on the instance classification story, providing some justification for its importance. Finally, it would be interesting to see which batch size is sufficient for the baselines to reach the performance of the proposed approach both in iid and non-iid setting, extrapolating the curves in Figures 3 and 6. The proposed approach is original and clearly shows strong results for the task of unsupervised, online, non-iid instance classification. However, there are major issues with the presentation, and the unsupervised categorization claims made in the introduction are not supported by the experimental results. If the authors can address both concerns in the rebuttal, I'd be happy to recommend an acceptance. | This paper tackles a small-batch online unsupervised learning problem, specifically proposing an online unsupervised prototypical network architecture that leverages an online mixture-based clustering algorithm and corresponding EM algorithm. Special features are added to deal specifically with the non-stationary distributions that are induced. Results are shown on more realistic streams of data, namely from the RoamingRooms dataset, and compared to existing self-supervised learning algorithms including ones based on clustering principles e.g. SWaV. Overall, the reviewers were positive about the problem setting and method, but had some concerns about hyper-parameters (hYzM, cvrN, LjvY) and motivation for the specific setting where the method excels compared to other methods not designed for such a setting (hYzM, cvrN), i.e. small-batch setting, where it is not clear where the line should be drawn in terms of batch size and memory requirements with respect to performance differences between the proposed approach and existing self-supervised methods. Importantly, all reviewers had significant confusions about all aspects of the work ranging from low-level details of the proposed method to the empirical setting and evaluation (including for competing methods). After a long discussion, the authors provided a large amount of details about their work, which the reviewers and AC highly appreciate. However, in the end incorporating all of the feedback requires a major revision of the entire paper. Even the reviewers that were more on the positive side (cvrN and LjvY) mentioned it would be extremely beneficial for this paper to be significantly revised and go through another review. Since so many aspects were confusing, it is not clear to the AC that the underlying method, technical contributions, and other aspects of the works had a sufficient chance to be evaluated fairly, given that much of the review period was spent on clearing up such confusion. In summary, while the paper is definitely promising and tackles an important area for the community, it requires a major revision and should go through the review process when it is more clearly presented. As a result, I recommend rejection at this point, since it is not ready for publication in its current form. |
The authors suggest that conditioning on and generating explanations of a task or problem yields only minimal gains over raw in-context learning, but that the validity of such explanations may be correlated with the accuracy of the answers the model produces. They use this hypothesis to motivate approximating reliability automatically as a method of calibrating in-context learning. Three datasets are used. One is an automatically generated synthetic dataset that is engineered so that automatically testing the reliability of explanations is possible, at least in theory. One is an adversarial subset of a previous QA dataset (HotPotQA) that is balanced so that GPT-3's default performance is split 50/50 between correct and incorrect answers. The third is a Natural Language Inference (NLI) dataset with human-annotated explanations. The authors consider two methods for using explanations: explain-then-predict and predict-then-explain. In the former, the model first provides reasoning and then makes a prediction, with past work making the claim that this explanation helps the model predict more accurately. In the latter, the model predicts then explains, so that its explanation does not directly impact the prediction, but the explanations in the few-shot demonstrations still impact the final prediction. Varying numbers of few-shot demonstrations are used, as many as can fit in GPT-3's context window. The results show only mild gains from using explanations on these benchmarks (which were specifically intended to probe the use of these explanations and thus may not be representative of the average few-shot learning case). The comparison to previous work is narrativized on lines 123-129, but evidence is not given for whether this narrative holds, which makes it difficult to assess. Factuality and consistency are inspected. For the synthetic dataset, rules are used to judge consistency, but this rule system is not described, even in the appendices. Again, while it is possible these rules are sound, it seems there are many potential pitfalls, and not describing these rules makes their validity impossible to assess. For the other datasets, the authors manually annotate 100 of the same examples to report an impressively high annotator agreement. The results suggest that non-factual explanations indicate incorrect prediction, as a rule of thumb. Reporting of results is somewhat inconsistent: on E-SNLI, E-P is not reported and factuality is not judged, though the authors give some reasoning for both of these. Not reporting the E-P strategy because results are so poor still seems strange. Next the authors attempt to calibrate the model, using a handful of extra data and human judgements. This is especially interesting, as the authors are more-or-less using the full context window available to GPT-3, so these examples could not have been used as in-context demonstrations. A method for calibrating factuality on each dataset is proposed. While sensible, these methods are dataset-specific and somewhat ad hoc, with no evidence given that they actually correlate with the factuality annotations reported in Table 2. Despite suggesting that E-SNLI can't be assessed as factual/non-factual, the authors attempt to calibrate E-SNLI using factuality, without discussion of how this works other than using "an analogous score following the same principle" where they consider the premise of the NLI inference as the context. If this is how calibration happens, why couldn't this have been done when annotating E-SNLI for factuality?
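For what it is worth, my reading of "an analogous score following the same principle" is roughly the following recipe: derive a scalar factuality proxy from the overlap between the generated explanation and the context (here the NLI premise), then fit a small calibrator on the handful of held-out examples. This is a guess at the procedure, not the authors' code, and all variable names below are hypothetical placeholders.

```python
from sklearn.linear_model import LogisticRegression

def factuality_proxy(explanation, context):
    # Crude lexical-overlap score: fraction of explanation tokens that appear in the context.
    expl, ctx = set(explanation.lower().split()), set(context.lower().split())
    return len(expl & ctx) / max(len(expl), 1)

# expls, contexts, model_probs, correct_labels stand for hypothetical held-out annotations.
features = [[factuality_proxy(e, c), p] for e, c, p in zip(expls, contexts, model_probs)]
calibrator = LogisticRegression().fit(features, correct_labels)
calibrated_confidence = calibrator.predict_proba(features)[:, 1]
```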
All the calibration methods are shown to improve the accuracy of the results. Related work is reviewed. A clear discussion of the potential risks of explanations given by models, especially trusting such explanations too much, is presented. Strengths - The research questions are well-motivated: despite impressive performance on selected datasets it is unclear how much or why explanations help models. - Evidence that factuality is correlated with consistency and non-factuality with incorrect prediction is an interesting result. - Exploiting extra data that can't be put into the context window of a model for calibration is potentially a useful technique for many tasks. - The calibration shows seemingly meaningful performance improvements (though on highly specific and sometimes custom datasets). Weaknesses - ~~Only one version of GPT-3 is considered—GPT-3 Instruct, which is finetuned on human feedback, making the results somewhat less comparable to other work.~~ Edit: Authors have added significant new results. - Result reporting is selective—on E-SNLI factuality is not reported, the E-P method is not assessed, and Table 3 is missing many values. Edit: Authors have provided reasoning as to why some numbers were not included, though in some cases (e.g. the expense of calculating those numbers) this is still a weakness, albeit a less intense one. - While the bespoke datasets for looking at this hypothesis are useful tools, it is unclear what these results mean for tasks where few-shot learning is commonly used. Edit: the authors have revised their framing in a way that partially alleviates this, focusing on textual reasoning. However, the available datasets still comprise a tiny fraction of textual reasoning datasets. - ~~The comparison to previous work is described, but evidence is not directly given. (Lines 123-129)~~ Edit: the authors have specified a new framing that makes this comparison significantly more valid. - ~~The automatic rules for evaluating factuality and consistency are not described, even in the appendices.~~ Edit: Authors have added a description of their synthetic task. - ~~Despite suggesting that E-SNLI can't be assessed as factual/non-factual the authors attempt to calibrate E-SNLI using factuality, without discussion of how this works other than using "an analogous score following the same principle".~~ Edit: Authors have described why they can heuristically calibrate factuality without being able to assess it. - ~~The calibration methods are dataset-specific and somewhat ad hoc, with no evidence given that they actually correlate with the factuality annotations reported in Table 2.~~ Edit: Authors have provided reasoning why their calibration methods should correlate with factuality annotations. ~~Authors have adequately discussed the limitations of their work.~~ Edit: Given the new framing, it will be very important for the authors to revise the paper to make it clear how the available datasets limit the claims that can be drawn from this work, as the types of textual reasoning studied are rather limited. <doc-sep>This paper aims to investigate different aspects of explanation-enhanced few-shot in-context learning, especially under textual reasoning scenarios with the tasks of question answering and natural language inference.
The paper mainly discusses: 1) whether the explanation fed into the prompt in the P-E or E-P format can help the model improve its accuracy on downstream tasks, 2) whether the explanation is consistent with its prediction, 3) whether the explanation itself is factual enough to be trusted, 4) whether we can use the explanation to help the model calibrate its confidence in prediction. The paper performs experiments on three datasets, one synthetic dataset plus two existing annotated datasets. Throughout the paper, the authors claim that: 1) the explanation can mildly help the accuracy, 2) the explanation is mostly consistent with the prediction, 3) explanations are quite unfaithful, causing significant hallucination, 4) using the explanation to calibrate the model could further help the model. Strength: 1. the paper builds on top of the recent breakthrough of explanation-based in-context few-shot learning (chain-of-thought) to investigate its reliability issues. 2. the paper investigates a highly important problem in large language models, which could contribute a lot to the community and influence follow-up research. 3. the paper dives deeper into the issue of explanations and proposes explanation-based calibration methods. Weakness: (See the Limitation Section) Without sufficient experiments (synthetic + human datasets across different tasks), it's hard to draw a more convincing conclusion. On the other hand, re-running the experiments on Davinci-002 would help tease out the confounding factor of prompt/model. The limitation of this paper is mainly its deficiency of experiments to support its claims. First of all, the paper tests only two tasks, namely question answering and natural language inference. Furthermore, for each of these two tasks, the authors pick only one human-annotated dataset. I think this might not be sufficient to draw a more general conclusion about LLMs/GPT-3. I would hope to see more tasks; even more synthesized tasks would help in better understanding LLMs' explanations. The other limitation is that the paper doesn't provide enough detail about its prompt engineering. According to "Large Language Models are Zero-Shot Reasoners", picking the right prompt can totally change the landscape. I'm afraid that the lack of comprehensive prompt engineering will make the claims weaker. For the experimental result in Table 1 for E-P on E-SNLI, I'm not sure whether better prompt engineering or Davinci-002 would make the number totally different. <doc-sep>This paper explored the capabilities of GPT-3 in using explanations in in-context learning. Although using in-context explanations has been demonstrated to be quite helpful for symbolic reasoning tasks, this paper finds that this is not the case for textual reasoning tasks such as NLI. Upon investigation, they find that simply including an explanation as an in-context example does not always yield big improvements. Their analysis shows that this is because the explanations generated by GPT-3 can be nonfactual. Given those observations, they showcase how to use in-context explanations to calibrate GPT-3's predictions. The proposed method achieves significant improvements on three datasets. The observation in this paper is interesting and could be a good complement to current in-context explanation research. The proposed explanation-based calibration method is simple yet effective. The paper is clear and easy to follow. However, I also have the following concerns.
The claim that incorporating explanation as in-context examples only marginally improves performance is a strong claim that contradicts prior works. To back up the claim, I suggest the author include a few more datasets in table 1. Besides, for experiments in S2, only 250 testing examples are considered. Can authors justify their choice, as compared to using the official test set? Since on such a small test set, the conclusion is not that convincing. Besides, for results in Table 1, I do not agree that such improvements (10% relative improvements) are "mild". They are just not that surprising compared to some other tasks. The authors did not address the limitations and potential negative societal impact of their work. | The authors perform an analysis that suggests that explanations may not provide reliable signal in few-shot in-context learning, showing that adding explanations yields only minimal gains over raw in-context learning. They then develop an approach to approximate the reliability of predictions automatically using these explanations. In the initial reviews, the reviewers pointed out issues with the empirical rigor of the study and its framing. However, they seem to have addressed these concerns by narrowing the scope of their contributions and providing additional experiments supporting their claims. |
**Summary** This work proposes CVaDE, an extension of the variational deep clustering model VaDE that additionally incorporates prior clustering preferences as supervision. These priors guide the underlying clustering process towards a user-desirable partitioning of the input data. The priors are provided in the form of pairwise constraints indicating which pairs of samples belong to the same or different classes. The clustering process is modelled using variational Bayes, in which the clustering constraints are incorporated into the prior probabilities with varying degrees of uncertainty. The empirical results show that, in comparison to unconstrained clustering, a small number of pairwise constraints significantly improves clustering performance. Further, the paper demonstrates CVaDE's robustness to noise, its generation capability, as well as the successful incorporation of different desirable preferences to drive clustering towards completely different partitionings. **Quality** The paper is well written, albeit with numerous typographical errors (some of which are listed at the end of this review). The experimental evaluation seems thorough. However, I would like to see comparisons on more complex datasets as well as ones with many classes (> 10). Complex datasets include STL-10, YouTube Faces, mini-ImageNet, etc. Please show efficacy on diverse sets of data covering large variation in the number of classes, dimensionality, and attributes. Moreover, clustering being unsupervised (here semi-supervised), one should not (rather, cannot) employ different hyper-parameters for different datasets. In a setting with zero ground-truth data availability, they should rather be fixed. Table 7 says otherwise. **Originality** As mentioned above, CVaDE is extended from VaDE but with prior input constraints. The conditional ELBO objective is thus a simple extension of the VaDE objective. Apart from this, the prior distribution used for the pairwise constraints is adapted from the work of Lu & Leen (2004). In summary, the work carries very little novelty. **Significance** Constrained clustering has been around for some time in various forms. However, the subtle difference CVaDE brings to the table is how the constraints are incorporated into the prior probabilities. Like VaDE, CVaDE is also a clustering-cum-generative model. Once trained, the model can be employed for sampling new data. Due to the better training procedure using constraints, the generated samples are bound to be perceptually better. However, the samples are not better than those of state-of-the-art conditional generative models such as InfoGANs. **Clarity** 1. In Eq (2), shouldn't it be $\mu_{z_i}$ instead of $\mu_{x_i}$? Is the function $f(z_i; \theta)$ not deterministic? My understanding is that given a fixed $\mu_{z_i}$ one can sample arbitrarily many $x_i$. The same goes for $\sigma_{x_i}$. 2. Figure 5: axis labels are missing. 3. Under experiments, please make clear what we are solving for - $z$ and $c$? Have you tried k-means on the extracted $z$ post training? 4. What is the penalty weight? I did not find any description. 5. Why can C-IDEC not model different noise levels within the same dataset? 6. Where is the derivation of the conditional ELBO formulation? In the appendix I only find the solution to the C-ELBO, not how to derive Eq (5). 7. What is the impact of an imbalanced dataset on CVaDE? I presume this imbalance is not known to the user a priori. 8. Eq (19): is $\mathbb{E}$ different from $E$? 9. Eq (19), Eq (20): the summation is pulled out; there is also a typo in the $W_{ij}$ component. 10.
In Eq (21), some of the terms are approximated by Monte Carlo sampling while others are still written as expectations. 11. In Eq (18), if the 3rd term is marginalised w.r.t. $q(Z|X)$, then it is technically wrong to apply a Monte Carlo sample to the central line in Eq (21). Remember that the $\frac{1}{L}$ average approximates expectations under $q(z_i|x_i)$, which is applicable to the 1st, 2nd and 4th terms, not to all of them. 12. In Eq (12), $\delta_{c_i c_j}$ is missing. <doc-sep>Summary. This paper extends the variational deep embedding (VaDE) model (a VAE-based clustering method) to integrate pairwise constraints between objects, i.e., must-link and cannot-link. The constraints are integrated a priori as a condition. That is, the prior over the cluster labels is conditioned on the constraints. The whole model, referred to as Constrained VaDE (CVaDE), takes the form of a conditional VAE tailored for constrained clustering. Experiments are carried out on various real-world datasets, and the proposed method is compared to VaDE as well as to recent and classical constrained clustering methods. Strengths. 1. The different ideas used in this paper, such as adopting a mixture of Gaussians as a prior over the VAE latent space for clustering, or the specification of the conditional prior over the cluster labels to integrate pairwise constraints, are not new by themselves. However, combining them together within a VAE framework is interesting and has not been investigated before to my knowledge. 2. The paper is well written, and the proposed method is clearly motivated and described. 3. Experiments are conducted on various data types. Weaknesses 1. The authors claim superior performance compared to recent constrained deep clustering models. However, looking at the results of Table 1, the proposed CVaDE and the C-IDEC baseline are tight, and the differences in performance do not appear to be statistically significant in most cases. 2. It does not seem like a lot of effort has been spent on hyperparameter setting. a. For instance, the same encoder-decoder architecture is used for all datasets, including image and text ones, even though the latter exhibit very different characteristics. In particular, the retained 4-layer architecture (500-500-2000-10) is too complex (very prone to overfitting) for a text dataset such as REUTERS, which is extremely sparse, i.e., with very few nonzero entries. b. There are important differences in the optimization hyperparameters (e.g., batch size) used to train CVaDE and its building block VaDE. It would be useful to report the performance of VaDE when trained using the same settings as CVaDE. 3. Despite being a work on constrained clustering, no results regarding the number of satisfied constraints are reported. 4. The authors claim efficiency, but complexity analysis and training time comparisons are missing. 5. Comparisons with baselines when the number of constraints varies are not reported. 6. For the noisy labels experiment, integrating the noise level "q" in the specification of the pairwise confidence level is not fair. In practice, we may not always have access to such information in an unsupervised learning context. Additional comments and questions. 1. For tractability purposes, the cluster proportions are all set to be equal (1/K). It would be useful to investigate the impact of this assumption on datasets exhibiting very unbalanced cluster sizes. One possibility is to preprocess some of the considered datasets to create such a case. 2. Performance is assessed using Normalized Mutual Information (NMI) and Accuracy.
I would suggest reporting the Adjusted Rand Index (ARI) as well. The latter metric is particularly suitable in a context of constrained clustering, as it measures the proportion of pairs of objects clustered similarly according to both the predicted and the ground truth partitions. 3. Have you considered conditioning the variational posterior on the constraint information G? 4. Are you using a held-out test set for evaluation? <doc-sep>This paper solves the constrained clustering from a probabilistic perspective in a deep learning framework. In general, this paper suffers from several major problems. I will illustrate my concerns point-by-point. 1. The authors mention that none of the existing work in the deep (constrained) clustering models the data generative process. First, this is not true. For example, Semi-crowdsourced Clustering with Deep Generative Models. Second, the authors should illustrate the benefits of data generative process for constrained clustering. That is the motivation of this paper. Unfortunately, the motivation is not strong and clear. 2. If I understand correctly, Eq. (9) and (10) are the core techniques for the proposed algorithm. Such a penalty is straightforward in constrained clustering. 3. In Section 3.4.2, there is another side information, named partition level constraint. The authors might want to explore this as well. This point is not a drawback. Just a suggestion. 4. Some traditional constrained clustering methods with deep VaDE features can be involved for comparisons. 5. It is better to provide some insights on robustness with noisy side information. 6. How to set alpha? Is there some normalization to make alpha within a small range? 7. It is better to show the performance with different numbers of constraints. 8. I am thinking whether two applications in the experimental section are practical in real-world scenarios. I mean how to obtain the pairwise constraints? If I were the project manager in charge of annotation, I will directly label their categories, rather than providing the pairwise constraints. | We thank the authors for their detailed responses to reviewers, and for engaging in a constructive discussions. As explained by the reviewers, the paper is clearly written and the method is novel. However, the novelty is to combine existing ideas and techniques to define an objective function that allows to incorporate cluster assignment constraints, which was considered incremental. Regarding quality, the discussion highlighted some possible improvements that the authors propose to do in a future version of the paper, and we encourage them to follow that direction. Regarding significance, although the experimental results are promising there were some concerns that the improvement over existing techniques is marginal, and that more experiments leading to a clearer message would be useful. In summary, this is not a bad paper, but it is below the standards of ICLR in its current form. |
This paper presents a theoretical framework that studies the Shapley value in the context of Markov games as a useful technique for value factorization and credit assignment in agent coalitions. Leveraging this framework, the authors propose Shapley Q-Learning (SHAQ), derived from a novel definition of a Shapley-Bellman operator. The proposed algorithm is compared with a suite of existing algorithms (COMA, VDN, QMIX) on predator-prey and the StarCraft Multi-Agent Challenge, showing competitive results along with interesting interpretability properties. **Strengths** 1. The paper is well-written and properly motivated. The work is well-placed among the existing and vast literature on Multiagent Reinforcement Learning (MARL). 2. The combination of Shapley's theory with Q-Learning seems to be a novel contribution in the interesting and always challenging setting of MARL. **Weaknesses** 1. The experimental section would benefit from a discussion of the interpretability of SHAQ in the predator-prey setting, which seems to be missing in the current manuscript. The authors addressed the limitations of the work, including the assumptions and restrictions imposed in the scenarios considered. <doc-sep>The paper presents a new framework and corresponding algorithm to solve value factorization in global reward games. Specifically, it derives the Shapley-Bellman optimality equation from evaluating the optimal Markov Shapley value and proposes the Shapley-Bellman operator to solve it, which is also proved in the paper. Furthermore, Shapley Q-learning is presented to implement the theoretical framework in predator-prey and SMAC environments. Contributions: The paper proposes a new theoretical cooperative game framework and a Shapley Q-learning algorithm for solving global reward games. Moreover, the authors give proofs for the theoretical framework and evaluate SHAQ on Predator-Prey and StarCraft tasks, which shows good performance and interpretability. Strength: 1. well written, easy to follow 2. novel cooperative game framework for global reward games, justified both theoretically and empirically 3. good literature review of relevant fields 4. proof details and code provided Weakness: 1. Figures 1, 2, 3 are too small to read easily 2. Improvements do not seem significant compared to SOTA methods The assumption in line 78 for Markov convex games looks too strong; is it possible to extend the same results to general cooperative games? <doc-sep>The paper considers multiagent reinforcement learning in a global (cooperative) reward game. It contrasts the results of value factorization frameworks, and proposes an alternative via the Shapley value from cooperative game theory. Basically, the authors consider a form of game with coalition structures, apply the Shapley value to decompose the reward, and derive a Shapley-Bellman optimality equation (SBOE) corresponding to the optimal joint deterministic policy. They propose a Shapley-Bellman operator (SBO) that solves for the SBOE. These finally give rise to a new multiagent reinforcement learning algorithm, called Shapley Q-learning, SHAQ for short, somewhat akin to existing value factorization methods. Empirically, on a few settings (predator-prey and StarCraft) SHAQ exhibits better performance than existing approaches, and also provides some interpretability foundation. The key strength of the paper is in applying cooperative game theory tools to multiagent Q-learning.
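For concreteness, the standard (textbook) Shapley value of agent $i$ in a characteristic-function game $v$ over agent set $N$ is

$$
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr),
$$

i.e., agent $i$'s marginal contribution averaged over all orderings of the agents; the question I raise below is essentially which of its properties the paper actually relies on.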
Recently the Shapley value has become a very popular tool in machine learning due to its ability to decompose the performance of a model to the relative influence of specific features. This has proven a very strong tool for analyzing supervised learning models. The authors propose now propose to use this theoretic foundation to multiagent reinforcement learning. The key weakness in my opion is not having a clear, crisp takeaway from this work. If the main claim is the superior performanc on multiagent reinforcement learning, then the empirical analysis seems to be somewhat lacking, as it covers relatively few domains (there are now enough multiagent gyms that allow a wider variability of tasks). If the main claim is a theoretical foundation, then one might expect a tighter analysis and bounds as compared to existing approaches. Either way, I think the writing is very formal, and could be imporved. What is the main driving intuition here? MARL is typically considered through a non-cooperative game theory prism (Markov Game). Here you are trying to use cooperative game theory, which means you consider subsets of agents, and have some function mapping each such subteam to its success in the task. Then, one might view the Shapley value as a decomposition allocating each single agent its individual reward / impact in the team's success. But why use the Shapley value rather than other solution concepts (such as the Core, which you mention, or the least-core), or the Nucleolous, or the Kernel, or other similar power indices such as the Banzhaf index? Are you using some of the axiomatic foundations to the Shapley value? If so, then where? All in all, I really love the topic of the paper, but the execution could be improved (more domains for empirical evaluation, tighter theoretic bounds versus baselines). And the writing should focus on the intuitions before jumping to the technical definitions As I wrote, the empirical analysis is somewhat limited (but certainly a decent foundation). Also the writing could be improved - at the very least I'd give the formal definitions of a transferable utility cooperative game, coalition structures, the core (as applied to a general CS or characteristic function game). The paper does a better job on the RL side (where things are fully defined). Also, you should have the discussion on what happens in non team reward (non fully cooperative) settings. All in all, a very interesting paper, if only for the nice connection between RL and cooperative game theory. | Reviewers appreciate that the paper is making an insightful contribution to the important field of cooperative MARL and its connection with cooperative game theory. The paper is clear and mostly well motivated, and the theoretical analysis and empirical evaluation are sufficient. |
This paper proposes a novel diffusion model for molecular conformer generation. The proposed diffusion model operates in the space of torsion angles of the conformer. An extrinsic-to-intrinsic score model is learned for the diffusion process to predict the torsional scores directly from the 3D point cloud representation of the conformer. The exact likelihoods of the generated conformers can be computed, enabling energy-based training with samples from the Boltzmann distribution. The proposed diffusion model outperforms existing machine learning and cheminformatics-based solutions on the GEOM benchmark. This paper proposes a novel diffusion model that has a strong empirical impact on the fundamental task of molecular conformer generation. With comprehensive experiments, the authors show that the model is able to generate more accurate conformers in less time compared with existing machine learning solutions. The model also consistently outperforms the commercial software OMEGA on the GEOM benchmark. The evaluations were conducted with high quality, with sufficient details, strong quantitative evidence, and adequate ablation analysis. Theoretically, the intuitions behind the important design decisions are well explained, i.e., learning a diffusion model in the torsional space instead of Euclidean space, using an SE(3)-invariant model to parameterize the score function, etc. The theory behind the diffusion model and training algorithm is also sound. The authors have adequately addressed the limitations in the appendix. <doc-sep>This paper proposes torsional diffusion for molecular conformer generation. Different from previous conformer generation work, which mainly predicts the position of each atom in 3D space, this work views conformer generation differently, by generating the torsion angles. To achieve this, the paper also provides the exact likelihood of the generated conformers, which leads to a better training pipeline compared with previous work. On the experimental side, the paper reaches a very good state-of-the-art result on the GEOM-DRUGS dataset, which is very convincing. In addition, the paper proposes a torsional Boltzmann generator that can be extended and adapted to generate various classes of molecules. The paper is well written, very easy to follow, and the idea is natural. Rather than learning Euclidean coordinates as in previous work, learning the torsions to generate conformers is more physically natural. Although learning torsions was proposed in GeoMol using an MPNN, enhancing it with a diffusion model is still a sound contribution. The experimental results are solid, with many details. As for weaknesses: I understand from the authors' statement that they retrained GeoDiff to adapt to the larger dataset, but it still seems that the GEOM-DRUGS and GEOM-QM9 performance of the baselines GeoDiff and GeoMol is much lower than their reported numbers. Any explanation for this large gap? Is it possible to use their split for another comparison? Same as the Questions section. <doc-sep>This paper studies molecular conformation generation. Different from previous works that focus on coordinate prediction or distance prediction, this paper proposes to focus solely on the torsion angles of rotatable bonds in molecules, leaving all other degrees of freedom fixed. Inspired by diffusion models on Riemannian manifolds, the authors first lay out the theoretical framework of torsional diffusion on the hypertorus, following standard denoising diffusion models.
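Concretely, my understanding (possibly glossing over details) is that the perturbation kernel on the $m$-dimensional torus is a wrapped Gaussian,

$$
p_t(\boldsymbol{\tau} \mid \boldsymbol{\tau}_0) \;\propto\; \sum_{\mathbf{d} \in \mathbb{Z}^m} \exp\!\left(-\frac{\lVert \boldsymbol{\tau} - \boldsymbol{\tau}_0 + 2\pi \mathbf{d} \rVert^2}{2\sigma_t^2}\right),
$$

so that denoising score matching carries over essentially unchanged, with the score defined on the torus rather than in Euclidean space.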
Based on this framework, they show that such a diffusion process can also be defined on Cartesian coordinates, and design a SE(3)-invariant and parity equivariant score network operating on 3D point clouds. They show the proposed method achieves state-of-the-art performance on GEOM-DRUGS, with much fewer denoising steps compared to previous models. The resulting method is also capable of exact likelihoods estimation, making it suitable for matching the Boltzmann distribution over torsion angles using the energy function. ### Strengths Torsional angles are the most important degrees of freedom that determine the conformations of a molecule. Focusing on torsional angle modeling for conformation prediction is a straightforward and novel idea. To the best of my knowledge, it is the first work that marries denoising diffusion models with torsional angle prediction, and I like this idea. The resulting score functions for torsional angle diffusion is SE(3)-invariant and parity equivariant for every rotatable bond in a molecule, which is a nice property. The proposed model is capable of exact likelihood estimation, which is another nice property thanks to the use of SDE. ### Weaknesses The model relies on an external algorithm, such as RDKit ETKDG, to generate the initial local structures for the target molecule, and the only degrees of freedom are torsional angles. This means the chirality of the conformation is entirely determined by the initial structure, or equivalently, by the external algorithm that provides the initial guess structure. The authors didn't discuss limitations and potential negative societal impact of their work. <doc-sep>This paper argues that instead of generating bond length, angle and torsion simultaneously, one should focus on the more difficult part, i.e. the torsion. And the authors further propose a diffusion model working on this Riemannian manifold of the torsion angles. A Bolzman generator is then introduced based on the likelihood calculated from the diffusion model. Experiments on molecules with normal size, large size and small size datasets show increased performance in terms of RMSD and convergence time. Pros: -The paper is well motivated to work on the torsion angles only, which reasonably leads to advantages in speed and performance, under mild assumptions about the bond length and angle. The idea is novel. -The idea of using the diffusion model on the torsion-composed manifold is compatible with the Bolzman generator and leads to some good properties. -The paper is well written. The idea, the procedure, why symmetry properties are satisfied, how to solve the sample-wise intrinsic coordinates, are well elaborated. Cons: -Ablation studies, e.g. about the depth of diffusion, conformed matching and any other hyperparameters would help to understand and are desired. The authors have adequately discussed the limitations of torsional diffusion, which I think is quite fair. | This paper proposes a diffusion model for molecular conformation prediction in the space of torsion angles. The idea is new, and the experimental results are strong. All reviewers like this paper. |
This paper proposes a stochastic process, FILEX, to abstract out the essence of deep-learning-based emergent language systems. FILEX follows the intuition that the more a word is used, the more it will be used in the future. On four experimental settings, the authors show that the correlation between parameters of FILEX and the lexicon entropy is similar to the correlation between hyperparameters of neural networks and the lexicon entropy. strength - This work attempts to construct a simple enough theory for understanding emergent language. Such an attempt is generally valuable for the overall community, as the field of emergent language itself is, well, emerging. - The proposed method is simple enough to mimic the model behavior, thus providing a level of abstraction/simplification for understanding. Additionally, I believe a similar observation is also discussed in the prior VAE literature; just for the authors' information [1]. - This work is inspiring to me and I believe such work should also be inspiring to future work. Though I am not fully convinced by the existing experiments (see below), I am happy to see future work either reinforce its conclusions or overturn them. weakness - The proposed method is so simple that it misses important details of many aspects of neural networks. When the authors link the parameters of FILEX to the neural network hyperparameters, their reasons rest more on intuition than on rigorous mathematical discussion. - The experimental settings are too simple and may be hard to generalize to more complicated settings. One direct question is whether the conclusions will hold if the vocabulary is large (say 10K) and the messages are compositional (say sentences of length 10). [1] Alemi et al. ICML 2018. Fixing a Broken ELBO. The authors have properly discussed the limitations. <doc-sep>The paper proposes a stochastic process FILEX, developed from the Chinese restaurant process, to mathematically model the lexicon entropy of emergent language between multiple agents (the speaker and the listener) in ELS (emergent language system) environments. The authors make correspondences between FILEX and ELS to evaluate FILEX in real ELS environments. Experimental results on four ELS environments show that FILEX can correctly predict the correlation between hyperparameters and the lexicon entropy of well-trained emergent language in real ELS systems. In this paper, the authors propose a mathematical model of the lexicon entropy of emergent language between agents. Moreover, the authors conduct extensive experiments to verify the model, and the results show its effectiveness. Strengths: 1. The paper provides an inspiring idea of mathematically modeling the properties of emergent language without training in real environments. Besides, the mathematical description is more precise and testable than natural language. 2. The experimental results demonstrate the effectiveness of the proposed FILEX, which verifies the feasibility of the idea. Weaknesses: 1. The mathematical model may not be rigorous enough. ----a) The proposed FILEX is based on the assumption that word use is reinforced in emergent language in sec 3.1. However, the paper provides evidence only for human language, not for emergent language between agents. The paper does not show that the assumption fits all ELS environments either.
----b) The correspondences of hyperparameters between FILEX and ELS in line 193 are based on analogy, so the relationships between FILEX and ELS may not be strong enough to support the alignments. Besides, beta in FILEX corresponds to both the buffer size and the temperature in ELS, which may break the independence of the two hyperparameters, despite the reason given in the paper. 2. The experiments seem to be not well organized and somewhat insufficient. ----a) In sec 3.4, the meaning of the hyperparameter-entropy correlation and the reason why it can be used to evaluate whether FILEX matches ELS are not given. ----b) It would be better to add experimental evidence for the self-reinforcing assumption of emergent language in the four ELS environments, which is the basic assumption of the proposed FILEX. ----c) In line 173, as a testable mathematical model, FILEX is expected to provide more precise information, so the equality of the sign of the correlation as the metric seems a little too weak. It would be better if the authors added some stronger metrics. 3. The organization and presentation of the paper could be improved. ----a) The introduction section misses necessary background introductions to the key concepts "emergent language" and "lexicon entropy", which may confuse readers from a broad range of domains. ----b) The paper lacks a formal definition of the problem or task. Besides, it would be better to give the input, output, and goal of FILEX before the technical details in sec 3.1. ----c) In sec 3.4, it would be better to provide more explanation of the obscure hyperparameter-entropy correlation, which may be unfamiliar to many readers. ----d) The paper may not be well organized. For example, the environments in sec 3.2 and the result analysis in sec 5.1 should be included in the experimental parts. ----e) There are some typos and small mistakes in the paper. For example, line 9 in Algorithm 1 is not correctly initialized and used elsewhere; "Section 3.3" -> "in Section 3.3" in line 102. According to the authors, there are weaknesses in this work that have not been addressed. Details are as above. This paper does not have any potential negative societal impact. <doc-sep>This paper fits into the emergent language literature and argues that prior work has been insufficiently rigorous when presenting its hypotheses. This work is presented as a starting point for mathematical methodology in developing and testing models of emergent language. Their main contribution is an approach, "FiLex", to understanding models of emergent language, influenced by the Chinese Restaurant Process (CRP). The relation here is to describe words as having a similar property to the tables in the CRP in that their usage is self-reinforcing. This approach is detailed in Section 3.1 with Algorithm 1 and the Formulation on page 3. Their hypothesis, presented in Section 3.4, is that hyperparameters in FiLex correlate with hyperparameters commonly found in emergent language setups: time steps, lexicon size, learning rate, buffer size, and temperature. Their specific statement is that the sign of the correlation between those hyperparameters and entropy will be the same for FiLex as it is for the ELS setups. They experimentally test this hypothesis by comparing FiLex against 4 ELS setups in Table 2 and Figure 1 and then make claims in Sec 5 (discussion) that this model lets practitioners more rigorously understand confounding factors. Originality: This paper is original.
It's taking a well-known idea (CRP) and applying it to a domain (emergent language) where the procedure is potentially beneficial for attaining understanding. It's unclear to me that this is actually the right approach for this domain and that's something that warrants greater discussion in the paper, but it's still an interesting direction that has merit. Quality: I don't think that this approach makes sense to do in ELS. I love the motivation to have better mathematical foundations in the space, but I disagree that this approach, or at least as presented, is the right path. First, why should it be the case that methods with same sign correlations in individual hyperparameters are actually indicative of each other? This is much too weak of a statement to be predictive as we could manufacture this artificially without much issue. The argument given at the end of section 3 that then qualitatively associates the variables doesn't fix this issue; changing just the algorithm used from PPO to something else that doesn't have a buffer size shows how fragile that is. The explanation in 3.4 (L173-L180) points to this as well, suggesting that there are too many factors unaccounted for (due to FiLex's simplicity) to do much else besides this. But that's exactly what the paper set out to do at the beginning after discussing how prior work skipped out on this important step. Perhaps they did so for this same reason? Sec 5, the evaluation, talks about making predictions ("[... FiLex] makes the correct prediction 20 out of 20 times.") and how that wouldn't happen if it wasn't predictive. But then the graphs (and the discussion) shows how tenuous that is. It's not very helpful for my research to run this method a bunch of times and hope that the correlation remains positive versus just varying a hyperparameter in my actual model and seeing what that does. Perhaps the authors are thinking that I cannot run my actual model that much because, say, it's a real robot. Okay that's fair, but I have no idea if this method would actually work in that setting and little belief that my model in totality would really be correlated given that a handful of hyperparameters are. Clarity: The paper is clear enough. The one area which would have been more helpful is to coalesce Table 2 and Figure 1 to be more apparently connected. It takes longer than I would have liked to understand what was going on in them (which are the results) and the stories they told. Significance: This paper is only significant in the world where the simple model is predictive of much bigger and more interesting models. The ones examined are not that, nor is it even assessed as predictive of anything in those settings. For it to get to a place where we can confidently say that this is interesting for real-world settings, it would need to be aligned with actual (emergent) language learning. That is unfair to evaluate wrt learning new words in this setting (as neither FiLex nor most ELS systems are doing that), but it is fair to ask whether the learned language distributions are the same. Is that true? Is FiLex learning similar word distributions to the ELS models? More to the point, because FiLex's learned distribution won't ever be different given its simplicity, is it Zipfian? We expect that to if it's ever going to be able to model a full language. Societally, this is fine. Wrt limitations, see the above S&W section. | This paper proposes FiLex -- a mathematical model to capture lexicon entropy in emergent language systems. 
The paper tackles an important and interesting problem in a field (emergent language) where relatively less theory currently exists. However, the reviewers find the experiments not convincing enough (e.g. they do not evaluate actual emergent language and instead use human languages) and lacking in scale. I do think the paper has some merits and can be strengthened further by addressing the reviewer comments, but the current version unfortunately seems below bar for acceptance. |
The authors present Myriad, a testbed written in JAX which enables machine learning researchers to benchmark imitation learning and reinforcement learning algorithms against trajectory optimization-based methods in challenging real-world environments. Myriad contains 18 optimal control problems presented in continuous time and ranging from biology to medicine to engineering. As such, Myriad strives to serve as a stepping stone toward the application of modern machine learning techniques to impactful real-world tasks. All environments, optimizers and tools are available in the software package at https://github.com/nikihowe/myriad. 1. All tasks are inspired by real-world problems, with applications in medicine, ecology, and epidemiology. 2. Myriad is, as the authors mention, the first repository that enables deep learning methods to be combined seamlessly with traditional trajectory optimization techniques. 3. The system dynamics in Myriad are continuous in time and space, offering several advantages over discretized environments. 4. The authors present a novel control-oriented imitation learning algorithm that combines optimal control with deep learning. 1. As a real-world testbed paper, I think it would be more helpful to have tools like Jupyter notebooks to directly demonstrate some of the examples mentioned in the paper. I went through the GitHub page mentioned in the paper. The code is clean and nice, but it would be better to have some demonstrations. 2. The paper is well organized, but it seems the authors are trying to put too much content into relatively few pages, which makes it harder to capture the details of each section, in this case the different scenarios. <doc-sep>This paper proposes Myriad, a real-world testbed that offers many real-world-relevant, continuous-space and continuous-time dynamical system environments for optimal control. Myriad is written in JAX, and both the environments and the trajectory optimization routines are fully differentiable. The paper offers trajectory optimization tools to the machine learning community that are compatible with deep learning workflows. It encourages the development of machine learning algorithms with the goal of addressing real-world problems. The tasks that Myriad provides are simple and thoroughly studied in optimal control, while ML for trajectory optimization usually aims to solve more complex tasks. <doc-sep>Myriad is a testbed consisting of 18 continuous-time optimal control problems (called systems or environments) from several real-world domains (medicine, ecology, and epidemiology). This work aims to enable comparisons between deep learning methods (like RL and IL) and optimal control methods, for which benchmark reference scores are provided. Written in JAX, Myriad also enables new algorithms like implicit planning over neural ODEs, a novel control-oriented imitation learning technique that is presented in detail. Deep learning-based approaches to decision making are not often compared to relevant benchmark results from more classical, continuous-time trajectory optimization, even when such comparisons are possible in principle. Myriad helps bridge this gap by means of a new repository and the results presented in the paper. The challenges and the solutions taken by Myriad are explained with care, particularly in the key case where the system dynamics need to be learned. The main example of such a learned model is a differentiable neural ODE, outlined in Algorithm 2, for which experimental results are provided.
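To illustrate what "planning through a learned ODE model" amounts to (purely my own toy sketch, not Myriad's actual API; the dynamics, cost, and all names here are made up), one rolls the model forward with an integrator and differentiates the accumulated cost with respect to the controls:

```python
import jax
import jax.numpy as jnp

# Toy stand-ins for a learned dynamics model and a stage cost -- illustrative only.
def f(x, u, params):
    return params["A"] @ x + params["B"] @ u      # e.g. a learned (here linear) vector field

def cost(x, u):
    return jnp.sum(x ** 2) + 0.1 * jnp.sum(u ** 2)

def rollout_cost(controls, x0, params, dt=0.05):
    def step(x, u):
        x_next = x + dt * f(x, u, params)         # explicit Euler step through the ODE model
        return x_next, cost(x_next, u)
    _, stage_costs = jax.lax.scan(step, x0, controls)
    return jnp.sum(stage_costs)

params = {"A": -0.1 * jnp.eye(2), "B": jnp.eye(2)}
x0, controls = jnp.ones(2), jnp.zeros((50, 2))
grad_fn = jax.grad(rollout_cost)                  # gradient w.r.t. the control sequence
for _ in range(200):
    controls = controls - 1e-2 * grad_fn(controls, x0, params)
```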
The possibility of combining deep learning and trajectory optimization methods is intriguing. Several of Myriad’s current limitations (like its inability to handle pixel-based images) are clearly highlighted. It’s not clear to a deep RL practitioner like me how to use Myriad’s results as benchmarks against which to meaningfully compare RL or IL techniques. Environments called cart-pole and pendulum (wrapped in the standard gym API) do exist. But I see no reason to conclude that their underlying equations of motion are at all the same as the systems of the same name covered by Myriad. If they are the same, then the paper should document this correspondence, and compare Myriad’s results in those environments to appropriate RL or IL results, even if only from prior work. But if the underlying dynamics of Myriad’s systems are not the same as those of existing gym environments, then Myriad should bridge the gap (between continuous time and discrete time) by implementing gym environments which do share those dynamics. This would provide the missing part of the bridge between deep learning and optimal control that Myriad is targeting. <doc-sep>This paper presents a suite of tasks inspired from real-world problems. It also provides implementations of 5 trajectory optimaiztion algorithms. The environments and the trajectory optimization algorithms are made differentiable. This paper also demonstrates system identification and dynamcis learning via neural ODE on the presented tasks. It also proposes a new imitation learning algorithm that embeds planning into the policy structure by leveraging the differentiating through the non-linear program solver. Finally, references scores of the trajectory optimization with true dynamics, sysID-ed dynamics, learned dynamics, and the proposed imitation learning algorithms are reported. - Benchmarking of trajectory optimization algorithms, and making them differentiable is relevant yet under-explored to the ML community, and this paper provides a good starting point towards filling the gap. - The experiments span an interesting range of benchmarking trajectory optimization algorithms with true dynamics, dynamics learning, and trajectory optimization with learned dynamics. - The repository seems well structured and documented - Details of the tasks are missing. There is only a single-sentence description of the tasks in the appendix, which makes it difficult to tell if these tasks are really useful and proper for benchmarking trajectory optimization algorithms. For example, the paper claims the tasks to be challenging, but I am not sure if tasks such as pendulum, mountain car, and Cart-Pole Swing-Up can be considered challenging, which are more like standard easy-to-solve benchmarking tasks. I am less familiar with other tasks in medicine, ecology, and epidemiology, and thus not sure if those tasks are really challenging and can be used for benchmarking trajectory optmization methods. Please provide more details of the tasks and the reasons for choosing them. - The paper is not very well structured and miss some important details: Section 4, I don't think such a detailed explanation and comparison between the direct single shooting and direct multiple shooting is necessary (including figure 2), since they are just existing trajectory optimization methods and should not be the focus of the work. 
I feel this section, could be changed to a discussion on how different trajectory optimization methods work on the **proposed tasks**, in stead of how they work in a toy example as in current figure 1. Section 5, it seems that the proposed method to do constraint optimization is just to do gradient-descent-ascent on a GPU. I am not sure why this would just be significantly faster than traditional methods well optimized on a CPU. is there any specific optimization used for running it on the GPU? Or at least some quantitative comparison of the speed can be provided to understand if the proposed method is much faster and can be viewed as a contribution. Besides, how is the trajectory optimizer such as direct single shooting made differentiable? Is it, e.g., just by unrolling the computation graph under a automatic differentiable framework (JAX), or does it leverage some other techniques like implicit function theorem? This is unclear from the current paper. - Some statements are not accurate: the abstract states that the paper "enables machine learning researchers to benchmark imitation learning and reinforcement learning algorithms against trajectory optimization-based methods in challenging real-world environments". But later in the limitation section the paper states "it is at present unlikely to be useful for benchmarking RL tasks ...". Please be consistent on the statements. Also, from Table 4 in the appendix, most of the tasks can be defined with just a few (<10) parameters, which seems to indicate these tasks are not really "challenging". <doc-sep>This paper introduces a differentiable trajectory optimization library for use in optimal control and end-to-end learning of trajectory optimization modules. The library uses JAX to solve constrained continuous-time, continuous-control optimal control problems, particularly via a scalable form of gradient descent-ascent on an associated Lagrangian. To support system identification or learnable dynamics, the library uses the neural ODE to support continuous-time evaluation while being efficient to train. The neural ODE may be trained via system identification (i.e., supervised learning on observed trajectories) or in an end-to-end fashion by backpropagating through the trajectory optimization procedure to perform imitation learning. The authors demonstrate their library using all of the aforementioned features on some proposed benchmark problems, including problems that are not usually considered by the optimal control/RL community (e.g., cancer treatment and population dynamics). Post-rebuttal\\ ===========\\ Raised my score from 5 to 7 after discussion. The authors give a variety of benchmark problems, including those that are not usually considered by the optimal control/RL community. The included trajectory optimization algorithms should provide a good counterpart or baselines to compare against RL algorithms. I’m confused what the major contribution of the paper is: is it the trajectory optimization library or is it the provided problems? If it’s trajectory optimization, there are other libraries that also provide differentiable trajectory optimization (see “Relation to Prior Work”), so this work needs to differentiate itself from the others. If it’s the included problems, they are underemphasized in the paper and need to be discussed more. 
Since the community is already very familiar with robotics and video game benchmarks, a discussion on these new problems (what they are, why they're interesting, aspects they have that current canonical problems don't have) can make this library a compelling choice. <doc-sep>This paper develops a library that contains a suite of real-world continuous control problems and a collection of trajectory optimization implementations, which can serve as an excellent testbed for RL practitioners for challenging control problems. Most RL works in the current literature focus on complex dynamics, such as games, which has discrete timesteps, or robotics, which follows end-effect control to simplify RL learning. Althoug these RL applications are great, there remains a large space of important problems of relatively simple continuous dynamics. Trajectory optimizations have been developed for decades for these problems and it remains particularly important to answer the question whether RL algorithms can be used to further improve the state-of-the-art to these problems. There are only limited works on this domain with very limited testbeds available. I personally appreciate the efforts from the authors to make such a clear library for the RL community. It will be a great testbed for potentially huge RL advances in the continous control problems. The paper is generally great. If possible, including some simple RL implementation and results would make the paper even stronger. | This paper provides a test-bed called Myriad for trajectory optimization and system ID in jax, with the hope of engaging RL practitioners to benchmark against the test bed. The proposed test-bed provides examples ranging from different domains (such as medicine, and biology) deviating from traditional domains that are usually the focus on RL benchmarks. Further, the focus is on continuous time settings. Limitations are clearly specified. Authors have done a good job of describing the testbed and comparing existing methods. One challenge that remains to be addressed is how useful this testbed really is for RL practitioners. This concern has been raised because it seems like the dynamics of the proposed problems fall on the simpler end of the spectrum. Nonetheless, I am currently of the belief that having a JAX based test-bed for such domains is still valuable and I am hoping will contribute more to reproducible RL results in these domains. Currently it appears that the designer makes a lot of choices around the design and set up of the RL framework. It may be a real concern if the general RL community will not adopt the testbed for this precise reason. More importantly, it did strike me as odd that the contribution and abstract claim testbed for imitation learning and RL but do not provide a simple example of how this could be done. This was the main concern of tBG5. I also agree with CQCv's assessment that requiring dynamics equations to be explicitly provided by the user will significantly cost adoption of the test-bed. Reviewer vypE scored the paper very well but failed to justify the score for me to rely significantly on it. My expectation is that the authors will genuinely deliver on all the asks. Further, continue to improve the library to make it more amenable to testing of RL algorithms with a more friendly API. 
Overall I want to note that significant effort seems to be required to put together the test-bed but also believe all above concerns are valid and authors are highly recommended to incorporate as many changes as possible. The lack of explicit examples of simple baseline rl testing on the test-bed is concerning, irrespective of the API and potential simplicity of the domains (which I believe is not a huge concern if it warrants RL testing in novel domains). I strongly encourage the authors to incorporate feedback and I believe it will make for a much stronger testbed in practice. Hoping authors deliver on this, to the extent possible by camera-ready deadline and after, I am recommending an accept since the testbed has some utility even in its current form (though I am less optimistic about widespread adoption in its current form). |
This paper proposes a GAN-based approach to quantify both out-of-distribution and in-distribution uncertainty in image classification. Specifically, the authors use GANs to generate examples from the out-of-class regime several times, and each time then use the examples to train a one-vs-all classifier in the final DNN layer. Finally, the resulting classifiers are sued to model class conditional likehoods. The authors conduct experiments on MNIST, CIFAR10, CIFAR100 and show state-of-the-art performance in terms of OoD detection and FP detection. Strengths: 1. The presentation of this paper is clear and easy to follow. 2. Existing GAN-based approaches mostly only predict a single score for OoD examples, while this paper proposes to quantify both in-distribution uncertainty and out-of-distribution uncertainty. This paper achieves this by repurposing the training scheme of the classifier and the generation of OoD examples. The idea is interesting and can inspire others in this community. 3. The authors show strong results in several datasets, including MNIST, CIFAR10, CIFAR100. Weaknesses: 1. There are no ablation studies to study and verify the effectiveness of each design of the proposal. For example, the number of classes used for training the classifier each time, the different image quality of GAN-generated examples, etc. 2. It would be better to clarify the necessity and importance (or practical significance) of quantifying both in-distribution and out-of-distribution uncertainty in a single model. 1. The organization of the section of Related Work is not clear. 2. It would be better to show more GAN-generated examples for reference. Besides, it would be better to conduct experiments on a relatively large dataset, e.g., ImageNet. <doc-sep>This paper proposes a method to estimate both aleatoric and epistemic uncertainties. It does so by training a conditional GAN in a latent space of a pretrained autoencoder to shield each class with out-of-class samples. ## Strength - Evaluation metrics are comprehensive. - The provided toy dataset is illustrative. ## Weakness - Some related works are missing, e.g., conformal prediction. - It is hard to assess the effectiveness of the proposed method without experiments on ImageNet. I understand that the limited computational resources might be a concern, but any form of ImageNet like low resolution of 64x64 or TinyImageNet (which has 200 classes) would be very helpful. - Section 3.1 and 3.2 are disconnected. How exactly are aleatoric and epistemic uncertainties computed? How are generated $\\tilde{x}$ used? Why are they valid definitions? - Why is one-vs-all classifier necessary here? Can similar results be achieved by normal multiclass classifiers? - MC-Dropout is basically Bayesian ensemble and can be plugged in other methods as well. It is known that ensemble can drastically improve performance. So the comparisons in Table 3, 4 seem not very fair. Without MC-Dropout, the improvements of the proposed method is not always significant. N/A <doc-sep>The paper presents UQGAN, a model for uncertainty quantification, including both OoD and FP detection. The proposed model contains a classifier C that outputs for a pair sample-label $(x,y)$ the probability that the pair is in-distribution. 
For training C, the in-distribution pairs consists of real pairs $(x,y)$ directly drawn from the dataset, while out-of-distribution pairs are of two types: either $(x,y')$ where $y'$ is not the correct label associated to $x$, accounting for class uncertainty, or $(\\hat{x}, y)$ where $\\hat{x}$ is a sample generated by a cGAN conditionned on $y$. The cGAN's generator is trained in the latent space of a pretrained conditional auto-encoder jointly with a discriminator, but also with the classifier C so that the generated data pair $(\\hat{x}, y)$ is considered to be out-of-distribution by C. Additionally, a regularizer pushes the generated samples to cover a large range of directions in the latent space. They also combine their method with MC dropout and observe further improvements. Strength: - The paper is well organized and didactic. As someone not particularly familiar with the uncertainty literature, I especially appreciate the effort put into the related work section. - It proposes a novel unified approach for both OoD and FP detection. - The experiments are comprehensive, with detailed breakdown and hyper-parameter analysis in the supplementary. Weakness: - The method seems fairly complex to use as it requires different parts (cAE, Generator/Discriminator, Classifier) and hyper-parameters to tune. The authors have adressed societal impact in the adequate section. <doc-sep>This paper introduces cGANs to help deep classifier modelling uncertainty. The generator is encouraged to generate features that lie in real distribution (evaluated by the discriminator) and out-of-class regions (assessed by the classifier). A low-dimensional regulariser is introduced to generate diverse features. The authors verify their ideas on several benchmarks. The improved performance on out-of-distribution data splits shows the effectiveness of the proposed method. Ablation studies verify the importance of the proposed regulariser. Strengths: -The authors conduct experiments on several datasets, with results showing that the proposed methods are better than previous baselines. -The authors provide comprehensive ablation studies in the supplementary material to support the soundness of the design of the method. -The idea of generating out-of-class samples is interesting. Weakness: -The visualization experiments are conducted on simple datasets which only contain 2 classes. Authors are encouraged to show some visualization results on datasets having multiple-classes. -All experiments are conducted on small-scale and small-resolution datasets. The authors are encouraged to verify their method on more challenging datasets, such as ImageNet-Dog, ImageNet-O [1] dataset. [1] Hendrycks, et al. "Natural adversarial examples." CVPR 2021. NA | The authors propose a new approach for training image classifiers with complete uncertainty quantification based on generative adversarial networks. The main idea is to use GANs to "shield" each class separately from the out-of-class (OoC) regime. This is done in combination with a one-vs-all classifier in the final DNN layer trained jointly with a class-conditional generator for out-of-class data in an adversarial framework. Finally, these classifiers are then used to model class conditional likelihoods. The empirical validation shows improved OoD detection and FP detection performance when compared to SOTA in this setting. The reviewers appreciated the clarity of exposition and the positioning with respect to the related works. 
The unified approach applicable both to FP detection and OoD detection was deemed novel. On the negative side, the method seems to be extremely involved in terms of the required architectural pieces, distinction between low-dim and high-dim settings, primarily low-resolution data used for evaluation, and the number of hyperparameters. During the discussion the authors addressed the main questions raised by the reviewers. Nevertheless, given that all of the reviewers are leaning positive, I'll recommend the acceptance of this work. Please do a full pass in terms of formatting of the whole manuscript, including removing inline tables and figures, removing things like double parenthesis, bolding specific letters (e.g. L247), clarify the flow of information in figure 1 so that one can grasp the high-level overview of the algorithm, and incorporate the remaining points raised during the discussion. |
In this paper, the authors studied the optimization problem of shallow and deep neural network. They showed that under the non-degeneracy condition on certain Gram matrix, gradient descent (GD) can converge to 0 training loss efficiently. One important difference with the existing NTK and mean-field literatures is that a different scaling factor was used in this paper. Experiment results show that neural network with this scaling is different from NTK and mean-field scale, while it is still able to do feature learning. Strength - Understanding the optimization of neural network beyond NTK (lazy training) is an important direction. - Under certain condition on the Gram matrix, it can be shown that GD converges to 0 training loss. Empirically, the neural network indeed shows the ability of feature learning. Weakness: - Major concern: I was wondering if authors could clarify the convergence rate dependency on \\lambda_min (G) and \\lambda_max (G) in Theorem 1, Theorem 2 and Theorem 3, i.e., the relation between r,\\hat{c}_min,C and \\lambda_min (G), \\lambda_max(G). There seems no discussion about it. It seems to me that \\hat{c}_min exponentially depends on 1/\\lambda_min (G) (based on Lemma 3, especially the definition of K(I,\\lambda_1,\\lambda_2)), which implies the width m exponentially depends on 1/\\lambda_min (G). If this is true, I feel the results in this paper are less interesting, since it would require exponentially number of neurons for the global convergence. - For multi-layer neural network, only the second-to-last layer is trained while all other layers are random sampled and fixed. This is different from the setting in practice. It is understandable that there might be technical challenges when analyzing the case of training all layers, so I would not view this as a major limitation. Minor comments: - At the end if page 3 (fixed embedding): \\phi_j(x) = x_j instead of \\bm{x}_j Based on my above comments, I currently would not vote for accept because of the major concern above. <doc-sep>This paper establishes the global convergence analysis, with a linear convergence rate, of gradient flow for the neural networks in the mean field regime which possesses a feature learning aspect. The contribution of this paper is to show that the positivity of the Gram-matrix of input (random) features is sufficient for guaranteeing global convergence. [Contributions] Recently, the mean field setting of neural networks has become an important topic in the context of global convergence analysis of neural networks because of the presence of feature learning whereas the kernel (lazy) regime basically describes the local behavior of the dynamics of training. However, an optimization theory in the mean field regime is significantly more challenging and usually requires involved mathematical tools. In this sense, this study makes a certain contribution in finding a simple condition. The proof can be considered as a sort of extension of the NTK-theory. Indeed, this theory basically builds upon showing the positivity of (finite-width) NTK as well as NTK-theory but does not require fixation of NTK, unlike NTK-theory. Specifically, the positivity of NTK can be reduced to the positivity of the Gram-matrix of input (random) features and $d \\geq n$ by decoupling the parameter-dependent part from the NTK. [Improvements] - It is misleading to say that this theory covers multi-layer neural networks because the trainable layer is limited to the second-to-last layer. 
Therefore, the model in this paper is essentially two-layer neural networks with random feature inputs. - The explanation of feature learning in the last paragraph of Section 2 is insufficient. In particular, the large stepsize in proportion to the width $m$ is also needed to exhibit feature learning as well as network scaling $1/m$. (See feature learning paper [67] for the detail. Although this submission considers the continuous-time dynamics, this point should be mentioned.) - Some important references are missing. For instance, (modified) PL-condition based analysis of overparameterized neural networks is relevant to this submission in the sense that the theory essentially relies on the positivity of NTK. - Spencer Frei and Quanquan Gu. Proxy Convexity: A Unified Framework for the Analysis of Neural Networks Trained by Gradient Descent. NeurIPS, 2021. - Chaoyue Liu, Libin Zhu, and Mikhail Belkin. Loss landscapes and optimization in over-parameterized non-linear systems and neural networks. 2020. - Mo Zhou, Rong Ge, and Chi Jin. A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network. COLT, 2021. [Additional references] More recently, global convergence rate analyses in the mean field regime under KL-regularization were established by several papers. - Kaitong Hu, Zhenjie Ren, David Siska, and Lukasz Szpruch. Mean-field Langevin dynamics and energy landscape of neural networks. 2019. - Jean-François Jabir, David Šiška, and Łukasz Szpruch. Mean-Field Neural ODEs via Relaxed Optimal Control. 2019. - Atsushi Nitanda, Denny Wu, and Taiji Suzuki. Particledual averaging: Optimization of mean field neural networks with global convergence rate analysis. NeurIPS, 2021. [Minor comments] - Typo (line2, page 6): $i \\in [n]$ → $i \\in [m]$. - There are typos in the definition of pseudo-L-layer NN in Section 3.2.2: The description of $\\phi_j$ seems redundant. For $h_i^{L-1}$, the index should be $i \\in [m]$. I think this paper makes a certain contribution in finding out a simple condition for the global convergence analysis in the mean-field regime. However, the quality of the manuscript could be improved. <doc-sep>The authors consider the training dynamics of neural networks under a modification of the mean field parameterization in the high dimensional (d >= n) setting. They provide an especially simple global convergence result under very mild assumptions (beyond the more restrictive d>=n one). The proof is particularly clean and simple in the setting of leaky-relu type activations, and somewhat more technical arguments are required to get the result to hold for more general activations. The d>=n assumption is key, as their results depend upon the minimal eigenvalue of the empirical kernel matrix defined by the inner product K(x,x') = <x, x'>, which is strictly positive when d>=n but not necessarily otherwise. They extend the results to the feature map/embedding setting, assuming the data come from feature maps \\phi(x) where \\phi(x) lies in a sufficiently high dimensional space so that the kernel defined over embeddings is positive definite provided the dimension of the embedding is large enough. Although the n <= d setting is somewhat limited, I quite liked this paper, as all previous mean field neural network analyses were quite complicated. This one also appears novel in that there is the connection to kernels which was unexpected. 
I should note I am not an expert on mean field analysis of neural networks, but to my knowledge the paper's contributions are novel and commendable. There were a number of points of clarification I'm hoping the authors could comment on. (1) The choice of the scaling in the network is non-standard and deserves more discussion. The authors claim at top of pg. 4 that the scaling agrees with the scaling of Yang & Hu '20---I assume the authors are referring to the \\mu P scaling in Table 1. But it seems to me that this is quite a different scaling than the one there, and also the standard mean field scalings. The \\mu P scaling requires the scaling of sqrt(D), not 1/sqrt(D), for the inner layer weights, since a_1 = -1/2. Moreover, the learning rate scaling in \\mu P is order 1, while the learning rate scaling in this paper is order m. I understand the need for learning rate scaling of order m; this is standard in mean field and is explained well in Appendix B. What happens when the scaling is 1/D vs. 1/sqrt(D)? Or even unit scaling (as is standard in mean field, see MFP column in Yang & Hu '20)? My understanding is that it only changes the scaling of lambda_min(G), and thus doesn't really affect the rates (at least in the instance of leaky relu activations). (2) Regarding Experiment 1: Are different d chosen for the different n? The results only hold when d <= n, so it's odd to consider d=20 for each of them but this seems to be what the authors have done. Regarding the negative result of (65; Wojtowytsch & E '20), please give more details on what precisely the negative result is, and how your experiments relate to it. My understanding is those results show that the risk can only decrease at rate t^{-c/d}, so that as d increases the rate should get worse. But here you are increasing n. It was also unclear to me why such low dimensions and sample sizes were used here. Typos and minor comments "feature training" on page 2, bottom of Sec 1.1 references for claim at bottom of page 2 about "weights would lease to collapse of the diversity"? The usage of summation terms everywhere rather than vectorized forms was distracting and atypical. If accepted, for the camera ready I would recommend re-writing everything in the form of e.g. f(x) = (1/m) c^T \\sigma(Wx) Presumably \\hat c = \\Theta(1) in assumption 1; if c depends on m things could go badly? i\\in [m] on top of pg.6, not i\\in [n] (30) should be \\partial \\mathcal L, not \\partial f, presumably "Appendix ??" on pg 17 I should note that I did not check the details of all the proofs in the appendix. The paper makes a novel contribution for convergence of NNs in the mean field regime with a clean and simple proof. <doc-sep>This paper studies the optimization of a pseudo-three-layer network. The first (output) layer is scaled by a law of large number (LNN) scaling, while the second layer is scaled by a central limit theorem (CLT)-scaling. The inputs are fixed random feature embeddings. Only the weights of the second layer are learnable. They proved that when the second layer is sufficiently over-parameterized (wrt the number of samples), GD flow converges to a zero-loss solution exponentially fast. In particular, the authors claim that, unlike the NTK regime, their setting exhibits feature learnings. As far as I can tell, the setting of this paper is still limited to the lazy training regime. 
In other words, the essential model of fitting the $n$ data points is following linearized model: $$ f(x;W+\\Delta) \\approx f(x;W) + \\langle \\nabla_W f(x;W), \\Delta \\rangle. $$ This can be seen as follows. - Notice that for the output of ``feature function'' $h_i(x)=\\frac{1}{\\sqrt{D}}\\sum_{j=1}^D W_{i,j}\\phi_j(x)$ changing an $O(1)$ value, $\\{W_{i,j}\\}$ only needs to change $O(1/\\sqrt{D})$. This change shrinks as increasing $D$. Also it is indeed required that $D\\geq poly(n)$ in the proof. In contrast, in Theorem 1, they only require m (the width of the output layer which is in LNN scaling) to be larger than $\\log(n)$. - On the other hand, the movement of W can roughly be estimated as follows, $$ ||\\Delta|| \\leq \\int_0^\\infty ||\\nabla_W L(W_t)|| d t \\leq \\int_0^\\infty \\sqrt{\\lambda_\\max(G) L(W_t)} dt\\leq \\sqrt{\\lambda_\\max(G)}\\int_0^\\infty e^{-r\\lambda_\\min(G) t/2}dt \\leq c\\sqrt{\\frac{\\lambda_\\max(G)}{\\lambda_\\min(G)}} = O(poly(n)). $$ Hence, on average, for each $i,j$, $W_{i,j}$ roughtly only moves $poly(n)/\\sqrt{D}\\ll 1$ when $D$ is sufficiently large. Please feel free to correct me if you think I misunderstood the results. The studied setting is not too much different from previous work of non-convex optimization in the lazy regime. Therefore, I cannot recommend accepting this paper. | This paper studies optimization of over-parametrized neural networks in the mean-field scaling. Specifically, when the input dimension in larger than the number of training samples, the paper shows that the training loss converges to 0 at a linear rate under gradient flow. It's possible to extend the result by random feature layers to handle the case when input dimension is low. Empirically the dynamics in this paper seems to achieve better generalization performance than the NTK counterpart, but no theoretical result is known. Overall this is a solid contribution to the hard problem of analyzing the training dynamics of mean-field regime. There was some debate between reviewers on what is the definition of "feature learning" and I recommend the authors to give an explicit definition of what they mean (and potentially use a different term). |
## Summary The authors compare empirical frequentist coverage of predictive intervals for several uncertainty quantification methods. The paper covers both classification and regression. The authors define an analogue of a confidence interval for classification. Coverage properties are also studied under covariate shift between training and test sets. ## Pros Coverage and width are a standard benchmarks for uncertainty quantification in statistics, and to my knowledge, this is the first work that undertakes a large-scale comparison for deep learning models. Some inspiration seems to have been drawn from Ovadia et al. 2019 in that the set of methods compared are similar and the same architectures are used. However, this work makes an important contribution in focusing on coverage / width, which I would agree are more interpretable metrics for practitioners. The set of methods spans several important strains of the literature: ensembling, Bayesian approximation, Dropout, GPs. ## Cons The work is timely and of broad interest; however, I think the presentation of results still needs some refining. Given that this paper focuses on empirical results, I would suggest the authors spend more time developing effective visualizations to communicate their conclusions. Tables 1 2 and 3 are perhaps necessary as a reference, but cry out for a visual aid. The authors state "We see that higher coverage correlates with a higher average width." This is something that seems like it could be communicated more immediately with the right plot. Figure 4 conveys some visual trends, but also could be improved. The coverage plot contains mostly blank space. The dots are clustered together and impossible to differentiate. In the width plot, what is communicated is that all methods have wider intervals with more shift. However, it is hard to differentiate the methods: again, the dots are on top of each other, the colors blend such that they do not seem to actually correspond to the colors on the legend (perhaps alpha should not be used here?). Figure 5 has similar problems. The use of alpha means that colors blend together and cannot be looked up in the legend. As before, the dots in the legend are tiny, so it is hard to differentiate the shades even in the legend. What is visually communicated is the spread of performance, but conclusions about any particular method are nearly impossible. Clearly this is a difficult problem to solve: there are 7 methods, and several axes of variation (coverage, width, shift). However, the plots at the moment do not convey much information aside from overall trends. Visual understanding method-specific results is not possible at the moment. I would also suggest that the authors devote more attention to the definition of coverage for predictive intervals, and how it relates to distribution shift. For example, the authors define coverage as equation (1) holding for any distribution P. It is not explicitly stated, but the implication here seems to be that the set $\\hat{C}_n(x_n)$ is determined from training data distributed as P, and coverage is measured from data drawn from the same distribution (i.e. this definition does not allow covariate shift). It would be useful for the authors to state in mathematical terms, what it means for coverage to hold under covariate shift. Is this equivalent to the notion of conditional coverage as defined in Barber et al. 2020? 
These are subtle enough concepts that I think they should be more precisely spelled out in the paper, even if some intuitive definition of covariate shift is widely understood. Clearly we cannot expect coverage to hold under a distribution shift that changes the conditional P(Y | X) between training and eval. What are the limits of what the authors allow? I would also suggest that the authors might include an explicit analysis of conditional coverage. For example, all methods seem to enjoy 95% coverage for in-distribution eval sets. However, it would be interesting to know if this coverage is uniform across classes or any other useful clustering of the data. ### A few specifics: * The authors' analogue of confidence interval for a classifier is novel to me, and is a convenient way to unify the presentation of results between classification and regression. If this is a novel definition, I would suggest the authors more explicitly point this out, as future literature may use it and should cite it. * Figure 4 is out of order with figure 3 - this needs to be fixed.<doc-sep>**Summary and key claims** This paper provides a comprehensive evaluation of the empirical frequentist coverage properties of existing uncertainty quantification baselines on both regression and classification tasks. The paper focuses on frequentist coverage as a faithful metric for the quality of uncertainty estimates. The experimental evaluations in the paper imply that accurate out-of-distribution coverage is a hard target for most existing baselines; a problem exacerbated by settings were dataset shifts are prevalent. *The key contributions claimed by the paper are:* - Introduces coverage and width as a natural and interpretable metrics for evaluating predictive uncertainty. - Provides a comprehensive set of coverage evaluations for popular uncertainty quantification baselines. - Examines how dataset shift affects these coverage properties. **Originality and Significance** Frequentist coverage is perhaps the most classical (and straightforward) measure of the quality of uncertainty estimates in statistics, so it's a bit odd that the authors claim the introduction of coverage and interval width as one of their key contributions. Despite not being as popular in the machine learning community, frequentist coverage has been considered in [R1] and [R2], and even coverage under dataset shifts was considered in [R3]. These existing papers not only consider frequentist coverage as a metric for uncertainty estimates, but they go as far as developing methods that provide theoretical guarantees on coverage. In fact, [R2] gives a more complete picture of uncertainty estimates by assessing both coverage and discriminative accuracy as both metrics do not necessarily correlate. I think that the key contribution of the paper is the experimental evaluations on many baselines and many datasets to analyze the performance of different methods with respect to coverage. While this analysis is interesting, it lacked insights into baselines' performances and the role of the evaluation metric used in assessing the comparative performances of baselines. For the most part, the experimental section was limited to reporting performance of all baselines on all datasets without providing insights into **why some methods perform better than others w.r.t this specific coverage metric** and **how the introduction of the coverage metric changes our perception on which methods are best**. 
I was expecting to see more evaluations that rank baselines w.r.t say calibration or Brier score, and then show that a ranking based on coverage would be significantly different, thereby motivating the usage of coverage in the uncertainty analysis toolbox. I would have also appreciated a breakdown of aleatoric and epistemic uncertainty, and how coverage may be a good metric for assessing either types of uncertainties, etc. Having read the experimental section---which is the key section in this paper---I was not exactly sure what to make of it. The key take away of the experiments highlighted in the abstract and discussion is that uncertainty estimates do not generalize well to out-of-distribution samples. However, such finding is not new and has been discussed before in (Ovadia et al. (2019)). Also, it is not clear how the introduction of the coverage metric helps us arrive at this conclusion; it seems to me that the same conclusion could have been arrived at with calibration or AUC-ROC on out-of-distribution samples. **Technical comments** I have two main comments on the technical aspects of the paper: 1) The authors found that GPs are clear winners when it comes to coverage. However, I am afraid that the frequentist coverage of Bayesian uncertainty (credible) intervals are extremely sensitive to the selection of the Bayesian prior (See the works of Szabo and van der Vaart in [R4] and references therein). Frequentist coverage is a specifically sensitive quantity in Bayesian analysis as a very large or very small prior length-scale of a GP kernel may give us very good or very bad coverage. The same issues are relevant (in a more subtle way) in Dropout NNs and any Bayesian NN approximation. Since most baselines considered in your frequentist analysis are actually Bayesian models, it is very important to report how robust are your findings to different selections of the priors (in this case, priors will correspond to hyperparameters). I did not find any discussion on the impact of hyperparameters in the resulting quality of uncertainty intervals and their impact on dataset shifts, despite this being a central concern in Bayesian models. A different approach for tuning hyperparameters may render models other than GPs come on top in your comparison. 2) Frequentist coverage is a concept associated with **regression** problems: we want a **contiguous** coverage set $C$ to contain the **real-valued** prediction target $y$ with a probability $1-\\alpha$. In $K$-classification problems, the true real-valued target is the class probabilities $p_K$, and a confidence set in this case would comprise a $K$-simplex that covers the true class probability $1-\\alpha$ of the time. But the true class probabilities $p_k$ are never observed; we only observe discrete values for one out of K classes. So calculating empirical coverage of class probabilities is impossible in classification problems. The authors extend the notion of coverage to classification in a different way: a coverage set $C$ is a discrete set of possible labels whose sum predicted probabilities add up to $1-\\alpha$, and coverage is achieved if the true label belongs to this set (Equation (2)). I find this definition incomplete because your coverage set $C$ is not contiguous anymore; it may contain labels 1 and $K$ and excludes 2,...,$K-1$. As you can see, in this scenario a coverage set wouldn't make sense unless the targets 1 to $K$ are unordered. 
So I think you have to say that these applies only to unordered categorical targets for this to make sense. Also, I do not see how this definition would work for binary classification, which is always ordered? In the case of binary classification, it seems to me that calibration is actually a more expressive metric than coverage as it accounts for class probability even when $K=2$. **References** [R1] Rina Foygel Barber, Emmanuel J Candes, Aaditya Ramdas, Ryan J Tibshirani, "Predictive inference with the jackknife+", arXiv, 2019. [R2] Alaa, Ahmed M., and Mihaela van der Schaar. "Discriminative jackknife: Quantifying uncertainty in deep learning via higher-order influence functions." ICML (2020). [R3] Tibshirani, R. J., Barber, R. F., Candes, E., & Ramdas, A. (2019). Conformal prediction under covariate shift. In Advances in Neural Information Processing Systems (pp. 2530-2540). [R4] Botond Szabó, A. W. van der Vaart, and J. H. van Zanten, Frequentist coverage of adaptive nonparametric Bayesian credible sets, Annals of statistics, 2015. <doc-sep>Paper provides an evaluation of the reliability of confidence levels of well known uncertainty quantification techniques in deep learning on classification and regression tasks. The question that the authors are trying to answer empirically is: when a model claims accuracy at a confidence level within a certain interval , how often does the actual accuracy fall within that interval? This is conceptually similar to the recent slew of papers seeking to empirically evaluate the softmax calibration of deep models where the question there is how often do predicted probabilities of the winning class reflect the true probability of the correct answer, but in this paper the focus is on confidence level and confidence intervals. Studies are conducted for both regression and classification. Confidence levels and intervals are evaluated using the notion of coverage probability and width. While these have a straightforward interpretation in the regression setting, for classification the authors use the top K probabilities that captures 95% of the prediction probability mass to evaluate coverage and width. Thus for classification, the width is the number of classes over which 95% of the probability is smeared. Ideally one would want a model that has a low width, and high coverage probability (i.e a model that is both reliable and accurate). The aim of the paper is not to produce this ideal model, but rather to empirically evaluate whether the predictive uncertainty of various methods proposed in the DL literature can be relied upon. Various UQ methods are tested for both regression and classification datasets, and for the latter case, also under dataset shift. Pros: + Paper is well written and ideas are, for the most part, presented well. + Experiments test a variety of state-of-the-art UQ methods. + There has not been work looking at this specific metric -- i.e., the reliability of prediction intervals. And with increasing usage of DL in high-risk applications, an evaluation of this kind might be useful. Cons - The authors appear to be conspicuously avoiding much usage of the terms "confidence levels" and "confidence intervals", but it appears that this is really what the paper is about. Justify why you are taking this stance. The section on "theoretical coverage guarantees" is not sufficiently explanatory or convincing in this regard. - A quantitative discussion on the mismatch between the coverage probability and the quality of softmax calibration is missing. 
- My biggest concern is the conclusion of the paper: the authors state "we conclude that the methods we evaluated for uncertainty quantification are likely insufficient for use in high-stakes, real-world applications where dataset shift is likely to occur." Yes, the models' coverage probabilities are indeed significantly below the reported confidence level when data is corrupted (both for CIFAR-10 and ImageNet), but the fact that the width increases should give us an attack vector into the problem. You say this is not sufficient, but I'm not convinced this is the case. 95% of the probability mass is now smeared over a much larger number of classes. In other words, an increasing width necessarily means the predictions have increased in entropy, and also that the probability mass in the winning class is now significantly lower under data corruption than what it was for the clean set. Both of these quantities (entropy and winning softmax) can be used to filter out predictions when the model is not confident (subject to a suitable confidence threshold), at least in the non-adversarial case. And in the real-world, this could be a practical approach to ascertain when a model's predictions should be trusted or discarded. In summary, while the authors have done a commendable job with experimental evaluations, the conclusion is too strong and -- in my opinion -- incorrect to justify acceptance.<doc-sep>In the submitted manuscript, "Empirical Frequentist coverage of deep learning uncertainty quantification procedures", the authors propose to investigate the Frequentist coverage properties of predictive intervals by numerical experiment for a number of machine learning models applied to benchmark datasets. I can't say that I find this a strong submission because: 1. the authors give a confused (mis-)definition of coverage; essentially they seem to have taken Barber et al.'s definition of "marginal distribution free prediction intervals", mangled it and then called it Frequentist coverage citing Wasserman 2. the authors claim one of the contributions of this manuscript to be "introduce coverage and width as a natural and interpretable metrics for evaluation predictive uncertainty" but in fact these aspects of predictive intervals from ML models has been studied for many years, as a simple google search will confirm 3. the results shown will not generalise in any meaningful sense: for example, GPs are found to have excellent coverage over the set of regression tasks shown, but in fact GPs are themselves a case study in the difficulties of achieving Frequentist style coverage in the domain of Bayesian non-parametrics (e.g. Hadji & Szabo 2019; Neiswanger & Ramdas 2020; Rousseau 2016 ; prior over-smoothing being the root of many problems ). | Overall, the reviewers agree that there is definite value in the empirical evaluation you have provided. However, as you have acknowledged in your responses to the reviewers, the presentation could be significantly improved. A final point that was not touched upon by the reviewers--where possible (e.g. certainly not ImageNet, but for some of the smaller datasets in Table 1) it would be helpful to have a comparison to fully Bayesian methods (you have linear regression and GPs, but I don't see the implementation details; my suggestion is to implement these within an MCMC framework, specifying reasonable priors over the (hyper)parameters). |
The authors propose a change to an established approach (Sun et al., 2019) for performing inference in Bayesian neural networks, which works directly in function-space rather than weight-space. The previous work by Sun et al. (2019) derive an ELBO with an intractable KL divergence between processes, so to make the problem tractable, they estimate the gradients of this KL term using the spectral Stein gradient estimator. This estimator, however, has been shown to be less efficient for high-dimensional distributions. As an alternative approach, the current paper's authors propose a simple estimator of the KL term, which is based on a Taylor expansion of the random functions induced by BNNs and an MC estimate over context points from the input space. The authors empirically show how their method leads to state-of-the-art uncertainty estimation and predictive performance on several datasets. **Strengths** This is an impressive paper; incredibly well-written and thorough, with a good and concise related work section. The proposed approach is a novel and significant contribution to the field, which will undoubtedly be of great interest to the NeurIPS community. The proposed approach is carefully derived in a way that makes it possible for the reader to follow along all the way. The paper contains a great empirical assessment with an impressive amount of details for making the work reproducible. When a setting or a choice of, say, a hyperparameter or distribution is not immediately clear, the authors carefully discuss the options. This might be the most polished and detailed paper I have ever reviewed. The proposed approach also works well in practice, and several studies of the effect of hyperparameters and priors (in the supplementary) provide valuable insights. **Weaknesses** It's really quite hard to find any substantial weaknesses in the paper. In my view, the greatest weakness is that the authors do not compare their approach against that of Sun et al. (2019) despite framing the entire paper as an improvement over this. It would have been interesting to see the two approaches compared side-by-side on problems that both methods can handle. An additional (minor!) weakness of the proposed approach is that it introduces a few extra choices to be made and hyperparameters to be optimised compared to the method by Sun et al. (2019). The authors do, however, provide suggestions and thorough discussions of how to make these choices. **Comments, typos, and various minor things** * Line 100: $\\mathcal Q_\\theta$ -> $\\mathcal Q_{q_\\Theta}$. * Line 111: The word "poses" doesn't seem to fit in this sentence. * In the equation following line 138, the matrix $\\mathbf S$ is only defined in the appendix. * In line 175, if $\\Sigma$ is the variance, I think the reparametrisation should be written (informally) using $\\Sigma^{½}$ or using the Cholesky decomposition $L$. * Line 207: $SM$ should be $SK$, I suppose. * In lines 246-247, I find the phrasing of "predictions will be significantly higher" a bit confusing. I suppose it refers to the predictive uncertainty, which should be higher, but I'm not entirely sure. * I suppose the ablation studies are actually more like hyperparameter tuning studies rather than actual ablation studies. The authors have adequately addressed the limitations of their work. A potential negative societal impact has not been discussed, but it is not necessary given the paper's theoretical nature. 
<doc-sep>The manuscript proposes a scalable functional-space variational inference method by incorporating prior information. The main contribution of the paper is to propose a KL divergence estimator between a posterior and variational distribution over functions by approximating the first-order Tayler series expansion of the mean of the parameters. The proposed approach seems intuitive and the proof seems reasonable. The results demonstrate the benefits of the approach. Weakness: - The diagonalization approximation to the covariance matrices in Section 3.2 is a simplification that deviates from the original approach from Sun et. al. - The proposed method are presented only image classification problems. - Authors need to compare other approaches that scale functional variation inference to even larger models: please refer to the following manuscript. "Carvalho, Eduardo DC, et al. "Scalable uncertainty for computer vision with functional variational inference." CVPR'2020 - Other approaches which are not functional variational inference but provide well-calibrated uncertainty estimation need to be compared. Karandikar, Archit, et al. "Soft calibration objectives for neural networks." Advances in Neural Information Processing Systems 34 (2021): 29768-29779. Krishnan, Ranganath, and Omesh Tickoo. "Improving model calibration with accuracy versus uncertainty optimization." Advances in Neural Information Processing Systems 33 (2020): 18237-18248. The results presented are convincing for the image classification task, but you need to compare against other variational inference approaches proposed in the literature. Some references are provided above. <doc-sep>This paper proposes a tractable method for function space Variational inference that allows the method to be applied to realistic real-world settings. The authors achieve this by means of two approximations. Firstly, rather than calculating a KL divergence between two posteriors for which we do not know the closed-form density, they calculate the KL divergence between locally accurate linear approximations to the posteriors, whose density *is* known. Secondly, they use a rough approximation to the supremum over the aforementioned KL divergence, which they argue is enough since it encourages the variational distribution to match the prior on a set of context points. ## Strengths **Originality**: the paper proposes a novel solution to a challenging and important problem in the BNN literature. I particularly enjoyed the idea of using local linearization to approximate the KL divergence between the approximate posterior and prior! **Clarity**: for the most part I found the paper well written and explained. There were a few issues, which I detail later. ## Weaknesses **Quality**: While the experimental evaluation is quite extensive (many benchmarks and many baselines – very nice!), I do have some concerns and suggestions, which are also listed below. My main concern is that the experiments do not add to the understanding of the performance of the method–the relative strengths and weaknesses are not apparent, and the impact of the various approximations is unclear. There is also a lack of some important details that make it difficult to interpret the results of the experiments. **Significance**: I have a number of concerns about the approximations and assumptions in this work, which have not been addressed in the text. Without having these concerns addressed it is hard for me to make any judgement about the potential significance of the work. 
My primary concern is with the choice of prior. This concern and others are listed in the questions below. The authors have not made an effort to explicitly describe the limitations of their work. For example, they have not discussed the sensitivity of their proposed method to its hyperparameters, nor have they described the trade-offs between their method and existing functional BNN methods. However, I am confident that this issue can easily be addressed. <doc-sep>This works presents the function-space variational inference in tractable manner. Specifically, the existing function-space variational inference [1] has computational issue in computing KL terms between variational and prior distributions over function-space. This work first presents the tractable form of KL terms by approximating the distributions over function-space linearly by the Taylor approximation, and then employs the estimate of supreme of KL terms by using the context sets, which results in the function-space variational inference in tractable manner. Through experiments, authors validate that the proposed inference could perform the reliable uncertainly estimation, and outperform the existing Bayesian deep learning model for predictive performance and uncertainty estimation. [1] Functional variational bayesian neural networks, ICLR 19 $\\textbf{Strengths}$ $\\bullet$ This work proposes the tractable function-space variational inference, which can easily be computed. $\\bullet$ This work has been sufficiently validated that the proposed inferences leads to the reliable uncertainly estimation and superior predictive performance on a variety of the datasets. $\\textbf{Weaknesses}$ $\\bullet$ The technical novelty looks incremental. $\\bullet$ As described in methodology section and shown in Table 1 and 2, the predictive performances seem to be largely affected depending on how to chose the context points and distribution. Thus, the performance could be unstable depending on users who might have deep understanding of given task or not. Thus, I think that explaining the intuition on how to choose context distribution should be described in detail. See section Weakness above. | The review process for this manuscript is complex. The reviewers are not in consensus. Most of them have engaged considerably with the original submission as well as the significant updates that the authors have made to the manuscript post submission. In my opinion, new full covariance rank results are what make the paper interesting and these were presented after the original submission. Normally, I would find this not to be fair as the reviewers are not obligated to read such a big revision to a submitted article. But at least two reviewers have engaged with the revision considerably and I feel like the paper is stronger than what the current scores imply. The last holdout reviewer maintains a few outstanding low-confidence concerns about the paper—I do not think these should hold back the manuscript from being presented and discussed at the conference. I am voting to accept this paper in spite of its low score, but recommend that the authors correct their behavior. Such a large revision to a manuscript puts an enormous tax on the review process; this is basically a "journal level" edit to the submission and normally this would require a second round of review. |
This paper presents an approach to reasoning about the relations in a knowledge hypergraph using sparse and local hypergraph reasoning. The main motivation is to make learning and inference efficient in very large domains by utilizing a sparse tensor representation for hypergraph neural networks, applying a sparsification loss during training, and subsampling.
Main concerns:
* The paper is not well written. While it motivates the problem well, the intuition behind the design choices is not clear and needs to be explained better.
* The experiment section needs to be improved. I highly recommend adding more experiments and baselines. Much related work is missing from this paper, for example [1] and [2]. [1] Bahare Fatemi, Perouz Taslakian, David Vazquez, and David Poole. 2021. Knowledge hypergraphs: prediction beyond binary relations. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI'20). [2] Zirui Chen, Xin Wang, Chenxu Wang, and Jianxin Li. 2022. Explainable Link Prediction in Knowledge Hypergraphs. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management (CIKM '22).
* Since one of the contributions of this paper is reducing time and memory, such claims need to be proved and shown by analyzing the algorithms. The experimental numbers alone are not enough to make such claims.
* Details of the datasets used in the paper should be included. Furthermore, since the paper claims to target very large hypergraphs, such datasets should be added to the experimental results.
<doc-sep>The paper describes SpaLoc, a novel graph machine learning method designed to learn from hypergraphs to predict missing links between nodes. The paper is well organized and reads well. The research problem is pertinent to the scope of the conference and it is well explained - I appreciate papers that explore learning from hypergraphs. The authors provide a reasonable overview of the literature landscape, although I would also include NodePiece [Galkin et al.]. The novel contribution (the SpaLoc model) is sufficient to meet the acceptance bar. The intuition of leveraging the sparsity and locality of the rules to be induced from data is reasonable, and SpaLoc's architecture implements such ideas in a well-grounded manner. A note on SpaLoc scalability: sure, the method scales better than other direct competitors, but it is important to underline that all datasets used in the paper are toy datasets (see my question on training time on transductive link prediction datasets). I have some questions on the evaluation section; perhaps the authors can clarify in the rebuttal:
1) There is a blind spot in the comparison with third-party methods, which the authors should address, and that is the comparison against methods for multi-hop query answering on incomplete knowledge graphs (such as Arakelyan et al.'s CQD or Query2Box). The "maternal great uncle" example in line 303 could be addressed by these methods as well, and a comparison would be interesting. This could be done at least in a qualitative way.
2) On the transductive link prediction experiments, could you clarify what arity hyperparameter value you used to train SpaLoc for the results in Table 5? If you did not remove hyperedges (360-360), does it mean that the benchmark training sets have been turned into hypergraphs?
(All three benchmark datasets are regular knowledge graphs.) I'd just need a clarification on this point, to make sure the protocol you followed was fair to the transductive baselines.
3) Why not adopt the conventional transductive link prediction protocol instead (a sketch of this protocol is given after this paper's reviews)? It is hard to tell if SpaLoc really outperforms the baselines under these conditions (e.g., a very low number of synthetic negatives per positive). Besides, reporting other metrics such as MR, MRR, and Hits@n for n=1,3 would also help.
4) I could not find the hyperparameter values and ranges of the baselines or SpaLoc. It is important to make sure these values guarantee a fair comparison across the board, as transductive methods are indeed sensitive to these values (see also Ruffinelli et al., ICLR 2020).
5) What is the time to train the model on the transductive link prediction benchmark datasets? Is it worth adopting SpaLoc in practice, or do KGE baselines still outperform the method in terms of running time?
6) On the inductive link prediction experiment: NodePiece and the relational GCN (R-GCN) would also be two good baselines to compare against (NodePiece is also interesting to compare in terms of memory footprint, as it does not learn representations for each node of the graph either).
All in all, a well-written paper, with a sufficiently good original contribution to an interesting problem. I have some doubts about the impact of the method (see my questions above), but I am happy to keep my positive score if the rebuttal and discussion phase address the concerns I have listed above.
<doc-sep>The paper addresses the problem of predicting relationships between entities in a knowledge graph. Knowledge graphs are sometimes also called heterogeneous information graphs or multi-relational graphs. More specifically, the paper's main focus is on making neural networks for hypergraphs more scalable by inducing and taking advantage of sparsity, as well as the locality of the computation, which is typically (empirically) sufficient for the reasoning task under consideration. For large graphs, the paper overcomes the problem of intractable inference. The authors describe their approach using Neural Logic Machines as the base model but also describe how their ideas can be applied to other hypergraph representation learning methods such as k-GNNs. At the core of their approach is a second loss term which regularizes the network towards sparse hypergraph representations. Another contribution is a subgraph sampling strategy which improves the efficiency of the method during training, when hypergraphs might not (yet) be sparse. The experiments are extensive and indicate that SpaLoc might achieve higher evaluation metric scores. A shortcoming, which most link prediction methods have in common, is that no variance is reported for the results. Since this is (unfortunately) a community standard, I don't want to hold it against the authors' method and setup. A more problematic issue I see with the experiments is that the evaluation appears non-standard and uses randomly chosen negative samples. Compared to the standard evaluation procedure used in the knowledge base completion literature, which always compares all possible substitutions in a triple, this leads to a much simpler task. I've reviewed this paper before, and despite several reviewers mentioning the non-standard evaluation setup, it seems the authors used the same evaluation in the resubmission. What I would also continue to work on is the presentation of the method in Section 3.
There are several paragraphs and subsections, but what I am missing is how these pieces fit together and a coherent story that enables the reader to understand how these parts build a whole. Some sentences between subsections connecting the various contributions should be added.
<doc-sep>Summary: The paper introduces a framework for learning and inference in large-scale hypergraph domains. A sparse representation of hypergraphs and the corresponding neural modules are discussed, together with a sparsifying regularization loss. Locality arguments are introduced, which allow the method to scale by exploiting sub-sampling techniques for the input hypergraph. Experiments show that the proposed method outperforms other inductive hypergraph techniques and transductive knowledge graph embedding techniques.
Strong points:
- The method proposes a sound solution for scaling hypergraph learning and inference tasks.
- The paper is clearly written.
Weak points:
- The main message of the paper is not clear (is it the architecture? the scaling strategy?).
- Consequently, the related work discussion and the competitors are not selected accordingly.
For these reasons, I recommend a weak reject. I think the paper proposes a sound and meaningful strategy, but it is hard to measure its novelty from the current shape of the paper and the experiments.
DETAILED FEEDBACK:
- POSITIONING: The paper is a bit unclear about the main message of the work. The abstract and introduction often mention "rule learning" or "rule that explains", which could give a completely wrong impression of what the paper is eventually about. The same holds for the term "reasoning networks", which, without a clear definition, recalls standard reasoning techniques. Instead, for me the paper is much more focused on scaling hypergraph neural networks. And here comes my first doubt: from the current paper (both related work and experiments) it is hard (if not impossible) to position the proposed method in the literature, because many of the competitors do not focus on scaling and are not scaling/sub-sampling strategies, but standard (hyper)graph representation techniques. The experiment that goes most in this direction is the ablation study with multiple sampling strategies. But again it is unclear what the novelty is there, as those techniques are taken from the literature.
- CONTRASTIVE COMPARISONS: Connected to the previous comment, it seems to me that many of the choices made are very interesting and sound, but not discussed and not compared critically with other solutions in the literature. For example, the "permutation" layer essentially builds a "propositionalized" representation of the tuple (a flat representation, with permutations explicitly encoded in the representation). I think this has to be a core part of this relational representation (which is in a certain sense the opposite of what GNNs tend to do), but it is hard to understand the reasons behind this choice.
- OTHER METHODS USING SAMPLING: Still connected to the previous points, other sampling methods and strategies are not discussed in enough detail. I think this is one of (if not the) main contributions of the paper: scalability is reached by means of sampling. While I am not an expert in the specific area (subsampling), many knowledge graph methods with logical rules (which create hyperedge features) are strongly related to the method, as they need (often locality-based) heuristics to scale to large graphs. For example:
UniKER: A Unified Framework for Combining Embedding and Definite Horn Rule Reasoning for Knowledge Graph Inference, EMNLP 2021, and some of the references in the paper. In that paper, Horn rules (which often take the shape of typed paths) are extracted first and then used to sample training data. Sparsifying heuristics are also used during training (thresholds on confidence levels).
QUESTIONS:
1) What is the main contribution of the paper?
2) Why is it novel, and which gap in the literature is it filling?
3) Are there other methods using sampling / local heuristics? How does the proposed method compare to them?
SUGGESTIONS FOR IMPROVEMENT: I really believe that the paper makes a nice contribution, but it may require a bit more work. Two directions seem most important to me: 1) the paper must be positioned better, so as to clearly underline the main novelty (or novelties); 2) comparisons (both discussions and experiments) must concern closer methods (i.e., other scalable (hyper)graph techniques).
UPDATE: I thank the authors for their response. I would like to raise my score to weak accept, as the authors made an effort to better position their work with respect to the literature.
| In light of the discussion and the rebuttal, we find that this paper can be accepted. We strongly encourage the authors to address the clarity and accessibility suggestions raised by the reviewers in their revision.
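Since several reviewers question the evaluation setup, here is a minimal sketch of the conventional filtered ranking protocol for transductive link prediction referred to in question 3 above: for each test triple the true tail is ranked against all candidate entities, other known-true triples are filtered out, and MR, MRR, and Hits@k are reported. The scorer below is a random-embedding placeholder for whatever trained model is being evaluated, and all sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 200, 10, 32

# Placeholder scorer (DistMult-style with random embeddings); in a real comparison
# this would be the trained baseline or the evaluated model restricted to triples.
E = rng.normal(size=(num_entities, dim))
R = rng.normal(size=(num_relations, dim))

def score_tails(h, r):
    return (E[h] * R[r]) @ E.T          # score of (h, r, t) for every candidate tail t

# Known true triples (train + valid + test) used for filtering.
all_triples = {(int(h), int(r), int(t))
               for h, r, t in zip(rng.integers(0, num_entities, 1000),
                                  rng.integers(0, num_relations, 1000),
                                  rng.integers(0, num_entities, 1000))}
test_triples = list(all_triples)[:100]

ranks = []
for h, r, t in test_triples:
    scores = score_tails(h, r).copy()
    # Filtered setting: mask every other known-true tail so it cannot outrank t.
    for t_other in range(num_entities):
        if t_other != t and (h, r, t_other) in all_triples:
            scores[t_other] = -np.inf
    # Rank of the true tail (ties broken optimistically in this sketch).
    ranks.append(int((scores > scores[t]).sum()) + 1)
    # The full protocol repeats the same ranking for head prediction.

ranks = np.array(ranks, dtype=float)
print("MR :", ranks.mean())
print("MRR:", (1.0 / ranks).mean())
for k in (1, 3, 10):
    print(f"Hits@{k}:", (ranks <= k).mean())
```

The key difference from an evaluation with a handful of random negatives is that every entity competes as a candidate, which makes the ranking task markedly harder and the metrics comparable across papers.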
The paper under review studies the question of whether gradient descent can solve the problem of calibrating a deep neural network for separating two submanifolds of the sphere. The problem studied in the paper is very interesting and has been the subject of increasing recent interest in the machine learning community. The contribution is restricted to a simple setup and addresses the question in the finite-sample regime. The framework of the analysis hinges on the Neural Tangent Kernel approximation of Jacot et al. This extremely technical paper is intimidating, and I wonder how many readers will actually read the proofs. Moreover, one may wonder if the NTK approach precludes a deeper understanding of the actual performance of the network under consideration.
<doc-sep>The paper analyzes deep networks in terms of the multiple manifold problem. However, it needs easier explanations for a wider audience. Moreover, I think the article was written as a book rather than a conference paper, considering its 245 pages. What would be the actual benefits of the analysis? The paper discusses the one-dimensional case, and I was wondering if the multiple manifold problem in one dimension is enough. If I understand correctly, this one-dimensional manifold is supposed to represent one class in the last layer. Some notations are hard to follow, especially in Section 3.1. Actually, some of them are used without a description.
<doc-sep>The paper studies the conditions for a deep fully-connected network to separate low-dimensional data classes. A binary classification setting is considered, where the two classes are modelled as two different manifolds. The manifolds are assumed to be one-dimensional for ease of analysis. It is shown that the network depth should be sufficiently large so as to adapt to the geometrical properties of the data (e.g., the manifold curvature); the network width should increase polynomially with the network depth; and the number of data samples should also scale at a polynomial rate with the network depth. The authors show that if these conditions are met, with high probability a randomly initialized network converges to a classifier that separates the two class manifolds. The proof technique relies on conditioning the network parameters of the l-th layer on the parameters of the previous layers using a martingale model, which gives sharp concentration guarantees. The paper is rigorous and well written, and very detailed proofs have been provided for the presented results. The authors may still find it useful to address the following issues:
1. The separation Delta between the two manifolds seems to be a critical parameter in the proposed performance bounds. However, the definition of Delta in Section 3.1 is not very clear. In particular, what does the notation \angle(x, x') mean? Does this mean Euclidean distance, or an angle as the notation suggests? If it means an angle, please explain whether this refers to the angle of points on the sphere, or the angle measured on the tangent space of the Riemannian manifold, etc. (A small numerical sketch of one possible reading is given after this paper's reviews.)
2. Although I very much like the analysis proposed by the authors, some of the main assumptions regarding the studied setting may be too restrictive in practice:
a. The proposed bounds apply only when the separation parameter Delta is positive. In a practical application, the manifolds representing different classes may be (asymptotically) intersecting due to the degeneracy of image transformations/articulations in some manifold regions, e.g.,
extreme illumination conditions, scale changes, etc.
b. The assumption that the data samples lie exactly on a Riemannian manifold also seems to be restrictive. A probabilistic model for the data samples, e.g., involving a noise model capturing the deviations of the points from the manifold, would have been more realistic. Would the theoretical analysis be completely intractable under such extensions, or could that be a potential future direction of the study? It would be nice if the authors could comment on such issues.
<doc-sep>The authors consider a binary classification task. As a model, the authors use a deep fully-connected neural network and train it to separate the submanifolds representing the different classes. They assume that the submanifolds lie on the unit sphere. Also, the authors restrict their analysis to a one-dimensional case. The main claim is that increasing depth can improve the generalization of a network trained by SGD. The proven result seems to be important for understanding the generalization ability of deep neural networks. While proving it, the authors used various important tools from the martingale concentration approach. However, the paper is not well suited for a venue such as ICLR. Moreover, I am sure that the proof itself contains some parts that are of separate interest to the community. I would propose to divide the paper into several parts, so that some of them present general results about approaches to proving concentration inequalities, which are of broad interest and can be used elsewhere. Also, I would expect more comments on which of the tools used for proving the main results are completely new and how they can be used to establish similar results for other network architectures. Also, I would expect some comments on how, in principle, the restriction that the submanifolds lie on the unit sphere could be removed. More discussion of how the proposed setup relates to the setup considered in the paper of Goldt (2019) is also needed.
======== After reading the authors' comments: In principle, I tried to understand the main steps of the proof, and they look OK, although I cannot verify the details of the proof. I still think that the ICLR venue is not suitable for such long submissions. In principle, I am OK with increasing my grade by one point.
| This paper introduces the multiple manifold problem: in a simple setting there are two data manifolds representing the positive and negative samples, and the goal is to train a neural network (or any predictor) that separates these two manifolds. The paper shows that this is possible with a deep neural network under certain assumptions, notably on the shape of the manifold and also on the ability of the neural network to represent certain functions (which is harder to verify, and only verified for a 1-d case in the paper). The optimization of the neural network falls in the NTK regime but requires new techniques. Overall the question seems very natural and the results are reasonable first steps. There are some concerns about clarity that the authors should address in the paper.
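For the notation question raised in point 1 above, here is a small numerical sketch of one plausible reading of \angle(x, x'), namely the angle between unit vectors (the geodesic distance on the sphere), with the separation Delta taken as the smallest such angle between points of the two manifolds. The curves below are illustrative stand-ins, not the manifolds considered in the paper, and the reading itself is an assumption rather than the paper's definition.

```python
import numpy as np

def angle(x, y):
    """Angle between two unit vectors, i.e. geodesic distance on the sphere."""
    return np.arccos(np.clip(x @ y, -1.0, 1.0))

# Two one-dimensional curves on the unit sphere in R^3, stand-ins for the
# positive and negative class manifolds (the parameterization is illustrative).
def curve(t, tilt):
    pts = np.stack([np.cos(2 * np.pi * t),
                    np.sin(2 * np.pi * t),
                    tilt * np.ones_like(t)], axis=1)
    return pts / np.linalg.norm(pts, axis=1, keepdims=True)

t = np.linspace(0.0, 1.0, 500)
M_plus, M_minus = curve(t, 0.3), curve(t, -0.3)

print("angle between two sample points:", angle(M_plus[0], M_minus[0]))

# Separation Delta = smallest angle between any point of M_plus and any point
# of M_minus, estimated here on dense samples of the two curves.
gram = np.clip(M_plus @ M_minus.T, -1.0, 1.0)
delta = np.arccos(gram).min()
print("estimated separation Delta (radians):", delta)
```

Under this reading, Delta > 0 simply means the two curves never touch on the sphere; the reviewer's question is whether the paper intends this angular distance or the Euclidean (chordal) one.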
The authors present an interesting and important direction: searching for better network architectures using a genetic algorithm. Performance on the benchmark datasets seems solid. Moreover, the learned insights described in Section 4.4 would be very helpful for many researchers. However, the overall paper needs to be polished more. There are too many typos and errors, which imply that the manuscript was not carefully polished. Explanations of terms like growth rate, population, etc. are necessary for a broader audience. More importantly, some of the step jumps in Figures 6-9 are suspicious: all the step jumps happen at the same number of steps, which coincides with the learning-rate changes described in Section 4.2. A clear explanation of that phenomenon is required.
* Details
- Please represent the blocks (e.g., 1*1 conv) better. The current representation is quite confusing to read. Proper spacing and a different font style may help.
- On page 5, "C_{m}ax" is a typo. It should be "C_{max}".
- Regarding C_max, does sum(C_max) represent (D * W)^2, where D is the total depth and W is the total number of indices in each layer? If so, specifying this will help. Otherwise, please explain its meaning clearly.
- In Figure 4(a), it would be better to reuse the M_{d,w} notation instead of Module {d_w}.
- Please briefly explain or provide references for terms like "growth rate", "population", and "individuals" (a small sketch of these genetic-algorithm terms is given after this paper's reviews).
- Different mutations may favor different hyper-parameters. It would be useful to know how the authors control hyperparameters other than the number of epochs.
- Even though the sparse connections are enforced for stated reasons, the reduction in overfitting, variance, or any other benefit that the slim structure can bring has not been evaluated. These need to be presented to verify the hypothesis that the authors claim.
<doc-sep>The problem is of increasing practical interest and importance. The ablation study on the contribution and effects of each constituent part is a strong part of the experiment section and the paper. One major concern is the novelty of the work. There are many similar works under the umbrella of Neural Architecture Search that try to connect different building blocks (modules) to build larger CNNs. One example that explicitly makes sparse connections between them is [1]. Other examples of very similar works are [2,3,4]. The presentation of the paper can be improved a lot. In its current form it reads like a collection of ideas, tricks, and techniques combined together. There are some typos and errors in the writing; a thorough grammatical proofreading is necessary. The conclusion claims to tackle overfitting, but this is not well supported or discussed in the experiments. [1] Shazeer, Noam, et al. "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer." arXiv preprint arXiv:1701.06538 (2017). [2] Xie, Lingxi, and Alan L. Yuille. "Genetic CNN." ICCV 2017. [3] Real, Esteban, et al. "Large-scale evolution of image classifiers." arXiv preprint arXiv:1703.01041 (2017). [4] Liu, Hanxiao, et al. "Hierarchical representations for efficient architecture search." arXiv preprint arXiv:1711.00436 (2017).
<doc-sep>The authors bridge two components (the density of CNNs and sparse structures) by proposing a new network structure with locally dense yet externally sparse connections.
+ The combination of dense and sparse structure is an interesting area.
- Although the experimental results demonstrate that evolving sparse connections can reach competitive results, it would be interesting to show how separating a network into several small networks is useful, for example for the interpretability of deep neural networks. There is an interesting related work: "Using deep learning to model the hierarchical structure and function of a cell" https://www.nature.com/articles/nmeth.4627
| This paper proposes a genetic algorithm to search for neural network architectures with locally dense and globally sparse connections. A population-based genetic algorithm is used to find the sparse connections between dense module units. The locally dense but globally sparse architecture is an interesting idea, yet it is not well studied in the current version, e.g., regarding overfitting and connections with other similar architecture search methods. Based on the reviewers' ratings (5, 5, 6), the current version of the paper is rated as a borderline lean reject.
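To make the genetic-algorithm vocabulary the first reviewer asks about concrete ("population", "individuals", mutation), here is a minimal sketch of a population-based search over sparse inter-module connection masks. This is not the paper's algorithm: the fitness function is a placeholder for training a candidate network and measuring validation accuracy, and all constants and names are illustrative. "Growth rate" is not modelled here; it usually refers to the number of feature maps each layer inside a dense module adds (DenseNet terminology).

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_MODULES = 8          # locally dense modules to be wired together
POP_SIZE    = 20         # "population": number of candidate architectures
GENERATIONS = 30
MUTATION_P  = 0.05       # per-connection flip probability during mutation

def random_individual():
    """An "individual" = a sparse binary mask over inter-module connections."""
    return (rng.random((NUM_MODULES, NUM_MODULES)) < 0.2).astype(int)

def fitness(mask):
    # Placeholder: in the paper this would be validation accuracy after training
    # the candidate network; here we just reward moderate sparsity, plus noise.
    density = mask.mean()
    return -abs(density - 0.15) + 0.01 * rng.normal()

def mutate(mask):
    # Flip each connection bit independently with small probability.
    flips = rng.random(mask.shape) < MUTATION_P
    return np.where(flips, 1 - mask, mask)

population = [random_individual() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]                 # selection: keep the best half
    children = [mutate(p) for p in parents]          # variation: mutated copies
    population = parents + children                  # next generation

best = max(population, key=fitness)
print("best connection density:", best.mean())
```

The sketch only illustrates the loop structure (population, selection, mutation across generations); the paper's method additionally trains every candidate, controls training epochs per generation, and applies further operators and constraints.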