In this paper, the authors investigate how to utilize large-scale human video to train dexterous robot manipulation skills. To leverage the information from Internet videos, the authors propose a handful of techniques to pre-process the video data and extract the action information. The network is then trained on the extracted hand data and deployed to the real robot, with some human demonstrations collected by teleoperation for fine-tuning. Experiments show that the proposed pipeline can solve multiple manipulation tasks. **Strength** - The direction explored in this paper is important. Utilizing internet video data for robot learning is well motivated. Especially considering the similarity between human hands and multi-fingered robot hands, this direction looks very promising. - The authors perform experiments on multiple real-world tasks involving pick-and-place, pushing, and rotating objects. **Weakness** - Although the objective of this paper is very impressive, the experiments cannot support the introduction and there are multiple overclaims. - Section 4 is titled VideoDex: Learning Dexterity from YouTube. However, I cannot find any evidence that the authors utilize YouTube data for learning dexterous manipulation. As mentioned in the section on Retargeting Wrist Pose, ORB-SLAM and the camera's acceleration data are used to compute the camera pose trajectory. This information is not readily available in YouTube data. The experiments and methods are misaligned with this claim. - In the introduction, line 42, the authors mention that their key insight is to combine these visual and action priors from passive data with the physical constraints of how robots should move in the world. However, the method does not consider the surroundings of the human hand, and the detection results themselves are not accurate. How is physical information incorporated into the training data? - Missing literature discussion on previous learning-from-video works: [1] *DexMV: Imitation Learning for Dexterous Manipulation from Human Videos, 2021*: This paper also focuses on how to learn dexterous manipulation from human videos. The reviewer understands that this prior paper uses simulated tasks while the authors focus on real-robot settings, but it seems that a similar pipeline is also used in this paper: estimating the human hand, retargeting, and learning from the retargeted hand pose. [2] *The Surprising Effectiveness of Representation Learning for Visual Imitation, 2021*: This paper also focuses on how to leverage video data for better learning. It also uses a GoPro camera to collect a video of each trajectory, the same as the Ego4D dataset used in this paper. It shows that by learning from this video data, the final manipulation performance can be improved a lot. These prior works use very similar methods to achieve robot learning, and the novelty claims of this paper can also be found in this literature. - Missing details for Retargeting Wrist Pose: The detection module FrankMocap is a 2D hand detector, so it is not clear how the authors can get 3D keypoints of the hand model in the camera frame. Also, since this section is important to the whole technical approach, it would be better to provide a visualization of the final retargeted robot. The hand wrist pose and robot arm should also be visualized in Figure 3 if they are used in the training. If the wrist pose and arm joint pose are not used, how is the action prior pretrained?
- Missing details about transforms: In the equation, it is not clear why the authors use T and M to denote poses simultaneously. What are the differences? If M is also an $SE(3)$ transformation, how is the position part of $M_{World}^{C_1}$ computed? Besides, the reviewer cannot find any information about how $T_{Robot}^{World}$ is determined heuristically in either the main paper or the supplementary material. <doc-sep>The authors demonstrate a system in which they combine a few different components to get interesting supervised-learned open loop behavior of real robot hands doing several different tasks. In particular, the most notable part of the approach is using videos of human hands as an “action prior” which informs their supervised mapping. # Strengths - Good core idea. The overall idea of using action priors from human videos, via hand tracking, to make robots work better, is a good idea. There are a lot of closely related works, but I think they are well referenced in this paper. - Good execution on several key parts. The execution details of handling moving cameras with camera pose tracking, together with per-frame hand tracking, seem to be well done. I also like just using R3M features out of the box; this is smart, and it is interesting to see external validation. - Results of real robots with hands doing a variety of things. # Weaknesses There are various unscientific elements of this paper in its current form. While the work is interesting, I can’t recommend a strong accept for a paper in this form. Hopefully the list below will help the authors improve both this work and their future work. If the authors can address all of the following weaknesses in their rebuttal, which I think is all doable and within scope to do in a rebuttal, I’d be happy to move from weak accept to strong accept. 1. It seems like the authors are not very upfront about the fact that this method does not produce closed loop policies. Only on the last page or two is it mentioned that the whole method is open loop. It is fine to study the task of (i) inputting an image of a scene and (ii) outputting an open loop trajectory, but it is of course very limiting. The tasks are carefully chosen such that they don’t require any closed loop feedback. This aspect of their approach is not what most researchers in the field would expect… so a common experience of a researcher would be to look over the first handful of pages of this paper, and only at the last page or so realize that this is an open loop method. Please just make this clear up front. 2. Several false statements in the introduction: - “To build such robotic agents that can operate anywhere, we need access to a lot of successful robot interaction data in many environments.” —> not necessarily true… This is a reasonable hypothesis, but one that isn’t tested in this paper, and it can’t be stated as a fact. - “However, deploying inexperienced real world robots to collect experience must require constant supervision which is infeasible.” —> also not necessarily true… but also a very reasonable hypothesis. Just need to say “may require” instead. - “Most of the inefficiency in robot learning is due to the exponentially large action space.” —> an opinion, and can’t be stated as fact. 3. “NDPs can produce safe and smooth trajectories” … yes, but this is a meaningless statement. They *can* also produce trajectories that are completely unsafe. There is nothing about NDPs/DMPs that provides safety other than a bit of smoothness that may arguably help. 
But there is nothing that helps here with the presence of obstacles in the environment, or humans, etc. This statement probably only serves to confuse/mislead inexperienced readers, please remove/fix. 4. The paper mentions a “physical” prior as a key component, but this is just that it uses Dynamic Movement Primitives it seems. I’m not sure this is the best way to communicate this. Line 191 also says physically-aware NDPs… they don’t know anything about contact physics… maybe just say second order system or dynamical system or something, maybe physically-inspired, but not physically-aware. And whenever it says, for example line 269, “baselines without a physical prior” it should just be instead clear that this just means they don’t use DMPs. 5. Line 213 “ is VideoDex able to perform general purpose manipulation?” Since the method is open loop, the answer is no. That’s fine, and the results are still impressive, but should be clarified… this is not something that needs to be empirically evaluated, it’s just a result of the formulation. 6. It’s very confusing that citation 44 is used open loop… this isn’t an intention of the method. Also, is the RNN version closed loop over time? It’s not clear. And if it’s not? … I’m not sure how the RNN would be any different if it’s not used sequentially over time. 7. Please state exactly how many demonstrations were used for the different experiments. 8. In the conclusion… “ this is because training RL in the real world is difficult due to hardware limitations.” Yes, but this isn’t reason to make the used behavior cloning method open loop instead of closed loop. ## Minor Don’t worry about these too much but I mention these as opportunities to improve the paper further. - Ego4D is not cited on page 2 (mentioned but not cited) - HR() is not defined in an equation. Also, I would recommend not using two letters for a math symbol… it looks like a matrix H multiplied by a matrix R - Why use ORBSLAM3 rather than COLMAP for the poses? Already running colmap for the calibration. <doc-sep>VideoDex pretrains a policy network with videos, with gyroscope and accelerometer data, of humans performing a task, then fine-tunes with demonstrating trajectories collected by teleoperating the robot. In order to train with the human data, they use the approach from [49] for mapping human pose to robot pose and use ORBSLAM3[55] to account for the camera motion. They feed the image data, labeled with the outputted pose, into a ResNet18[15] backbone initialized with R3M's[6] features and use a Neural Dynamic Policy (NDP) [13] network to generate actions. The paper demonstrates that using human data allows improved performance on 6/7 tasks. Pros The paper presents a theoretically simple method of learning from videos of humans. The method is demonstrated on 7 different tasks, outperforming the baselines without human data on 6 of them. Cons The writing of the paper is somewhat scattered. The analysis of why the proposed approach using NDP rather than a MLP works better with human data could be stronger. The paper needs to be much clearer that it relies on gyroscope and accelerometer data from the human videos, which is a barrier to truly using internet-scale data.
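To make the frame-composition questions raised above concrete (how a wrist pose detected in a moving camera's frame could end up expressed in the robot's frame), here is a minimal sketch using homogeneous 4x4 transforms. This is a hedged illustration, not the paper's actual notation or implementation: the variable names, the `T_a_b` = "pose of frame b expressed in frame a" convention, and all numeric values are assumptions.

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical inputs:
# T_world_cam : camera pose in the world frame (e.g. from SLAM / visual odometry)
# T_cam_wrist : wrist pose detected in the camera frame (e.g. from a hand-pose model)
T_world_cam = se3(np.eye(3), np.array([0.0, 0.0, 1.5]))
T_cam_wrist = se3(np.eye(3), np.array([0.1, -0.2, 0.6]))

# Wrist pose in the world frame is a composition of the two transforms.
T_world_wrist = T_world_cam @ T_cam_wrist

# If the robot base pose in the world frame is known (here set heuristically),
# the wrist target can be re-expressed in the robot frame by inverting that pose.
T_world_robot = se3(np.eye(3), np.array([0.5, 0.0, 0.0]))
T_robot_wrist = np.linalg.inv(T_world_robot) @ T_world_wrist

print(T_robot_wrist[:3, 3])  # position of the wrist target in the robot frame
```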
This paper studies how to learn dexterous manipulation from human videos. In the initial reviews, the reviewers appreciated the direction and the real-world experiments but also raised concerns about the need for a special sensor for tracking. During the rebuttal, the authors effectively addressed this concern by providing additional experimental results, and the reviewers were satisfied with the response. The AC recommends acceptance of this paper.
**Summary of contributions:** This paper proposes a new framework for designing new losses for GANs. The authors show that their framework is quite general and encompasses a number of existing approaches (e.g., the original GAN formulation, hinge loss, etc.); they also propose a categorization into three different classes and derive new loss functions. They then experimentally compare the different existing losses and the newly proposed losses that fall under their framework. **Main comment**: The framework proposed in the paper is interesting since it's quite general and the authors are able to derive a large number of existing as well as new losses from it. However, I think the framework has several limitations: 1. The formulation is based on the likelihood ratio, which is only defined if the supports of $g$ and $f$ match; this is known not to be the case in the context of GANs. 2. The benefit of the framework is not clear: while it provides a way to derive new losses, it's not clear what the advantages of these new losses are. Theoretically, the authors argue that this is a hard question to answer, and I agree. The authors try to answer this question through experiments, but I find the experiments not very convincing. In particular, the authors argue that subclass A objectives are more stable based on the CelebA experiment; however, it's not clear to me that the instability is due to a specific choice of objective function, and it might just be that the hyperparameters were slightly off for the other objectives. I believe it would be interesting to understand the results on CelebA better, in particular to show that some objectives are indeed more stable: the authors could vary several hyperparameters and compare how often each objective is better than the others, which would make the results and conclusions much more convincing. *Minor comment*: The paper is overall clear but the clarity of some sections could be improved. I think Theorem 1 would be clearer if stated a bit differently, simply saying that $D=\omega(r)$ maximizes $\phi(D)+r\psi(D)$ and that $r=1$ minimizes $\phi(\omega(r))+r\psi(\omega(r))$. Section 3 is a bit dense, and the subclasses also seem a bit arbitrary. I believe Section 5 could be improved by stating the different observations more clearly; right now it reads more like a description of the figures than a clear statement of the questions that the experiments try to answer and how they answer them. <doc-sep>This paper generalizes the min-max problem of GANs to form a richer family of generative adversarial networks. Interestingly, most of the well-known variants of GANs can be found in the spectrum of formulations covered by the family proposed in this work. In terms of modeling, it is evident that the family proposed in the paper is richer than that of f-GAN. The family in this paper is shown to have a connection to WGAN except that the Lipschitz condition is omitted. However, in light of existing works including f-GAN and other relevant works, the obtained theoretical results are not surprising to me. In addition, apart from providing a richer family, this work does not significantly influence the practical aspects of GANs. I have the following questions: 1. If we solve the min-max problem in (2) subject to the fact that $\phi$ and $\psi$ satisfy Eq. (9), is it equivalent to minimizing some divergence between two distributions with pdfs f and g? 2. D(x) is not a typical discriminator whose values lie in [0,1] and provide the probability of distinguishing true and fake data, is it? 
D is more similar to a critic whose output values are real-valued, is it not?<doc-sep>Summary ======== In this paper, the authors set out to find what scalar functions can make up the “max” part of the “min-max” GAN objective. They then find such a class of functions, and show that only a ratio between two equal probabilities will be admitted as a solution. Pros: ==== The paper nicely introduces a different way of seeing GANs, not as a difference between the generated and real data, but as an integral of the ratio between the generated and real distributions times the discriminator. Only if the ratio is 1 everywhere is the discriminator unable to maximize the max part of the GAN objective. Further, I liked the idea that the discriminator shouldn’t just decide what class data belongs to, but also estimate the probability ratio. Specifically, in the formulation here, the max part is maximized when $D(X) = \omega(r(X))$, so maximized iff $\omega^{-1}(D(x))$ doesn’t just classify, but gives the probability ratio between the two classes. If this idea is expanded upon, I think the authors could make a novel contribution. Cons: ===== Unfortunately, the authors have neglected to carefully explain how their contribution relates to previous work. It’s telling that the paper cites only two papers from 2018, one from 2019 and none from 2020. All other citations are from previous years, even though 2018-2020 has been a time of much GAN research. A key way in which the authors’ work hasn’t been sufficiently compared to previous work is with their main claim “We propose a simple methodology for constructing such [min-max] problems assuring, at the same time, consistency of the corresponding solution.” In [Liu], they show a class of functions where consistency is also guaranteed, and the class shown by the authors here is a subset of the class in [Liu]. The details are at the bottom of my review. Further, many of the techniques in this paper seem very similar to [Song], where they also investigate the f*-GAN divergence. Specifically, the claims they make in Theorem 1 seem very similar to Prop. 2 in [Song]. Also, the change of measure trick in the introduction can be found in [Song]. A detailed comparison of this work to that work would also be helpful, since when reading this paper one simply doesn’t know what is previous work which has already been done by others and what is the authors’ novel contribution. Once the authors address this, and one is confident the contribution is indeed novel, then the submission would be worth considering. Details of why this is a subset of what’s already been shown in [Liu]: There, they examine the difference between the target density $d$ (in this paper $d$ is $f$, but Liu uses $f$ for something else) and the generated density $g$ via $\sup_{f\in\mathcal{F}}\mathbb{E}_{x\sim d,y\sim g}[f(x,y)]$, so we find the function $f$ in a class $\mathcal{F}$ which maximally separates the samples from $d$ and $g$. Now this work proposes to do the same thing, but with $f(x,y)=\phi(D(x)) - \psi(D(y))$ where $\phi(z) = -\int_{\omega^{-1}(0)}^z \omega^{-1}(t)p(t)\,dt + C_1$ and $\psi(z)=\int_{\omega^{-1}(0)}^z p(t)\,dt + C_2$. In [Liu] they then split $f(x,y)$ up into two functions $m$ and $r$, such that $f(x,y)=m(x,y) - r(x,y)$ where $m(x,y)$ has the form $m(x,y)=v(x)-v(y)$. 
This can be done in your case too, resulting in (here we drop the constants $C_1$ and $C_2$ for simplicity) $v(x) = \int_{\omega^{-1}(0)}^{D(x)} p(t)\,dt$, $v(y) = \int_{\omega^{-1}(0)}^{D(y)} p(t)\,dt$ and $r(x,y) = \int_{\omega^{-1}(0)}^{D(x)} (\omega^{-1}(t) + 1)\,p(t)\,dt$. Since $D(x)$ must be in $\mathcal{J}_\omega$, this integral has an infimum, and Theorem 4 from [Liu] can be applied to achieve the same results as in this paper. [Song] Song, Jiaming, and Stefano Ermon. "Bridging the Gap Between $f$-GANs and Wasserstein GANs." arXiv preprint arXiv:1910.09779 (2019). [Liu] Liu, Shuang, Olivier Bousquet, and Kamalika Chaudhuri. "Approximation and convergence properties of generative adversarial learning." Advances in Neural Information Processing Systems. 2017. <doc-sep>Overall, this paper contributes to understanding the core of generative models with adversarial optimization problems. This paper shows the diverse possibilities for formulating the generative model optimization problem that researchers can further investigate for better performance. Also, this paper shows that generative models with previously unexplored losses achieve the best results on various datasets, which demonstrates the possibility of future improvements of generative models. Overall, this paper is valuable to the machine learning community (especially for generative models and adversarial training). Below are some concerns about this paper, but those concerns are not bigger than its advantages. 1. Quantitative experiments - Although the authors provided two tables (Tables 2 and 3), there was not much analysis of the results. - I understand that it is not an easy problem to understand "when" we should use "which" function. However, it would be great if the authors could discover some trends in the results to demonstrate which types of functions work well with which types of datasets. - I think it would be great to use some synthetic data with known distributional characteristics as the target distribution in order to understand this point. 2. Other types of dataset - Generative models are widely utilized in computer vision. - However, there are various other types of datasets that can benefit from generative models, such as tabular data and time-series data. - It would be good if the authors could provide some simple experiments to demonstrate the method's generalizability. 3. Minor points - It is not clear how to transform between equations (3) and (4). I think this is a critical part of this paper; thus, it would be good to explain this part a little more. - The authors explain the differences between f-GAN and this paper. However, it is not very clear. It would be good to clarify this point to highlight the novelty of this paper. -------------------------- After reading other reviews and rebuttals --------------------- After reading all the reviews from other reviewers and the corresponding rebuttals, I think this is a good paper and good enough to be accepted at ICLR. 1. I think it has a clear difference from f-GAN. It can provide new loss functions for generative models, which can further extend the success of generative models in the future. 2. The experiments are not super interesting, but at least they provide some intuitions corresponding to the authors' claims. 3. General theoretical results for generative models (such as when we should use which loss) are a very difficult problem to solve. Maybe this paper can provide some intuitions for solving that larger problem. 
But it seems too much to ask this of the authors of this paper. Even without that, I think this paper is still worth presenting to ICLR readers and participants. Therefore, I am standing by my original score (7).
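As a reference point for the discussion above, the optimality conditions that the first reviewer proposes as a cleaner statement of Theorem 1 can be written as follows. This is a hedged restatement in the reviewers' notation, with $r = f/g$ the likelihood ratio; signs, constants, and domains may differ from the paper itself.

```latex
% Pointwise statement of the inner/outer optima, following the reviewers' notation:
\max_{D}\;\phi(D) + r\,\psi(D)
\quad\text{is attained at}\quad D = \omega(r),
\qquad
\min_{r}\;\phi(\omega(r)) + r\,\psi(\omega(r))
\quad\text{is attained at}\quad r = 1 .
```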
This paper proposed a new family of losses for GANs and showed that this family is quite general, encompassing a number of existing losses as well as some new loss functions. The paper experimentally compared the existing losses and the newly proposed losses. However, the benefit of this family is not theoretically clear, and the work also does not provide particularly helpful insights for the practical application of GANs.
This paper addresses the problem of MoE routing under different network topologies by adding another abstraction layer for the topology and designing an auxiliary objective to optimize. Experiments show very good improvement in terms of speed compared to strong baselines. Strength: 1. The paper offers an important contribution to the AI community at the system level, which is probably not easy to approach for many people working in this field. In fact, in my humble opinion, not many AI people have the opportunity to access detailed hardware information as cloud users of services such as Azure or AWS. 2. The experiments show very good improvement over strong baselines. The system analysis is clearly presented. Weakness 1. The paper addresses the system level. However, since it claims a significant boost in speed without sacrificing model accuracy, it needs to show the accuracy, e.g., at least the LM-related accuracy with NLP metrics. 2. Line 240, which claims "without loss of generality", is probably too strong. My suggestion is that, if the solution is good, the authors can, with the current hardware settings, run their current code on the many other applications for which code is available to further solidify their claims. 3. Likewise, why show the data dispatch distribution only for rank 0 and not for the other ranks? If space is limited, appendix space is always there. 4. In the era of GPUs and large data, motivating the work by demonstrating on only 128 MB of data is probably insufficient. At least several GB, or even better a combination of different types of data, would make a stronger motivation. 5. No code is provided. Maybe this is not very relevant since the paper addresses the system level, and thus it is hard to judge those impacts. <doc-sep>The paper proposes a new algorithm to improve the training efficiency of Mixture of Experts models in a distributed training setting by exploiting network topology information. To achieve this, the authors propose a new auxiliary loss term incorporating communication bandwidth to encourage tokens to be routed to closer nodes rather than farther nodes. By applying this new algorithm, the authors claim that they achieve faster throughput (1.01x - 4.77x) without losing accuracy on several different clusters. As a result, they show faster wall-clock time convergence. Communication overhead is one of the major issues for MoE model training, and this paper proposes a new method that deals with this problem naturally. Given the increased usage of MoE models, this is timely work. Having soft guidance seems like a good idea to avoid hurting the original training dynamics while encouraging locality of token routing. And, as the authors mention, there have not been topology-aware loss terms of this kind before, as far as I know. However, a few details about model configurations and algorithms are missing (asked in the question section), and the overall speed gain is minor. This paper focuses on the computation algorithm itself, so it might not have a direct societal impact. <doc-sep>Sparsely gated Mixture-of-Experts (MoE) plays a vital role in large-scale model training but suffers from both load imbalance and global communication. In addition, the existing even-dispatch approach may cause network contention and worsen the previous challenges. 
This work proposed a topology-aware large-scale MoE training method, called TA-MoE, that can adapt communication volume to fit the underlying network topology without interfering with model convergence. The key ideas are abstracting the dispatch problem as a communication cost optimization problem and then adding an auxiliary loss with pattern-related coefficients. Experiments show that TA-MoE provides up to 1.61x and 4.77x speedups over DeepSpeed-MoE and FastMoE, respectively, without accuracy loss. Strengths: + this work tackles a very significant and interesting challenge in MoE systems: network topology may worsen the communication and load-balance problems during dispatch in MoE. + the paper is well organized and easy to follow + the proposed TA-MoE method is simple and effective: extensive experiments show that TA-MoE is able to offer noticeable speedup over the state-of-the-art under different hardware and model configurations. Weaknesses: - the experiments are mostly done with GPT models; it would be better to have models with different neural architectures in the evaluation benchmark. It is unclear how TA-MoE works on MoE models other than GPT-based ones. The authors have adequately addressed the limitations and potential negative societal impact of their work.
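To illustrate what an auxiliary routing loss with communication-cost coefficients could look like, here is a minimal sketch. This is not TA-MoE's actual formulation: the cost matrix, the weighting, and the function names are all assumptions made for illustration.

```python
import torch

def topology_aux_loss(gate_probs: torch.Tensor, comm_cost: torch.Tensor) -> torch.Tensor:
    """
    gate_probs: (num_tokens, num_experts) softmax router outputs on this rank.
    comm_cost:  (num_experts,) relative cost of sending a token from this rank to the
                device hosting each expert (e.g. 1.0 intra-node, 4.0 inter-node).
    Returns a scalar penalty that grows when probability mass is routed to experts
    that are expensive to reach, nudging the router toward nearby experts.
    """
    expected_dispatch = gate_probs.mean(dim=0)   # expected fraction of tokens per expert
    return (expected_dispatch * comm_cost).sum()

# Hypothetical usage alongside the usual objectives:
# total_loss = task_loss + load_balance_loss + lambda_topo * topology_aux_loss(probs, cost)
```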
Mixture-of-Expert (MoE) models have demonstrated a lot of success recently. To further improve upon the existing literature, this paper studies MoE routing for different network topologies. This is essentially to deal with the communication overhead of MoE training. The strategy is to add another layer on top for the topology, along with a corresponding objective to optimize. The authors also provide experiments demonstrating improved speed of convergence. The reviewers were in general positive and liked the idea of the paper. The reviewers did, however, raise issues about the lack of a clear demonstration that accuracy is not compromised, the lack of large data, and a few other more technical concerns. The reviewers' concerns seem to have been more or less addressed by the authors. My overall assessment of the paper is positive. I think the general premise of the paper is interesting and the paper has interesting ideas. I do agree, however, that the experiments need to be more thorough. I am recommending acceptance but request that the authors follow the reviewers' comments to improve their experimental results.
This paper discusses applications of variants of RNNs and Gated CNNs to acoustic modeling in embedded speech recognition systems, and the main focus of the paper is computational (memory) efficiency when deploying the system. The paper describes well the problem of the current LSTM, especially focusing on the recurrent connection matrix operations, which are a bottleneck in this scenario, and introduces variants of RNNs (e.g., QRNN). These variants may not yield enough performance compared with the LSTM, but 1-D convolution and/or a deep structure helps to avoid the degradation. One of the biggest issues of this paper is that it uses CTC as the acoustic model, while many real speech recognition applications and major open-source toolkits (Kaldi) still use hybrid HMM/DNN (TDNN, LSTM, CNN, etc.) systems. Therefore, the paper's claims about CTC are not in line with current application trends. (This may change in the near future, but hybrid systems are still dominant.) For example, the WSJ WER performance listed in Table 3 is easily obtained by a simple feed-forward DNN in the hybrid system. The latest lattice-free MMI with TDNN can achieve better performance (~2.X% WER), and its decoding is quite fast compared with LSTMs. The authors should consider this current situation of state-of-the-art speech recognition. Also, the techniques described in the paper are all based on existing techniques, and the paper lacks technical novelty. Other comments: - in the Abstract and the first part of the Introduction: as I mentioned above, CTC-based character-prediction modeling is not a major acoustic model. - The paper needs some discussion of TDNN, which is a major acoustic model (fast and accurate) in Kaldi. - p.4 first line "and represents element-wise multiplication": The element-wise multiplication operation first appeared in Eq. (1), and it should be explained there. - Section 3.2: I actually don't fully understand the claims of this experiment based on TIMIT, as it is phoneme recognition and not directly related to the real application, which I think is the main target of this paper. My suggestion is to place these TIMIT-based experiments as preliminary experiments to investigate the variants of RNN or gated CNN before the WSJ experiments. (I am not saying that Section 3.2 is useless. This analysis is actually valuable, and the suggested change to the position of this TIMIT experiment can avoid some confusion about the main target of this paper.) <doc-sep>This paper presents a study on efficient acoustic modeling using neural-network-based models. Four approaches are presented and evaluated: diag LSTM, QRNN, Gated ConvNet, and adding a 1D convolution layer. The evaluation is done on an ASR task using WSJ and on a phoneme classification task using the TIMIT corpus. The study shows that the inference speed is improved with comparable or better performance than the standard LSTM model. The findings presented in this paper are interesting and quite useful when one wants to implement an LSTM-based acoustic model on mobile devices. The paper is well written and easy to read. The main issue of this paper is the lack of novelty: the three evaluated approaches (Diag LSTM, QRNN and Gated ConvNet) are not novel; the only novelty is the addition of a 1D convolution, which is not enough for a conference like ICLR. Minor comments on the experiments: * The network quantization approach has been shown to lead to efficient neural networks; could the authors provide a comparison between their approach and the quantization approach? 
* On the TIMIT experiment, the authors could add a decoder and use the PER metric instead of the frame accuracy, so they could provide comparison with the literature. * WSJ and TIMIT are quite small corpora compared to the available corpora, maybe the authors should consider using large corpora like Librispeech. It could be interesting to see the performance of the presented approaches. Overall, this paper feels more like a technical report: the findings could be useful, but its novelty is too limited for ICLR. Hence I argue for rejection, and suggest that the authors consider submitting the paper to a speech conference like ICASSP.<doc-sep>This paper investigates a number of techniques and neural network architectures for embedded acoustic modeling. The goal is to reduce the memory access and make efficient computation, in the meantime, to sustain good ASR performance. Overall, the paper is well motivated and well written. However, I have following concerns. 1. It is not clear from the paper whether both the training and inference are conducted on embedded devices or only the inference? I assume it is the latter but can't find it explicitly mentioned in the paper. 2. The exploration carried out in the paper is more on the system level and the novelty is not overwhelmingly significant. 3. My major concern is that the reported WERs on WSJ and phoneme classification accuracy are quite off. 20%-30% WERs for WSJ do not seem to be usable in real applications. Honestly, I don't even think this performance is better than well-trained GMM-HMM acoustic models using a Viterbi decoder. Furthermore, there is no clear winners across the investigated architectures in terms of performance. One question is if one wants to deploy such an on-device system, which architecture shall be chosen? 4. A more general comment on the work explored in the paper. First of all, the on-device memory issue puts a heavy constraint on the capacity of acoustic models, which will significantly hurt the modeling capability for the DNN-based acoustic models. Deep learning acoustic models can outperform GMM-HMM because they can use large model capacity with very deep and complex architectures when a large amount of training data is available. Second, for CTC, when the training data is limited, its performance is far worse than the hybrid DNN-HMM model, let alone a pure end-to-end fashion without using external LM and dictionary. If WFST-based decoders (composition of WFSTs of LM, dictionary and deblank/repetition) are used, then the memory issue will surface again.
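For context on why the "diag LSTM"-style variants discussed above reduce the memory-bandwidth bottleneck of the recurrent connection: the full hidden-to-hidden matrix product is replaced by an element-wise product. The sketch below is a simplified illustration (gate structure omitted, names and shapes assumed), not the exact cell from the paper.

```python
import numpy as np

def step_full(x, h, W_x, W_h, b):
    # Standard recurrent update: the h-to-h term is a full matrix product,
    # so W_h (hidden x hidden) must be streamed from memory at every timestep.
    return np.tanh(W_x @ x + W_h @ h + b)

def step_diag(x, h, W_x, w_h, b):
    # "Diagonal" variant: w_h is a vector and the recurrence is element-wise,
    # cutting recurrent parameters and memory traffic from O(H^2) to O(H).
    return np.tanh(W_x @ x + w_h * h + b)

H, D = 256, 128
x, h = np.random.randn(D), np.zeros(H)
W_x, b = np.random.randn(H, D), np.zeros(H)
h_full = step_full(x, h, W_x, np.random.randn(H, H), b)
h_diag = step_diag(x, h, W_x, np.random.randn(H), b)
```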
In this work, the authors conduct experiments using variants of RNNs and Gated CNNs on a speech recognition task, motivated by the goal of reducing the computational requirements when deploying these models on mobile devices. While this is an important concern for practical deployment of ASR systems, the main concern expressed by the reviewers is that the work lacks novelty. Further, the authors choose to investigate CTC-based systems which predict characters. These models are not state-of-the-art for ASR, and as such it is hard to judge the impact of this work on a state-of-the-art embedded ASR system. Finally, it would be beneficial to replicate the results on a much larger corpus such as Librispeech or Switchboard. Based on the unanimous decision from the reviewers, the AC agrees that the work, in the present form, should be rejected.
The authors introduce the problem of telegraphic summarization: given a sentence, we want to reduce its size while retaining its meaning, with no penalty for grammatical mistakes. The main application presented by the author is that of summarizing fictional stories and plays. The setting proposed by the author prescribes that the summarized sentence can be obtained by the input sentence by dropping some words. So, for example, the simplest baseline for this problem would consist of simply dropping stop words. The approach proposed is basically an auto-encoder, consisting of a 2-step encoder-decoder network: in the first step, the sentence is encoded into a vector which is in turn decoded to a (smooth) indicator vector to mask words in the sentence; in the second step, the masked sentence is encoded into a vector, which is in turn decoded into the output (summarized) sentence. The optimization is a tradeoff between recoverability of the input sentence and norm of the indicator vector (how many words are dropped). In order for the network not to learn repetitive masking patterns (eg, drop first half of the sentence, or drop every other word), an additional loss is introduced, that penalizes keeping easily inferable words or dropping hard-to-infer words. Concerns: - the problem doesn't seem to be well-motivated. Also, the length of the obtained summarized sentences is ~70% that of the original sentences, which makes the summaries seem not very useful. - the proposed complex architecture seems not to justify the goal, especially considering that simply dropping stop words works already quite well. - In order for the presented architecture to beat the simple stop-words baseline, an additional loss (L4, linkage loss) with "retention weights" which need to be tuned manually (as hyper-parameters) is required. - there's not enough discussion about the related work by Malireddy et al, which is extremely similar to this paper. A good part of that work overlaps with this paper. - comparison with literature about abstractive summarization is completely missing. Minor comments: - Figure 1: Indicator Encoder should be Indicator Decoder. - Are negations part of your stop words? From your discussion, you should make sure that "not", "don't", "doesn't", ... do not belong to your stop word set. - How did you optimize the hyper-parameters r (desired compression), the regularization weights, and the retention weights? - Were pre-trained word embeddings used as initialization? - What's the average compression of golden sentences? <doc-sep>The authors consider the problem of telegraphic sentence compression: they train a system in an unsupervised fashion to predict which words can be dropped from a sentence without drastic loss of information. To that end, they propose a new auto-encoding type architecture which uses the extracted words as latent code, and, most importantly, a linkage loss which relates a word's perplexity given the summary of its left context to its likelihood of being retained. The model itself is sober and well motivated, and the linkage loss is, to the best of my knowledge, original. The authors show that their method outperforms some simple baselines in terms of ROUGE and compression on a small human-annotated test set. The paper is generally well written, although the initial presentation of the model could be made a little clearer (it is not obvious from the text that the Decoder takes the text as input -- Figure 2 helps, but comes a couple pages later). 
However, the authors fail to appropriately justify the choice of their hyper-parameters (e.g. "The optimum value of r for our experiments was found to be 0.65", "the best value of b was found to be 5", "The weights λ1, λ2, λ3, and λ4 have been set to 3, 2, 50 and 3 respectively for our experiments" -> how is "best" measured on the validation set, which does not have gold references?). The choice of the specific sparsity constraint (one could as well imagine using a simple L1 regularization for the binarization loss) and of \Chi_i (why not simply use the likelihood?) could also be better motivated. The model also relies on hand-crafted rules (Section 3.3) whose effect needs to be made more evident. What weights are used in practice? How were they chosen ("We observed that..." needs to be further developed)? The authors claim that "the quantitative scores are not affected significantly", but that is presumably only the ROUGE score; what about annotators' preferences? Most importantly, however, the task of telegraphic sentence compression, whose usefulness is not a priori obvious, is barely motivated. The authors refer to "Malireddy et al. (2018)" for a justification, but it is important to note that the latter provides a telegraphic summary of a whole document, with a compression factor of 0.37. The claim is that the concatenation of the telegraphic sentence compressions can act as a summary of a whole document, but given the fact that compression for individual sentences is closer to 0.69, this is yet to be demonstrated. And even if that were true, it is unclear whether the cognitive load of reading a sequence of telegraphic sentences would be that much lower than that of reading the original text. This paper presents some interesting ideas and is well written, but the content is not quite sufficient for publication. In addition to the clarifications and justifications requested above, the authors are encouraged to apply their methods to full-length documents, which would make for a more substantial contribution. <doc-sep>The paper explores an unsupervised deep learning model for extractive telegraphic summaries, which extracts text fragments (e.g., fragments of a sentence) as summaries. The paper is in general well structured and is easy to follow. However, I think the submission does not have enough content to be accepted to the conference. First, in terms of methodology (as described in Section 3), the paper has little novelty. There has been intensive study of various deep learning models for summarization. The models described in the paper contain little novelty compared with previous work using autoencoders and LSTMs for both extractive and abstractive summarization. Second, the paper claims contributions in applying deep learning models to telegraphic summarization, but the advantage is not well demonstrated. For example, the quality of the resulting summaries is not compared with state-of-the-art sentence compression models with intrinsic evaluation or (probably better) with extrinsic evaluation. (By the way, it is interesting that the paper argues for the advantage of using telegraphic summaries for fictional stories but actually gives an example that also looks very typical of news articles (the "earthquake Tokyo 12 dead" example).) 
Third, there has been much work on speech summarization that summarizes in the "telegraphic" style (this is natural, considering that speech transcripts are often non-grammatical, and "telegraphic"-style summaries focusing on choosing informative fragments actually result in usable summaries). The author(s) may consider discussing such work and comparing the proposed methods to it.
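For readers following the reviews above, here is a minimal sketch of the kind of masking autoencoder they describe: an encoder produces a smooth per-word indicator, the masked sentence is decoded back to the original, and the loss trades reconstruction off against compression. This is a deliberately simplified assumption-laden illustration; the paper's actual losses (binarization, linkage, retention weights) are more involved, and all names and shapes here are hypothetical.

```python
import torch
import torch.nn as nn

class TelegraphicAE(nn.Module):
    def __init__(self, vocab: int, dim: int = 128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.GRU(dim, dim, batch_first=True)
        self.keep_head = nn.Linear(dim, 1)        # per-word keep score -> smooth indicator
        self.dec = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, tokens):                     # tokens: (batch, seq)
        e = self.emb(tokens)
        h, _ = self.enc(e)
        keep = torch.sigmoid(self.keep_head(h))    # (batch, seq, 1); ~1 means retain the word
        masked = e * keep                          # softly drop words
        d, _ = self.dec(masked)
        logits = self.out(d)                       # try to reconstruct the original sentence
        return logits, keep.squeeze(-1)

def loss_fn(logits, tokens, keep, sparsity_weight=1.0):
    rec = nn.functional.cross_entropy(logits.transpose(1, 2), tokens)  # recoverability
    sparsity = keep.mean()                          # penalize keeping too many words
    return rec + sparsity_weight * sparsity
```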
This paper presents methods for telegraphic summarization, a task that generates extremely short summaries. There are concerns about the utility of the task in general, and also about the novelty of the modeling framework. There is overall consensus among the reviewers regarding the paper's assessment, and the feedback is lukewarm.
This work tackles the task of forecasting dynamics in different domains simultaneously. Using an encoder which is trained to determine the task, the inferred latent vector is then used to adapt a forecasting network to the task at hand. Experiments on three datasets linked to fluid dynamics are then conducted to assess the proposed model. Pros: - This is an interesting problem which is quite timely given the development of the field of forecasting physical dynamics using neural networks. - The proposed solution seems sound and principled. Moreover, it is well motivated and the writing is quite clear. - The different additions made to the forecaster network are also quite interesting; I especially liked the AdaPad solution for dealing with boundary conditions. Conducting an ablation study also considerably strengthens the paper. Cons: - All experiments are conducted on somewhat similar datasets, which are based on fluid dynamics PDEs. It would be nice to see how the model deals with other families of dynamics, especially given that the contributions of this work seem geared towards practical considerations. - The experimental setting should be more precise and additional details should be given: how the different datasets are constructed, what supervision there is exactly regarding the different tasks, how many domains there are in each dataset and what the differences are, how balanced the different domains are, etc. This is good work on a timely subject. The contribution is not groundbreaking but should be significant enough to warrant acceptance. <doc-sep>This paper addresses the problem of learning a deep learning model for dynamics forecasting which generalizes to changes in dynamics. These changes can be induced by different parameters, boundary conditions or external forces. The proposed model takes a meta-learning approach and proposes to partition data into different heterogeneous domains. It consists of two components: an encoder which infers time-invariant features given observed domain data, and a forecaster which predicts the dynamics given these features. The paper evaluates the proposed approach on several datasets and provides some theoretical insights. + * This paper addresses a new and interesting generalization problem for dynamics forecasting. * It proposes a model to address different changes in the dynamics. * Evaluation is done on relevant datasets with several baselines and some ablation studies. - * The applicability of the proposed approach is restricted to problems where relevant weak supervision from task parameters is available. This seems like an important limitation in real-world applications. How valid is this scenario? The question of choosing relevant parameters for weak supervision is important for applying this model to other datasets, yet the definition of these parameters is unclear; how robust is the model when the chosen parameters are not useful? The performance of Wrong_enc (Table 2) tends to say that this model will then fail. * It is unclear why the model can adapt to changing boundary conditions with AdaPad, as it generates them from features $\hat{z}_c$ extracted from data inside the domain and weakly supervised by quantities unrelated to the boundary condition (e.g. mean vorticity or season). * The theoretical analysis, inspired by existing work in multi-task learning / domain adaptation, has some limitations and does not add much value to the paper. 
I have some concerns with the domain adaptation upper bound on the target error in Theorem 3.4 and Proposition 3.5. This upper bound is not minimized, thus the target risk can be high, i.e., the model is not guaranteed to adapt well. Moreover, the validity of the theoretical analysis is unclear as several assumptions may not be verified, e.g., bounded loss in Theorem 3.1 and Proposition 3.3, and Lipschitz continuity in Proposition 3.5. Theorem 3.4 requires that the assumptions in Theorem 2 of Redko et al. 2017 are verified, yet these assumptions are not mentioned in the paper. * Some ablation studies are missing: 1) the contribution of each term in equation (2) and 2) the dimensionality of $\hat{z}_c$, which is fixed arbitrarily. Other questions: * It would be good to better explain how the experiments include changing boundary conditions between domains. The testing scenarios only mention different initial conditions or external forces. * Why do the baselines ResNet-c and Unet-c not adapt well despite having access to relevant weak supervision (p8)? This is the same information used by the proposed model to adapt. * How redundant is the time invariance term (3rd term in equation (2)) with the invariances enforced in the architecture of the encoder? This paper tackles a new generalization problem for dynamics forecasting and proposes a model supported by experimental results. However, this model can only be applied to problems with relevant weak supervision, which may not always be available in practice. Moreover, the definition of relevant parameters is unclear and the robustness of the model to the choice of these parameters is not measured, which may restrict its application to other datasets. It is also unclear whether the model can adapt to changing boundary conditions with AdaPad, some ablation studies are missing, and I have concerns about the theoretical analysis, which brings limited value to the paper. For this reason, I am giving this paper a weak reject. --- Post-Rebuttal comments --- I thank the authors for their response. After studying it, the theoretical results still have some major issues and feel disconnected from the model. In particular, key assumptions are not enforced in the model (e.g., Lipschitz continuity) and the generalization error of the model in Th 3.3 is uncontrolled, as the upper bound is not minimized by the model (the Wasserstein distance between domains is fixed and is high in all generality). Its use for the model is thus not very convincing. On practical aspects, the capability of handling boundary conditions should be better justified and evaluated. For this reason, I keep my score unchanged and recommend rejecting this paper. <doc-sep>The paper suggests a remediation for a common problem in dynamics forecasting, which is the lack of generalization to other domains/tasks. The authors suggest tackling this via a two-component architecture, one for learning the task and one for forecasting. In empirical experiments the authors show the practical feasibility of their approach. As a caveat: I'm not an expert in the area, so my review consequently remains at a superficial level, for which I apologize. I overall liked the paper quite a bit: the question discussed is relevant, the empirical evaluation is very good, the theoretical results seem as relevant as they would get, and the related work discussed is crisply presented and relevant. One question I would have is that the results in Table 1 are overwhelmingly good, with only UNET-c coming close. 
Do we know for these tasks what the "theoretical" upper bound (e.g. given by the right PDE system) would be? Is it computationally even possible to compute this upper bound? I'm wondering how much of a gap there still is to close. In a similar vein, what is the intuition behind DyAD + ResNet mostly being better than DyAD + UNET? Are there some complementary strengths between DyAD and ResNet that this combination can exploit better than DyAD + UNET? This is a good paper that I'd like to see accepted for its combination of theoretical results, empirical results and methodological novelty. <doc-sep>This paper is interested in learning general forecasting models for physical dynamical processes. The paper proposes a decomposition of such a model into an encoder that captures the innate properties of the system, and a forecaster that autoregressively makes predictions conditioned on the encoded properties. This is framed as a meta-learning approach, and is shown to substantially outperform single-task approaches and off-the-shelf meta-learning approaches across multiple datasets. The paper provides some theoretical analysis, and qualitative analysis of what is learned. Overall, the paper shows that learning shared models across domains is an important and fruitful way forward for modeling physical processes with machine learning. Strengths: - The problem statement is well-motivated. Learning generalizable deep learning models across diverse settings is an important open problem. - Experiments use interesting and real-world problems. - Results are strong and appear reliable. - AdaPad is an interesting idea specialized to the case of physical complex systems, since it is designed to address boundary condition issues. - Visualizations show the model is behaving essentially as expected. - Although there are many design choices that go in to the model, each such design choice is well-motivated. - Aside from some aspects of the theory section, the exposition is generally quite clear and well-organized. - Assumptions are made clear. - The fact that the encoder can be trained first and independently of the forecaster should be very useful for further rapid developments. - Great to see ESE metric used as a complement to raw error. - Table in Appendix showing alternatives to AdaIn is very useful in increasing confidence in AdaIn for this application. Weaknesses: - The biggest concern is the theory section. The multi-task learning and domain adaptation results are general results that are not adequately connected back to the specific model and problem the paper is considering. Yes, it is widely accepted that multi-task learning and domain adaptation can work well, especially when tasks are related in some measurable way, and it can be a useful exercise to restate existing theory in the language of your framework, but what (if any) novel claims is the theory implying? Are there any predictions the theory makes about the particular approach which can be validated in experiments? - The theoretical bound on error that decomposes the error of the encoder and forecaster is similarly lacking in its interpretation. Yes, it can be a useful exercise to show that the error can be decomposed along the lines of the model, but does this bound somehow suggest that the decomposition results in lower error than a monolithic model? Or is it showing that you can work independently on improving either part of the model and improve the overall error? Where is there potential for practical value in this theorem? 
- For example, one place there could be potential to validate the theory is to check in experiments that task pairs with lower Wasserstein distance actually support better domain adaptation. However, in the Introduction of the paper it acknowledges that “Even the slightest change in these features may lead to vastly different phenomena”, but doesn’t that suggest that Wasserstein distance may not be a useful metric here for measuring task similarity? Couldn't turbulence limit the usefulness of such a metric? - Proposition 3.3 says the bound is “strictly looser” than the bound in Theorem 3.1. For clarity, it would be very helpful to combine the bounds into an inequality showing this strictly-looser property. It is not immediately apparent from the statement of the theorems since the inequalities contain different terms. - As is, the theory doesn’t really hurt the paper, but, for the amount of space dedicated to it, it doesn’t add much. The paper could be substantially improved by either (1) adding interpretation/predictions/validation of the theory that connect it back to the approach in the paper, or (2) removing some of the less useful parts of the theory from the main paper to free up space for more of the interesting analysis of what the model actually learns. - Also, it is interesting but a bit counter-intuitive that the theory section relies on results in multi-task learning and domain adaptation, instead of theoretical results from the meta-learning literature. As is, since the paper relies on multi-task learning so much, it is missing references to related work in multi-task learning (i.e., related work outside of modeling physical dynamical systems). - Similarly, it would be helpful to mention why there are no comparisons to multi-task learning or domain adaptation methods in the experiments. Why do they not apply here? - The three terms in the loss function of the encoder are well-motivated, but it is not clear how important each term is. Ablations on these terms would be very informative for the reader to understand what’s generally required to train an encoder. - In Section 5 it says “VarSepNet employs separation of variables through different loss terms”. What are these loss terms and how are they different from the ones in the paper? - In the ablations with no encoder, how do AdaIn and AdaPad work? Don’t they require some z? Where does this come from if not from the encoder? - U-Net does seem it could be at a qualitative disadvantage compared to DyAd in terms on number of parameters, especially since U-Net c is one of the more competitive baselines. It would be useful to see results for a larger U-Net c, or at least some evidence that the U-Net is not underfitting the training data. Additional question of interest: Overall, this is a very important a potentially deep line of research. The most exciting promise of such work is the potential of revealing shared regularities across vastly disparate dynamic systems, that is, across complex physical processes. And it seems the approach in the paper could be particularly well-suited to such research. For example, the authors could train a single encoder+forecaster model across all the datasets in the paper, and analyze relationships in the learned encodings across datasets. 
Training models across highly diverse domains has been tried in multi-task learning (e.g., "Pretrained Transformers as Universal Computation Engines" arxiv 2021, "The Traveling Observer Model" ICLR 2021, "Modular Universal Reparameterization" NeurIPS 2019, "One Model to Learn Them All" arxiv 2017). Is such a generalization part of the longer-term vision for this line of work? Minor comments: - In Section 2.4, some references would be useful in the sentence ending with "…the combined force equation." - There are several inconsistencies in the use of parentheses in citations throughout the paper. Correcting these would improve readability. - In the last sentence of the first paragraph of Section 4, the word "task" could be changed to something like "problem", since "task" has another meaning in the paper. - Should the 7.26 for U-Net-c on Ocean Currents future be bolded? - In the last paragraph of Section 5.1: "We tried to vary…" -> "We tried varying…" or "We varied…". - Appendix A.2.1: footnote for PhiFlow is on the wrong page. - Appendix A.2.1: The last paragraph seems like it should be the first paragraph of A.2.2. - In the proof of Proposition B.5, there is an extra or missing set of norm bars in the first inequality. Overall, this is very interesting and useful work. The problem is well-motivated, and the approach and experiments are carefully designed and generally convincing. If the concerns about the theory are addressed, I would be happy to increase my score. Adding the additional info and experiments requested could increase it further, and make this a particularly strong paper.
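For readers unfamiliar with the AdaIN-style conditioning discussed in the reviews above, here is a minimal sketch of how a domain code z can modulate forecaster features by predicting per-channel scale and shift. The layer choices, shapes, and names are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AdaIN2d(nn.Module):
    """Scale/shift instance-normalized feature maps with parameters predicted from z."""
    def __init__(self, z_dim: int, channels: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_scale_shift = nn.Linear(z_dim, 2 * channels)

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) forecaster features; z: (batch, z_dim) domain code
        gamma, beta = self.to_scale_shift(z).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * self.norm(x) + beta

# Hypothetical usage inside a forecaster block:
# features = AdaIN2d(z_dim=16, channels=64)(features, z_hat)
```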
The paper addresses the problem of domain generalization for learning spatio-temporal dynamics. It proposes a solution where an encoder captures some characteristics of a given environment, and a forecaster autoregressively predicts future dynamics conditioned on the characteristics learned by the encoder. In other words, the forecaster learns the general form of the dynamics parameterized by an environment representation extracted by the encoder. The conditioning is implemented via an adaptive instance normalization mechanism. A form of padding is also introduced in order to take into account boundary conditions. The two components, encoder and forecaster, are trained sequentially. This approach is cast in a meta-learning framework. Theoretical results inspired by multi-task learning and domain adaptation are also demonstrated. The model is evaluated and compared to different baselines on three problems, and for two different settings: varying initial conditions with a given dynamics, and dynamics with varying parameters. This is a borderline paper. It targets a timely and important problem of domain generalization for dynamic environments. The proposed solution is original and compares well experimentally to several baselines. It allows for better generalization performance for the two test settings considered. In the current version, however, the paper suffers from several weaknesses. First, there is the imprecision of the arguments and the description of the experiments. Some of the arguments and claims are vague and sometimes overreaching, not backed up by evidence. For example, a central claim is that the encoder learns time-invariant quantities characterizing the environment, whereas the learned representations in fact change with a time shift in the input for any environment. The same goes for the argument developed for the padding construction. It is claimed to model boundary conditions, but this is not supported by any theoretical or empirical evidence. As noted by the reviewers, the theoretical analysis is disconnected from the algorithmic and experimental developments and does not bring much additional value to the paper. More problematically, some of the claims in this section are overstated and induce incorrect conclusions. From Theorem 3.1 and Proposition 3.3, the authors suggest that multitask learning leads to better generalization than learning independently, while this is not formally guaranteed by the results (this is acknowledged by the authors in a later comment). Besides, the conditions of validity are not discussed while they seem to only cover situations for which the train and the test distributions are the same. The same holds for the second theoretical result (Theorem 3.4). It is claimed that this result supports the authors’ idea of training the encoder and forecaster sequentially, while it does not. Besides, the bounds in this result cannot be controlled, as noted by the reviewers, and are not useful in practice. Overall, the paper addresses an important topic and proposes new solutions. The results are promising and it is indeed an interesting contribution. However, inaccuracies and incorrect or exaggerated claims make it difficult to accept the current version of the article. The article would make a strong and innovative contribution if it were written as a purely experimental article with a detailed description of the experiments and comparisons.
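To make the adaptive instance normalization conditioning discussed above concrete, here is a minimal PyTorch sketch of how a forecaster block could be modulated by an encoder-produced environment embedding. The module name, tensor shapes, and the (1 + scale) parameterization are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance normalization: re-scales normalized features
    with a shift/scale predicted from an environment embedding z."""
    def __init__(self, num_channels: int, z_dim: int):
        super().__init__()
        # affine=False: the scale/shift come from z, not from learned constants
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.to_scale_shift = nn.Linear(z_dim, 2 * num_channels)

    def forward(self, h: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # h: (batch, channels, H, W) hidden features of the forecaster
        # z: (batch, z_dim) per-environment embedding from the encoder
        scale, shift = self.to_scale_shift(z).chunk(2, dim=-1)
        scale = scale[:, :, None, None]
        shift = shift[:, :, None, None]
        return (1 + scale) * self.norm(h) + shift

# Usage sketch: inside each forecaster block, h = adain(conv(h), z),
# so the same forecaster weights produce environment-specific dynamics.
```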
The paper studies the Mixture of Experts (MoE) architecture, which has become popular in NLP recently as a way to increase the capacity of a network without increasing depth. The authors aim to develop a theoretical understanding of the MoE model/conditional computation. The authors begin with a formal model for conditionally activated sparse models which can capture common existing MoE models. The authors use LSH (locality-sensitive hashing) for the gating in MoE and use this to derive a few theoretical results regarding the ability to approximate real-valued Lipschitz functions in R^d. The authors perform some small-scale experiments to back up and verify their theoretical findings. ######## POST REBUTTAL ###### Thanks to the authors for running the experiments and for sharing the insights. Like I said earlier, it is important to study the theoretical underpinnings of MoEs. This paper starts with it, although as a researcher actively working on MoEs, I do not think that the paper exactly answers the key questions. The results proven are expected and not surprising, but on the other hand, as pointed out by the authors, non-trivial to prove. So I would say that it is a decent paper at the moment and would suggest that the authors keep going in this direction to develop a more thorough understanding so that they can uncover some more fundamental results. Strengths: 1. Relevant Problem - MoEs are becoming very popular in NLP. Thus it is important to study their underlying theoretical and working mechanisms. The paper tackles this relevant problem. 2. LSH (locality-sensitive hashing) - The authors propose to use LSH for gating. This can actually be quite promising in my opinion because it takes the local vicinity into consideration. 3. Well written - It is a very well-written paper and is easy to follow - I really like the limitations section. It is good to see for a change that there is somebody who knows and writes the limitations of their work. Weaknesses: 1. Weak Experimental Evaluation - I think that LSH is in fact good. The authors need to perform more experiments to show its effectiveness. - I am not suggesting going to huge model sizes, but at least medium-scale models and datasets should be evaluated. Very well discussed. <doc-sep>From my understanding, the main contribution of the paper is as follows: 1. The authors capture the sparsity structure of these popular transformers. They model the transformers as DSM models. 2. They show that the DSM model can represent the LSH model. 3. They provide theory on the LSH model. These theories can be used to interpret the success of Switch and Scaling Transformers. 4. Motivated by the theory, they propose a new LSH-based model and run toy experiments to show its efficacy. See my comments below. I have the following concerns: 1. In the current manuscript, the connection between contributions 1==>2==>3==>4 is still a bit vague (see my comments in the **Summary** part). When reading the current version, it is easy to get confused about the main contribution of the paper: is it "explaining why general sparse models like the Scaling Transformer work well"? Or is it "designing new LSH methods to save inference costs"? For me, the contribution of the "explaining..." part outweighs the "designing..." part. This is because the authors didn't provide any real-data experiments on LSH. If no real-data experiment is provided, LSH is a pure theoretical tool to prove the theory on DSM and the "designing..." part is minor.
However, the current manuscript over-emphasizes the "designing..." part, causing great confusion for me. 2. The performance of DSM on CIFAR-10 does not quite match the theory. To support the theory, I would suggest the authors run experiments on transformer-based NLP tasks instead of CV tasks on CIFAR. 3. All the theories are built on the Lipschitz assumption on the target function. It would be better if the authors verify the Lipschitz condition. Is it a necessary condition, or is it due to the limitation of the theory? If it is the latter case, what is the main technical challenge to relax this assumption? 4. In line 255, why is a random degree-d polynomial a Lipschitz function? <doc-sep>This paper provides a theoretical treatment of modern sparsely activated networks with the Data-dependent Sparse Model (DSM) model. The authors show that the DSM model can simulate modern sparsely activated networks and the locality sensitive hashing (LSH) model. It is proven in the paper that the LSH model can be as expressive as a dense network for approximating real-valued Lipschitz functions while requiring far fewer FLOPs. Furthermore, experiments are conducted to validate the theoretical findings on Lipschitz target functions as well as the CIFAR-10 dataset. Strength: 1. The paper is the first work to treat sparsely activated networks theoretically, thus novel to me. 2. The paper is well-organized. Weaknesses: 1. The theoretical analysis is based on the assumption of an L-Lipschitz target function and I am not sure how significant the work is. Furthermore, the neural network size used in the experiment is also very small. 2. I am not very sure about the relation between the theoretical findings and experiments. Theorem 4.1 and Theorem 4.3 conclude that LSH-based sparsely activated networks can be as expressive as their dense counterparts when their size and number of samples match. As for size, the LSH model is measured using hash table size and the dense model is measured using width. As a result, in experiments, I am expecting to see that the LSH model is as good as the dense one when # buckets == width of the dense network. However, in the figures, the width of dense networks is compared to the number of activated units. 3. For the comparison of DSM and dense networks, the authors mention that 'Sparsity helps in both DSM and LSH models, ... using the same number of activated units.' However, it seems that the comparison may be unfair. Specifically, with 64 activated units, DSM chooses the best 64 units out of a total of 1024 units while the dense one has only 64 units in total. It seems unsurprising to me that DSM is better than its dense counterpart. Yes, the authors have addressed the limitations and potential negative societal impact of the work. <doc-sep>This paper proposes the DSM model to sparsely approximate Lipschitz functions. The authors theoretically demonstrate their method in a wide range of scenarios, from one-layer shallow neural networks to Switch and Scaling Transformers. The original idea (but I am not sure as I am not familiar with this domain) of interpreting DSM as KNN is very interesting. However, the experimental setting is a bit weak and seems to have been finished in a rush. #### I have increased the rating from 4 to 5 after rebuttal. # Clarity: ## Strengths: This paper offers a detailed introduction to the LSH model and other background knowledge. ## Weaknesses: If the authors can further unify the usage of notation, the overall readability will be better.
For example, the authors use s for the sparsity parameter, but in Sec. 3.0 it switches to k, and later, k is used as the intrinsic dimension of input distributions. The usage of the notation A^x is also a bit confusing. I also recommend the authors add some figures to illustrate their idea. For example, the Euclidean LSH and Sec. 3.0 can be well explained by figures. # Originality: I am not familiar with this domain, so I may not be able to judge this point. But still, I find the argument in Sec. 3.0 interesting. It points out a potential direction in that we may interpret neural networks as KNN operators. The current content can be enhanced in some directions. The authors may try to remove the constraint of unit B rows; typical network blocks like CNNs, attention networks, and residual connections do not have such unit structures. Also, extending it to deep neural networks will be more attractive. # Quality: ## Strengths: The theoretical analysis is careful and in-depth, but some settings and assumptions need either explanation to justify their necessity or adjustment to cater to practical demands. ## Weaknesses: ### Weird experiment setting: 1. Since the main point is the efficiency of the proposed method, why not report inference time? FLOPs is a good metric but not enough. 2. Needs a more detailed ablation study, specifically a detailed study of how the sparsity parameter s influences the model accuracy, approximation MSE, and inference time. 3. Since the paper puts much attention on discussing input distributions, the authors should also use input distributions on a low-dimensional manifold in R^n. Currently it is unclear how the input is sampled. 4. From a numerical perspective, polynomials may not be good choices, as they tend to be extremely ill-conditioned when the degree is high. The authors may consider B-splines or Bézier curves for some realistic industrial scenarios. Also, random neural networks, even shallow ones, may be good candidates. ### Theoretical settings need clarification and adjustment. 1. It is a bit odd to assume the input distribution is uniform; can this be relaxed to absolutely continuous with respect to the uniform distribution (Lebesgue measure)? 2. Since the proof is based on Euclidean LSH, this should be clearly stated in the theorems. # Significance: The theoretical results are good, but need stronger empirical evidence to support them. 1. I strongly encourage the authors to add figures to illustrate their concepts and ideas. 2. Experiments on SOTA neural networks will be much appreciated. 3. Needs a more detailed ablation study to justify the theoretical results.
The paper provides a theoretical analysis of sparsely activated neural networks. They introduce LSH (locality-sensitive hashing) as a new routing function for theoretical analysis and prove a few results on representation power and inference time. One reviewer pointed out that the theoretical results are expected and do not provide much interesting insight, which I agree with. Nevertheless, this is one of the early papers that study sparsely activated networks and may serve as a starting point. I recommend acceptance.
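To ground the LSH routing mechanism discussed in the reviews above, here is a minimal sketch of Euclidean-LSH-based expert selection: the input is hashed into a bucket, and only that bucket's expert is evaluated. The quantization width r, the way codes are combined via Python's built-in hash, and the lazily created linear experts are illustrative choices, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_hashes, n_buckets = 64, 2.5, 4, 1024   # r: quantization width (assumed)

# Euclidean LSH: h_i(x) = floor((a_i . x + b_i) / r)
A = rng.normal(size=(n_hashes, d))
b = rng.uniform(0.0, r, size=n_hashes)

def bucket(x: np.ndarray) -> int:
    """Map an input to a hash-table bucket; each bucket owns one expert."""
    codes = np.floor((A @ x + b) / r).astype(int)
    # combine the per-hash codes into a single table index
    return hash(tuple(codes)) % n_buckets

experts = {}  # toy experts: one small linear map per bucket, created lazily

def forward(x: np.ndarray) -> np.ndarray:
    k = bucket(x)
    if k not in experts:          # sparse: only touched buckets get parameters
        experts[k] = rng.normal(scale=0.1, size=(d, d))
    return experts[k] @ x         # only one expert's FLOPs are spent per input

y = forward(rng.normal(size=d))
```

Nearby inputs tend to fall into the same bucket, which is what makes this kind of gating a natural fit for approximating Lipschitz functions with few activated units.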
The paper proposes a new approach to inject knowledge into pre-trained language representation models (PLMs). Instead of tuning the original PLM parameters, the paper plugs in new adapters for knowledge injection to avoid catastrophic forgetting. Pros: * Injecting knowledge into PLMs is an advanced topic. The authors focus on the catastrophic forgetting problem during knowledge injection. * Evaluation is solid. The authors evaluate their model on three downstream tasks and show that the adapters improve the performance. * The paper is well written and can be easily understood. Cons: * The approach is simple but achieves good performance over a variety of tasks. I appreciate that the authors conduct the knowledge probing experiment but its P@1 is quite low and worse than BERT. Some more explanations are expected. <doc-sep>Summary: The paper proposes a novel approach of incorporating different types of world knowledge sources contained in texts such as facts or linguistic syntax. To do this, they introduce additional transformer layers between the layers of a pre-trained language model such as RoBERTa and term this model "K-Adapters", where the K stands for K different streams of knowledge. Pros: - Incorporating different sources of information into a pre-trained model such as RoBERTa is an interesting idea. - The proposed approach is simple and interesting and it scales to many different types of information as the different adapters can be trained in parallel with the weights of the pre-trained LM being fixed. - Performance gains on different classification tasks such as entity typing, question-answering, and relation classification highlight the utility of the approach. Cons: - Sec 1: Introduction: In the introduction, there are multiple mentions of the phrase “rich knowledge” but it is unclear what the authors mean by that in the context of pre-trained language models. Some recent works such as “https://arxiv.org/abs/2002.08910, https://arxiv.org/abs/1909.01066 ” suggest that pretrained language models do indeed contain a lot of world factual knowledge. Hence, the statement in the paper that pretrained LMs lack world knowledge contradicts these works. - There is also a frequent mention of catastrophic forgetting of knowledge during the finetuning step. I tend to disagree that this is necessarily bad for a pretrained model, because it has been shown that fine-tuned pre-trained LMs perform well in open-domain question answering, where some degree of world knowledge is needed. - Furthermore, producing entangled representations may not necessarily be a negative thing, if multi-task learning approaches are able to show an increase in performance due to knowledge injection. - In Table 1, a dependency parser doesn't really fall under the same class of knowledge sources as WordNet or Wikidata. A dependency parser may be able to provide some sort of syntactic structure of the underlying text. Moreover, such syntactic information is not always generalizable to different domains and thus has the limitation of not being accurate enough. - The Introduction section is not well-motivated and does not present convincing arguments as to why external knowledge infusion is really required in some tasks. It just states that knowledge infusion using the K-Adapter model outperforms RoBERTa models on different tasks. - In Section 3.1, not enough space has been allocated to explain the Adapter model in detail.
If the authors had used mathematical notation or equations for explanation, then it would have been much clearer. - In Section 4, it is mentioned that they select three downstream tasks for evaluating their models. However, the paper doesn't provide justifications as to why these tasks were selected, how these tasks highlight the importance of the K-Adapter model, etc. - In the results in Tables 2 and 4, as the performance improvements are somewhat marginal, it is important to know if these improvements are statistically significant or not. The paper doesn't report if the results are from a single run or the mean of multiple runs. - I have concerns about data leakage during the pre-training step. As the factual adapter makes use of a supervised relation classification dataset (T-REx), I feel that there might be some overlap between the entity typing and relation classification datasets used for evaluating the performance of the model. The authors should present an analysis as to what degree of overlap, if any, is present between the pre-training and evaluation tasks. - The paper lacks a detailed analysis section that could explain which test examples are correctly classified when using the K-Adapter model in tasks like relation classification and entity typing, compared to baseline approaches such as RoBERTa and RoBERTa + multitask. Currently, the paper places too much emphasis on raw numbers or performance improvements in various tasks. - The results of the probing experiments suggest that the BERT-large model vastly outperforms the K-Adapter model on the Google-RE and T-REx probing datasets. This raises an important question over the validity of the results in different downstream tasks. For a fair comparison with baselines, the authors should compare the performance of the K-Adapter model with BERT-large + multitask across different tasks. - In almost all of the experiments, the authors use RoBERTa as the underlying pre-trained language model. For demonstrating generalization to different pre-trained LMs, the paper should also evaluate the K-Adapter model when BERT-large or T5-large is used as the underlying model in place of RoBERTa. Grammar errors: - page 1, 3rd line from bottom: remains -> retains - section 3.3, information that concerned -> information that is concerned - section 3.4, father index is commonly referred to as head index of a word. 
<doc-sep>#### Summary This submission proposes a general method (K-Adapter) for injecting knowledge (either factual or linguistic) into pre-trained language models (PLMs). The key architectural property of the approach is that K-Adapters are isolated from one another, allowing the use of multiple adapters without interference. These K-Adapter modules take hidden layer _inputs_ from the main pre-trained model (e.g., BERT), and are pre-trained on their knowledge outputs before a fine-tuning phase where they feed into a joint downstream task-specific model along with the pre-trained model outputs. #### Strong and weak points The set of baselines seems strong, and the experimental results consistently show that using _either_ factual or linguistic knowledge K-Adapters improves results, while using _both_ yields the best results. The LAMA probing experiment is a nice sanity or validation test that the knowledge injection is achieving the desired effect. Being able to "hard-code" knowledge into the model in this way could be useful in a variety of applications.
It is overselling it a bit to say the model captures "richer" commonsense knowledge, however. The basic architectural idea is well-motivated and simple, in a good way. The supplemental materials mostly provide additional reproducibility details on architectures, hardware used, learning rates, etc. #### Recommendation (accept or reject) with one or two key reasons for this choice. I recommend acceptance. The proposed approach yields strong quantitative performance against solid and relevant baselines, and the LAMA experiments give some support to the hypothesis that it is doing so by capturing knowledge as intended. The general design pattern could spur further innovations in modular network designs or knowledge capture strategies as well. #### Questions to clarify / additional evidence required "BERT-MK integrates fact triples from the knowledge graph." - how? I can follow the citation but this sentence provides little information. "inject different types of knowledge independently" - is it correct to say then, that, by design, there can be no _beneficial_ interactions or synergies among different types of knowledge? Alternatively, in the fine-tuning phase, could different adapters interact or affect each other via the downstream coupling in the task-specific layers? Is this observed in practice? How should the reader think about the relative magnitude of the presented improvements? At one point I see "K-ADAPTER (F+L) makes significant improvement of ..." but I believe "significance" is only meant colloquially here. Section 3.1: how was this structure chosen, what was the motivation or intuition here? What limits, if any, do you foresee with the use of separate parallel "knowledge modules" like this? Could we use 10, 100, 1000 K-Adapters? #### Additional feedback to improve It would be helpful to cite Ling and Weld 2012 (or similar) for the definition of "loose" micro/macro F1, or briefly explain it inline in the evaluation setup. Likewise for the "catastrophic forgetting" phenomenon affecting other knowledge injection attempts - is there some previous work explicitly demonstrating this problem when using multiple knowledge sources? If not, it would have been interesting to have an experiment of this sort in this work. <doc-sep>########################################################################## Reasons for score: The authors propose a plug-in-based adapter approach that allows for task-specific parameter settings without updating the original pre-trained model, which prevents the potential for catastrophic forgetting while also removing the need for separate models for separate tasks. The work seems to build off Houlsby 19 as briefly cited, but its plug-in nature seems easier to adopt for multiple tasks. There is, however, no direct comparison with it or Cooper et al. 19 ( https://arxiv.org/pdf/1902.02671.pdf ), which makes it difficult to assess. The way in which the adaptors were pretrained was a little unclear to me. The experiments are extensive and well done. ########################################################################## Pros: 1) The number of experiments run (3 tasks on 6 datasets total) is extensive and shows that the K-adaptor approach benefits from the factual adaptor in particular, giving better performance over RoBERTa (with or without multi-task learning).
2) The proposed adaptor seems concise and easily expanded to incorporate other knowledge sources (though there are a few details which could help clarify things; see #2 in the next section) 3) The probing task using LAMA to show how much factual knowledge has been memorized by the K-Adapter (RoBERTa + facAdapter) was well done and its discussion was very interesting. ########################################################################## Cons: 1) The proposed adapter solution is somewhat similar in nature to that of Houlsby 19 (and to a lesser extent Cooper 19 ( https://arxiv.org/pdf/1902.02671.pdf )), and it feels like an omission not to discuss Houlsby 19 and make experimental comparisons against it, discussing pros/cons more thoroughly, especially since the extensive experiments in this work show that the linguistic adapter usually only adds a tenth of a percentage point when using RoBERTa with a single factual adapter. In this single-adapter case, it's not immediately evident how these models would differ and what the advantage is. Both Houlsby and Cooper are evaluated on the GLUE benchmark and provide code. 2) I was a little confused as to how the adapters were specifically pre-trained; it might be a question of Figure 1b, but also Sections 3.3 and 3.4 could have been expanded to clarify it a bit. It is my understanding that when pre-training the facAdapter on the relation classification task, for instance in Section 3.3, for a given example in T-REx-rc, two entities and context are passed into RoBERTa, whose weights remain fixed, while those of the KIA units of the facAdapter are updated, and the final hidden representations of RoBERTa and the facAdapter are concatenated to form an input representation of the entities given their context, and this is used for the actual task. Is my understanding correct? If so, I'm confused as to how the subsequent pooling and concatenation actions are done. Clarifying this process for 3.3 and 3.4 would be beneficial for clarity purposes, and it's not discussed in the supplemental materials either. 3) Your RoBERTa-large baseline already beats most of what you are comparing against, which is fine as your adapters give gains (again, particularly the facAdapter), but it also would have been interesting to see what sort of gains would have been achieved using a different, less powerful model as the base (RoBERTa small or just plain BERT), and additionally some sort of ablation testing or explanation of the choices made for the adapter networks themselves (i.e., N=2 transformer layers, hidden layer size, etc.), though it's possible this could be left for future work. For clarity, in Figure 2 where you show N x Transformer Layer (and N=2), I'm assuming the first transformer layer feeds directly into the second transformer layer, which then feeds into the Up Projection layer, correct? If so, it might be better just to show two transformer layers like that instead and, additionally, to name the projection layers Up Projection Layer and Down Projection Layer, respectively. ########################################################################## Questions during rebuttal period: Please address and clarify the cons above ######################################################################### Small typos: In Abstract: we propose K-ADAPTER, which remains the original parameters .... "remains" -> "keeps" or "retains" In Introduction: they fail to continual learning ..... "fail at continual learning" It remains the original representation ....
"remains" -> "leaves" (pg2) while remaining the original parameters of RoBERTa frozen... "remaining" -> "keeping" Section 3: It remains the original representation .... "remains" -> "keeps" 3.1: Different from Houlsby et al. (2019) add adapter layers -> "In contrast to Houlsby et al. (2019) who add adapter layers" 3.3: all relations having lees than .... "lees" -> "less"
The paper augments pre-trained language models by introducing "adapters", where each adapter is another language model pre-trained for a specific knowledge source (e.g., Wikidata) and an objective (e.g., relation classification). The representation from each adapter is concatenated to the representation from the generic LM. Specifically, they introduce two adapters, "factual" (mostly derived from Wikipedia) and "linguistic" (from a dependency parser), and the experiments show modest improvements over various benchmarks. This is a borderline paper, as both methods and experiments are reasonable yet not very novel or strong. The clarity of the paper can be improved (as pointed out by R1 and R4): without any mathematical notation, model details have to be inferred from figures. The novelty is limited and experimental rigor can be improved (e.g., for many settings, gains are fairly small and no variance is reported).
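For illustration, here is a minimal sketch of the general pattern described above: a frozen backbone, parallel adapters that consume its hidden states, and concatenation of all outputs for a task head. The small transformer-stack adapter, the dimensions, and the `backbone(input_ids)` interface are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """A small transformer stack fed by hidden states of a frozen backbone;
    only these parameters are trained for a given knowledge task."""
    def __init__(self, hidden: int = 768, n_layers: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=12, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, backbone_hidden: torch.Tensor) -> torch.Tensor:
        return self.blocks(backbone_hidden)

def combine(backbone, adapters, input_ids):
    """Concatenate the frozen backbone output with each adapter's output."""
    with torch.no_grad():                  # backbone stays frozen
        hidden = backbone(input_ids)       # (batch, seq, hidden); assumed interface
    feats = [hidden] + [a(hidden) for a in adapters]
    return torch.cat(feats, dim=-1)        # fed to a task-specific head
```

Because the adapters never write back into the backbone, each one can be pre-trained independently on its own knowledge objective, which is the isolation property the reviews highlight.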
This paper presents a formal analysis for the impact of graph reordering (i.e., ordering the in-memory storage sequence of graph node embeddings) on the cache efficiency of near neighbour searches using near neighbour graphs. The connection between the graph ordering (i.e., the memory layout of the graph nodes) and the cache complexity is formulated, based on which the cache complexity of Gorder (Wei et al., 2016) is analysed and two other graph orderings (Corder and Porder) are proposed. Experimental results on large real datasets confirm the effectiveness of the analysis and the importance of graph reordering for near neighbour search efficiency. Strengths: 1. The paper has presented a solid analysis for the impact of graph reordering (i.e., ordering the in-memory storage sequence of graph node embeddings) on the cache efficiency of near neighbour searches using near neighbour graphs. 2. Experimental results on large real datasets are presented, with an interesting discussion on the results. 3. Source code of the paper is available. Weaknesses: 1. The proposed method Corder is ineffective as discussed in the experimental results. Perhaps this method can be dropped to make room for adding more details on the experimental settings to the main content of the paper. 2. The other proposed method Porder is only somewhat better than the existing method Gorder (Table 5). 3. A lot of content has been included as supplementary material, which makes the paper somewhat difficult to follow. Typo: "degree-based groupingFaldu et al. (2019)"; "Studies show that Studies show that" The paper included an interesting discussion on the results and limitations. <doc-sep>The paper studies the practical performance of different methods for rearranging the node layout in memory for graph-based approximate nearest neighbor search algorithms (HNSW specifically). It also proposes a simple modification of the existing methods based on query profiling. The paper claims to have up to 30-50% improvement in query latency on 100M datasets and is accompanied by code (though it is not clear if it is going to be released). Strengths: - Simple change that is very likely to be agnostic to the type of the graph algorithm used. - Sizable gains from the algorithm on large datasets. - Source code (hopefully it will be released with the publication; this is not clear from the text). - Long discussions of the results. Weaknesses: - The paper does not have a clear description of the methods (no pseudocode or even a text description of step-by-step actions). I guess the readers are expected to follow the cited papers, but IMO there should be at least a sketch of the best solution. - The source of the 1000 queries for POrder is not discussed in the paper. Were they taken from the train set or the test set? - There is no implementation of POrder in the code, which is confusing. The code also has bugs (e.g., a nonexistent "-openmp" flag), it does not compile without fixing the dependencies (this could have been avoided if you provided a Dockerfile), and there are errors in its description. - Judging from the code, the construction is done in a single thread. If the index construction time provided in the paper is for this regime (which is not clear from the paper, but seems to be the case), it should be redone in the multi-threaded regime. I do not think there is any negative societal impact. <doc-sep>This paper proposes to use graph reordering to improve the cache locality of graph-based nearest neighbor search algorithms.
An analysis is conducted to show why graph reordering works and the experiments show that graph reordering significantly improves performance. Strength 1. Graph-based nearest neighbor search algorithms are very popular, and graph reordering improves their performance. 2. Although the conditions are restrictive, the analysis explains why graph reordering improves performance. 3. The profiling-based ordering scheme makes sense. Weakness 1. The authors should dig deeper to show why reordering improves performance. Currently, the explanations are vague (e.g., due to the software prefetcher and auxiliary functions). The authors may want to make them more specific by explaining how the software prefetcher works or by conducting experiments to show where the reduction comes from. 2. I believe that graph reordering works for graph-based algorithms in general. But it helps to show by experiments that graph reordering improves performance for algorithms other than HNSW (e.g., NSG or NGT). 3. The legends in the figures are too small to read. Yes
This paper studies how to order the in-memory storage sequence of graph node embeddings. There was a positive consensus that the studied problem is interesting and the results are sufficiently discussed. There were some concerns about missing results, which were addressed during the rebuttal.
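To illustrate the basic idea behind reordering a neighbor graph for cache locality, here is a minimal sketch that relabels nodes in BFS order so that nodes visited close together during traversal tend to sit close together in memory. This simple relabeling is only a stand-in for the Gorder/Corder/Porder schemes discussed above, which are more sophisticated.

```python
from collections import deque

def bfs_reorder(neighbors):
    """neighbors[u] is the adjacency list of node u. Returns new_id such that
    new_id[old_id] is the node's position in the reordered memory layout."""
    n = len(neighbors)
    order, seen = [], [False] * n
    for start in range(n):
        if seen[start]:
            continue
        seen[start] = True
        q = deque([start])
        while q:
            u = q.popleft()
            order.append(u)
            for v in neighbors[u]:
                if not seen[v]:
                    seen[v] = True
                    q.append(v)
    new_id = [0] * n
    for pos, node in enumerate(order):
        new_id[node] = pos
    return new_id

def apply_order(neighbors, new_id):
    """Rebuild adjacency lists (and, analogously, the embedding array)
    under the new labels, so neighbors share cache lines more often."""
    relabeled = [None] * len(neighbors)
    for old, row in enumerate(neighbors):
        relabeled[new_id[old]] = [new_id[v] for v in row]
    return relabeled
```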
This paper introduces the PAC-Bayes Information Bottleneck (PIB). Starting from the generalization bound in Eq. 4, which shows that the generalization gap is upper bounded by a function of I(w;S), the authors propose PIB, which adds a regularization term \\beta I(w;S). Since the computation of I(w;S) is intractable, the authors then make several assumptions to simplify its computation, arriving at an estimate \\tilde{I}(w;S) of I(w;S) (Eq. 15), and use SGLD to compute it in practice. Experiments show that (1) there is a two-phase transition in SGD training as indicated by \\tilde{I}(w;S); (2) \\tilde{I}(w;S) seems to correlate with the generalization gap, under different variations of experiment hyperparameters: number of hidden layers, noise ratio, and number of randomly labeled data; (3) it improves performance compared to l2 and dropout regularization. Strength: This paper addresses an interesting and important problem. The proposed PIB is novel. The experiments show that the proposed \\tilde{I}(w;S) correlates with the generalization gap, and helps improve performance. Weakness: In order to make the computation of I(w;S) tractable, the authors make several important assumptions. It would strengthen the paper a lot to discuss these assumptions and perform experiments showing whether they are valid in the settings the authors run. Furthermore, in Section 5.5, can the authors show the generalization gap (together with the train and test acc) with different regularization? Ideally we should see that with PIB as the objective, the generalization gap is much smaller than with the other methods. With this, we can then be confident that the improvement is due to a reduced generalization gap instead of better training. In summary, this paper is novel, but the experiments should be strengthened as detailed in the main review. <doc-sep>This paper proposes a new version of the Information Bottleneck objective for training neural networks. This is in part motivated by previously-derived PAC-Bayes bounds on the generalization error that are proportional to the square root of the mutual information of the weights and the training dataset: I(w;S). Thus this new information bottleneck objective attempts to minimize both the empirical risk and this mutual information. The paper derives a computationally-tractable algorithm for estimating I(w;S), then this algorithm is used to show that this quantity is inversely correlated with generalization loss on a variety of neural network architectures. Strengths: - The paper proposes an exciting general principle of deep learning. As far as I know, the contributions here are novel and will be of high interest to the community. - The authors build on previous work by showing how their IB objective addresses the shortcomings of previous work in this area. - It is very well written. This is a highly-technical paper, and the details are presented in a careful and thoughtful way. - The experiments are well done and the results support the conclusions. Specifically, this objective is motivated by a PAC bound (the tightness of which is not clear) and various approximations are used to estimate I(w;S) (the accuracy of these is not immediately clear). The experiments address these issues by showing that the motivation and the approximations are reasonable. Weaknesses: - The limitations of this method are not discussed clearly.
For example, the paper provides an algorithm for sampling from the weight posterior p(w|S), but how does this compare computationally to standard training of a neural network, or to estimating the posterior in a Bayesian neural network? - There are some minor grammatical and spelling typos throughout, e.g. "infection point". An excellent paper with exciting ideas, clear presentation, and technical depth. <doc-sep>The authors propose a new interpretation of the Information Bottleneck (IB), dubbed PAC-Bayes Information Bottleneck (PIB). Where the IB is defined wrt the mutual information between feature representations $T$ and either inputs $X$ or targets $Y$, PIB is defined wrt the empirical risk over the dataset $S = \\{X_i, Y_i\\}_{i=1}^n$ and the mutual information between model parameters and $S$, $I(\\mathbf{w}; S)$, or information stored in weights. The authors show that PIB is a bound on generalization error. The authors derive a tractable estimator for $I(\\mathbf{w}; S)$. The authors present an approximate inference method for p(\\mathbf{w} \\mid S) that utilizes the proposed PIB. The authors show that PIB reflects the hypothesized two-phase fitting and compression modes of neural networks across different activation functions, network depth, and network width. They show that $I(\\mathbf{w}; S)$ yields a good estimator of the generalization error that is robust to label noise. They show that their inference method improves generalization across several benchmark datasets. **Strengths** To my knowledge this paper has significant technical and empirical novelty. The authors do a good job of summarizing previous work and differentiating their contributions. This is not my area of expertise, but the derivation of PIB, the estimator for $I(\\mathbf{w}; S)$, and the proposed optimal posterior all look novel and correct. The experiments are well done, thorough, and support the main claims of the paper. **Weaknesses** The main weaknesses of this paper are language and clarity. I would recommend a thorough Grammarly pass or perhaps external advice. The graphs should report means and standard error intervals over multiple random seeds. **Some specifics** - Some technical terms are used before definition (e.g., phase transition in the abstract) - In the abstract, "IB" cannot explain; maybe "IB theory" can, as used in the intro. - IIW and $I(\\mathbf{w}; S)$ are redundant; I would recommend just using $I(\\mathbf{w}; S)$ - "Third, mutual information becomes trivial in deterministic cases." Please elaborate / cite. - "(2) we derive a solution to the intractable...," can something intractable have a solution? Maybe approximation is better. - "optimal posterior of PIB," does PIB have a posterior or is the posterior over the weights? - Figure 2. IIW only shows the compression phase; can the loss also be included in these plots? To my knowledge this paper demonstrates significant technical and empirical novelty. I believe the main weaknesses can be addressed prior to publication. Therefore I recommend acceptance. However, I am not an expert on this topic, so my confidence is only a 2. <doc-sep>The authors propose a formulation of the information bottleneck problem, replacing the mutual information between input X and latent representation Z with the mutual information between the sample S and the weights W obtained from the sample. They derive closed-form solutions for this mutual information in the Gaussian setting and propose an SGLD scheme to optimize the objective.
Using this objective and optimization algorithm, the authors investigate several interesting scenarios, including different activation functions and noisy labels. The paper is generally well written and treats an interesting and timely topic. The idea to limit the information about the sample that is contained in the weights is not new (the authors cite several works that bound the generalization error via this information), but this is the first time that I have seen a corresponding cost function implemented in practice. There are, however, a few issues that are not perfectly clear to me: - The authors cite the literature stating that the generalization gap is limited by I(S;W) if the loss is sigma-sub-Gaussian. Does this hold for the negative log-likelihood in (6)? Also, in (6) is S a random variable or not? (4) requires that I(S;W) is computed as an expectation over p(S), while the log-likelihood in (6) is an expectation over P(w|S), i.e., not over p(S) but over a concrete S. How can this be understood? - Connected to this, is it safe to call the resulting cost function an information bottleneck cost function? I assume that this is better called an IIW-regularization rather than an IB cost. The IB cost is a very specific formulation that combines a mutual information cost with a mutual information utility, whereas here we have a general cost with an additional mutual information cost as regularization term. - The authors correctly claim that I(X;T) becomes trivial if the network is deterministic. More precisely, this mutual information becomes infinite in many of these cases (see "Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle" by Amjad and Geiger). I believe that this result carries over to I(S;W) being infinite for deterministic learning algorithms. This may not hold for all learning algorithms, but certainly for some. My own gut feeling suggests that I(S;W) is infinite for SGD with finitely many epochs (e.g., by the fact that there are only combinatorially many options to shuffle the batches), but that it is finite for SGLD, where noise is added to the weights. It is therefore not clear to me in which settings the analysis in Section 3 is a valid approximation. In other words, in which settings is the assumption that p(w|S) is Gaussian valid? Does it only hold for SGLD? - Connected to the point above: In which cases is the assumption that p(w) is Gaussian a valid approximation? - Can this Gaussian assumption about p(w) be used to bound I(S;W) from above? (E.g., for a Gaussian learning algorithm, can it be shown that the term I(S;W) is maximized if W becomes Gaussian as well? This would be at least intuitive from a channel coding perspective, where a Gaussian channel input is known to maximize the mutual information through a Gaussian channel, and which is then known to produce a Gaussian channel output.) - In Algorithms 1, pls. compare line 9 with your equation (15). In (15), you sum over squared inner products. In line 9 and 11, you square over the resulting sum of inner products. Is this difference intended, and if so, how can it be explained? Also, do we have $T_0 \\ge T_1$ in Algorithm 1? - In Fig. 1, why is the mutual information I(W;S) evaluated for different layers? What is the exact meaning of splitting the IIW between layers in terms of the generalization bound? I was assuming that the generalization bounds all consider the entire set of weights, and that the proposed PIB should do so as well. - Also in Fig. 
1, the discussion of the inflection point is not fully clear. - In Section 5.1, it is claimed that the variance of the information explodes. Can this be made more precise (e.g., by writing down the mathematical symbol for this variance)? Furthermore, this is not shown in the figures, if I remember correctly. - In all figures, why is the mutual information I(S;W) so small? These numbers do not seem right. I would assume that it is necessary to "learn" more than $10^{-2}$ bits/nats to successfully solve a classification problem. In other words, while the general trend of IIW seems to be correct, I am not convinced of the correctness of the absolute numbers. Can you provide some intuition about these small numbers? Is this connected with the proportionality symbol in (14)? (But going from (8) to (9), it seems that additive constants are dropped, not multiplicative constants.) For the sake of clarity, I would prefer that footnote 3 be in the main text. Also, in some instances the notation and terminology are not clear. E.g., is S sampled i.i.d. in (4)? Why is the "oracle prior" called an oracle? How exactly is the bootstrapping resampling weight \\zeta_k defined? Why is the temperature $\\beta$ called the annealing temperature just before (18)? At the end of Section 5.2 you write that the l2-norm keeps increasing -- the norm of what? A very interesting paper, dealing with an interesting and timely topic. Unfortunately, the paper is not perfectly clear throughout all sections.
This paper revisits the information bottleneck principle, but in terms of the compression inherent in the weights of a neural network, rather than the representation. This gives the resulting IB principle a PAC-Bayes flavor. The key contribution is a generalization bound based on optimizing the objective dictated by this principle, which is then tractably approximated and experimentally verified. Reviews raise concerns about assumptions made to achieve the tractable version, and a public discussion debates whether this is truly a PAC-Bayes bound. The authors address these adequately. Another concern is whether improvements in experiments can be ascribed to the new objective. The authors add new experiments in support of this. Additional concerns about the clarity of certain aspects of the paper were either addressed or promised to be addressed by the authors. Overall, the perspective of this paper, its technical contributions, and its experimental evaluations appear to be worthwhile to share with the community, as they advance the applicability of the information bottleneck principle.
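For intuition on the shape of the objective discussed above, here is a minimal sketch of an empirical-risk-plus-information loss. The `gaussian_iiw` term is only a stand-in surrogate for I(w;S) under a Gaussian posterior/prior assumption; it is not the paper's Eq. 15 estimator or its SGLD procedure, and the function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def pib_style_loss(model, batch, beta, iiw_estimate):
    """L = empirical risk + beta * I_hat(w; S).
    `iiw_estimate` is a differentiable surrogate for the information stored
    in weights (placeholder; the paper derives its own estimator)."""
    x, y = batch
    risk = F.cross_entropy(model(x), y)
    return risk + beta * iiw_estimate

def gaussian_iiw(mu, log_var, prior_var=1.0):
    """KL(N(mu, sigma^2) || N(0, prior_var)) as a simple stand-in surrogate
    for I(w; S) when the weight posterior and prior are assumed Gaussian."""
    var = log_var.exp()
    return 0.5 * torch.sum(var / prior_var + mu.pow(2) / prior_var
                           - 1.0 - log_var + torch.log(torch.tensor(prior_var)))
```

Penalizing this term pushes the weight posterior toward the prior, which is the "compression in the weights" reading of the bound that the meta-review describes.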
I liked this paper quite a lot. Although this paper does not belong to my area of expertise, I was able to understand the paper clearly because of its lucid exposition. Experimentally, the authors show a novel GNN design with an attention module that has comparable performance to the MLP and outperforms other GNN designs. I believe that this will be a valuable contribution to many practical problems. Unfortunately, this work does not have any theoretical results, and evaluating the experimental results is outside my range of expertise. Therefore I would like to defer this paper to my fellow reviewers.<doc-sep>Main Idea In this paper, the authors study the problem of GCN for disassortative graphs. The authors proposed the GNAN method to allow attention on distant nodes instead of limiting to local neighbors. The authors generalized the idea of graph wavelets with an MLP to generate the attention scores and utilized them to generate multiple attention heads. The authors carried out experiments on several real-world networks (4 assortative and 3 disassortative) with comparison to several state-of-the-art GCN methods. Strength: The authors study a very interesting problem of GCN/graph embedding for disassortative graphs. The proposed method is well motivated with solid theoretical motivation from graph wavelets. The proposed model is a very intuitive generalization of graph wavelet methods. The empirical evaluation is very thorough on seven networks with comparison to about 10 baselines of different kinds. Weakness: Though the authors mention the use of attention sparsification for speed-up, t is set to zero. It is interesting to see how scalable the proposed method is, as it needs global attention over possibly all nodes. An empirical comparison of running time would be very helpful. The authors only carry out experiments on three disassortative graphs, which are all very small. It would be interesting to see more experiments on disassortative graphs. Alternatively, it would be interesting to have an experiment on synthetic graphs where the \\beta can be controlled and varied smoothly to see how it affects the performance of different algorithms. The authors picked only node classification as the evaluation task. It is interesting to see how disassortativity could impact other tasks like graph reconstruction and link prediction. <doc-sep>This work proposes a new GNN architecture to help GNNs break their limitation of only working on homophilic networks. The technical contribution is to introduce global graph attention. I think the paper is written okay. The motivation is clear. The solution is reasonable. However, I have the following criticisms: 1. This work has limited novelty. Observing that GCN cannot work well over heterophilic networks is not a new observation. Using attention to capture the features from far-away nodes is natural but not novel. I do not think that it is reasonable to argue against other works that adopt the above idea, e.g., [1], by saying they are not expressive enough. Expressiveness sometimes may lead to model overfitting. Actually, ChebNet [2] can also capture far-away nodes and be expressive enough. Why does it not work well? I guess that it is due to some overfitting issue. Moreover, if I understand it correctly, the limited difference between this work and [3] is most likely the global attention, which has very limited contribution. 2.
Although the work repeatedly claims to reduce complexity, computing the global attention still requires computation for every pair of nodes, which is of course not scalable even for medium-sized graphs. 3. The heterophilic networks used for evaluation are very small, with only several hundred nodes. Why not try larger ones, say Actor and Cham. in [4]? I guess the computational issue comes from the global attention. [1] Non-Local Graph Neural Networks. [2] Convolutional neural networks on graphs with fast localized spectral filtering. [3] Graph wavelet neural network. [4] Geom-GCN: Geometric graph convolutional networks. ---post-discussion update---- I would like to thank the authors for preparing the rebuttal and attending our discussion. However, I still think the complexity is a concern for this work. I do not think that Eq. (3) can be implemented within the complexity that the authors claimed. Moreover, if the authors use another way to compute the attention scores, that way should be very clearly stated instead of written in a different form. Given the high complexity, I cannot clearly see the advantage of this work in comparison to [1], as non-local attention has been proposed in [1] already. [1] Non-Local Graph Neural Networks.
This paper proposes a GNN that uses global attention based on graph wavelet transform for more flexible and data-dependent GNN feature aggregation without the assumption of local homophily. Three reviewers gave conflicting opinions on this paper. The reviewer claiming rejection questioned the novelty of the paper and the complexity of the global attention mentioned in the paper. Even through the authors' responses and subsequent private discussions, concerns about complexity and novelty were not completely resolved. Considering the authors' claim that the core contribution of this paper is to design fully learnable spectral filters without compromising computational efficiency, it is necessary to consider why it is meaningful to perform global attention based on graph wavelet transform in the first place. In terms of complexity, although the wavelet coefficient can be efficiently calculated using the Chebyshev polynomials mentioned by the authors, in the attention sparsification part, n log n is required **for each node** in sorting, resulting in complexity of n^2 or more. There may still be an advantage of complexity over using global attention in a message-passing architecture, but it will be necessary to clarify and verify that, given that the proposed method uses an approximation that limits global attention within K hops. Also, this paper modifies the graph wavelet transform in graph theory, which requires a deeper discussion. For example, as the authors mentioned, the original wavelet coefficient psi_uv can be interpreted as the amount of energy that node v has received from node u in its local neighborhood. The psi_uv defined by the learnable filter as shown in Equation 3 has a different meaning from the original wavelet coefficient. There is insufficient insight as to whether it is justifiable to use this value as an attention coefficient. Overall, the paper proposes potentially interesting ideas, but it seems to require further development for publication.
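To make the wavelet-coefficient-as-attention idea discussed above concrete, here is a minimal sketch that computes heat-kernel graph wavelet coefficients from the normalized Laplacian and row-normalizes them into attention weights. It uses a dense eigendecomposition for clarity only; the paper and meta-review discuss Chebyshev approximations and sparsification instead, and the scale parameter and normalization here are illustrative assumptions.

```python
import numpy as np

def wavelet_attention(adj: np.ndarray, s: float = 1.0) -> np.ndarray:
    """Heat-kernel wavelet coefficients psi = U exp(-s * Lambda) U^T on the
    symmetric normalized Laplacian; psi[u, v] can be read as how much
    'energy' node v receives from node u, and used as an attention weight.
    Dense eigendecomposition for clarity only (O(n^3))."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    lam, U = np.linalg.eigh(lap)
    psi = U @ np.diag(np.exp(-s * lam)) @ U.T
    # row-normalize so each node's attention over the others sums to one
    return psi / psi.sum(axis=1, keepdims=True)
```

Replacing the fixed filter exp(-s * Lambda) with a learned function of the eigenvalues is, roughly, where a fully learnable spectral filter would enter; the concern raised above is whether such a learned psi still admits the energy-transfer interpretation.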
The paper proposes a novel framework for semi-supervised learning that addresses two issues of previous methods: 1) over-reliance on labeled data and 2) error accumulation. It shows that jointly solving the main task together with another task (that discriminates whether the data label is real or not) leads to better performance. Strengths - The proposed framework seems to be novel. - It works well in experiments, on a wide range of tasks (classification, label propagation, and data imputation). - It seems to be potentially beneficial for many domains, since it does not have domain restrictions, while many previous SSL methods rely on certain image-domain techniques such as consistency regularization (and data augmentation). Weaknesses - Since the proposed method is only compared with the original pseudo-label method, comparing with other extensions of pseudo-labelling methods that are mentioned in Section 5 would make the contributions clearer. - In addition to the papers mentioned in Section 5, there are a few papers that try to address the error accumulation in semi-supervised learning methods that is observed in pseudo-labelling. For example: "In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning" from ICLR 2021 and "Repetitive Reprediction Deep Decipher for Semi-Supervised Learning" from AAAI 2020. Questions - I am not sure if I understood the experiments correctly. As the missing rate goes higher, do we have more unlabeled samples (as explained in the last paragraph of page 6), or do we have more noisy-labelled samples (as explained in the 1st paragraph of Section 4.1)? - Can we show the 3rd task (data imputation) in Figures 2 to 4? - One of the benefits of the method seems to be that it can be incorporated into a wide range of SSL algorithms. I think the paper demonstrated that it can be used to enhance the pseudo-labelling method, but what kind of other SSL algorithms can SCL incorporate? Minor questions and comments - SSL is a very hot topic and there have recently been many advances. Since the experiments do not compare with many of the recent works, it would be better to emphasize why they were not compared. (For example, Section 1 has a discussion on how recent SSL methods utilize consistency regularization, which relies on heavy data augmentation techniques that are only available in certain domains.) - What value of the parameter alpha is used for image classification? (For the other two tasks, I think the appendix explains that alpha is 1). - If we are given a labeled dataset L and unlabeled dataset U, it seems we can automatically construct the vector M (which is explained at the end of page 2). If this is correct, then why do we need M as an input in Algorithm 1 on page 6? - What is P, introduced at the beginning of Section 2.2? It seems like it is a set from the $p \\in P$ notation, but since it compares with M in the loss function, it also looks like a vector. - typo "perforamnce" on page 6 - Should $m_i, m_j$ in the beginning of page 3 be $M_i, M_j$? - Is $Y$ a label space ($y \\in Y$), or is it the full set of labels in the training dataset ($Y = Y_L \\cup Y_U$)? - Ideally it would be better to perform several trials and report mean/standard error in Table 1. =========== after rebuttal Thank you for answering my questions. The additional experiments are helpful for a better understanding of the proposed method.
It looks like the advantageous point of the proposed method is now its low computational cost, according to the new experiments including UPS, rather than better performance. Although this may still be beneficial for the research community, it seems to be slightly less significant and also may affect the storyline. I would also recommend putting the new experiments with UPS in the main paper instead of the appendix. The proposed method seems to have some nice benefits, but I feel there are a few weaknesses that should be addressed. I also have a few questions and it would be helpful if the authors could take a look at the previous section (main review). <doc-sep>The paper introduces Self-interested Coalitional Learning (SCL), which is a novel approach to semi-supervised learning. SCL combines the traditional self-training approach to semi-supervised learning with an auxiliary task that infers label observability. The empirical results show that, in a variety of scenarios, SCL outperforms both self-training and the original model. This is an interesting paper on a topic with important practical applications: semi-supervised learning. The contribution appears to be original, and it is likely to influence future work in the field. The paper would greatly benefit from an additional section that would provide an intuitive, illustrative example of how and why the proposed approach outperforms self-training. Ideally, it should compare and contrast the convergence of (1) self-training, (2) the auxiliary task, and (3) SCL. The paper would also benefit by tightening the narrative around the ALPHA parameter, which, in the main paper, is only discussed in the theoretical framework. Appendix A provides no value of ALPHA for the first dataset, and it proposes (without any justification) a value of 1 for the other two domains. Appendix B is extremely brief and not very helpful. The authors make no recommendation on how to tune alpha, and the argument that even the worst alpha (in the 0.1 - 0.9 range) is better than the original model is fairly weak, given the wide variations of the accuracy due to changes in the value of alpha. OTHER COMMENTS: - for Table 1, please add three more rows: 0%, 90%, and 99%. The former is critical to understanding the upper-bound performance, while the latter two will bring SCL into a more realistic semi-supervised regime, where unlabeled data is one or two orders of magnitude more abundant than the labeled data - please add to Figure 6 the horizontal lines with the accuracy of the original model for each of the three missing rates - it is still unclear why you chose to use only 10% of the data for image classification (page 6); is scalability to large datasets a concern? - please spell-check the paper - eg, "perforamnce" on page 4 - page 2: please replace "more sufficient" - page 3: "jointly solving above two tasks" --> "jointly solving THE above two tasks" - page 3: "there are some other works embody" --> "there are some other works THAT embody" - page 4: "are impacted the influence" --> "are impacted BY the influence" - page 7: please replace "well learn" Overall, this paper uses a novel idea to improve the state of the art for semi-supervised training. <doc-sep>This paper proposes a new semi-supervised learning method.
Motivated by the error accumulation problem of typical self-training paradigms, the authors propose to explicitly model the confidence of pseudo-labels as an auxiliary task. They come up with a self-interested coalitional learning (SCL) strategy to solve both tasks jointly. Under the new framework, the main task is transformed into a cost-sensitive learning problem. Experiments demonstrate that pseudo-labels are substantially more accurate with the new method and that the main tasks achieve better performance at different label missing rates. Pros: - Overall the paper is well-structured and easy to follow. - The new method achieves its original goals and improves SSL effectiveness by jointly solving the main and the auxiliary tasks. - The authors introduce a new SCL strategy to solve the problems, which can be applied to a broader class of learning problems. Cons: - Lack of experiments - The proposed method is only compared with the self-learning method (with the same base learner). While this demonstrates how the model is improved with SCL, it is also necessary to compare with state-of-the-art SSL methods. - It is also valuable to include the supervised method with a fully-labeled dataset as a reference in all experiments. - For data imputation, a more common case is that the missing state is correlated with the input/output rather than being missing at random. This would also check the method's robustness against labeled/unlabeled distribution shift. - Compared with the original self-learning method, the new method has an extra discriminator model, which is based on the same base learners as the main tasks. It is meaningful and fairer to compare with supervised models of higher capacity. - The paper doesn't cover how SCL can work together with consistency regularization, which is commonly used together with self-learning. Besides, I have a few questions: - Although Table 1 doesn't have a row for Missing rate = 0% (full dataset), it seems the SCL methods have better accuracy than the model trained with the full dataset for the first two tasks. Is this because SCL has double the model capacity due to the extra discriminator? - Why is the test accuracy of pseudo-labels 100% for the SCL method in Figure 4? Are they calculated differently? This is an interesting paper from a technical perspective. But it definitely needs more empirical studies to demonstrate practical value. <doc-sep>This paper proposes a new semi-supervised learning framework by introducing an auxiliary task that distinguishes whether the pseudo-labels are truly labeled or not. Then, this information is used to add a reweighting loss to the main objective. Experiments on several simple benchmark datasets show that the proposed method outperforms some naive baselines. The idea of introducing the auxiliary task that discriminates whether an instance is labeled is quite interesting. In effect, such a strategy was first introduced in active learning [1]. In the VAAL method [1], a similar discriminator is introduced to identify whether an example is labeled or not, which is then used to indicate the uncertainty of an example for active selection. Therefore, the proposed method has a close connection to a recent work in SSL [2] that also employs the uncertainty measure to select high-quality pseudo-labels. I have the following concerns. 1. The derivation in section 3.2 is confusing. For example, in Eq. (3), the second equality is incorrect and the term $\\frac{dd}{dx}$ should be added.
Also, it would be better to change the notation of $d$ (discriminator) to another one, since the derivative notation dx also uses d. Besides, I actually did not understand why $\\mathcal{L}_B$ depends on $f$, since $f$ and $\\mathcal{L}_B$ are from two different branches without sharing network blocks (Figure 1). 2. The experimental section is not convincing, and this is my main concern. The datasets and the baselines are too simple. State-of-the-art SSL methods should be employed to support the claims. In particular, the uncertainty-based SSL method [2] should be compared. As I have discussed above, the proposed method may be implicitly equivalent to existing techniques in SSL. Is the proposed method complementary to existing methods? Or is it contradictory to some techniques? These questions require an in-depth empirical analysis. Overall, this work is below the bar of an ICLR paper given its poor experiments. [1] Sinha S, Ebrahimi S, Darrell T. Variational adversarial active learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 5972-5981. [2] Rizve M N, Duarte K, Rawat Y S, et al. In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning[J]. arXiv preprint arXiv:2101.06329, 2021. Interesting idea, poor experiments and confusing derivation.
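The reviews above describe SCL as turning the main task into a cost-sensitive problem by letting an auxiliary "label observability" discriminator down-weight unreliable pseudo-labels. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the names (`f`, `d`, `alpha`), the sigmoid confidence, the `detach`, and the simple additive combination are assumptions for illustration and not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def scl_step(f, d, x, y, observed_mask, alpha=1.0):
    """One joint loss evaluation.
    f: main-task classifier; d: label-observability discriminator.
    x: inputs; y: labels (true where observed, pseudo-labels elsewhere);
    observed_mask: float tensor, 1.0 where the label was actually observed."""
    logits = f(x)                                    # main-task predictions
    obs_logit = d(x).squeeze(-1)                     # auxiliary task: is this label real?

    # Auxiliary loss: discriminate observed vs. pseudo-labelled examples.
    loss_aux = F.binary_cross_entropy_with_logits(obs_logit, observed_mask)

    # Cost-sensitive main loss: keep full weight on observed labels and
    # down-weight pseudo-labelled examples by the discriminator's confidence.
    confidence = torch.sigmoid(obs_logit).detach()
    per_example = F.cross_entropy(logits, y, reduction="none")
    weights = observed_mask + (1.0 - observed_mask) * confidence
    loss_main = (weights * per_example).mean()

    return loss_main + alpha * loss_aux
```

In this reading, the auxiliary head is what prevents confidently wrong pseudo-labels from accumulating, which is the failure mode of plain self-training that the reviews focus on.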
This paper proposes a new method for the important problem of semi-supervised learning. This method relies on an auxiliary task, label observability prediction, to weight the examples according to the confidence in their pseudo-labels, so as to avoid the propagation of errors encountered in self-training. Limited experiments show that the proposed method can compete with other methods in terms of performance or training time. On the positive side, all evaluators agree on the potential value of the proposed approach, which is generic in nature. On the negative side, the experimental evaluation, although strengthened during the discussion, is not yet strong enough to have really convinced us of the real merits of the method. In particular, comparisons with the state of the art still need to be improved. In addition, the paper would benefit from some rewriting, in particular of the mathematics (e.g., the d notation for task B should be avoided, as suggested by one reviewer, and there is a misplaced partial derivative in equation 6). The authors could also simplify their derivation by using the envelope theorem. I therefore recommend rejection, with an encouragement to strengthen the experimental part and to improve the derivation of the proposed method.
The main goal of this paper is to introduce a simple methodology for optimizing transformer-based models for efficiency and effectiveness. The paper introduces two main ideas: 1) A top-down strategy for pruning components of a transformer model: Given a specific focus, say speed, the strategy is to consider pruning large coarse-grained components first, followed by smaller finer-grained components. The pruning decision is made based on a "significance analysis" -- a component is considered a candidate for pruning if removing it from the model does not result in a substantial increase in the model's loss (as decided by a pruning threshold). 2) Pruning and approximation techniques for different components: For example, feed-forward networks are pruned by removing weights in groups (determined via a hyperparameter). For approximating self-attention, a sign-matching technique is used to decide which top-K keys to use for computing Query x Key dot products. The main strengths of this work are as follows: 1) The techniques do not require training networks from scratch and can be applied directly during fine-tuning. 2) The techniques are simple and should apply widely to most transformer-based models. 3) The empirical results support the claim that the technique can yield significant speed-ups and memory reductions while maintaining accuracy, and even provide improvements in accuracy if that is the pruning goal. They show that the technique is orthogonal to other models explicitly designed for speed and memory footprint (Q8BERT, DistillBERT) and can provide further improvements in both efficiency and effectiveness. 4) This is a practical and useful approach that should be widely applicable, along with many useful insights about optimizing transformer-based systems. I appreciate that the experimental results are reported with averages across multiple runs! I don't see any major weaknesses in the paper. Here are some areas that can be improved: 1) The description of the pruning strategies was hard to follow and needed to be tightened up. Possibly adding equations and some pseudo-code to the description should help. 2) I am curious to know what components get pruned across the different models that were optimized. I wonder if there are systematic differences between original and distilled models and between auto-regressive (GPT) and auto-encoding style models. 3) Also, some level of ablation analysis on the strategies used would be helpful. For example, if the elements were not ordered based on the granularity, would the results be any different? Since this is an iterative strategy, the order should play an important role in the selection and utility of the subsequent pruning steps. The same goes for the set of pruning strategies. A related question would be what gives the biggest gains. 4) What is the impact on the fine-tuning time? The baseline only requires one fine-tuning pass. Does this method require multiple fine-tuning passes? Or can the loss thresholds be computed on a smaller subset of the target data? This may be a good direction for future work for tasks where the training data is relatively large, where one cannot afford to exhaustively search through all the pruning strategies. <doc-sep>After reading the rebuttal, some of my concerns are addressed by the additional experiments. But I also agree with other reviewers that the result is not very surprising. As R4 mentioned, the proposed method depends on a specific downstream task where the "small" "general" BERT can be further pruned.
For a fair comparison to previous work, baselines that are applied to a specific fine-tuning task need to be compared. ===== This paper presents a new framework for creating small fine-tuned pre-trained language models. The framework has 3 components: 1. a set of transformer components to be pruned; 2. a significance analysis for identifying unimportant elements; 3. a set of techniques to prune or approximate the transformer elements. Pros: 1. The framework is very adaptive, considering different basic elements of the transformer. 2. The framework is very efficient, removing large components (e.g., layers, attention blocks, ffd layers) first and small components (e.g., weight groups) later. 3. The framework gathers multiple different pruning/approximation techniques and tries to explore the limit of pruning pre-trained models, which is appreciated. Cons/Questions: 1. Is the loss used in the significance analysis computed using the development set? If the validation loss is used, the experiment results in Table 1 are not reliable. 2. There are many BERT pruning papers. Providing a comparison to these papers is very important to evaluate the proposed method. Can the model prune more weights at the same performance level? Or can the model perform better at the same pruning ratio? 3. It would also be helpful to present how much compute is needed to prune the network, e.g., how many prune-finetune cycles are needed. 4. Lack of results on pruning BERT-base on GLUE, which is a very standard and common setting. 5. In Figure 3, why is Q8BERT + Speed Focus even larger/slower than Q8BERT? At the same speed, Q8BERT + Speed Focus is significantly worse than Q8BERT. Minor: Page 5: less the minimum loss seen ==> less 'than' the minimum loss<doc-sep>This paper presents a method for improving a fine-tuned Transformer in terms of a specific metric such as size, speed, or accuracy. The candidate elements for removal are considered hierarchically with some heuristics and are evaluated in terms of training and validation loss to determine whether they should actually be removed from the model. The authors apply their method to several state-of-the-art Transformer models and show that they can produce fast and compact models without losing much accuracy. Although the individual techniques employed to realize the whole pruning process are not particularly novel, the paper presents a well-thought-out approach to combining them and reports very promising experimental results. I think this is a nice contribution to the community, given that computation cost is increasingly important in dealing with BERT-like models. It seems to me that the authors used transformers whose weights are shared between different layers, like Universal Transformers or ALBERT. Maybe I missed something, but I think the authors should clarify if this is really the case in the manuscript. The entire process of pruning is a bit vague and hard to replicate. Would it be possible to describe the whole process in pseudo-code? (Is Algorithm 1 the whole process?) I think the authors should also describe the computational cost (or maybe wallclock time) required to perform the proposed pruning processes. It seems to me that the search space is rather large and requires a considerable amount of computation. > p.5 … we prune the element only if the training/validation loss I think you should be more specific here. How did you actually use both the training and validation loss?
Why do you need to look at the training loss when you are interested in the generalization error? > p.5 … weight groups of (Wn) … Why is this Wn? I thought this should be W. Minor comments: p.5 less the -> less than the? p.6 doesn't -> does not p.6 ''attention -> ``attention p.7 second order -> second-order? <doc-sep>Thanks to the authors for the detailed feedback! I still have concerns about the clarity of the presentation, and some contributions of the paper are not strong enough, so I'll keep my score. === Summary: This paper presents a framework to systematically perform pruning and layer approximation. The framework includes a queue of potential elements for compression. At each time step, the framework evaluates the head element of the queue, tries to prune the whole element or perform an approximation (quantizing, pruning attention heads, and approximating with sign-matching attention), and keeps the transformation only if the loss in performance is acceptable. The paper performs experiments with various models on GLUE and shows speedups or compression compared to the original model. Reasons for score: The techniques used in the paper are not novel, and the choices on how to apply multiple compression techniques need more justification. The experiment results are okay but not surprising. The presentation of the paper needs to be polished. See below for more details. Pros: 1. I like the insight that {approximate, fine-tune, approximate} cycles don't work for fine-tuning. 2. I like the insights used to determine which elements should be examined first: start from the larger blocks and the later layers. I hope this point can be emphasized more and compared with more brute-force and less efficient algorithms. For example, in each round, one can choose to prune the layer that causes the least loss of performance. You can compare your greedy algorithm with this algorithm to show that the gain of using the less efficient algorithm is not significant. 3. The sign-matching attention proposed in the paper is new. I would like to see more emphasis and ablation studies on the effectiveness of this special module. Cons: 1. It is well known that compressing the model is easier during the fine-tuning phase [1, 2]. I don't think this should be a contribution to emphasize for the paper. 2. The whole compression framework has a single global error bound. Combining this with the greedy layer-by-layer approach taken by the framework, could the following case be possible: a layer that is early in the queue causes a huge drop in accuracy and thus makes all the future layers impossible to remove because the global error bound has been reached? A better way is to only remove the layer with the lowest loss reduction. It would be better to justify this point with an ablation study, or at least show in the paper that the final pruned model doesn't have this issue. 3. At the end of page 5: "When optimizing for speed, however, removing weight groups with low significance from arbitrary locations does not help, since it introduces unstructured sparsity in the weight matrix that can be difficult to exploit to achieve speedups." It's true that removing random entries in a matrix will not help with actual speedups, but you can remove an arbitrary set of rows of the matrix and then restructure the weight matrix (i.e., concatenate all the remaining rows to form a new matrix) to make it efficient for modern parallel hardware. 4. I don't really understand the point of using accuracy as the final goal.
If the framework is for compression, the goal should be about speedup or size. If accuracy really matters, it should be enforced as the threshold instead of as the final goal. Also, I don't see the difference in the framework between using speedup or size as the goal, since all the thresholds are defined by loss. 5. The results in the paper are okay, but compared to previous works in computer vision [3], it seems that the model size can be further compressed. 6. There are multiple places where the presentation can be improved: a. It would be clearer to use pseudo-code instead of a diagram in Figure 2. b. It would be clearer to present Table 1 as multiple tables. c. It would be better to put the results comparing with previous works in a table (in the middle of page 8). Minor comments: - On page 5, 3rd paragraph from the bottom, "less the minimum loss" -> "less than minimum loss" References: [1] Jiao, Xiaoqi, et al. "Tinybert: Distilling bert for natural language understanding." arXiv preprint arXiv:1909.10351 (2019). [2] Shen, Sheng, et al. "Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT." AAAI. 2020. [3] Han, Song, Huizi Mao, and William J. Dally. "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding." arXiv preprint arXiv:1510.00149 (2015).
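Several reviewers above ask for pseudo-code for the coarse-to-fine "significance analysis". The sketch below is one plausible reading of that loop, written under the single-global-error-budget interpretation that one reviewer questions; `candidates` (ordered from coarse components such as layers down to weight groups), `evaluate_loss`, and the `remove`/`restore` interface are placeholders rather than the paper's actual API.

```python
def greedy_prune(model, candidates, evaluate_loss, max_loss_increase):
    """Greedily prune elements in the given (coarse-to-fine) order."""
    original_loss = evaluate_loss(model)
    for element in candidates:                 # large components first, weight groups last
        element.remove(model)                  # tentatively prune this element
        # Keep the element pruned only if the total loss increase stays within
        # the budget; otherwise it is deemed significant and restored.
        if evaluate_loss(model) - original_loss <= max_loss_increase:
            continue
        element.restore(model)
    return model
```

Whether the budget is global (as written here) or reset per element is exactly the ambiguity raised in the cons above, so this should be read as one interpretation, not the definitive algorithm.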
This paper introduces a set of techniques that can be used to obtain smaller models on downstream tasks when fine-tuning large pre-trained models such as BERT. Some reviewers have noted the limited technical novelty of the paper, which can be seen more as a combination of existing methods. This should not be a reason for rejection alone, but unfortunately, the results in the experimental section are also a bit weak (e.g., see [1-4]), there is no very insightful analysis, and it is hard to compare to existing work. For these reasons, I believe that the paper should be rejected. [1] DynaBERT: Dynamic BERT with Adaptive Width and Depth [2] Training with quantization noise for extreme model compression [3] MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices [4] SqueezeBERT: What can computer vision teach NLP about efficient neural networks?
The paper is a natural extension of [1], which shows the importance of spectral normalization to encourage diversity of the discriminator weights in a GAN. A simple and effective parametrization of the weights, similar to the SVD, is used: W = USV^T, along with an orthonormality penalty on U and V and a spectral penalty to control the decay of the spectrum. Unlike other parametrizations of orthogonal matrices, which are exact but computationally expensive, the proposed one tends to be very accurate in practice and much faster. A generalization bound is provided that shows the benefit of controlling the spectral norm. Experimental results show that the method is accurate in constraining the orthonormality of U and V and in controlling the spectrum. The experiments also show a marginal improvement of the proposed method over SN-GAN [1]. However, it is unclear why one would want to control the whole spectrum when Theorem 2 only involves the spectral norm. In [1], it is argued that this encourages diversity in the weights, which seems intuitive. However, it seems enough to use Spectral Normalization to achieve this purpose empirically, according to that same paper. It would perhaps be good to have an example where SN fails to control the spectrum in a way that significantly impacts the performance of the algorithm while the proposed method doesn't. Overall the paper is clearly written and the proposed algorithm effectively controls the spectrum as shown experimentally; however, given that the idea is rather simple, it is important to show its significance with examples that clearly emphasize the importance of controlling the whole spectrum versus the spectral norm only. Revision: Figure 1 is convincing and hints at why SN-GAN achieves slow decay while in principle it only tries to control the spectral norm. I think this paper is a good contribution as it provides a simple and efficient algorithm to precisely control the spectrum. Moreover, a recent work ([2], Theorem 1) provides theoretical evidence for the importance of controlling the whole spectrum, which makes this contribution even more relevant. [1] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral Normalization for Generative Adversarial Networks. Feb. 2018. [2] M. Arbel, D. J. Sutherland, M. Bińkowski, and A. Gretton. On gradient regularizers for MMD GANs. NIPS 2018. <doc-sep>The paper builds on the experimental observations made in Miyato et al. (2018), in which the authors highlight the utility of spectral normalization of weight matrices in the discriminator of a GAN to improve the stability of the training process. The paper proposes to reparameterize the weight matrices by something that looks like the singular value decomposition, i.e., W = U E V^T. Four different techniques to control the spectrum of W by imposing various constraints on E are discussed. For maintaining the orthonormality of U and V, penalties are added to the cost function. The paper also derives a bound on the generalization error and experimentally shows the "desirable slow decay" of singular values in the weight matrices of the discriminator. Other experiments which compare the proposed approach with the SN-GAN have also been given. (1) The paper puts a lot of stress on the stability of the training process in the beginning, but clear experiments supporting their claim related to improved "stability" are lacking.
(2) It would be helpful for the readers if more clarity is added to the paper with respect to the desirability of "slow decay of singular values" and spectral normalization. (3) The point regarding convolutional layers should be part of the main paper. <doc-sep>This paper proposes to parameterize the weight matrices of neural nets using the SVD, with approximate orthogonality enforced on the singular vectors using Orthogonal Regularization (as opposed to e.g. the Cayley transform or optimizing on the Stiefel manifold), allowing for direct, efficient control over the spectra. The method is applied to GAN discriminators to stabilize training as a natural extension of Spectral Normalization. This method incurs a slight memory and compute cost and achieves a minor performance improvement over Spectral Normalization on two benchmark image generation tasks. I'm a bit back and forth on this paper. On the one hand, I think the ideas this paper proposes are very interesting and could provide a strong basis off which future work can be built--the extension of spectral normalization to further study and manipulation of the spectra is natural and very promising. However, the results obtained are not particularly strong, and as they stand do not, in my opinion, justify the increased compute and memory cost of the proposed methods. The paper's presentation also wavers between being strong (there were some sections I read and immediately understood) and impenetrable (there were other sections which I had to read 5-10 times just to try and grasp what was going on). Ultimately, my vote is for acceptance. I think that we should not throw out a work with interesting and potentially useful ideas just because it does not set a new SOTA, especially when the current trend with GANs seems to suggest that top performance comes at a compute cost that all but a few groups do not have access to. With another editing pass to improve language and presentation this would be a strong, relevant paper worthy of the attention of the ICLR community. My notes: -The key idea of parameterizing matrices as the SVD by construction, but using a regularizer to properly constrain U and V (instead of the expensive Cayley transform, or trying to pin the matrices to the Stiefel manifold) is very intriguing, and I think there is a lot of potential here. -This paper suffers from a high degree of mathiness, substituting dense notation in places where verbal explanation would be more appropriate. There are several spots where explaining the intuition behind a given idea (particularly when proposing the various spectrum regularizers) would be far more effective than the huge amount of notation. In the authors' defense, the notation is generally used as effectively as it could be. My issue is that it often is just insufficient, and communication would be better served with more illustrative figures and/or language. -I found the way the paper references Figure 1 confusing. The decays are substantially different for each layer--are these *all* supposed to be examples of slow decay? Layer 6 appears to have 90% of its singular values below 0.5, while layer 0 has more than 50%. If this is slow decay, what does an undesirable fast decay look like? Isn't the fast decay shown in Figure 2 almost exactly what we see for Layer 6 in Figure 1? What is the significance of the sharp drop that occurs after some set number of singular values?
The figure itself is easy to understand, but the way the authors repeatedly refer to it as an example of smooth singular value decays is confusing. -What is D-optimal design? This is not something commonly known in the ML literature. The authors should explain what exactly that D-optimal regularizer does, and elucidate its backward dynamics (in an appendix if space does not permit it in the main body). Does it encourage all singular values to have similar values? Does it push them all towards 1? I found the brief explanation ("encourages a slow singular value decay") to be too brief--consider adding a plot of the D-optimal spectrum to Figure 1, so that the reader can easily see how it would compare to the observed spectra. Ideally, the authors would show an example of the target spectra for each of the proposed regularizers in Figure 1. This might also help elucidate what the authors consider a desirable singular value decay, and mitigate some of the issues I take with the way the paper references Figure 1. -The explanation of the Divergence Regularizer is similarly confusing and suffers from mathiness, a fact which I believe is further exacerbated by its somewhat odd motivation. Why, if the end result is a reference curve toward which the spectra will be regularized, do the authors propose (1) a random variable which is a transformation of a Gaussian, (2) take the PDF of that random variable, (3) discretize the PDF, (4) take the KL between a uniform discrete distribution and the discretized PMF, and (5) ignore the normalization term? If the authors were actually working with random variables and proposing a divergence this might make sense, but the items under consideration are singular values, which are non-stochastic parameters of a model, so treating them this way seems very odd. Based on Figure 2 it looks like the resulting reference curves are fine, but the explanation of how to arrive there is quite convoluted--I would honestly have been more satisfied if the authors had simply designed a function (a polynomial logarithmic function perhaps) with a hyperparameter or two to control the curvature. -"Our experimental results show that both combinations achieve an impressive results on CIFAR10 and STL-10 datasets" Please do not use subjective adjectives like "impressive." A 6.5% improvement is okay, but not very impressive, and when you use subjective language you run the risk of readers and reviewers subjectively disagreeing with you, as is the case with this reviewer. Please also fix the typo in this sentence; it should at least be "...achieve [impressive] results" or "achieve an [impressive] improvement on..." Section 3: -What is generalization supposed to mean in this context? It's unclear to me why this is at all relevant--is this supposed to indicate the bounds for which the Discriminator will correctly distinguish real vs generated images? Or is there some other definition of generalization which is relevant? Does it actually matter for what we care about (training implicit generative models)? -What exactly is the use of this generalization bound? What does it tell us? What are the actual situations in which it holds? Is it possible that it will ever be relevant to training GANs or to developing new methods for training GANs? Experiments: -I appreciate that results are taken over 10 different random seeds. -If the choice of gamma is unimportant then why is it different for one experiment? I found footnote 4 confusing and contradictory.
-For Figure 3, I do not think that the margin is "significant"--it constitutes a relative 6.5% improvement, which I do not believe really justifies the increased complexity and compute cost of the method. -I appreciate Table 1 and Figure 4 for elucidating (a) how orthogonal the U and V matrices end up and (b) the observed decay of the spectra. Appendix: -Please change Table 7 to be more readable, with captions underneath each figure rather than listed at the top, which forces readers to count the rows and match them to the caption. What is the difference between SN-GAN and Spectral Norm in this table? Or is that a typo, and it should be spectral-constraint? -I would like to see a discussion of Table 7 / an interpretation of why the spectra look that way (and why they evolve that way over training) for each regularizer. Minor: -Typos and grammatical mistakes throughout. -As per the CIFAR-10/100 website (https://www.cs.toronto.edu/~kriz/cifar.html), the Torralba citation is not the proper one for the CIFAR datasets, despite several recent papers which have used it. -Intro, last paragraph, "Generation bound" should be generalization bound? -Page 4, paragraph 2, last sentence, problem is misspelled.
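For readers trying to picture the parametrization these reviews debate, here is a rough, hypothetical sketch of a linear layer with W = U S V^T and a soft orthogonality penalty on U and V. The initialization, the squared-Frobenius penalty form, and the omission of the paper's spectral regularizers on S are simplifying assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SVDLinear(nn.Module):
    """Linear layer parameterized as W = U diag(s) V^T (assumes rank <= min(in, out))."""
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.U = nn.Parameter(torch.empty(out_features, rank))
        self.V = nn.Parameter(torch.empty(in_features, rank))
        self.s = nn.Parameter(torch.ones(rank))          # unconstrained spectrum
        nn.init.orthogonal_(self.U)
        nn.init.orthogonal_(self.V)

    def weight(self):
        return self.U @ torch.diag(self.s) @ self.V.t()

    def forward(self, x):
        return x @ self.weight().t()

    def orth_penalty(self):
        # ||U^T U - I||_F^2 + ||V^T V - I||_F^2: pushes the columns of U and V
        # towards orthonormality without an exact (and expensive) parametrization.
        eye = torch.eye(self.U.shape[1], device=self.U.device)
        return ((self.U.t() @ self.U - eye) ** 2).sum() + \
               ((self.V.t() @ self.V - eye) ** 2).sum()
```

The penalty would typically be added to the discriminator loss with a small coefficient, and any regularizer on the decay of the spectrum would act directly on `s`, which is what makes the whole spectrum easy to inspect and control.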
All the reviewers agree that the paper has an interesting idea on regularizing the spectral norm of the weight matrices in GANs, and a generalization bound has been shown. The empirical results show that the regularization indeed improves the performance of the GANs. Based on these, the AC suggested acceptance.
The paper is about a method for synthesizing binaural audio from a mono recording of a single speaker's speech. First, I think the title is too general. The paper does not attempt to convert all possible sounds; rather, it tries to convert a single speaker's monaural speech signal to binaural audio where the speaker is moving. I think this inherent assumption is important, since the method will probably not work for multiple overlapping audio sources. I suggest changing the title to "Neural synthesis of binaural speech of a single moving speaker." The first part of the network, "neural time warping", is an interesting component that is capable of adjusting the delays conditioned on the location and orientations of the source and microphone, such that location-dependent binaural audio is formed by estimating time-varying delays of the original mono recording separately for the two channels. It is believable that such a module would be helpful for a single moving speaker. However, such a model would not help or work when there are more than two active audio sources. A separation module would be required for that scenario. Neural time warping is an autoregressive model which can work online. The second-stage convolutional network, which uses conditioned hyper-convolutions, is also an interesting architecture that takes the warped signals and applies time-convolutions with kernels obtained from the conditioning input, which contains the time-varying locations and orientations of the source and the microphone. The section about the loss function is also interesting in that the time-domain l2 loss is shown to not work well for accurate phase estimation, so the authors propose to add a separate phase loss term to compensate for that. I think it would be better if Figure 2 were replaced with a plot of epsilon/|yhat| versus the amplitude error divided by |yhat| in (a) and versus the phase error in (b). It could be clearer than the current 2D figure, which is hard to interpret. The use of "sine activation" is not well justified. "Sine" activation is useful in the first layer of a "signal representation network", which is different from a signal prediction network. I do not see how and why that could be helpful here. In terms of comparisons, the 2.5D method uses visual information as conditional information to generate complex masks to produce binaural audio. In this paper, visual information is replaced with the spatial conditioning information. It would help to get more information about the window size and hop size used in 2.5D, since they may be an important factor that relates to the amount of delay they can introduce. For the wavenet comparison, it was not clear how the wavenet model was trained to generate binaural data. Did it use any conditioning information? If so, how? Was it applied in an auto-regressive way with randomized sampling? The wavenet audio example sounded noisy, which is not typical of wavenet-generated audio. It looks like the DSP method can utilize a listener-specific HRTF, which may be difficult to incorporate into the proposed neural model. Is it an important issue? How does the model generalize to unseen speakers and rooms? The training and testing strategy uses the same room and the same speaker(s). Would we have any problem when the monaural audio is recorded in some other room with some other speaker? In Figure 8, maybe it is OK not to draw the original binaural signal for every method.
In general, I liked the neural warping and conditional convolutional components, which are interesting, and I liked the analysis of the loss function. The approach is an interesting way to obtain a binaural version of a monaural single-speaker recording in a room. The dataset produced for the paper would also be useful for research. **Update after revision** The revision improved the paper. Thanks for taking care of my comments. The justification of sine activations and the generalization-to-unseen-speakers experiment are nice additions. The new title is a bit better, and I think it may be OK since the goal is to perform a moving-source simulation for single speech sources. Multiple speech sources can be simulated separately and added together, as mentioned. The authors may consider a possibly better name: "Neural binaural synthesis from mono speech", which emphasizes that the synthesized target is "binaural speech" from a single speech recording. Just a few more points. 1. I think it is essential in wavenet to apply the model in an auto-regressive fashion over samples. Just using the network architecture and the loss function from wavenet is not equivalent to "using a wavenet model", since an essential part of the model is the autoregressive sampling, which makes sure the samples are dependent and coherent. Without auto-regressive sampling, the resulting sound is poor, as observed by the authors. So, I suggest emphasizing that "autoregressive sampling" is not performed in the paper to avoid misleading the readers. 2. More explanation of 2.5D is appropriate. One wonders if using a larger STFT window size would improve its results.<doc-sep>Strengths: 1. The paper is well written. It includes clear math notation and figures. Readers can easily follow the thought process of the authors. For example, Figure 2 shows the relation of the l2 loss and phase loss with respect to target energy, indicating the importance of penalizing the phase loss in the end-to-end system. The same observation is supported by Figure 3. 2. Strong results. The proposed end-to-end model significantly outperforms the previous SOTA in terms of objective measures and subjective tests. The video demo is very convincing. The model improved spatialization and sound quality. 3. High novelty. This paper proposes to impose monotonicity and causality on the learned warping function, which incorporates the physics of sound propagation. I am excited to see another example of applying domain knowledge to an end-to-end model. The model includes two novel components: the neural warp network compensates for the errors of the geometric warp, and the temporal convolution works as a post-processing module to account for reverberation and other effects. The ablation study shows both components are critical. To be improved: 1. The caption for Figure 4(a) seems to be incomplete. 2. It would be good to include a table comparing the proposed model with baselines in terms of model size and inference speed.<doc-sep>This paper presents a neural network-based model to generate binaural audio given single-channel audio and the positions of the source/listener and their angles. The authors developed a dataset of binaural audio, which will be released on acceptance. Technical details and the model architecture are available in the body of the paper, whereas additional details such as the baseline DSP-based approach, the proof, and the dataset are available in the appendix.
A demo video demonstrating the capability of the model is also provided as supplementary material. There are a few parts that need to be addressed. (1) It is unclear why DTW-based warping is required. IIRC, the warpfield here can represent not only a shift but also other monotonic & causal transformations such as repeating. If there is only a delay between left and right, just having a shift is enough, isn't it? It would be great if the authors could explain the motivation for using the warpfield more clearly. (2) The use of hyperconvolution is an interesting idea. Equation 5 uses conditional temporal convolution. However, audio generative models such as WaveNet use a different architecture: gated convolution. The gating mechanism can give additional non-linearity, and so I'm wondering if you can evaluate the performance of hyperconvolution against gated convolution. (3) Too large confidence intervals in Table 4. Although there were many evaluations, the confidence intervals were pretty large and there were overlaps among them (e.g., a small overlap between DSP and "ours" in cleanliness, and large overlaps in spatialization and realism between DSP and ours). With this result it is difficult to claim that there was a significant improvement over the baseline system. Please check your results and design the experiment more carefully to figure out whether there is any significant difference between them. Conducting a side-by-side comparison is one possibility. Comments: - This paper claims that it works in real time, but no information about speed, such as the real-time factor and hardware specification, is provided. - Sampling rate information is not explicitly provided in the experiment section. - A 0.6 MOS difference is large, not "a bit". - Modern WaveNet models often use mixture-of-logistics (refer to the Parallel WaveNet paper for details) as the output rather than mu-law to achieve better quality.
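To make the loss discussion in these reviews more concrete, here is a small, hypothetical sketch of an objective that combines a time-domain l2 term with an explicit phase term computed on an STFT. The STFT settings, the energy-based weighting of the phase term, and the mixing coefficient `lam` are assumptions for illustration only, not the paper's actual loss.

```python
import torch

def binaural_loss(pred, target, n_fft=512, hop=128, lam=0.1):
    """pred, target: tensors of shape (batch, 2, time) holding left/right channels."""
    l2 = ((pred - target) ** 2).mean()

    window = torch.hann_window(n_fft, device=pred.device)
    P = torch.stft(pred.reshape(-1, pred.shape[-1]), n_fft, hop,
                   window=window, return_complex=True)
    T = torch.stft(target.reshape(-1, target.shape[-1]), n_fft, hop,
                   window=window, return_complex=True)

    # Wrap the phase difference to (-pi, pi] so +pi and -pi count as close, and
    # down-weight bins with little target energy, where phase is mostly noise.
    phase_diff = torch.angle(P) - torch.angle(T)
    phase_diff = torch.atan2(torch.sin(phase_diff), torch.cos(phase_diff))
    weight = (T.abs() / (T.abs().mean() + 1e-8)).clamp(max=1.0)
    phase_loss = (weight * phase_diff.abs()).mean()

    return l2 + lam * phase_loss
```

The energy weighting reflects the point made above about Figure 2: in near-silent bins the phase is essentially arbitrary, so an unweighted phase penalty would be dominated by errors that do not matter perceptually.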
+ Interesting method for binaural synthesis from mono audio of a moving speaker + Nice insight into why l2 isn't the best loss for binaural reconstructions. + Interesting architectural choice with nice results. + Nicely motivated and clearly presented idea -- especially after addressing the reviewers' comments. I agree with the idea of a title change. While I think it's implied that the source is probably a single source, making it explicit would make it clearer for those not working on a closely related topic. Hence, "Neural Synthesis of Binaural Speech from Mono Audio" as suggested in the review process sounds quite reasonable.
Summary: The paper considers adversarial attacks via a surrogate model constructed using data from a different domain. The authors propose a defense against such attacks via a special kind of adversarial training inspired by the idea of domain adaptation. The idea can be useful but raises a lot of questions, especially when looking at the evaluation of the proposed approach. ########################################################################## Reasons for score: I vote for a reject: some findings are intriguing, but the experimental results are questionable. The first major concern is, why do the authors consider NLP models and attacks in the paper? It is much easier to work with image datasets, and if the general idea is new, I suggest starting from this point to verify that the considered domain adaptation works well in this scenario. Also, the proposed attack is not new. It is just a surrogate model attack, but using a surrogate model trained on data from a different domain (as the authors suggest, due to the unavailability of the initial domain data). Also, for this new attack, the authors don't compare against a surrogate model trained using the same-domain data, which would be interesting to compare. The authors use only one dataset, which is a bit strange for modern papers. For this dataset, they don't provide a full study, limiting the scope of experiments to particular pairs of source-target domains. From the paper, it is not clear how widely applicable the obtained results are. The comparison is not complete. There are a lot of options to be tuned for alternative approaches like adversarial training or other defenses. The hyperparameter selection for them has a crucial effect on their success. So, we can't say that the proposed approach works better than others. ######################################################################### Major concerns: * Only one dataset is considered. I think that the inclusion of additional datasets (at least three) would improve the paper and make the conclusions by the authors more solid. * Usage of surrogate models trained on other datasets is not new for general adversarial attacks [1 (mentioned in the paper), 2] and for adversarial attacks in NLP [3]. * LSTM is not the state-of-the-art model for processing NLP data. * 4.2: what attack do you use? It is not explicitly specified, so the results can't be verified by replicating the described experiments. * Table 2 would benefit from adding the after-attack accuracy for the original domain. If it is similar to the presented accuracies, then why bother with a new method? * The Table 3 comparison is not fair, as we have no details about the training for each approach; e.g., we don't know how many additional examples are added during adversarial training. Also note that the state of the art for adversarial training is different from what is described in the paper. See [4, 5]. * Table 4: for which model is the After-Defense Accuracy presented? It should be different for the LSTM/GRU/CNN attack models. * Tables 2, 3, 4 - I suggest keeping the list of pairs (target domain, substitute domain) the same for all tables to be sure that the presented examples are not cherry-picked (also, please consider running your approach on all pairs (target domain, substitute domain) and aggregating all these results). * Domain adaptation models, from my experience, are not easy to train. It would be interesting to assess the quality of the models for different runs of Learn2Weight (is it stable? etc.). 1. Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow.
Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016a. 2. Cheng, S., Dong, Y., Pang, T., Su, H., & Zhu, J. (2019). Improving black-box adversarial attacks with a transfer-based prior. In Advances in Neural Information Processing Systems (pp. 10934-10944). 3. Fursov, I., Zaytsev, A., Kluchnikov, N., Kravchenko, A., & Burnaev, E. (2020). Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers. arXiv preprint arXiv:2006.11078. 4. Shafahi, A., Najibi, M., Ghiasi, M. A., Xu, Z., Dickerson, J., Studer, C., ... & Goldstein, T. (2019). Adversarial training for free!. In Advances in Neural Information Processing Systems (pp. 3358-3369). 5. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. ICLR, 2017. ######################################################################### Proposed minor improvements: Table 1 demonstrates one example that breaks the semantics of the attacked sentence. Can you provide good examples of why your approach works? Definition 1 is not a definition: is X one instance or many instances? The definition also does not specify that X and X' should be similar. Equation 1: why do you avoid the standard equation environment, i.e., \\begin{equation} \\label{eq:sample_equation} sample text \\end{equation}?<doc-sep>Summary: This paper is about generating adversarial examples for some target model and protecting against such attacks. The authors consider a setting where an adversary has access to some "similar to target" domain data and can use this data to train a surrogate model. Using this surrogate model, the adversary can generate adversarial examples that apparently also fool the target model. The authors then also propose a defense mechanism against this type of attack, Learn2Weight. This is a learnt network that, for a given example, returns a perturbation of the target model's weights, which is applied to the target before inference. This model is trained by the defender on synthetic domains generated as perturbations of the target data. Overall, this type of attack is interesting. The paper is well organized and written, and easy to follow. Enough background is given for a reader to follow without the need to research around or go to the appendix. Well done on clarity! I do have a problem understanding how effective this attack is (compared to other black-box attacks) and how the proposed defense compares to standard domain generalization methods like learning domain-invariant features. 1) One concern I have is about the practicality and availability of such "similar" domains. For testing, the authors used Amazon multi-domain sentiment classification, where domains are easily available. But how would you attack a pre-trained ImageNet model, for example? - What domains are similar? - And furthermore, how much data from these similar domains do you need to train a good enough surrogate model? - Also, you don't really have a way to verify that your data is close to the actual target data. 2) Definition 2: f(A, W_T) = f_S(A) requires access to your model f, so I would not call this type of attack "without access to the target model". 3) How does this attack compare to any other black-box attack that uses the target model? It really should be in Table 2.
If other attacks are able to degrade the target model's performance more than this type of attack does, it is of less value to defend against a weaker attack. 4) Algo 3 - what are the adversarial perturbations you are talking about? 5) I am not sure Algorithm 2 is the best way of doing it. Why not try any of the domain generalization techniques (e.g., train on all domains with an adversarial head that tries to distinguish between domains, or MMD, or whatever)? Maybe this way you won't need the Learn2Weight model at all (since you are already learning only domain-invariant features). Minor: - Table 2: What are you boldening? I would expect the bolded result to be per source model (book) and the worst performance you get (so the dvd attack gives the lowest after-attack accuracy). You are boldening "baby", which is the weakest domain (on which your attack model is trained) for an attack. - Algo 2: Compute weights of f trained on TY=W_T-W_T (just assign 0s?) <doc-sep>In this paper, the authors propose a Learn2Weight framework to defend against similar-domain adversarial attacks. Experimental studies on the Amazon dataset are done to verify the proposed Learn2Weight. The paper is not easy to follow. The presentation and organization should be further improved. Here are the detailed comments: (1) Adversarial attacks are widely used in various application domains, e.g., computer vision [ref1] and reinforcement learning [ref2]. It is necessary to discuss these related works and highlight the difference and importance of adversarial attack methods on NLP tasks. [ref1] Adversarial Examples that Fool both Computer Vision and Time-Limited Humans [ref2] Minimalistic Attacks: How Little it Takes to Fool Deep Reinforcement Learning Policies (2) The authors highlight "domain adaptation theory" several times. Please give a clear description of what it is. (3) Where is Table 1 used in the main content? (4) Regarding Definition 2, the following two points are unclear: (1) Is f_S(A) the true label of A? Based on Figure 1(a), only correctly classified source samples are used, while the definition does not show this. (2) Why f(A,W_T) = f_S(A)? f is the target classifier; are you generating domain-invariant samples? (5) The rationale of the similar-domain adversarial attack is confusing. It is more reasonable to use source data to help generate target adversarial samples X' which confuse the classifier so that the label deviates, f(X) \\neq f(X'), where X is the original target sample. However, the paper generates source adversarial samples, which naturally may confuse the target classifier due to the domain divergence. It is unclear why and how these source adversarial samples can contribute to the robustness of the target classifier. (6) Regarding the accuracy drops in Table 2, they are quite possibly caused by the data shift between different domains. How can one differentiate the contributions of the data shift and the adversarial perturbations to the accuracy drops? (7) The technical part is not easy to follow. Sections 5.1 to 5.3 are not linked well. It is necessary to give more content on the motivation and flow of these algorithms instead of just putting them in algorithm charts. (8) Why is target data used in Algorithm 2 and also in the transfer loss optimization? In the introduction, target domain information is assumed to be unavailable. Moreover, Algorithm 2 is meant to reduce the domain divergence (if I understand correctly). I am quite curious how the proposed method differs from other transfer learning methods. Update: Thanks for the authors' response.
After reading the response and the other reviewers' comments, I think the paper needs to be further improved, and thus I will keep my score.
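For readers following the attack setup debated in the reviews above, the schematic below shows how a transfer-based, similar-domain attack is typically evaluated: adversarial examples are crafted against a substitute model trained on related-domain data and then checked against the black-box target. The `predict` and `craft_adversarial` interfaces are placeholders assumed for illustration; the paper's specific text perturbation method is not reproduced here.

```python
def transfer_attack_success_rate(target_model, substitute_model,
                                 craft_adversarial, eval_data):
    """Fraction of crafted examples that also change the target's prediction."""
    fooled, total = 0, 0
    for x, y in eval_data:
        if substitute_model.predict(x) != y:
            continue                                       # attack only inputs the substitute handles correctly
        x_adv = craft_adversarial(substitute_model, x, y)  # white-box attack on the substitute
        total += 1
        # Transferability: the perturbation crafted on the substitute also
        # flips the prediction of the black-box target model.
        if target_model.predict(x_adv) != target_model.predict(x):
            fooled += 1
    return fooled / max(total, 1)
```

This also makes the reviewers' request explicit: the same harness run with a substitute trained on the *same* domain, or with a standard black-box attack, would provide the missing baselines.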
The submission considers a new attack model for adversarial perturbation in a framework where the attacker has access neither to the trained model nor to the data used for training the model. The submission suggests a "domain adaptation inspired attack": learn a different model on a similar domain and generate the adversarial perturbations using that model. The authors then also develop a defense for this type of attack and provide some empirical evaluations of the resulting losses on a few NLP benchmark datasets. The paper refers to the literature on domain adaptation theory to motivate their suggested defense, but this analysis remains on an intuitive (rather than a formally rigorous) level. Furthermore, the empirical evaluation does not compare to a variety of attacks, and the defense is only evaluated with respect to the self-suggested attack. This is a very minimal bar for a defense to meet. The reviewers have criticized the submission for the rather minimal extent of the empirical evaluation. Given that the submission also doesn't provide a sound theoretical analysis for the proposed attack and defense, I agree with the reviewers that the submission does not provide sufficient novel insight for publication at ICLR. In contrast to some of the reviewers, I do find it legitimate (and maybe even recommendable) to focus on one chosen application area such as NLP. I don't see a requirement to also present experiments on image data or reinforcement learning applications. However, I would recommend that the authors highlight more explicitly what general lessons a reader would learn from their study. This could be done through a more extensive and systematic set of experiments or a thorough analysis in a well-defined theoretical framework.
This paper proposes a detailed analysis of pruning heuristics and their application to early pruning. It thoroughly analyzes magnitude-based pruning, loss-preservation-based pruning, and gradient-norm-based pruning. The paper demonstrates the results on the CIFAR-10 and CIFAR-100 datasets. It is very timely research to guide the audience on which heuristic is better. My major concern is the novelty over existing pruning heuristics, since the techniques have all been proposed before. The other concern is the evaluation and the scale of the dataset. Given that the results in Table 2 differ by less than a percent, and CIFAR training is very noisy, it's hard to tell the difference. Just like the Lottery Ticket hypothesis works on CIFAR but does not work on ImageNet, different pruning heuristics need to be verified on the large-scale ImageNet dataset in order to be convincing. <doc-sep>## Summary This paper studies different families of pruning criteria and their impact on training dynamics (especially early training). Stemming from the observations, the authors provide improvements to the 1st- and 2nd-order saliency methods. ## Pros - The authors provide simple and useful explanations of various pruning criteria that are based on the Taylor approximation of the loss function. - Even though the authors don't mention this in the contributions, they propose some improved versions of existing criteria, for example the updated Taylor score $\\theta^2g(\\theta)$ or the absolute-valued GraSP. This is great, and it might be worth focusing on these criteria further, providing further evidence of their usefulness. Currently, they seem a bit arbitrary. For example, why not the third power $\\theta^3g(\\theta)$ or an additive biasing of the magnitude $(g(\\theta)+c)*\\theta$? I recommend the authors run their versions in the unstructured setting too. ## Cons - The authors choose to focus on structured pruning since the resulting networks are dense and acceleration is straightforward. However, they miss an important work on structured pruning [1]. This relatively well-known work shows that pruned (structured) networks can be trained to full accuracy from scratch. In other words, their value lies in doing some kind of architecture search over layer widths. The motivation of the work needs to be revisited in the light of these results. Since we can retrain pruned networks from scratch, it probably doesn't matter which neuron we choose and therefore which criterion is better. Unstructured pruning doesn't have this training-from-scratch issue, and I recommend the authors at least include, and maybe shift the focus to, unstructured pruning. - "but requires specially designed hardware (Han et al. (2016a)) or software (Elsen et al. (2020)). While results in this paper are applicable in both settings, our experimental evaluation focuses on structured pruning due to its higher relevance to practitioners." All neural networks require special hardware if you want to accelerate them. I think a better motivation here is to point out the difficulties in accelerating sparse operations and the limited availability/support for such operations in existing frameworks. And I am not sure how useful structured pruning algorithms are given the results of [1]. - "The larger the magnitude of parameters at a particular instant, the smaller the model loss at that instant will be." This is likely to be true in simple settings; however, it is not a sufficient condition, especially for networks with batch norm.
You can arbitrarily scale neurons if there is a batch norm, and you can come up with an arbitrary ordering if needed. I recommend re-phrasing this observation and/or stating the assumptions better (I don't remember seeing any assumption on the network itself). How will the regularization or gradient noise affect this statement? - "Thus, the parameter with the most negative value for Θ(t)g(Θ(t)) is likely to also have a large, negative value for Θ(t)H(Θ(t))g(Θ(t))" This is not clear to me. Assume the 1d case where Θ(t) = -1; g(Θ(t)) = 2; H(Θ(t)) = -1 -> Θ(t)g(Θ(t)) = -2; Θ(t)H(Θ(t))g(Θ(t)) = 2. I can see the correlation in the figure, but it doesn't seem like an obvious thing. Maybe because the Hessian doesn't have many negative eigenvalues? ## Rating I found the results and analysis interesting; however, the motivation needs to be updated. The work would also benefit from including unstructured pruning experiments. ## Minor Points - "Recent works focus on pruning models at initialization (Frankle & Carbin (2019);..." The Lottery Ticket paper prunes after training and shows the existence of some initializations that achieve good performance. - Equations 6/7: $\\frac{dL}{dt}= ||g(\\theta)||^2$ -- assuming gradient descent, shouldn't there be a learning rate? - "...than magnitude-agnostic techniques." Which methods are these? As far as I can see, all methods use magnitude information in their formulas directly or indirectly. - In Table 1, I recommend the authors bold both scores if they lie within the std of each other, so that we can identify which improvements are significant. - It would be nice to show how the temperature parameter is used for GraSP [1] https://arxiv.org/abs/1810.05270 <doc-sep>Summary: The authors study proposed importance metrics for pruning neurons/channels in deep neural networks and analyze which properties of parameters are favored by each approach by studying the relationship between model parameters, gradients, 2nd-order derivatives, and the loss. Through this analysis they develop a rich understanding of the consequences of different pruning criteria and use their understanding to propose modifications to existing techniques that produce higher-quality models across different settings. Pros: The framework used by the authors is clear and easy to understand but also very general. The authors' mix of empirical results and theoretical analysis makes a convincing case for the accuracy of their observations. The authors go beyond observation and analysis and use their insights to design new approaches to pruning that outperform existing techniques. The paper is well written and well organized. Cons: This paper has few limitations. The main limitation is that all experiments were conducted on relatively small datasets (CIFAR). Given that it has been shown that some techniques in model compression produce state-of-the-art results on small tasks but fail on larger models and datasets [1, 2], I'd encourage the authors to further validate their insights on a larger dataset (i.e., ImageNet). Comments: I found that the authors waited a long time to explain the term "gradient flow", which was important in sections 1-3 but not fully detailed until the start of section 4. On page 1 the authors say in parentheses that gradient flow is "gradient descent with infinitesimal learning rate", but I found this explanation was not clear. The second sentence of section 4, "the evolution over time of model parameters, gradient, and loss", was much clearer.
I'd encourage the authors to work some of these details into the text earlier. References: 1. https://arxiv.org/abs/1902.09574 2. https://arxiv.org/abs/2003.03033 <doc-sep>The paper contributes to explaining why saliency measures used for pruning trained models may (or may not) also be effective for pruning untrained or minimally trained models, by developing the relationship between those saliency measures and different forms of the norm of model parameters based on the evolution of model parameters via gradient flow (basically derivatives w.r.t. time). This result leads to several interesting interpretations that could shed some light on ongoing efforts to understand recent methods of pruning early on (e.g., pruning at initialization or after minimal training) and potential extensions to existing saliency measures. The idea of employing gradient flow is novel for its purpose and seems to be accurately executed. The main concern is that there is a gap between the flow model and the actual optimization method used in this work (SGD with momentum), or more generally standard optimization methods for deep learning. In this regard, the claim of "evolution dynamics" seems a bit exaggerated and remains theoretical; strictly speaking, the experiments are not entirely valid to support it either. (minor) Related work is written as if pruning is only done via saliency-based methods (e.g., "pruning frameworks generally define importance measures") without taking into account various others such as optimization-based methods employing sparsity-inducing penalty terms. On a different but related note, the writing becomes a bit loose when it comes to referencing existing methods; it is worth correcting this and clarifying the scope/focus of this work. Further questions: - Why do you study structured pruning *only*? The provided reasons ("unstructured pruning requires specially designed hardwares or softwares" or "higher relevance to practitioners") don't seem valid enough if the purpose really lies in analysis. Can you provide any results for unstructured pruning? - Can you provide evidence to support the claim "GraSP without large temperature chooses to prune earlier layers aggressively" (besides Raghu et al. 2017)? - Based on Tables 1 and 2, the proposed extension to the loss-preservation method works best, while the differences across methods seem a bit marginal. Is my understanding correct?
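To make the saliency criteria under discussion concrete, here is a minimal sketch of how such per-channel scores are typically computed. This is illustrative only; the exact definitions and normalizations in the paper may differ.

```python
import torch

def channel_saliencies(weight: torch.Tensor, grad: torch.Tensor):
    """Per-output-channel pruning scores for a conv weight of shape (C_out, C_in, kH, kW).
    Generic versions of the criteria discussed above: magnitude, a first-order
    Taylor / loss-preservation style score, and its magnitude-aware absolute value."""
    w = weight.flatten(1)                 # (C_out, C_in * kH * kW)
    g = grad.flatten(1)
    magnitude = w.norm(dim=1)             # magnitude-based criterion
    taylor = (w * g).sum(dim=1)           # sum of theta * grad per channel
    abs_taylor = taylor.abs()             # variant that also rewards large magnitudes
    return magnitude, taylor, abs_taylor
```

Channels with the smallest scores under the chosen criterion are the pruning candidates; the disagreements above come down to which of these scores (or their second-order analogues such as GraSP) ranks channels most usefully early in training.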
This paper proposes a broad framework for unifying various pruning approaches and performs detailed analyses to make recommendations about the settings in which various approaches may be most useful. Reviewers were generally excited by the framework and analyses, but had some concerns regarding scale and the paper's focus on structured pruning. However, the authors included new experiments, which mostly addressed reviewer concerns. Overall, I think this is a strong paper that will likely provide needed grounding for pruning frameworks, and I recommend acceptance.
Summary. This paper aims to explain dropout through the lens of game-theoretic interactions. Let x denote the input of a deep neural net (DNN); intuitively, the interaction between two variables x_i and x_j quantifies how much the presence/absence of the j-th variable affects the contribution of the i-th variable to the output of the DNN. With the above definition in place, the authors show theoretically and empirically that dropout reduces the interactions between input variables of DNNs. As this type of interaction turns out to be strongly correlated with overfitting, the authors suggest that dropout alleviates overfitting by reducing interactions between input variables (or activation units) of DNNs. Based on this understanding of dropout, an alternative regularization technique is proposed, which explicitly penalizes pairwise interactions between variables. Strengths. 1. The paper is well written and clearly presented. 2. Although it is already well known (or at least widely accepted) in the community that dropout reduces dependencies among activation units in DNNs, the explanation of dropout from the perspective of game-theoretic interactions is interesting, and it is supported both theoretically and empirically. Weakness. 1. Hyperparameter settings (e.g., optimization-related ones) to reproduce the results are not provided. It is not clear what dropout rate was used in the experiments. Is it 0.5? In all cases, different dropout rates should be investigated before claiming the superiority of the proposed interaction loss (regularization) over dropout. 2. Experiments are carried out on only one task (classification), one type of data (images), and one family of DNNs (convolutional neural nets). However, the paper draws quite general conclusions regarding the understanding of dropout from the perspective of game-theoretic interactions. Therefore, considering at least one more task involving a different type of data and another family of DNNs would reinforce the findings of this paper. 3. Computational time analysis of the proposed interaction loss and training time comparisons with dropout are lacking. Additional comments 1. Dropout is used both at convolutional and at fully connected layers. However, one can argue that applying dropout to convolutional layers does not make sense owing to the sparsity of connections in this type of layer. 2. I would recommend revising the title of the paper. What is proposed is more of an alternative regularization form to dropout than an improvement of the latter. <doc-sep>*Paper Summary* The authors provide a novel interpretation of dropout regularization using Banzhaf interactions, a tool from game theory. *Pros* * The authors are able to mathematically prove that dropout is capable of suppressing neural co-adaptations, the latter being one of the reasons for overfitting. Visualizations are also provided in this respect on a dataset for face analysis. * Through their mathematical analysis, the authors are able to improve upon classical dropout training by making it more compatible with batch normalization, so that these two classical regularization strategies show better complementarity. *Cons* * Some of the results do not read well, like Table 3 or Figure 4, but this is really minor and fixable. *Preliminary Evaluation* I believe that the overall analysis provided by the authors is complete and interesting, so I am inclined to call for a full acceptance of the paper, which I deem suitable for a venue like ICLR.
In order to improve their paper, I would encourage the authors to better investigate the following aspect: since the authors repeatedly establish principled connections between dropout and neural activations, it would be very interesting to discuss the relationship between the present work and another paper [Gomez et al., Targeted Dropout, NeurIPS Workshops 2018], in which a computational variant of dropout is proposed such that the dropout rate depends upon neural activations. *Post-Rebuttal Evaluation* I have carefully read the response provided by the authors and checked the revised manuscript. I confirm my preliminary acceptance rating.<doc-sep>Summary: This paper analyzes the effect of dropout on interaction between units in a neural network. The strength of the interaction is measured using a metric that is used in game theory to quantify interaction between players in a co-operative game. The paper shows that dropout reduces high-order interaction (as measured by this metric), and that reduction in interaction is correlated with better generalization. The paper introduces a new regularizer that explicitly minimizes the metric and claims that using this regularizer instead of dropout has some advantages. Pros: - The idea that dropout reduces overfitting by breaking up complex co-adaptations and regularizing interactions is widely believed to be true. However, this paper tries to explicitly quantify the amount of interaction and presents theoretical and experimental evidence that interaction reduces as a result of having dropout. Cons: - The proposed metric is hard to compute exactly since it requires summing over exponentially many terms, each term requiring a forward prop through the network. - The assumptions made in computing this metric approximately seem unclear to me (Appendix H). I could not understand what probability distributions are being expressed and why. In particular, how is the term in Eq. 38 approximated by the one in the first line of Eq. 41? The paragraph after Eq. 40 was also unclear. - It is not discussed how this metric for evaluating interaction strength compares to something conceptually simpler like the Hessian $\nabla^2_{i,j} L$, which directly measures the dependence of the network's loss on pairs of input variables, and whose magnitude is proportional to the interaction strength. - The paper mentions that an advantage of the proposed loss is that the weight $\lambda$ applied to the interaction loss can be explicitly controlled, whereas the strength of dropout cannot be controlled (Section 4 "advantages": "Unlike the interaction loss, people cannot explicitly control the strength of dropout .."). This does not seem correct. The dropout probability provides such a control mechanism for dropout. - For the experimental results in Table 3, it is not mentioned what value of the dropout probability was used, whether this value was tuned for each architecture, and which network layers dropout was applied in. These factors can have a significant impact on overall performance. On the other hand, the $\lambda$ parameter for the proposed interaction loss is tuned. So the resulting comparison is not fair. - It is not clear what additional insight this metric provides about dropout, beyond confirming what is intuitively apparent: that having randomly dropped neurons will make it harder for the network to learn high-order interactions.
Other comments and suggestions: - The introduction includes a discussion of the Banzhaf value without describing what it means. The concept of the Banzhaf value might be new to many readers in the ML community. I would suggest including a short explanation to give some intuition about what it means, before discussing it in more detail. - "the output of the DNN corresponds to the score f": would it make sense to say that the (negative) loss corresponds to the score f, rather than the output of the network? - "award" -> "reward" or "utility"? (I'm not familiar with the game theory literature, so I'm not sure if "award" is a commonly used term there.) - The title of the paper is a bit misleading, as it seems to suggest that the paper is about using dropout in game theory (i.e., solving problems in game theory using dropout). Post rebuttal: The authors addressed the concerns around the clarity of the paper and added useful additional experiments. I will increase my score to 7.<doc-sep>Summary: The paper proves that dropout can suppress the strength of interactions between input variables from the perspective of game theory. It further improves the utility of dropout by introducing an explicit interaction loss. Experimental results verify the theoretical proof and the effectiveness of the proposed loss. Strengths: 1. The paper introduces a new perspective of game theory to understand dropout. 2. Experiments are conducted on various datasets to support the theoretical proof and the proposed interaction loss. Concerns: 1. Although I have no background in game theory, I have tried my best to understand the terminology and the analysis. However, I do not have the ability to verify the correctness of its proof. Thus, I cannot evaluate the main contribution of this paper. As for the experimental results, the conclusion that dropout suppresses input interactions is not a new story. 2. It would be more interesting if the authors could further explain the disharmony between dropout and BN from the perspective of game theory.
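To give a concrete sense of the interaction metric that the reviews above question, here is a rough Monte Carlo sketch of a pairwise (Banzhaf-style) interaction between two input variables. The paper's exact definition, baseline values, and sampling scheme may differ; exact computation sums over all contexts, which is the exponential cost noted above.

```python
import numpy as np

def pairwise_interaction(f, n, i, j, num_samples=1000, seed=0):
    """Monte Carlo estimate of a Banzhaf-style interaction between variables i and j.
    f maps a boolean presence mask of length n to a scalar score, e.g. a network
    output where 'absent' variables are replaced by a baseline value."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        ctx = rng.random(n) < 0.5          # random context S of the other variables
        ctx[i] = ctx[j] = False
        s_ij, s_i, s_j = ctx.copy(), ctx.copy(), ctx.copy()
        s_ij[i] = s_ij[j] = True
        s_i[i] = True
        s_j[j] = True
        # f(S u {i, j}) - f(S u {i}) - f(S u {j}) + f(S)
        total += f(s_ij) - f(s_i) - f(s_j) + f(ctx)
    return total / num_samples
```

A positive estimate means i and j reinforce each other's contribution; the paper's claim is that dropout, by randomly removing variables from the context, pushes such interactions toward zero.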
The paper introduces a game-theoretic framework to improve our understanding of dropout. All reviewers appreciated the contribution of the paper. While they had a number of questions/suggestions, almost all of them were adequately addressed. Three reviewers are satisfied and recommend acceptance, while a lone reviewer is on the fence; he/she admits to being less knowledgeable about game theory. Overall, I think this paper makes a solid contribution to ICLR.
This paper first thoroughly analyzes the difference in distributions of weights and activations in AdderNet and then proposes a new quantization algorithm by redistributing the weights and the activations. Strengths: 1. This paper conducts a thorough study of the dilemma in AdderNet quantization, and proposes an effective method to solve this problem. 2. The paper is clearly presented. 3. I am glad to see that the accuracy drop is within 2% for ImageNet even at 4 bits. Weaknesses (suggestions): 1. The accuracy and energy comparisons with quantized CNNs seem inadequate. It would be better to compare the accuracy drops of quantized CNNs alongside the currently presented accuracy drop of AdderNet, so the reader can fully judge whether AdderNet is more advanced than CNNs in terms of quantization. 2. AdderNet is a specific neural network; it is not clear whether the proposed methods can be generalized to other neural networks with similar distribution properties. 3. Only classification results are shown; how about other downstream tasks, e.g., detection and segmentation, or other network architectures, e.g., ViTs with adder kernels? 4. Is AdderNet compatible with SSL pretraining, e.g., MAE pretraining? And how does the quantization scheme differ between the pretraining stage and the fine-tuning or normal training stages? 5. What are the latency and throughput on real devices? I am curious about this since no efficient CUDA implementation of AdderNet has been open-sourced as of now. Also, quantization is not efficient on general CPU/GPU devices. Have you deployed the trained models to real devices yet? I am open to further boosting the score or championing this paper if the rebuttal is sound and can address my questions/concerns. The comparison with CNN quantization methods seems inadequate. <doc-sep>The paper proposes a new quantization scheme for AdderNets. Specifically, the authors propose to cluster model weights in the channel dimension, where each cluster is assigned its own scale value for quantization. This ensures that the scales can better represent the range of values, which may be different for different weight channels. The authors further propose to absorb the error caused by clamping the weights into the layer bias, which helps restore accuracy. Finally, the proposed method removes outliers when quantizing the activations, thereby tailoring the scale to better represent the valid range of data. Strengths: - The paper is well-written and the ideas are clearly explained. - The proposed method significantly improves the accuracy after quantization, compared to prior methods. Weaknesses: - Some of the claims are not backed up by the method. Specifically, the authors mention that a shortcoming of prior work is using the same scale for weights and activations, which is decided based on either of the two and therefore may not best fit the other. The proposed method also adopts the same scheme, where the scales are still determined by either the weights or the activations, with the only difference being the increased granularity of the scale choices due to the channel clustering. Please find more details on this in the next section. - Some questions remain regarding applying the method to new models, e.g., how to determine the number of clusters for new benchmarks. The authors have not discussed the limitations or potential negative social impact of their work. <doc-sep>This manuscript focuses on the problem of the quantization of AdderNet.
The authors have investigated the differences between AdderNet and traditional networks. Based on these differences, a dedicated quantization is achieved by redistributing the weights and activations of AdderNet. In the quantization method, three techniques are proposed to overcome the bit-waste and over-clamp problems, including clustering-based grouping quantization, range clamp of weights, and outlier clamp of activations. Experimental results show the effectiveness of the proposed method for quantizing AdderNet with different bit widths. Pros: - The manuscript is easy to follow. The analysis of the difference between conventional quantization methods for CNNs and those for AdderNet is interesting. The statistics of the activations and weights of a pre-trained AdderNet are good. - As AdderNet is a new kind of efficient neural network, how to effectively quantize it is a challenging problem. Quantizing such an NN would yield a faster and more energy-efficient model. - The proposed method, which includes clustering-based grouping quantization, range clamp of weights, and outlier clamp of activations, is promising for addressing the bit-waste and over-clamp issues within AdderNet, which is also verified by the extensive experiments. Cons: - Besides the FLOPs and energy, it would be great to report the inference time of the proposed method. - It is highly recommended to add a more detailed recap of AdderNet, which will make the whole manuscript smoother, especially for those who are not familiar with it. - In Fig. 3, how is the `-126.2` calculated? There is no detailed explanation of it. - In line 45, "L1-norm quantization" is unclear. Does it mean an L1-norm-based quantization method or quantization for the L1-norm operation? Minor issues: - There is a strange "rec" symbol in line 193. - There are several minor grammar issues. For example, in line 21: "well-verified" should be "well verified". Yes <doc-sep>Quantization is an effective method to further reduce the energy consumption of AdderNets. However, previous AdderNet quantization methods cannot properly handle the challenge of large differences in weights and activations and often lead to a large accuracy degradation, especially in the low-bit case (4-bit). This paper first reveals the key reasons for the poor accuracy of previous AdderNet quantization methods, namely "over clamp" and "bits waste". Then a novel quantization method for AdderNets is proposed. Experiments on several datasets and models demonstrate the effectiveness of the proposed method. Strengths 1. The paper is extremely well structured and easy to follow, with the motivation well explained. 2. To my knowledge, this paper is by far the most comprehensive and systematic study of the quantization of AdderNets. Through thorough analysis, this paper identifies two main reasons for the poor accuracy of previous AdderNet quantization methods, namely "over clamp" and "bits waste", which is insightful. The proposed schemes of clustering-based weight grouping and the lossless range clamp for weights are interesting and novel. 3. Extensive experiments on different models and datasets. Superior performance compared to other AdderNet quantization methods. The thorough ablation studies verify the effectiveness of each component. The distributions of weights and activations (Fig. 1 in the Appendix) demonstrate that the proposed method can effectively solve the problems of "over clamp" and "bits waste", leading to higher quantized performance. Weaknesses 1. The values in Fig.
4 are too small to read. The authors should refine them. 2. The histogram for INT4 weights adjacent to "over clamp" is significantly higher (Fig. 1 in the Appendix); however, this phenomenon is not reflected at the top of Fig. 1(c). The authors are advised to revise this detail for better presentation. The authors have discussed the limitations and potential negative societal impact of their work in the Appendix.
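As a rough illustration of the "over clamp" and "bit waste" terminology used in these reviews: with standard symmetric uniform quantization, a single scale per layer either clips wide-range weights (over clamp) or leaves narrow-range weights using only a few of the available levels (bit waste). Per-group scales, e.g. after clustering channels by range, mitigate both. The sketch below is generic and is not the paper's actual scheme, which further adds a lossless range clamp and an outlier clamp.

```python
import numpy as np

def symmetric_quant(x, scale, bits=4):
    """Uniform symmetric quantization with clamping; returns dequantized values."""
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

def per_group_quant(w, group_ids, bits=4):
    """Quantize a flat weight array, giving each group of weights its own scale."""
    out = np.empty_like(w, dtype=np.float64)
    for g in np.unique(group_ids):
        idx = group_ids == g
        scale = np.abs(w[idx]).max() / (2 ** (bits - 1) - 1) + 1e-12
        out[idx] = symmetric_quant(w[idx], scale, bits)
    return out
```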
The reviewers were mostly positive about this paper [8, 6, 6, 4], while the negative reviewer did not update the review or respond after the authors' response. I do not see any major issues remaining. The suggested method seems interesting and novel, and it achieves good empirical results.
This paper introduces a new model architecture using LSTMs for image classification. By adapting a 2-dimensional LSTM (Bi-LSTM for the vertical and horizontal directions) into a Transformer-like architecture, the model outperforms ViT-based and SOTA CNN-based architectures with fewer parameters. ## Strengths 1. The paper is clearly written. 2. The paper proposes a simple yet effective framework using LSTMs. The model outperforms transformer- and CNN-based models for image classification. This work provides a great alternative to Transformers and CNNs for image classification. 3. The proposed model is especially efficient for higher resolutions. ## Weaknesses 1. Lack of related work - There are a number of studies using multi-directional LSTMs/RNNs for vision tasks that are very relevant to this work, e.g., [1-4]. The authors should cite them and discuss the similarities and differences. - ReNet [69] is very relevant to this work. The authors pointed out that the major difference is the use of a transformer-like block structure. However, the benefit of this structure and what it provides to the model compared to ReNet or other related works [1-4] are missing. 2. Due to the LSTM's sequential nature, LSTM-based models are not easily parallelizable, especially compared to transformer- and CNN-based models. I see that throughput is much worse than other models. I assume training time would be especially slow. It is unclear to me how throughput improves with higher resolutions. [1] "Multi-dimensional recurrent neural networks." ICANN 2007. [2] "Pixel recurrent neural networks." ICML 2016. [3] "Scene labeling with lstm recurrent neural networks." CVPR 2015. [4] "Semantic Object Parsing with Local-Global Long Short-Term Memory." CVPR 2016. The authors explained the limitations and potential negative societal impact of their work in the paper. <doc-sep>This paper proposes an architecture for image classification named Sequencer, which utilizes BiLSTM modules to replace the self-attention module in the vision transformer model. The BiLSTM module is further improved by processing the vertical and horizontal axes in parallel from the top/bottom and left/right directions. Experiments on image classification tasks demonstrate that the proposed method can achieve similar performance to existing classification models with a similar number of parameters. Strengths: + This paper is well-written. The idea is easy to understand. + The proposed method is the first work to empirically show the effectiveness of LSTM modules in large-scale image classification tasks, which would have a broad impact in investigating the potential of LSTM-like architectures in the computer vision field. + Ablations and visualization results are rich, which demonstrates the validity of the proposed method in terms of the importance of each component. Weaknesses: - The novelty of the proposed method is limited. The proposed Sequencer replaces the self-attention module in ViT with the existing BiLSTM module. Besides, [r1] shows that the self-attention module in ViT can be replaced with a simple spatial pooling operator, which suggests that such a replacement is incremental. - Although the proposed model can achieve similar performance to existing SOTA architectures, it requires much higher FLOPs and has lower throughput, as shown in Table 1. - Evaluation is only conducted on image classification. It would be better to evaluate the proposed architecture on more vision tasks such as detection and segmentation to show its generalization ability.
[r1] MetaFormer Is Actually What You Need for Vision. CVPR 2022. The limitations are mainly about the limited novelty of the proposed method and the poor experimental results (much higher FLOPs, lack of experiments on other vision tasks). <doc-sep>This paper proposes a new Sequencer architecture that replaces self-attention in ViT with BiLSTM2D for the image classification task. On the ImageNet-1K dataset, Sequencer achieves better performance than other current similar-scale models. The authors also show that Sequencer is more robust to resolution variation and suffers from less severe accuracy degradation when the input resolution is increased. Pros: 1. This paper makes an attempt to use the LSTM, an unexplored inductive bias, to replace self-attention in ViT for image classification and shows its effectiveness. This line of research helps the community understand what is indeed essential for vision tasks. 2. Strong results and extensive experiments. It compares with a series of related works based on various inductive biases and shows that it has superior performance and transferability under a similar scale of parameters. Besides, ablation studies are conducted. Cons: 1. The computational cost is too high. As shown in Table 1, under a similar scale of model parameters, Sequencer usually needs 2x the FLOPs and has 2x~10x lower throughput compared with other methods. Although this is not surprising due to the recursion in the LSTM, I am still concerned about the practicality of this model with such a high computational cost. 2. Lack of reasoning on how using LSTMs captures spatial information and why it is so effective. In BiLSTM2D, LSTMs are used to capture dependencies from horizontal and vertical patches, respectively. From my point of view, this design should not be as effective as the global dependencies in self-attention, since you may need to involve patches that are not necessarily in the same horizontal or vertical line to understand the objects in the images. Besides, I am also curious about what role the memory in the LSTM plays in processing spatial information. The above analysis is critical for readers to understand the model but is missing in the paper. The authors have discussed the limitations in the conclusion. Actually, it would be better if the authors could test the model's effectiveness on tasks that require sequence modeling, such as video action recognition, in the main paper. <doc-sep>This paper proposes Sequencer, which uses deep LSTMs instead of self-attention for image classification. Many related works are compared in experiments to validate the performance of Sequencer. Strength: This paper proposes Sequencer, which uses LSTMs instead of self-attention for sequence modeling. It also proposes a two-dimensional version of the Sequencer module, where an LSTM is decomposed into vertical and horizontal LSTMs to enhance performance. Experiments show the advantages of Sequencer compared to the self-attention mechanism in transferability and resolution adaptability. The work is clearly stated, and the manuscript is well written. Weakness: Some experimental results are not clearly explained.
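For readers unfamiliar with the vertical/horizontal mixing these reviews refer to, a rough PyTorch-style sketch of a BiLSTM2D-like block is shown below. It is not the authors' implementation; the dimensions and the output projection are illustrative.

```python
import torch
import torch.nn as nn

class BiLSTM2D(nn.Module):
    """Mix patch tokens along rows and columns with bidirectional LSTMs (sketch)."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.lstm_h = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.lstm_v = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(4 * hidden, dim)   # concat of both directions' outputs

    def forward(self, x):                        # x: (B, H, W, C) patch grid
        B, H, W, C = x.shape
        rows = x.reshape(B * H, W, C)                        # each row is a sequence
        cols = x.permute(0, 2, 1, 3).reshape(B * W, H, C)    # each column is a sequence
        out_h, _ = self.lstm_h(rows)                         # (B*H, W, 2*hidden)
        out_v, _ = self.lstm_v(cols)                         # (B*W, H, 2*hidden)
        out_h = out_h.reshape(B, H, W, -1)
        out_v = out_v.reshape(B, W, H, -1).permute(0, 2, 1, 3)
        return self.proj(torch.cat([out_h, out_v], dim=-1))  # back to (B, H, W, C)
```

Within one such block each token only sees tokens in its own row or column, which is the locality concern raised in the second review above; the recurrence over W (or H) steps is also why throughput lags attention- and convolution-based blocks.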
Four reviewers provided detailed feedback on this paper. The authors responded to the reviews, and I appreciate the authors' comments and clarifications, specifically that each question/comment is addressed in detail. The authors also uploaded a revised version of the paper. After the two discussion periods, all four reviewers suggest accepting the paper (although the scores do not exceed a "weak accept"). After considering the reviewers' and authors' comments, I believe that the paper should be accepted to NeurIPS. Weaknesses include: * Some concerns about experimental results, e.g., highlighting accuracy vs. number of parameters but not also highlighting limitations when looking at throughput (comparing only parameters (or FLOPs) can sometimes be misleading, see also [The efficiency misnomer, ICLR22](https://arxiv.org/abs/2110.12894)). But it is good that throughput numbers are presented in the paper and that the paper acknowledges this limitation. Related: concerns about computational cost. * Some concerns regarding relevant related literature (addressed in comments and revision) and the novelty of the approach. * Limitation to image classification only in the experiments (partially addressed in comments and revision). * More interpretation of the effect of using LSTMs could be helpful to the reader (partially addressed in comments). Strengths include: * Interesting, conceptually simple approach that revisits LSTMs for images, which could be specifically useful for high-resolution images. * Reviewers agree that the paper is well-written. * Experimental results and ablations are strong with respect to the claims made. Minor points (not affecting this decision, but potentially useful to authors when preparing the final revision): * MLP-based methods "cannot cope with flexible input sizes during inference" - I think this is only partially true; even the original MLP-Mixer paper shows how this can be solved, e.g., in fine-tuning by "modifying the shape of Mixer's token-mixing MLP blocks". * A minor typo I randomly encountered: Table 3, row 3, column "Flowers": 89.5 -> 98.5. * "It is demonstrated that modeling long-range dependencies by self-attention is not necessarily essential in computer vision" - To some degree similar "demonstrations" are visible in CNNs and MLP-Mixers, so this claim seems a bit strong, maybe?
Nocturne introduces a 2D simulation environment to support RL/IL approaches to multi-agent planning in the context of autonomous driving. Nocturne seeks to improve on other work in two key ways. First, Nocturne is able to generate 2D views of the world that are visible to any actor. Second, Nocturne is fast, allowing for 2000+ steps/second. Speed is absolutely essential due to the sample inefficiency of RL algorithms. Nocturne achieves these fast visibility queries by leveraging techniques from the computer graphics community. While Nocturne can be applied to multiple datasets, the authors introduce a benchmark based on scenarios from the Waymo Open Motion dataset. The Nocturne dataset includes all scenarios which do not interact with a traffic light (134k/487k scenarios). The authors then train RL and IL agents on the benchmark, investigating the performance of the agents as a function of the number of training scenarios. - Nocturne is highly performant; 2000+ steps/second makes this a feasible environment for RL learning. - Nocturne supports both vectorized and rasterized representations; this allows for a greater diversity of approaches to be developed. - The Nocturne benchmark is built upon a well-respected and widely utilized self-driving dataset. - The visibility map seems pessimistic. It precludes the ability of a driver to see over the hood of a neighboring car (for example). This is not discussed and constitutes a real weakness of the fast method introduced here in comparison with those generated from imagery or lidar. - Given the source of the data, there are regions of space that are visible to actor "A" but were occluded from the Waymo vehicle that collected the data. There does not appear to be any way to represent potentially missing data within Nocturne. - I would like to see a more compelling argument for the value added by a dynamic visibility query. It seems a reasonable alternative would be to precompute a visibility map as a function of time given the expert trajectory. Clearly, these could diverge, but do they? Are the differences between a dynamic vs. a static visibility map significant in this context? - At this point, BC seems to be quite a weak baseline. There is a plethora of goal-conditioned forecasting methods which would be more appropriate here. - Intersections seem to be a weak proxy for interactions. Vehicles that merge into the same lane 8s apart will be considered interacting, while two vehicles performing complementary left turns will not. - The zero-shot learning section seems like it should be supported by a simple statistical test (i.e., to show that there is no significant difference between self-play and cross-play). - I would really like to see some discussion or perhaps ablation experiments w.r.t. the conditioning information provided, e.g., what happens to performance if we do not provide the final velocity? <doc-sep>The authors introduce a new 2D driving simulator, with a focus on multi-agent coordination under partial observability (hence the name 'Nocturne'). Crucially, unlike previously published driving simulators, Nocturne makes use of state-based partial observability, computed through efficient intersection methods, removing the need to render camera images to acquire the set of visible objects at a given time. Thus, the simulator is able to run at over 2000 steps per second using real-world data. * The simulator is run on real-world data at a high frequency, providing an accurate account of driving situations.
* It combines coordination and cooperation, where the agents' parameters are tuned to match experts' capabilities. * The work probes further into the ability of baseline RL multi-agents to handle complex scenes and provides an overview of their limitations, mainly during cooperation tasks. * Given that the simulator builds a 2D bird's-eye view of the scene, it is inherently limited when datasets built from cameras of driving cars (such as Waymo) are employed, as, for instance, pedestrians had to be excluded, even though they represent one of the main challenges to tackle in autonomous driving. <doc-sep>Nocturne is a 2D driving simulator constructed on real-world data and designed for partially observed MARL research. It first reconstructs maps and replays objects' trajectories contained in real-world datasets, such as the Waymo Motion dataset. After that, traffic vehicles are turned into controllable agents with partial observability, actuated to arrive at a goal region according to a learnable policy. The policy controls vehicles with discrete actions and follows the dynamics of the bicycle model. Due to the efficient ray casting implemented by the C++ backend, Nocturne is efficient and can run at up to 2000+ FPS with rasterized image observations or vectorized observations. In the experiments, the authors conduct generalization experiments on the scenarios imported from the Waymo Motion dataset with RL and IL methods. The experiments show that increasing the number of scenarios contained in the training set improves the test performance on the unseen holdout set. Also, the failure mode analysis and ZSC test indicate that improvements to current algorithms are required to solve the partially observed coordination problem. 1. Designing MARL methods under partially observed real-world scenarios can be challenging and valuable. Nocturne provides a good starting point and provides adequate benchmark results. 2. According to the benchmark results of SMARTS, increasing the number of agents will degrade simulation efficiency. Also, generalization experiments usually take billions of steps for each training dataset. Therefore, the fact that the simulator can run at 2000+ FPS is important for MARL generalization experiments. 3. The experimental results are sound and insightful. 1. The reward function encourages the agent to follow the expert trajectories, which are produced with the observation of all traffic participants. Therefore, I still suggest including pedestrians and cyclists in the scenarios in the next version. 2. The Waymo Motion dataset has a 20s version, where 20s trajectories are further divided into 9s fragments for motion prediction; see https://waymo.com/open/data/motion/#overview. Consider using the 20s version to tame the short-trajectory problem. 3. I am not sure whether the maps in the Waymo data contain overpass bridges or not. If so, please filter these maps, since Nocturne is a 2D simulator. 4. Experiments changing the ratio of controllable vehicles in the scene could be conducted in the future, e.g., 50% replayed from data and 50% MARL agents. It would be interesting to discuss the similarities and differences between agents trained in heterogeneous and homogeneous populations. 5. For picking up trajectory intersections, I suggest using Time to Collision (TTC), which additionally considers temporal intersection. Typo: in Table 1, the reference for the VISTA simulator is incorrect. <doc-sep>This paper introduces a driving benchmark.
Relative to previous work, the authors claim Nocturne is the only available simulator that can compute an agent's visible objects and step the agents' dynamics at 2000+ steps per second. The paper compares the performance of Expert Playback, APPO, and BC in different driving scenarios. - The paper makes a good case for the importance of studying the driving benchmark. Compared to previous works, Nocturne has more efficient environment interaction. - The paper describes the task, the state/action space design, and the algorithm details. It is convenient for readers to reproduce the results and adopt this benchmark. - The paper lacks the necessary algorithm comparisons. The authors claim it is a multi-agent benchmark, but the paper only includes PPO and BC. - The paper mentions that the purpose of this benchmark is to study the multi-agent learning process in the real world, but it does not clearly point out what advantages this work has over previous simulators for this purpose.
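Since several comments above concern Nocturne's state-based visibility queries, the following minimal 2D sketch shows the kind of line-of-sight test such queries reduce to: a target point is visible if the observer-to-target segment crosses no obstacle edge. This handles proper intersections only and is far simpler than the simulator's actual ray casting against vehicle and road geometry.

```python
def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2 (2D points as (x, y))."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def is_visible(observer, target, obstacle_edges):
    """Line-of-sight check against a list of obstacle edges ((a, b) point pairs)."""
    return not any(segments_intersect(observer, target, a, b) for a, b in obstacle_edges)
```

The pessimism noted in the first review (a driver cannot see over the hood of a neighboring car) follows directly from treating every vehicle edge as a fully opaque occluder in a test like this.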
Overall, this paper provides a great starting point for future benchmarking experiments. The reviewers engaged in a lively discussion with the authors and provided valuable suggestions for future improvements, which the authors have integrated in their submission.
This paper leverages CLIP for zero-shot segmentation, which is currently a very hot topic. The authors propose a CLIP-retrieval-based way to build gallery candidates for each semantic segmentation class, and then use DenseCLIP to generate reference image embeddings. They then propose several ways to boost performance, including language-guided co-segmentation and context elimination to remove the background bias. Experiments show the proposed method achieves state-of-the-art performance. Strength: - Outperforms the state of the art. Weaknesses: - 1. The paper is not well written. The authors make even some very clear concepts hard to understand. - 2. Novelty is somewhat limited and the technical contribution is not enough. The method essentially finds a prototype for a class and then uses it for normal CLIP inference. Although there are some modifications, such as context elimination, these are more like tricks that do not have technical depth. - 3. Why are the numbers used in Table 2 for DenseCLIP [92] different from those in Tables 1 and 3 of the original DenseCLIP [92]? Yes <doc-sep>This paper proposes a retrieve-and-co-segment approach that leverages a pretrained image-text model (e.g., CLIP) for unsupervised semantic segmentation. The results on existing benchmarks are good compared to other unsupervised segmentation approaches. Strength * Retrieve and co-segment is an intuitive and reasonable approach for unsupervised segmentation. * Using image-text models, e.g., CLIP, for retrieval makes sense and is effective. * Adaptation to the target distribution by training on pseudo labels is reasonable and effective. * The performance of ReCo+ seems better than existing unsupervised segmentation approaches at the system level. Weakness * Compared with the unsupervised segmentation approaches, I think ReCo has a clear advantage by using CLIP, which makes the approaches not directly comparable. CLIP has seen lots of image-text pairs and acquired reasonable pixel localization ability, while existing approaches such as PiCIE have no access to this kind of knowledge. In addition, if I understand correctly, ReCo has access to the category names of the target dataset while existing approaches do not. * ReCo uses ViT-L/14 for retrieval, which is larger and stronger than the models used by existing works (e.g., the ResNet18 of PiCIE). How does the performance of ReCo compare to existing unsupervised segmentation methods if we use smaller CLIP models (e.g., R50 or ViT-B/32) for retrieval and inference? * How does the performance change if you pick more than one seed pixel per image? * The steps to identify seed pixels (L158-168) seem highly heuristics-based. Alternatively, would clustering approaches work there? Yes. <doc-sep>This paper addresses the task of zero-shot segmentation in images by leveraging powerful large-scale pretrained vision-and-language models such as CLIP. Interestingly, the proposed approach does not require costly and time-consuming pixel-wise annotations for training. Instead, it uses CLIP to select groups of relevant images that correspond to the natural language queries, based on nearest neighbors. Next, it uses pretrained visual encoders to identify seed pixels in the images that have strong support across the entire group of relevant images. These seed pixels are used to compute a reference feature for each language query to produce a segmentation attention map for each new query image, which is further refined by another segmentation mask that is computed by CLIP.
Strengths: 1) The paper is largely well-written and easy to follow. In particular, the mathematical definitions that are provided are very helpful for understanding the proposed approach. 2) The proposed approach is theoretically sound and intuitive. While it is not entirely original due to the existence of approaches including DenseCLIP, the idea of discovering common spatial regions that occur in images containing the same concept is very interesting. More importantly, it leverages the large-scale pretrained CLIP model to retrieve related images for a language query. This allows the proposed approach to be trained on any unlabeled image set. 3) The task of image segmentation often requires fine-grained pixel-wise annotations, which is an especially costly process. Being able to leverage powerful and large-scale pretrained models to circumvent this process is especially significant. Coupled with the empirical evidence that it outperforms state-of-the-art approaches, this can be an important area of research, given the availability of increasingly larger multimodal datasets such as LAION-5B. Weaknesses: It would be helpful to see some qualitative visualizations of co-segmentation with seed pixels. Given that these seed pixels are used to compute a reference embedding for new query images and concepts during inference time, they seem to be a very important component of the proposed approach. It may help a reader to determine if the regions selected by the seed pixels are consistent across most images that contain a concept. Yes, the authors have addressed the limitations. <doc-sep>This paper proposes a method for zero-shot transfer in semantic segmentation. To solve this problem, it first performs image-text retrieval with CLIP to get an image archive. Then it uses a pre-trained encoder to perform co-segmentation. During inference, it combines the results from the reference image embedding and DenseCLIP to get the final segmentation results. Strengths: * The proposed pipeline, which combines retrieval and co-segmentation, is novel. * It outperforms the compared methods significantly. Weakness: * The method is complicated and requires two encoders during inference, which slows down inference. I would like to see comparisons in FPS against other methods. * Missing important citations: there are some concurrent works on open-vocabulary semantic segmentation [1, 2] which are not cited. It would be better if related discussions were included. [1] A simple baseline for zero-shot semantic segmentation with pre-trained vision-language model. Arxiv. [2] Decoupling Zero-Shot Semantic Segmentation. CVPR 2022. Yes. <doc-sep>This paper utilizes the CLIP model for zero-shot transfer. First, they leverage CLIP to dynamically curate training sets from unlabelled images for arbitrary collections of concept names and leverage the robust correspondences offered by modern image representations to co-segment entities among the resulting collections. The synthetic segment collections are then employed to construct a segmentation model whose knowledge of concepts is inherited from the scalable pre-training process of CLIP. In this way, the proposed method can perform unsupervised segmentation while inheriting the convenience of nameable predictions and zero-shot transfer. Strengths: 1. The paper is well written and easy to understand. 2. Leveraging CLIP for unsupervised segmentation is interesting. 3. The proposed training pipeline is reasonable.
4. The experiments are sufficient to show the effectiveness of the proposed method. Weaknesses: 1. The whole training pipeline seems a little complex. For example, the proposed method has to utilize CLIP to filter candidate images from a large pool of unlabeled data, and the identification of seed pixels includes four steps. 2. Also, the adjacency matrix A is computationally costly and is sensitive to k in the first step of the seed pixel identification. I am very curious about the potential of employing the proposed method for instance segmentation. And the whole training pipeline is a little complicated.
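To make the retrieval step discussed in these reviews concrete, a minimal sketch over precomputed CLIP embeddings is given below. It assumes the image archive and the class-name prompt have already been encoded with a CLIP image/text encoder; the paper's curation involves more than a single top-k query.

```python
import numpy as np

def retrieve_top_k(text_emb, image_embs, k=50):
    """Cosine-similarity retrieval of the k archive images closest to a class prompt.
    text_emb: (D,) text embedding; image_embs: (N, D) image embeddings."""
    t = text_emb / np.linalg.norm(text_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ t
    top = np.argsort(-sims)[:k]
    return top, sims[top]
```

The retrieved group is what the co-segmentation and seed-pixel steps then operate on, which is also why the choice of CLIP backbone (ViT-L/14 vs. smaller models) raised above matters for a fair comparison.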
After the author response and the discussion, the paper received 1x borderline reject, 1x borderline accept, and 3x weak accept [note that one reviewer mentioned the score increase only in the discussion]. The main strengths are: - An overall novel framework for zero-shot segmentation - Strong performance - The authors revised the paper and addressed many/most of the reviewers' concerns/suggestions in the author response. I recommend acceptance, with the expectation that: * the authors provide the additional revisions as promised; * if possible, they address the comment of reviewer 1QtT: "what if remove Eq. (3)? It seems P^c_{new} is already good enough from Figure 2."
**Problem**: The paper addresses the problem of discovering and segmenting multiple foreground objects in videos without using supervision. **Solution**: The proposed solution involves training on synthetic data and only leveraging optical flow (not RGB) to facilitate easier sim2real generalization. The paper introduces a novel model architecture which takes as input a sequence of optical flow frames and produces $K$ amodal segmentation masks, each associated with an estimated depth ordering. These amodal segmentation maps can be combined using the ordering information to form the final estimated multi-object segmentation map. The architecture involves a transformer-based model inspired by DETR which uses K learned queries to produce the K unique outputs. ## Strengths ### Writing The writing of the paper is very clear and easy to follow. The paper is organized very well to facilitate easier understanding of the details of the model and the proposed data generation pipeline. The details of the proposed model are presented in a concise manner covering most necessary aspects. The paper includes detailed discussions of the pros and cons of most design choices. These discussions make it much easier to follow and verify the claims. ### Proposed Model The paper proposes a novel architecture that is interesting and might be applicable to a wider range of domains. The model makes several interesting design choices: - The choice of leveraging optical flow only from synthetic data is not novel and has been used in other domains. However, the application to the problem of multi-object segmentation is unique. - The model goes beyond segmenting objects in individual frames and produces *amodal* segmentation maps for each frame. This is an interesting design choice that seems to be very effective (see more about this in the weaknesses below). - The core of the model is heavily inspired by DETR, but the idea of using this architecture to estimate layer depth and amodal masks is still novel and interesting. In addition to modifying and adopting the objective of DETR for estimating the masks, the paper introduces a layer ordering loss to accurately estimate the depth order of each object. ### Synthetic Data Generation The data generation pipeline proposed in this work is novel. The utility of this data beyond the task of multi-object segmentation is unclear. However, I believe this pipeline could be adopted by other researchers in this domain. ### Evaluation The experimental evaluation in this work exhaustively covers standard benchmarks and the necessary ablative studies to verify the claims. Across two tasks (single-object and multi-object segmentation), the proposed model outperforms existing unsupervised learning methods. The ablative studies show the benefits of using amodal segmentation maps as the intermediate output, which is a key design choice of the proposed model. ## Weakness ### Supervision and Comparisons - The paper claims to be an **unsupervised** method for video object segmentation. I'm not sure if this is true based on the conventional usage of the term unsupervised. The proposed method uses synthetic supervision. This can obviously be corrected in the text. However, the bigger issue is the comparisons to existing work. If this work were to be categorized as a supervised learning method, the single-object results are only as good as other supervised learning methods, and in the multi-object case there is no comparison to supervised learning methods.
- Since the proposed method uses synthetic supervision, it is also *somewhat* unfair to compare to a model trained with real-world human supervision. But I think it is at least important to demonstrate the benefit of synthetic data, *i.e.*, scalability. Since generating synthetic data is not expensive, if the proposed model can scale in performance with the volume of synthetic data and outperform supervised methods that use limited supervision, that would be a compelling result for adopting the proposed model. ### Amodal Evaluation One of the interesting aspects of the proposed model is the choice of producing amodal maps as the intermediate step. However, the evaluation of this output is limited to an ablation comparing to a model using modal maps. It would have been interesting to see how well this model performs on the amodal segmentation task. See the following for evaluation protocols: > Zhan, Xiaohang, et al. "Self-supervised scene de-occlusion." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. > Xiao, Yuting, et al. "Amodal segmentation based on visible region segmentation and shape prior." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 4. 2021. Yes, the limitations have been discussed fairly well. There is no discussion of the societal impact. <doc-sep>The authors propose a new model for amodal segmentation of videos. Given optical flow, a Transformer-based neural network predicts an amodal (i.e., unoccluded) segmentation mask and depth for each object in the input frame. The model is trained using ground-truth supervision on a synthetic dataset created by the authors. When evaluated on common video instance segmentation benchmarks, the model shows promising results without requiring further training. The proposed model clearly outperforms previous unsupervised segmentation methods and even outperforms some supervised methods trained on the respective benchmarks. Video object segmentation is an active area of research. Predicting layered amodal segmentation masks that can be combined into a modal segmentation is an uncommon but very natural approach. With this approach, the model is tasked with learning not only about visible object parts but about complete objects. Occlusions arise naturally when masks are combined. Compared to the more common approach of only reasoning about visible object fragments, this may result in a more useful object representation. The main weakness of the paper in my view is the comparison with other models. The proposed model is trained supervisedly on synthetic data and then transferred without supervision. None of the other models is trained in this way; they are either completely unsupervised or trained supervisedly on the respective benchmark. The performance improvements on the benchmark are therefore not necessarily due to the proposed model architecture, but might as well arise due to the different data seen during training. A fair comparison would be to also train the other methods on the synthetic dataset and evaluate on the benchmarks. The key question then would be whether the transfer performance of the proposed model is better than that of previous models, which would be especially interesting for other models using optical flow. One limitation of the model arises due to using optical flow as the primary input, which makes it impossible to segment non-moving objects. This limitation is discussed openly by the authors and deferred to future work, which I believe is justified.
<doc-sep>This paper considers the problem of segmenting multiple moving objects in a scene by taking optical flow as input. The solution is composed of two key ideas: First, the inputs are optical flow, so the authors can employ simulated videos for training, and the learned model is able to generalize to real videos. Second, a depth-ordered layered representation is used to handle mutual occlusion. Experimental results show improvements upon methods with non-layered representations on single/multiple object segmentation datasets. Originality: Good. The key innovations are threefold: (1) object-centric representation; (2) layered representation; (3) amodal representation. Although all three techniques have already been explored in previous literature, there has been no previous attempt to combine them for video object segmentation. Furthermore, most relevant works experiment on simulated datasets, while in this work all experiments are conducted on real-world videos, which is much more difficult and more convincing. Quality: Good. The proposed method is easy to understand and shows good performance on all datasets considered, surpassing the most relevant method, Motion Grouping, by a significant margin. Clarity: Good but can be improved. In general the paper reads smoothly and is easy to follow. But, perhaps due to limited space, some important details are deferred to the supplementary material, which, from my point of view, would be better placed in the main text. For example, test-time adaptation seems to bring large improvements and is important to the final performance; it would be clearer to put the relevant details in the main text. Significance: The studied topic is of great importance. Object-centric, layered representations show promise to become the next generation of vision paradigms. The authors have discussed the limitations of their work. <doc-sep>This paper proposes a U-Net for generating a motion segmentation of a given motion field. In particular, the method assumes T snapshots of the field and generates T amodal segmentation masks. The novelty of the paper lies in combining the U-Net with a Transformer, i.e., the Transformer receives the latent embeddings of the U-Net encoder and generates output embeddings from which the segmentation masks and an ordering of these masks are inferred. The authors follow the idea of layered motion and assume that each motion segment has a unique depth, so that the ordering is interpreted as a depth ordering. The model is limited to a fixed number of layers (queries), in this paper three layers. The method is then trained on new synthetic flow data without any manual labelling and applied to a variety of existing datasets and also to new data. The paper is very well written and very well structured. The methodological approach sounds appropriate and the presentation of the results is well done. I am not working on this specific problem, but I believe the authors that the proposed neural network model is novel and has not been published before. It shows how this generative problem can be elegantly solved by combining a U-Net for the task of segmentation with a Transformer for the task of inferring spatiotemporal correlations to guide the segmentation. The evaluation seems properly done with partially very good results, although some statements go too far, e.g., in line 227 where a "…large margin…" is claimed, although the improvements are not that large, e.g., for FBMS-59.
I believe it may already have been shown that motion segmentation is basically learnable from synthetic data only; if so, one of the co-reviewers more expert than I will definitely refer to this. What I miss in the paper is a clear distinction between motion segmentation and occlusion reasoning (2.1D sketch). As the model can only infer motion boundaries, it might miss some of the occlusion boundaries, especially for surfaces with interior occlusion boundaries. It is also not clear to the reader how the model behaves for corrupted flow inputs or for more than three motions, e.g., also for a moving camera. Finally, the reader might also be interested to understand why longer sequences give better results or why object boundaries can help improve the layer ordering (line 223). Monet is also a new method to infer occlusion regions and motion boundaries (H. Kim et al., BMVC'21); I suggest clarifying this in the paper. I also suggest adding X. Zhan et al.'s CVPR'20 paper, which shows a nice way to compute amodal maps, to the related work. Yes.
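As background for the depth-ordered layered representation discussed throughout these reviews, combining per-layer amodal masks into a single modal (visible) segmentation is essentially a painter's algorithm; a minimal sketch is below (illustrative, not the paper's exact formulation).

```python
import numpy as np

def compose_layers(amodal_masks, depth_order):
    """Combine K binary amodal masks of shape (K, H, W) into a modal label map.
    depth_order lists layer indices from farthest to nearest; nearer layers
    overwrite farther ones, so occlusions emerge from the ordering (0 = background)."""
    _, H, W = amodal_masks.shape
    labels = np.zeros((H, W), dtype=np.int32)
    for k in depth_order:                 # paint back-to-front
        labels[amodal_masks[k] > 0] = k + 1
    return labels
```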
This paper uses synthetic data to train a CNN + transformer architecture for amodal object segmentation from optical flow input. The model architecture can be viewed as an adaptation of DETR [12] to a different task. Reviewer ratings lean positive, although there are concerns about experimental validation, as the combination of training regime (using synthetic data) and input modality (optical flow) does not match that of other methods tested on the same datasets; the proposed OCLR system outperforms self-supervised methods, but falls behind the state-of-the-art systems trained on real data, while using different training resources than either class. The author response partially alleviates this ambiguity, with an additional ablation study comparing to an optical flow based Mask R-CNN model trained on synthetic data.
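For readers unfamiliar with the DETR-style design the meta-review refers to, here is a minimal sketch of one plausible reading of the reviewed architecture; the module sizes, the placeholder flow encoder, and the exact heads are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LayeredFlowSegmenter(nn.Module):
    """Sketch: an encoder over T flow frames feeds a transformer decoder whose
    K learned queries each yield amodal mask logits plus a depth-order score
    (K = 3 motion layers in the reviewed paper)."""
    def __init__(self, feat_dim=256, num_layers=3, num_frames=4):
        super().__init__()
        # Placeholder for the U-Net encoder described in the reviews.
        self.encoder = nn.Conv2d(2 * num_frames, feat_dim, kernel_size=3, padding=1)
        self.queries = nn.Parameter(torch.randn(num_layers, feat_dim))
        dec_layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.mask_head = nn.Linear(feat_dim, feat_dim)   # dot product with pixel features
        self.order_head = nn.Linear(feat_dim, 1)         # scalar depth-order score per query

    def forward(self, flow):                               # flow: (B, 2*T, H, W)
        feats = self.encoder(flow)                         # (B, C, H, W)
        tokens = feats.flatten(2).transpose(1, 2)          # (B, H*W, C)
        q = self.queries.unsqueeze(0).expand(flow.size(0), -1, -1)
        layer_emb = self.decoder(q, tokens)                # (B, K, C)
        masks = torch.einsum("bkc,bchw->bkhw", self.mask_head(layer_emb), feats)
        order = self.order_head(layer_emb).squeeze(-1)     # (B, K)
        return masks, order

# Example usage: masks, order = LayeredFlowSegmenter()(torch.randn(1, 8, 64, 64))
```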
The paper studies differential privacy for pre-trained certifiers that offer certified robustness through input perturbation. The key insight is to analyze the differential privacy afforded by the input perturbation via a noise transformation (computing the corresponding gradient noise), and to augment it with gradient perturbation when needed for differential privacy guarantees. This improves upon past work by showing that some differential privacy is already obtained from the input perturbation, which reduces the differential privacy budget needed. In addition to new analysis techniques for providing the privacy bounds, experiments show gains over prior methods, e.g. better differential privacy under adversarial attack. Strengths: It may be desirable to defend the same classifier against adversarial input perturbations and privacy attacks, so approaches which simultaneously guarantee certified robustness and differential privacy are interesting and useful. Prior work has noted that certified robustness techniques provide differential privacy; the current work provides a quantification of this claim. Experiments show gains over prior methods. Weaknesses: The proposed perturbation mechanism is a simple combination of input and gradient noise; the transformation process is only for analyzing the differential privacy guarantee, involves a simple Taylor estimate, and only weakly uses the properties of the loss function. The multivariate Gaussian mechanism is a simple generalization of a known mechanism in prior work. Questions: It is not clear what the proportion of negative and non-negative examples is, or what effect this proportion has on differential privacy. How are the perturbation scale thresholds $\xi_{low}$ and $\xi_{up}$ set in the experiments? Can gradient perturbation be useful for certified robustness? Experiments compare with other approaches that simultaneously target certified robustness and DP, but what is the gap from approaches that optimize for exactly one or the other? Other suggestions/comments: Formal statements are sometimes unclear or lack sufficient detail, e.g. what is the domain of the input $x_{(i)}$? The readability of the paper would benefit from a brief consolidated notation/terminology section, e.g. defining $(\epsilon,\delta)$-DP. Theorem 1 simultaneously defines the Multivariate Gaussian Mechanism and proves a property for it; it would be clearer to separate out a complete formal definition. In equation (2), is o the little-oh notation? It appears not to be the case in Algorithm 1. A clarification would be useful if there is abuse of notation, or if the little-oh choice is just coincidental. In the abstract and elsewhere the DP bounds with the moments accountant are said to be 'tight', but it is perhaps better to just say relatively tighter. The paper considers an interesting and relevant question, but the technical novelty and significance of the provided results do not seem to be sufficient. Algorithmic additions over prior work include using a combination of input and gradient noise and adding general multivariate Gaussian noise. The key analytic insight is to use a Taylor approximation to estimate the DP afforded by input noise. It is not clear how to set the hyperparameters of the proposed algorithm. Also, the formal presentation is somewhat lacking. Even though the approach beats some prior works empirically, I believe the work is marginally below the acceptance threshold due to the above issues. <doc-sep>This paper focuses on providing both differential privacy and certified adversarial robustness to machine learning models.
The authors propose an algorithm called TransDenoiser to achieve such a goal. TransDenoiser consists of a denoiser, trained with both input and gradient perturbation to achieve DP and certified robustness, followed by a pre-trained classifier for classification. The privacy guarantee is carefully analyzed. Extensive experiments demonstrate the effectiveness of the proposed method in terms of model utility and adversarial robustness. Strengths: 1) This paper considers transforming the input perturbation into a gradient perturbation, so that the noise introduced by randomized smoothing can be quantified together with the explicit gradient perturbation for the privacy guarantee. 2) To analyze the privacy guarantee, a Multivariate Gaussian mechanism is proposed by considering multivariate Gaussian perturbation. 3) The proposed method is evaluated on various datasets and adversarial attacks to show its effectiveness. Weaknesses: 1) The Multivariate Gaussian Mechanism is not new, and many previous works have also investigated multivariate Gaussian differential privacy to achieve DP. For example, Chanyaswad et al. in [1] proposed an MVG mechanism, which adds matrix-valued noise drawn from a matrix-variate Gaussian distribution, and also introduced directional noise in MVG that can further improve the utility. Further, Yang et al. in [2] proposed Matrix Gaussian Mechanisms for matrix-valued data with better utility. [1] Chanyaswad, Thee, Alex Dytso, H. Vincent Poor, and Prateek Mittal. "MVG mechanism: Differential privacy under matrix-valued query." In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, pp. 230-246. 2018. [2] Yang, Jungang, Liyao Xiang, Jiahao Yu, Xinbing Wang, Bin Guo, Zhetao Li, and Baochun Li. "Matrix Gaussian Mechanisms for Differentially-Private Learning." IEEE Transactions on Mobile Computing (2021). This paper investigates how the randomized smoothing noise can be transformed into gradient perturbation, and then carefully computes the privacy loss, which seems an interesting approach. <doc-sep>In this paper, the authors study the problem of simultaneously achieving both overall differential privacy and certified robustness for pre-trained models. They propose a framework called TransDenoiser, based on an existing framework (Salman et al., 2020) [1], by adding additional and transformed gradient perturbations for the overall DP. The authors analyze the DP guarantee provided by these perturbations and empirically evaluate their method on MNIST and CIFAR-10, showing that TransDenoiser is effective against FGSM and PGD attacks with guaranteed DP. Strengths: * The idea is well described. * The authors provide a detailed analysis both theoretically and experimentally. Weaknesses / discussion questions: * The proposed TransDenoiser lacks novelty. This paper mainly builds off the work by (Salman et al., 2020) [1], which proposed to train a denoiser on input perturbations and leveraged randomized smoothing to achieve certified robustness. The main difference is that in TransDenoiser, additional gradient perturbations are generated. However, the comparison with [1] is missing. The lack of comparison with the most relevant baseline reduces confidence in the results. * Throughout the paper, it is not made clear why achieving DP is important, or what the difference between "partial" and "overall" DP is. I think it would be good to include some brief background on DP in the related work section instead of in the appendix. A good chunk of your introduction could be moved to the related work section as well.
* In terms of clarity, the overall writing could be greatly improved. There are several typos, confusing sentences and symbol choices (I listed several in minor issues). Specifically, it’s hard to follow the TransDenoiser training algorithm. * Does the proposed method work for the L-infinity norm as well, both theoretically and experimentally? * The experimental results are not entirely convincing to me. For a thorough evaluation, it would be better to report robust accuracy against PGD (using "MadryEtAl" as the name is very confusing...), CW and AutoAttack. My specific concerns are the following: * All the methods should be given better names; the current versions, like "xxx_sct", "xxx_prt", "xxx_sepdp", are not easy to follow. * The captions of Figure 2 are mixed together and confusing. * I’m not sure what's going on in Figure 3. * There is no figure reference in the main text. I suppose the corresponding explanations are under "Empirical defense"; please correct me if I am wrong. * Only the 'clean example' result for TransDenoiser is provided; what about the other methods? * Besides, the better robustness may be caused by the trade-off between natural accuracy and robustness. Given the lack of 'clean example' results for the other baselines, it is not convincing to me to directly draw the conclusion that "the certified accuracy on clean examples provides a good estimation for the empirical robustness of the model". Minor: * I found some of the authors' statements misleading. For example, * on page 2, the paper says "compared with [1], TransDenoiser can .... without retraining the pre-trained models"; however, [1] fixed the pre-trained models instead of retraining them. * on page 3, the paper says "different from [1], ..., the objective function we use to optimize the denoiser contains the standard reconstruction MSE"; however, [1] also used MSE. * Under "Empirical defense", the authors mainly describe figures in the Appendix. * The proposed methods should be evaluated on larger datasets (e.g., CIFAR-100) and more "popular" models (e.g., ResNet-xx, WRN-xx) to demonstrate their effectiveness thoroughly. This paper mainly builds off the work by (Salman et al., 2020) [1]. Although the DP analysis and the tighter bound on the DP guarantee are of some significance, the authors are advised to 1) compare their proposed method with [1], 2) improve the overall writing clarity, and 3) significantly improve the experimental settings. <doc-sep>This paper studies the problem of integrating differential privacy and robustness to adversarial examples for pre-trained machine learning models. Specifically, this work aims at designing methods that guarantee both privacy and robustness without having to re-train the model at hand. To achieve this goal, the authors build upon an existing technique in the adversarial example literature that involves placing a denoising auto-encoder in front of a pre-trained model before applying a noise injection scheme known as "randomized smoothing" [1]. While this technique is known to provide state-of-the-art "certified accuracy" against adversarial examples, its privacy guarantees remained to be studied. This work proposes to do just that by adapting the algorithm to guarantee differential privacy for the dataset used to train the auto-encoder. The authors claim three main contributions: 1. Exploiting the intrinsic train-time input perturbation that existed in the previous implementation of the algorithm and composing it with an explicit gradient perturbation to satisfy differential privacy.
The authors claim that their treatment of this input perturbation allows a finer analysis of the algorithm's privacy, which ultimately leads to better accuracy, for the same privacy guarantees. 2. Introducing two new analytical tools, namely MGM and MMGA, for analyzing the privacy guarantees of multivariate Gaussian noise injection. 3. Conducting extensive experiments on several benchmark datasets to demonstrate that their algorithm, called « TransDenoiser », provides better privacy guarantees and achieves similar level of certified robustness compared to previous works. [1] Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers Hadi Salman, Greg Yang, Jerry Li, Pengchuan Zhang, Huan Zhang, Ilya Razenshteyn, Sebastien Bubeck **Strengths** Privacy and robustness to adversarial examples are two hot topics within the ML community, especially when considering large models for image or speech recognition. Therefore, I believe that the main focus of this paper, i.e., the integration of these two notions for pre-trained classifiers, is very relevant to the ICLR community. Also, the main point of the article is quite simple and easy to understand from a high-level perspective. Finally, the idea of trying to translate the input noise injection into gradient perturbation to simplify the privacy analysis seems interesting. **Weaknesses** My main concern is with the technical quality of the article. In fact, I am not sure that the claims of the paper are technically correct, especially with respect to Lemmas 1 and 2. Although, from a general perspective, the concepts of input noise and gradient perturbation seem related, in my opinion, neither Lemma 1 nor Lemma 2 demonstrate a clear connection. I provide some details below. 1. Lemma 1 states that for a certain type of perturbed examples $z_{(i)}^{non}$ (defined in section 2.2), the gradient computed at $z_{(i)}^{non}$ can be lower bounded by the gradient computed at the initial point $x_{(i)}^{non}$ plus some noise that depends on the Jacobian matrix of the loss function at $x_{(i)}^{non}$. First, the statement itself appears to be very confusing to me because the authors compare two random vectors (with infinite support) without explaining the meaning of the term $\\geq$. Second, the analysis that the authors provide by stating that, according to Lemma 1, "the DP guarantee provided by the perturbation of the transformed gradient is the lower bound of the one provided by the perturbation of the input" lacks justification, especially since the lemma only holds for a specific family of perturbed inputs. Finally, looking at the proof, I have some additional concerns, among which a) the reason for the jump from (9) to (10) is not clear to me, and b) the transition from (10) to the equality between the gradient of $z_{(i)}$ and the perturbed gradient of $x_{(i)}$ is also not clear to me. 2. Lemma 2 provides a similar statement with an equality but was not provided with a formal proof. Instead, the authors present another lemma in the appendices (Lemma 3, also without a proof), which is very similar to Lemma 1 and claim that Lemma 2 can be derived from Lemma 3. This claim does not seem sufficiently conclusive to me. Finally, I have the impression that the paper claims too much its technical contribution on the analysis of multivariate Gaussian noise injection. In fact, as I understand it, this work can be considered as a special case of previous work studying matrix-valued Gaussian mechanisms [2]. 
I think the authors should compare with this previous work. **Additional comments and questions** In experiments, I am not sure that the comparison of TransDenoiser with previous methods is fair in terms of privacy preservation. As I understand it in [3] and [4], the model is directly learned with differential privacy, thus protecting the dataset used to learn the model. However, in this paper, the authors only claim to preserve privacy on the fine-tuning dataset, thus leaving the dataset used in the pre-trained model unprotected. I have two concerns with this: a) the authors are comparing methods that do not preserve privacy on the same dataset, which makes the comparison unfair compared to previous methods, and b) I checked the pre-trained classifiers and it appears that they use the same datasets as the authors' trained auto-encoder (since these models are not trained with privacy, I think this represents a clear privacy breach). In Thereom 3, the authors present a result on privacy preservation for Algorithm 1 based on a previous result in [5]. This result is only valid if we consider algorithms that use Poisson sampling to select the mini-batch at each round. However, Algorithm 1 does not seem to be using Poisson sampling since the size of the mini-batch is constant and equal to B (see line 25 of the algorithm). I think the presentation of the technical contribution could be improved. The appendix presents several statements re-demonstrating existing results in the literature on privacy and adversarial robustness (Theorem 5, 6 and Appendix B). From my point of view, these do not help the overall understanding of the contributions of the paper. I suggest presenting only the proofs of the original contributions in the appendix, and simply citing the existing papers for the earlier work when needed. [2] MVG Mechanism: Differential Privacy under Matrix-Valued Query Thee Chanyaswad, Alex Dytso, H. Vincent Poor, Prateek Mittal [3] Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness NhatHai Phan, Minh Vu, Yang Liu, Ruoming Jin, Dejing Dou, Xintao Wu, My T. Thai [4] Scalable Differential Privacy with Certified Robustness in Adversarial Learning NhatHai Phan, My T. Thai, Han Hu, Ruoming Jin, Tong Sun, Dejing Dou [5] Deep Learning with Differential Privacy Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang While I think this article studies an interesting problem, I do not think it presents its contributions convincingly enough. In particular, I have concerns about the technical quality and the novelty of the article that lead me to recommend its rejection.
This paper develops a technique to provide both privacy and robustness at the same time using differential privacy. Unfortunately, the paper in its current form does not have meaningfully interpretable security or privacy claims. The reviewers point out a number of these flaws that the authors do not address to the satisfaction of the reviewers, but there are a few others as well. - What is actually private, at the end of this whole procedure? If the actual "pretrained classifier" is not made private, then what's the purpose of the entire privacy setup in this paper? Why does the denoiser need to be private if the classifier isn't? - The proof of Lemma 1 appears incorrect. The proof in Appendix E says that Equation 10 is true, but this sweeps all of the remaining Taylor series terms under the rug and doesn't deal with them. How are they handled? - In Figure 4(a), what does it even mean to have an "FGSM privacy budget epsilon"? Or a "MIM privacy budget epsilon"? A privacy budget is almost always something defined with respect to the *training data privacy*; how does this relate to the attack in this paper? - How does this paper compare to prior *canonical* defenses, both on the robustness and privacy side? In particular, comparisons to adversarial training on the robustness side, and to recent DPSGD results on the privacy side?
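For concreteness, the first-order reading of the disputed Taylor step can be sketched as follows; this is only an illustration of the intended argument with generic notation, not a verified restatement of the paper's Lemma 1, and the treatment of the higher-order remainder is exactly what the reviews above question.

```latex
% Input perturbation: z = x + \delta with \delta \sim \mathcal{N}(0, \sigma^2 I).
% First-order Taylor expansion of the per-example gradient around x:
\nabla_\theta \ell(\theta; x + \delta)
  \;\approx\; \nabla_\theta \ell(\theta; x) \;+\; J(x)\,\delta,
\qquad
J(x) := \frac{\partial}{\partial x}\,\nabla_\theta \ell(\theta; x),
% so the input noise induces approximately multivariate Gaussian gradient noise
J(x)\,\delta \;\sim\; \mathcal{N}\!\left(0,\; \sigma^2 J(x) J(x)^{\top}\right),
% which is non-isotropic; hence the need for a multivariate Gaussian mechanism to
% account for the privacy this noise already provides before adding explicit
% gradient perturbation, and hence also the concern about the dropped remainder terms.
```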
1. Important step in understanding the inductive bias of modern neural networks. 2. Convincing theoretical and empirical analysis. 3. Excellent presentation. 1. Only one hidden layer is considered, but this is sufficient to represent read-once DNFs. 2. Only uniform distributions. 3. No sample complexity results. I really enjoyed reading your paper and as a non-expert got a lot out of it. It is very well presented and I have only a few minor comments. I don't think the abbreviation KKT is explained anywhere in the paper. At the bottom right of p.3 the citation "Amos et al. [2017]" should be [Amos et al. 2017]. The paper makes quite specific contributions and I think the title doesn't adequately reflect this. <doc-sep>The authors' contribution rests both on experimental evaluation and on new theoretical results. Experimentally, the authors show that the studied networks converge to solutions which generalize well (in particular better than a two-layer MLP and another method for learning DNFs). Moreover, the trained networks have neurons aligned with terms of the DNFs. Theoretically, the authors show, under certain assumptions, that (i) gradient flow does not learn solutions that “memorize” the data, and (ii) the learned network learns to reconstruct the DNF. Interesting question: The paper studies an interesting question, which is the inductive bias for learning read-once DNFs. I really appreciate not only the results but also the problem and its specification. Nice mix of theory and experiments: The theory and experiments nicely complement each other. It is nice that the authors also study settings that go beyond their theoretical analysis (DNFs that violate the read-once property). For the second result, however, stronger assumptions are used: (i) the sample is assumed to be the whole set X (i.e. all possible instances), and (ii) it is assumed that the learned solution will be a minimum-norm solution (the authors give compelling evidence from the literature, but a proof does not exist; it may well be very difficult, so this is not a criticism). The assumption on the “population setting” is quite strong. However, I think that such assumptions might be necessary for a work like this, so I do not consider these to be big weaknesses. Overall, the paper is well written. I was just wondering what happens if the DNF contains just one term that contains all propositional variables. Will the solution not be memorizing? If not, then why not? <doc-sep>* There is a good theoretical analysis of the behavior of neural networks on this type of problem, and experiments that align with this theory. * There are also experiments that show that the 'read-once' restriction is needed, which means that the results are 'tight', and it gives an interesting example of an unlearnable function. * It would be helpful to define "DNF". I think the authors are talking about Boolean functions in disjunctive normal form, but they never explicitly say. * The problem of learning read-once DNFs seems to be mostly theoretical. * The experiments are limited to a few fixed DNFs. * The experiments are not described in enough detail. What learning method is used? What parameters? What are the lines in figure 2a?
* Notation is not introduced properly * Some of the related work seems only distantly related * The theorems are obvious/not very surprising * Theorem 3.1 looks like a variant of the universal approximation theorem, * Theorem 5.1 is an obvious consequence from gradient flow being a norm minimizing solution, as mentioned in section 3 Notation is not introduced properly: * [D], which seems to mean the set {1,2,..,D} * "x ∼ \\prod Bernoulli(0.5)" but x has entries in {-1,+1}, Bernoulli usually implies {0,1} Figure 1: how were these networks constructed? There are no weights for second network layer in (1). Is that too limiting? Figure 3: what is "small Gaussian initialization"? Definition 6.2 "and all other neurons are zero." What does that mean? Theorem 6.1: is read-once an implicit assumption here? It should be explicit in the statement of the theorem. The experiment in figure 4a is the most interesting part of the paper to me, showing a big contrast between the ability of a neural network to learn general DNFs vs read-once DNFs. <doc-sep>+ The work seems thorough: it answers several questions and also (empirically) covers cases where the initial assumptions do not hold. + Novel + Paper is well structured + Reproducibility: provided code and mentioned experimental parameters (in the appendix). - The problem formulation is not clearly written (cf. detailed comments). - While the work appears thorough, details (of proofs and experiments) are often referred to the appendix which makes the paper a lot less self-contained and harder to read/verify. - The theoretical contributions are mostly limited to read-once DNFs, which appears very constraining for applications. ## Comments **The problem formulation is not clear.** - Start with formally defining (read-once) DNF in logical terminology (literals, term, disjunction of terms) before describing the more numeric encoding of it. The current problem formulation is quite hard to read. Also write at least once in full all used abbreviations; 'disjunctive normal form (DNF)' does not occur in the text but it really should. - Is $n$ in $x \\sim \\prod_{i=1}^n Bernoulli(0.5)$ supposed to be $D$? - Mention $D$ = number of variables - The task is to learn $f^*(x)$. Does that mean that the output must be a Boolean function in read-once DNF format, or does it mean that it must be a Boolean function (in any representational form) that can be represented as a read-once DNF (so the learned function $f^*(x)$ is not necessarily in a DNF format)? While the overall work appears thorough, very often the main text refers to the appendix for details/proofs. The main text on its own becomes a lot less self-contained. In case the population size is too large to check the actual accuracy it would be insightful to also report, on top of the sample based test accuracy, the model count of the actual DNF $f$, the learned DNF $f'$, and $f \\vee f'$. I did not verify the proofs in the appendix, and found the proofs within the paper not always easy to follow. In particular for the proof of Theorem 6.1 I would have liked to see more details within the main paper itself. ## Questions Q1) Figure 1's axes are not labelled. What are they? I assumed x-axis is the hidden units and y-axis is all potential inputs but there should be $2^9$ of those instead of $600$, so I'm confused. Q2) "nonlinear read-once DNF" was never defined. What is a *nonlinear* read-once DNF, in logical terminology (literals, terms, ...)? Q3) Sect 7. 
"The fact that SGD recovers simple Boolean formulas is very attractive in the context of interpretability" - but the previous paragraph empirically showed that when learning from data sourced from a DNF with overlapping terms, the solution is not a DNF recovery solution? Those would not be easily interpretable? Q4) Sect 7. "Non read-once DNFs" - are the literals still all positive or both negatives and positives? Q5) "To show each property we assume by contradiction that it holds and construct a perfect solution with lower norm. This leads to a contradiction since the solution has minimal norm." - "To show **that** each property **holds**"? The proof is not very clear to me, how does it rely on the properties? Q6) The focus is on learning a read-once DNF from training data (problem formulation). If the learned function did not have to be a read-once DNF, then the problem reads like a binary classification problem for which methods exist (e.g. a learned decision tree can easily be turned in a DNF)? What makes the read-once DNF constraint specifically interesting? Are those common/a good approximation in practice? I understand that from a theoretical perspective it could be interesting to initially restrict the input to read-once DNFs, to make the analysis easier, but why constrain the output to a read-once DNF too (do-we? See comment later on problem formulation)? Q7) Which proofs/theoretical statements rely on $\\mathcal{D}$ being uniform? Q8) Which proofs/theoretical statements rely on the read-once restriction? Q9) In Figure 2a, only with Training set size $> 2000$ does the SQ algorithm seem to more consistently achieve 100% accuracy. However, I had the impression that if SQ was given the entire population ($2^9$ samples here) it would be 100% accurate. Is this not true or did the training set just not happen to contain the entire population? Q10) In Figure 2a: did each run only differ in initialization, or also a different training and test set? Did the 3 approaches learn from the same training set and evaluated on the same test set? ## Textual remarks: * Fig 3b (and similar) are missing axis labels * "Learning DNFs is hard [Pitt and Valiant, 1998]" -> "Learning read-once DNFs is hard", to be more specific * 'the a set' -> 'the set' * "We perform experiments on DNFs of higher dimension" - 'higher dimension' is not clear, does this mean more terms or more variables or ? Should this be 'read-once' DNF? * Fig 4 caption, "The training size was 8, 500 for all DNFs." - is this 8500 (without space)? * "Their results suggest that the inductive bias of GF is to KKT points or global solutions of minimum norm problems" and later "In our theoretical analysis, we apply the results of Lyu and Li [2020], Ji and Telgarsky [2020], which show that GF is biased towards KKT points of min-norm problems." - Did they proof (~show) this or 'suggest' this? * In Assumption 5.1, what is $n$?
Meta Review: Most reviewers appreciated the insights provided on learning neural networks. Some reviewers also had some concerns about readability; hopefully they provided enough feedback to improve the paper.
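Since several of the reviews above ask for the missing definition, a small illustrative example of the learning target may be useful; the specific formula and the ReLU encoding below are illustrative assumptions added here, not taken from the paper.

```latex
% A DNF (disjunctive normal form) formula is an OR of terms, each term an AND of literals:
f(x) \;=\; (x_1 \wedge x_2) \,\vee\, (x_3 \wedge x_4 \wedge x_5).
% f is read-once because every variable appears in at most one term;
% g(x) = (x_1 \wedge x_2) \vee (x_1 \wedge x_3) is not read-once, since x_1 repeats.
% With the \{-1,+1\} input encoding, a term with k positive literals can be realized
% by a single ReLU unit, e.g. for the first term of f:
\mathrm{ReLU}(x_1 + x_2 - 1) > 0 \iff x_1 = x_2 = +1,
% so a one-hidden-layer network with one unit per term (followed by a thresholded sum)
% suffices to represent any read-once DNF, which is why restricting the analysis to a
% single hidden layer is not by itself limiting.
```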
The paper considers the problem that personalization methods in federated learning may cause the personalized models to overfit on spurious features, thereby increasing the accuracy disparity compared to the global model. To mitigate this accuracy disparity, the paper investigates adversarial transferability, which is shown to correlate with disparity. Thus, the paper proposes a federated personalization approach based on adversarial transferability and catastrophic forgetting that reduces accuracy disparity to the level of the global model while maintaining the higher accuracy of prior personalization methods. The paper evaluates the approach on three real-world datasets. **Strengths** - `The paper identifies a relevant problem.` The paper empirically shows that prior work on personalized federated learning induces high accuracy disparity, which is a relevant problem to the fairness community. - `The method is somewhat novel and technically sound.` The paper demonstrates a link between adversarial transferability and accuracy disparity of personalized models. Thus, the paper employs adversarial transferability and weight regularization (which has already been proposed by prior work, as the paper acknowledges) to learn personalized federated models that achieve high accuracy and lower accuracy disparity than prior work. **Weaknesses** - `Certain presentational aspects could be improved.` For example, "bias-conflicting examples" are mentioned in the introduction but only defined in section 4 (leaving the reader to guess what these "bias-conflicting examples" are). Furthermore, the experimental setup for figures 4 and 5 is not provided (e.g., which personalization method was used?). Moreover, in section 6.1 two sentences are repeated (“there are 650 blond…” and “there are 650 blond...”). Finally, there are a few typos (Section 4.1 “models on bias**ed** and bias-conflicting”, section 6.1 “through**out** our experiments”, section 6.1 “residu**a**l”). - `The method lacks key motivation.` Adversarial transferability is introduced to reduce the accuracy disparity of the personalized models by forcing them to be vulnerable to the same adversarial examples as the global model. However, the adversarial transferability between the global and local models is just a proxy for the similarity of the two models, similar to, e.g., KL divergence. - `Baselines are misrepresented.` In figure 3, the centralized model is trained on a dataset where the spurious correlations are fixed, whereas the federated model is trained on multiple client datasets, each with different spurious correlations. Accordingly, it is unsurprising that the centralized model has a significantly lower accuracy on the bias-conflicting dataset, as it was not exposed to the same data distribution shifts during training as the federated model. I would expect the centralized model to perform on par with the federated model when trained on the same data, invalidating the paper’s claim. - `Results are misrepresented.` In section 5, the paper states that low adversarial transferability indicates high accuracy disparity for the personalized models. However, the paper merely shows a correlation between the two metrics (as they both estimate the similarity between the global and personalized models). 
I would be very surprised if one could not train global and personalized models that achieve “high transferability and high disparity” or “low transferability but high disparity” (as I believe that transferability and accuracy disparity are only weakly correlated). Unfortunately, the paper does not provide empirical evidence to substantiate these claims. (In fact, it provides evidence for the opposite, as “accuracy disparity still increases, even if the adversarial transferability remains high”). - `The paper makes unsubstantiated claims.` In section 5, the paper claims that “Both methods are relatively light-weight from a computational perspective” but does not provide any further evidence for or analysis of this statement. - `Results are unclear.` Figure 7 compares the losses on the biased and bias-conflicting datasets. However, the loss is only an approximation of the accuracy, which is the quantity that we are ultimately interested in. Therefore, it would be more meaningful to compare the accuracies on the different datasets. - `Results are insignificant.` Comparing the results for the global model and the proposed method in table 1, the method achieves roughly the same accuracy on the bias-conflicting dataset and only a minor increase in accuracy on the biased dataset (except for the MNIST dataset). Moreover, the prior personalization approaches achieve significantly higher accuracies on the biased datasets. Thus, the method just provides a “little less” personalization, yielding personalized models that are more similar to the global model (with corresponding performance). The paper identifies an interesting shortcoming of prior personalization approaches. However, given the various weaknesses outlined above (e.g., insignificant results, misrepresentation of results) and the limited novelty, I do not believe that the paper meets the bar for publication in its current form. <doc-sep>This work explores the possibility of personalisation methods entangling spurious features that can undermine their generalization in the case of federated learning. It proposes to use a combination of a consistency term for adversarial transferability and an L2 regularisation term to help reduce this disparity. The approach is evaluated on artificial settings with spurious features. Strength - The work exposes a possible generalisation issue in personalised federated learning and proposes a novel approach to tackle it - The idea is well motivated, the paper is generally well written, and experiments are provided to substantiate the claims Weakness - Use of some non-standard hyperparams, like a 0.031 eps budget for MNIST and batches of 96, 40, and 30. Similarly, the 5 epochs of local training seem larger than the conventional 1 or 2. Could the authors provide an explanation? - Doesn’t include exploration of other modalities like text or large-scale setups like FEMNIST A few open questions - Any thoughts on how the method would compare to, say, the adversarial training objective in combination with personalisation? I think this work tackles an interesting hypothesis about what can limit generalization in the case of personalised FL. The proposed solution is principled and well motivated, with an appropriate ablation study. The only drawback is the lack of experimentation on large-scale problems; adding such experiments would certainly make this a valuable piece of work. <doc-sep>The authors have proposed a new FL training strategy to reduce the performance discrepancy between the central model and the client models.
The PGD-generated adversarial examples are fed into both the central model and the client models, and their outputs are used to minimize the entropy loss. The computation is further simplified using Taylor expansion. Strength: - The idea of using adversarial examples and minimizing the entropy loss of the global and local models' outputs is normal. The intuitions are straightforward to understand. - The authors have spent a decent amount of effort explaining the relationship between the spurious features and adversarial transferability. This is helpful for the audience. - The results do show certain improvements over the other baselines. Weakness: - The presentation has room for improvement; please explicitly explain all the critical terms (accuracy disparity, bias-conflicting, etc.) at their first occurrence in the paper. - Many subjective descriptions exist in the paper; e.g., if you claim the distribution shift of spurious features is a major effect of accuracy disparity, it would be really important that you give theoretical proof and/or empirical support to verify your claim. - It seems like the spurious features are handcrafted, and we don't have a clear solution for how to automatically choose the spurious features in real applications. - The adversarial examples are generated using a global model; however, the way of generating adversarial examples in FL is worth a lot of analysis and a description of the details. There exist some papers that discuss the best way of improving adversarial robustness with adversarial training. Here, the same strategy should be applied for comparison. - The experiments are a bit disappointing. Without a comprehensive comparison, a replication of the results is almost impossible. So many factors in FL can dramatically change the results. The authors didn't provide a fair and reproducible setting for the results. The experiments are incomplete and the results are not convincing. The idea is good and novel; however, the presentation is disappointing and the experiments are weak. I recommend rejection.
The paper talks about a novel setting in Federated Learning and argues that personalization methods may cause the personalized models to overfit on spurious features, thereby increasing the accuracy disparity compared to the global model. To this end, the authors propose a debiasing strategy using a global model and adversarial transferability. There were some positive opinions about the problem being interesting. However, reviewers had several concerns about the validity of the assumptions and the hand-wavy arguments used in the solution regarding the existence of adversarial transferability. Overall, the setting and the need for removing personalization bias need to be validated more convincingly and rigorously, with concrete real scenarios and experiments.
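To make the training objective under discussion concrete, here is a minimal sketch of one plausible reading of the debiasing loss; the PGD helper, the KL form of the consistency term, and the regularization weights are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.031, alpha=0.008, steps=10):
    """Plain L-infinity PGD; per the reviews, adversarial examples are crafted
    on the global model and then shown to both models."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def personalization_loss(local_model, global_model, x, y,
                         lam_consist=1.0, lam_prox=0.01):
    """Task loss + transferability consistency on shared adversarial examples
    + an L2 proximity term that discourages forgetting the global weights."""
    task_loss = F.cross_entropy(local_model(x), y)

    x_adv = pgd_attack(global_model, x, y)
    with torch.no_grad():
        p_global = F.softmax(global_model(x_adv), dim=1)
    consist = F.kl_div(F.log_softmax(local_model(x_adv), dim=1),
                       p_global, reduction="batchmean")

    prox = sum((w_l - w_g.detach()).pow(2).sum()
               for w_l, w_g in zip(local_model.parameters(),
                                   global_model.parameters()))
    return task_loss + lam_consist * consist + lam_prox * prox
```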
This paper presents a HyperGrid Transformer approach to fine-tuning, where one takes a pre-trained transformer model and then modifies it by introducing hypernetworks that modify the 2nd FFN in each transformer block by generating additional weights conditioned on the input. These hypernetworks are trained on all tasks in the GLUE/SuperGLUE datasets simultaneously and are task-aware through prefixing a task-specific token to the input. This allows one to fine-tune only a small number of parameters and end up with a model that performs quite well on all tasks at the same time, not much worse than fine-tuning the entire transformer model on all of these tasks. This is an interesting paper in the area of hypernetworks with results suggesting potential for impact, where one can achieve good accuracies on tasks without having to fully fine-tune gigantic pre-trained models. I do have some questions for the authors though: - Local vs global: global seems to be just like learning another weight matrix not conditioned on anything? From the name one would expect local to be conditioned on some specific parts of the input while global is conditioned on the entire input. What is the intuition for why they help? - What is the intuition behind the differences in performance of the various setups (LG, GL, L^2, etc.)? - Figure 9 doesn't exist. If coarser variants are better, what happens when the entire weight matrix is treated as 1 block (so you just learn a scalar weight)? What about learning a single scalar for each weight in the FFN (i.e. block size is 1)? - Have you tried adding dynamic weights to the projections in the multi-head self-attention modules (e.g. to the projections for q, k, v)? - Table 1: why are the QNLI results so much worse for HGT(LG) than all other results, while it seems to be doing better on most other tasks? How stable are all of your results (i.e. what is the variance across seeds)? - Parameter counts are confusing: how does one compute that "multiple networks" need 3.2b params, but a "single network" needs 0.2b params? Are these the trainable weights for each setup, so you count 16 tasks x 0.2b weights? Maybe it is better to report the total number of params plus the trained params, or else somehow make it clearer what the number of parameters means. - Have you tried fine-tuning the hypernetworks on individual tasks? ---- update: Thanks for the update. I guess the "intuition" is driven mostly by empirical results, which I suppose is OK but may be worth digging into a bit more. I have updated my rating. <doc-sep>This manuscript presents a HyperGrid Transformer, which learns a single model to handle multiple tasks in NLP. The core idea of the HyperGrid Transformer is to learn task-conditional dynamic weights in a grid-wise manner in the feed-forward layers, where the weights are factorized into local and global components. This idea is simple, materializing the goal of reducing the parameter cost of the multi-task network. Moreover, the conducted experiments look nice, showing promising performance on GLUE/SuperGLUE. Therefore, from my point of view, this work is worthy of publication at ICLR. <doc-sep>The authors propose HyperGrid Transformers with a decomposable hypernetwork that learns grid-wise projections to specialize regions in weight matrices for different tasks. Usually, people would use different models to solve different tasks.
In this paper, the authors focus on using a single model to solve all tasks, which saves a lot of model parameters for natural language understanding. The authors have done comprehensive experiments on GLUE and SuperGLUE, and show that the proposed single model can achieve much better performance than the baseline and competitive performance with multiple task-specific models. Pros: 1. The idea of making use of decomposable grid-wise projections is interesting. This is, in a sense, a way to add regularization to the weights. 2. The proposed method has been widely evaluated on GLUE/SuperGLUE tasks and achieves good performance. Cons: 1. The baseline details are not clear. When using a single model as the baseline, how many layers are shared across tasks? What is the sampling strategy for different tasks? Is it possible to train a single model on multiple tasks for some steps, then fix most layers and only fine-tune some task-specific layers? I feel the baseline is a bit weak, although I cannot come up with a stronger one that can be easily adapted to the pretrained model. 2. It seems "Task Conditioning" is a very important trick. The authors should have some ablation study on it, or maybe add it to the baseline. Overall, it's great to see people working on a single model to solve all tasks. I would be happy to increase my score if the authors could convince me regarding the baseline, which is quite tricky. ####update#### The experimental results are not surprising, but strong enough. There is still no very strong baseline provided in this submission, but it might be good to set up a benchmark in this direction. However, the T5 model needs more computational resources and the experimental results are hard to replicate. Overall, I would like to keep my rating.
The paper proposes "HyperGrid Transformers" a modified transformer architecture for learning a single model for multiple tasks in NLP. The proposed method was evaluated on popular GLUE/SuperGLUE tasks and reported competitive results with the baselines (the improvements are somewhat marginal). The paper contains some interesting idea of using a decomposable hypernetwork to learn grid-wise projections for different tasks, which may not be particularly novel in machine learning context but seems new for multitask NLP. Reviewers generally agree the paper is above acceptance bar, however some concerns were raised about clarity of baselines and fairness of experimental comparison as well as stronger baselines. Authors improved some of them in the rebuttal, but there is still some room to further improve the quality of presentation and writing in the final version.
This work provides lower bounds in the noise-free setting for learning two-hidden-layer networks in the Gaussian space. Basically, the whole concept is to embed hard problems over the uniform distribution on the hypercube into the Gaussian space. This is not something new; it has been done before in [1] for proving lower bounds for agnostically learning halfspaces over the Gaussian distribution. In contrast, this work provides noise-free lower bounds. They provide a super-polynomial SQ lower bound and a cryptographic lower bound under the LWR assumption. Moreover, they also provide lower bounds for the query model, which is more powerful than the PAC learning model. The whole concept is the following: to embed hard problems from the hypercube into the Gaussian space, we can use a similar idea as in [1], i.e., using the sign function. The DV lift basically does that by adding 2 more hidden layers (with ReLU components). So, a hard problem with L layers can provide lower bounds for L+2 layers in the Gaussian space. The authors decrease the overhead from +2 layers to +1. To do that, they first show a way to do it using a very large network: basically, they start from an exponential construction, Eq. (11), and then decrease it to $d^m$ by making the network more sparse, using the distributional properties of the Gaussian. They introduce some error in the construction, but they show that this is indeed very small. After that, the hardness proofs follow from a reduction. [1] Adam Klivans and Pravesh Kothari. Embedding hard learning problems into gaussian space. # Pros 1. This is a good result. The authors provide lower bounds under several assumptions/models. 2. This work is very well written. I checked almost all the proofs and the claims are sound. # Cons Not really a con, just a comment. The lower bounds are for 2-hidden-layer networks, whereas there are results for 1-hidden-layer networks in the CSQ model [GGJ+20], [DKKZ20]. Basically, the trade-off is a stronger model and 2 hidden layers instead of 1. In general, I would expect stronger lower bounds for 2-hidden-layer networks. Overall, I recommend acceptance. Everything is good. <doc-sep>The authors prove statistical query lower bounds for learning polynomial-sized neural networks with two hidden layers. The bound is superpolynomial in the input dimension $d$ (or the query tolerance is negligible in $d$). No cryptographic assumptions are needed for these bounds to hold. The authors also show that, under the learning with rounding with polynomial modulus cryptographic assumption, no polynomial-time algorithm can learn neural networks with two hidden layers from Gaussian examples. The result is extended to neural networks with one hidden layer over the uniform distribution on the Boolean hypercube. The paper is well-written and the review of the related work is thorough. A great deal of effort has been put into making sure that the paper is clear and accessible for a wide audience, particularly in the technical overview. This paper is a bit outside my expertise, but the results and techniques used are interesting and could be of independent interest. I believe this paper is relevant to the learning theory community.
Limitations: yes Impact: N/A <doc-sep>The paper establishes hardness of learning neural networks with Gaussian inputs under various assumptions: - statistical query learning, two hidden layers; - cryptographic hardness of learning with rounding, two hidden layers; - label query learning, existence of a family of pseudo-random functions, any fixed number of hidden layers. The main tool for obtaining these results is a modified Danieli-Vardy transform (2021) that maps a Boolean example (x, y) to a Gaussian example (z, y') while remaining in the realizability setup. **Strengths** - A solid theoretical work that establishes new hardness results for the now ubiquitous neural networks. **Weaknesses** - The paper does not explicitly indicate the limitations of the results. For example, lines 67-68 say that "Theorem 1.1 rules out almost all known approaches for provably learning neural networks", but the most well-known approach to learn neural networks---SGD---is not ruled out by Thm 1.1. See weaknesses above.
This work provides lower bounds in the noise-free setting for learning two-hidden-layer networks in the Gaussian space. Overall, it is a fundamental result well within the scope of NeurIPS, continuing a solid line of work, and I cannot see any reason for rejection. The authors have engaged with the reviewers and have committed to making minor revisions and clarifications, which I am sure they will do.
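For readers outside the area, the embedding idea the reviews refer to can be sketched in one line; the presentation below is a simplification for illustration, not the paper's exact construction.

```latex
% Given a class \mathcal{F} over \{-1,+1\}^d that is hard to learn under the uniform
% distribution, define for each f \in \mathcal{F} the Gaussian-space function
g_f(z) \;=\; f\bigl(\mathrm{sign}(z_1), \ldots, \mathrm{sign}(z_d)\bigr),
\qquad z \sim \mathcal{N}(0, I_d).
% Since \mathrm{sign}(z) is uniform on \{-1,+1\}^d, any learner for g_f under the
% Gaussian yields a learner for f under the uniform distribution, so hardness
% transfers; the cost is that \mathrm{sign} must itself be (approximately) realized
% by ReLU layers, which is where the extra hidden layer(s) of the lift come from.
```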
### Summary This paper provides the definition of the temporally-contingent planning (TCP) problem together with some concrete planning domains modeled in terms of this framework. On top of that, the authors point out several directions toward how to provide explanations for some important questions that a user may raise in the context of TCP. ### I found this paper easy to read and follow in general. However, there are some concepts that are not clearly specified, which may result in some readers being confused. Apart from this, the remaining part is sound. Thus, from my perspective, I think this paper can be accepted. ### Detailed Comments (1) In the definition of the temporally-contingent planning problem, the authors said that the problem $P_{tc}$ is defined as $P_{t} \cup P_{c}$, but the formal definitions of $P_{t}$ and $P_{c}$ are not given. (2) In Definition 2, the set of physical actions consists of instantaneous actions and duration actions, both of which are tuples $(a_{pre}, a_{eff}, a_{dur})$. Based upon this definition, it seems that there is no difference between an instantaneous action and a duration action. Moreover, the authors said that $a_{dur}$ is a set of duration constraints, but they did not clarify what duration constraints are. I guess $a_{dur}$ is a set of variables which must hold while the action is executing? Since those definitions play an important role in this paper, I think it would be better if the authors could clarify these concepts. <doc-sep>## Summary This paper analyses the explainability of temporal and contingent planning problems for settings with noisy sensing and incomplete knowledge. The work defines what a temporally-contingent planning problem is, and what its solution can look like. It then defines the notion of explainable planning for such problems and discusses possible questions and answers for these settings. ## Feedback The paper is well-written, easy to read, and relevant to the XAIP community. The family of XAIP-TC Problems extends the current notions of explainable planning to domains with numeric, temporal, and contingent features. The authors comprehensively discuss the type of questions and answers that might be needed to facilitate explainable planning for such domains. In my opinion, the discussion of complexity in section 5 could be extended, as the current discussion seems informal. More importantly, what will be the complexity of the reasoning generated by the AI planner? Also, planning with $K+$ propositions seems computationally hard. The work could include a formal discussion of this. Minor edits: 1. In many places I think \citet{} would be a better choice than \cite{}. E.g., section 1, para3, line1; section 1, para4, line1; etc. 2. Def. 1, last sentence. It is not clear at this point what p in TIL is. It becomes clear later, but this def. is incomplete without it. 3. Def. 1 does not talk about $\mathcal{P}_c$ or $\mathcal{P}_t$ as mentioned in the paragraph just before Def. 1. 4. Use \emph{eff} in math mode instead of $eff$. E.g., Def. 2. 5. “AI solver” is used on page 3 (Domain 2’s description) directly, with no reference to it before this point. 6. What is $\delta_d$ in Def. 4? Seems to be a typo. 7. The paper consistently uses the phrase “TCP problem”. I think it should either be “TC problem” or just “TCP”. 8. Text in Fig. 1 is difficult to read. 9. Page 4, left side, last para: “The second example is …. is a fixed set”, and “ The second question’s … in each branch”.
These sentences seem to be incomplete/incorrect.
We thank the authors for their contribution. It’ll be a great addition to the workshop program. Please refer to the feedback provided by the reviewers when creating your camera-ready version. Particularly, to improve the clarity of the paper, consider formally defining important terms/notions used throughout the paper, such as the problem $P_{tc}$, which is the main consideration in this paper. It will also be interesting to consider expanding the discussion in Section 5, as suggested by reviewer 2, to include some analysis on the practical aspect of this work (i.e., what is the overall complexity involved in finding solutions for some of the queries considered in this work). We are looking forward to an interesting and fruitful discussion at the workshop.
The paper studies the use of random weights together with learnable masks. The learnable masks are learned with a straight-through operator. The authors argue that such a training approach for neural networks would reduce model storage requirements and has applications to network compression. The model is validated on the CIFAR-10 and CIFAR-100 datasets, showing that the proposed layers underperform dense layers by (approximately) 1 to 10 accuracy points, depending on the model architecture and number of parameters. ------- Post rebuttal: Based on the authors' responses I updated the overall score from 3 to 6 and increased soundness and presentation both by 1. Overall, the discussed ideas are interesting – using a random layer with learnable masks to achieve competitive performance. However, the validation and the presentation of the paper require improvements. For details see comments below. **Title** - The title might be a bit too generic (not very informative) and a bit misleading (for more complex datasets one would probably need more layer architectures). How about the following title: On learning masking operators for network random weights. **Abstract** - To strengthen the abstract, please add quantification of the improvements in terms of space complexity. Also, based on the experimental section the improvements come at the expense of model accuracy; however, this trade-off is not captured in the abstract. **Introduction** - The introduction section is in general well written and easy to follow. However, it could benefit from some re-writing, shortening, and refocusing. - The introduction does not discuss the obtained results, making it hard to assess the significance of the proposed approach. It is also unclear what the ML community gains from the results of this study. Please add such a discussion to the introduction section. **Methodology** - This section would benefit the most from re-writing and restructuring. In its current form it is difficult to follow and does not allow the reader to fully appreciate the presented ideas. - Figure 2 should be better discussed and better formatted. Please extend the caption to clarify the figure. In general, all figures in the paper should be self-explanatory, making it possible to understand the figures just by reading the captions. In its current form it is impossible to understand the drawings. - Based on the description, the process of updating the prototype weights into target weights is unclear. Could the authors clarify how different networks are updated? - Eq 4 and Eq 7 differ only in the dimensionality of w. Why does changing the dimensionality lead to a new paradigm of random weight padding? Moreover, please boldface vectors in Eq 7 to differentiate them further from scalars in Eq. 4. - The section lacks motivations behind the different choices. **Results** - The introduced layer is not compared to previously published models. Would it be possible to compare the model to Supermasks and Popup? Adding comparisons to prior art would make the validation stronger. - The CIFAR datasets are small scale. Would the observations generalize to larger-scale datasets? Adding another dataset would make the observations more compelling. - The results are missing stds, making it hard to assess the significance of the results. - In general, the proposed layers are underperforming w.r.t. dense layers. That would be expected. However, the results section lacks discussion and positioning of the reported results, e.g., why are the reported results interesting?
What do we learn as a community from the results? What is the impact of the reported results? Adding a more in-depth discussion would make the paper stronger.
- The paper does not discuss the limitations of the discussed ideas.
- The paper does not mention societal impacts. <doc-sep>This paper aims to handle the difficulty of storing/transmitting models caused by the increasing model size of recent large-scale neural networks. Inspired by recent works (e.g., LTH, Popup) on random networks, the paper starts by answering a scientific question: what is the potential of a random network? Specifically, the authors propose a series of strategies to study the random network with different masks to map different features. Through this exploration, a new model compression paradigm is proposed in which only one layer of random weights and a set of masks are stored to represent a model. Experiments are conducted with different CNN/transformer architectures. Extensive results validate the rationality of the motivation and show the feasibility of the new compression paradigm.
Strengths:
1) This work tries to reduce the model storage size, which is a clear and practical motivation. Compared with typical model-size compression methods that remove partial parameters, it is a novel way to represent a model by using different masks on fixed random weights.
2) This work is driven by studying the capacity of random weights, which is an interesting yet under-explored point of study. It is novel to use "one-layer" weights with different masks to learn a model.
3) The experiments are extensive, using different model architectures. Firstly, they answer the question about the potential of random weights using a series of proposed strategies to construct a network from random weights. Secondly, they show the feasibility of a new compression paradigm compared with a typical model compression method.
Weakness:
1) It is encouraged to revise the draft title to a more appropriate one. After reading the draft, I think the current title doesn't convey the key factor of this paper. Iteratively selecting different masks on a set of fixed random weights for different feature mappings should be the main point; therefore, the usage of "one-layer" in the title is inaccurate. On the other hand, "all you need" is too vague a description. It needs to be concretized to eliminate confusion.
2) Some related works are supplemented in the appendix; I suggest moving them into the main draft and providing the necessary discussions about them. The discussion should include the differences between the submitted work and these existing works, since they look highly related to this work, even if they are in a different setting.
3) Technically, the proposed random vector padding (RP) repeats the given set of random weights in the same order. If the random set were randomly shuffled before padding to construct the model, could it improve the capacity?
4) Minors: (1) In Alg. 3, it seems the output is written in the wrong way: it matches the output of the MP strategy in Alg. 2 and is not consistent with the RP strategy in Alg. 3. (2) Around Eq. 5 and Eq. 6, the explanation of T^p is missing. It should be further clarified and made consistent with Alg. 1. (3) In Eq. 9, d_l should be the dimension of w_l instead of the number of vectors v_pro. Please make this clear to eliminate confusion.
The authors have addressed the limitations and potential negative societal impact. <doc-sep>This paper proposes a new paradigm for neural network compression.
The authors randomly initialize a set of weights. The actual parameters of each layer are represented as the initialized weights with binary masks. The weights are shared by multiple layers, while the masks are different for each layer. The weights are fixed, while the masks are learnable. In this way, the total number of bytes is significantly reduced. Experiments show that the proposed method achieves better compression than the baselines.
Strengths:
1. This paper is well organized, and the core method is clearly presented.
2. This paper represents each layer as shared weights with different masks. The idea for model compression is interesting and novel.
3. Experiments show that the proposed method achieves good compression for image classification models.
Weaknesses:
1. The title of this paper is unsuitable and the authors should change it. First, people will not associate the title with model compression. Second, the phrase "one layer" in this paper is misleading. Although some parts are shared across all layers, there are differences between layers, so we cannot call them "one layer". In my opinion, the masks are also parameters of the model.
2. It would be better to compare the compression performance with stronger baselines or on bigger datasets such as ImageNet.
3. In general, the proposed method achieves compression by sharing some parts of the parameters (while adjusting the others). Several previous works have explored this direction, such as [1] and [2]. The authors should discuss them in related work.
[1] Residual connections encourage iterative inference. International Conference on Learning Representations, 2017
[2] Recurrent convolutions: A model compression point of view. NIPS Workshops: Compact Deep Neural Network Representation with Industrial Applications, 2018
The authors discussed the limitations and potential negative societal impact of their work in the supplementary material. <doc-sep>This paper proposes a new way of representing a neural network in a compressed form, coined "One Layer is All You need". The idea is to keep a single fixed and randomly initialized weight vector as a prototype for each layer of the network, whereas each layer is saved as a learned mask determining which weights of the prototype are used. Since saving bit masks is more memory-efficient than saving floating-point weights, the network can be stored efficiently. Experiments with ResNet32, ResNet56, ConvMixer and ViT on CIFAR10 and CIFAR100 show that this method achieves improved results in terms of accuracy compared to sparse network training baselines while maintaining larger compression ratios.
Strengths:
- {S1} The problem of storing neural networks in an efficient manner is significant and the proposed idea improves in this direction.
- {S2} The trade-off between network compression and accuracy is improved in comparison to sparse network training baselines.
- {S3} The writing is well-structured and easy to follow.
Weaknesses:
- {W1} Experiments are only performed on low-resolution datasets (CIFAR10, CIFAR100, TinyImagenet).
- {W2} It is not clear if experimental settings are repeated with different seeds. The checklist refers to the supplementary material, but I cannot find any results for multiple seeds there either. I believe all experiments should be conducted for multiple seeds.
- {W3} No code is included for reproducibility.
- {W4} The writing should be improved in terms of typos and grammar (see below for some instances).
Typos:
- The sentence in lines 20-21 seems incomplete.
- Lines 126/150: rewrited -> rewritten
- Line 259: compreesion -> compression
- Line 263: The sentence is confusing, because you train two networks with different strategies and not one network with both.
- Line 307: foundamental -> fundamental
--------------------------------------------------------------------------
{W2} is addressed by the authors during the discussion. Furthermore, they assured me they will resolve {W3} and {W4}. I updated my score accordingly.
The authors discussed limitations in the supplementary material. The fact that this method cannot be used to compress already pretrained models but requires training from scratch is an important limitation and should be mentioned in the main paper.
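For concreteness, the prototype-plus-masks scheme summarized in these reviews can be pictured with the following minimal sketch (the tiling, the straight-through relaxation, and all names and shapes are this reviewer's assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedRandomLinear(nn.Module):
    # One fixed random prototype vector is tiled ("padded") to the layer's weight
    # shape; only a binary mask over it is learned, via a straight-through estimator.
    def __init__(self, in_features, out_features, prototype):
        super().__init__()
        n = in_features * out_features
        reps = -(-n // prototype.numel())                      # ceil division
        tiled = prototype.repeat(reps)[:n]                     # repeat-in-order padding
        self.register_buffer("weight", tiled.view(out_features, in_features))  # frozen
        self.scores = nn.Parameter(0.01 * torch.randn(out_features, in_features))

    def forward(self, x):
        hard = (self.scores > 0).float()                       # 0/1 mask in the forward pass
        mask = hard + self.scores - self.scores.detach()       # gradients flow to the scores
        return F.linear(x, self.weight * mask)

prototype = torch.randn(256)      # only this vector plus one bit per weight would need storing
layer = MaskedRandomLinear(784, 128, prototype)
out = layer(torch.randn(4, 784))
```

Under this reading, only the shared prototype vector and one bit per weight position would need to be stored per layer, which is where the claimed storage savings would come from.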
The paper studies the use of random weights together with learnable masks. The authors demonstrate that such a training approach for neural networks can reduce model storage requirements and has applications to network compression. The reviewers appreciated the novelty of the idea and the extensive experiments on various architectures. Adding experiments that go beyond small-scale datasets would further strengthen the quality of the paper and its potential impact.
This paper introduces an extendable framework to train different VAE-based generative models. A use case is presented benchmarking 19 different autoencoders on 5 different tasks (image reconstruction, image generation, classification, clustering, interpolation).
- **{S1}** The framework enables researchers to train (custom) VAE-based models in a few lines of code, and thus reduces friction for further research in this area.
- **{S2}** The framework seems to be well-documented and is readily available.
- **{S3}** The experimental section in the paper is interesting, especially the part about varying sizes of latent spaces. However, see {W1}.
- **{W1}** My main concern is that the experimental section is evaluated with IS/FID, which use Inception as the feature extractor. Several works address problems with these metrics, e.g. see [1-5]. If time permits I would suggest running more metrics (e.g. [4] or [5]) and additionally reporting those in the paper. Otherwise, at least mentioning these concerns, suggesting additional alternatives, and stating that benchmarking on IS/FID alone is insufficient should be mandatory.
- **{W2}** The linked framework does not seem to contain the code of the experimental section in the paper, i.e. benchmarking different VAEs and implementations of metrics like FID/IS. Please clarify if this is the case or if I was simply not able to find it.
- **{W3}** In relation to {W2}: The submission is neither a dataset, nor a benchmark(ing tool), but a framework to train (custom) VAE-based generative models in a few lines of code. I'm unsure whether this fits the scope of the dataset and benchmark track.
* [1] A Note on the Inception Score, https://arxiv.org/abs/1801.01973
* [2] Effectively Unbiased FID and Inception Score and where to find them, https://arxiv.org/abs/1911.07023
* [3] Internalized Biases in Fréchet Inception Distance, https://openreview.net/forum?id=mLG96UpmbYz
* [4] On Self-Supervised Image Representations for GAN Evaluation, https://openreview.net/forum?id=NeRdBeTionN
* [5] The Role of ImageNet Classes in Fréchet Inception Distance, https://arxiv.org/abs/2203.06026 <doc-sep>This paper provides a Python toolbox called Pythae to train and evaluate various AE models under a unified framework. The library has attracted broad interest among users; in particular, there are already over 800 stars on the open-source repository. The work also extensively compares various AE models from several aspects and provides insightful discussions.
1. This paper is overall well-written and the documentation in the open-source library is very clear.
2. Besides a clear introduction to the code structure and project, the authors also extensively compare various AE variants from different aspects, including image reconstruction and generation, latent vector classification and clustering, and image interpolation.
3. The experimental results also provide insight into how different components used in different AE models affect the five aspects studied.
1. Experiments are performed on relatively small-scale datasets like MNIST and CIFAR. Experimenting on larger-scale datasets like ImageNet would better illustrate the efficacy of the proposed toolbox.
2. Many recent unimodal/multimodal pretrained models (e.g., BEiT, DALL·E) for text-conditioned image/video generation are also based on VQVAE. It would be interesting to also include some results on the VAEs used in these pretrained models.
[1] BEiT: BERT Pre-Training of Image Transformers, ICLR, 2022
[2] DALL·E: Creating Images from Text, 2021.
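As context for the usability points above (training custom models in a few lines of code), a typical training call looks roughly like the sketch below; the class and argument names are recalled from the library's documentation and should be treated as assumptions rather than a verified API:

```python
import numpy as np
from pythae.models import VAE, VAEConfig
from pythae.trainers import BaseTrainerConfig
from pythae.pipelines import TrainingPipeline

# Stand-in data in place of MNIST, just to show the call pattern.
train_data = np.random.rand(1000, 1, 28, 28).astype("float32")
eval_data = np.random.rand(200, 1, 28, 28).astype("float32")

model = VAE(model_config=VAEConfig(input_dim=(1, 28, 28), latent_dim=16))
trainer_config = BaseTrainerConfig(num_epochs=10, learning_rate=1e-3)

pipeline = TrainingPipeline(model=model, training_config=trainer_config)
pipeline(train_data=train_data, eval_data=eval_data)  # trains the model and saves checkpoints
```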
<doc-sep>The paper proposes Pythae, a Python library that provides implementations for various types of popular autoencoder (AE) architectures and modeling choices. Besides an introductory description of the library and a summary of each individually considered AE (also in the supplementary pdf), 19 different AEs are benchmarked on various tasks ranging from measuring reconstruction losses to proxies to assess generation quality.
POST DISCUSSION UPDATE: I believe many of my concerns have been addressed, improvements have been made and more are outlined for a camera-ready version. As detailed in the response below I encourage the authors to continue tuning the presentation and am raising my rating to recommend acceptance of the paper. (Previously 5, now 7)
There are various strengths to the proposed Pythae environment, primarily perhaps that it comes with a promise of providing a framework to easily experiment with and compare various autoencoder-based approaches.
* Having the Pythae tool holds a promise towards more reproducible research (although simultaneously see the first weakness below).
* Although the considered approaches are all autoencoders, the breadth of considered methods is impressive. There are of course a couple of examples that may still be added, but as the authors say the library should be subject to continuous development and as such will likely only grow to be more exhaustive over time.
* The supplementary material overall is very helpful. Similarly, the usage examples and readme instructions in the provided GitHub repo are pretty comprehensive and are likely to facilitate adoption.
* Appendix section D.4 is much appreciated, in particular the effort to provide a short summary of the main mathematical advance in each respective model.
* As mentioned in the first point on strengths, the work holds a promise towards reproducing existing works. At the same time, when looking at the provided experimentation in the paper, some of the choices are a bit puzzling in this regard. For instance, there does not seem to be any mention or measurement of e.g. Kullback-Leibler (KL) divergences in any of the variational approaches. Similarly, reconstruction loss seems to be measured as a mean squared error, rather than what is typically reported in almost any generative modeling paper. It's not clear to me why the choice has been made here to deviate from the by now fairly standardized convention of reporting (negative) log likelihoods. Naturally, this does not hinder the relative comparison between methods, but it does make it harder to assess the correctness with respect to the original papers and hinders their direct comparison/reproduction a bit. This point feels particularly important as the scale of values (MSE lying around 0.01 and log likelihoods typically being multiple orders of magnitude larger) can significantly affect the choice of hyper-parameters etc.
* The discussion of "enhancing the model" in section 2.2 is rather shallow. I understand the space constraint here, but it feels like crucial and heavily investigated arguments are missing. For example, any mention of desired/undesired prior-posterior mismatches or any discussion of lossy compression is omitted here in favor of a too simplified narrative. This is a bit problematic because the "disentanglement" picture is rather naive and it is unclear whether it is an agreed-upon perspective.
There have been several well-cited published papers, to name a few: "Disentangling disentanglement in variational autoencoders", "Resampled priors for variational autoencoders", "Rethinking lossy compression: the rate-distortion-perception tradeoff", that challenge the narrative that autoencoders and their variational versions are all about finding a better form for the posterior, or that a weighting of the KL term (beta-VAE) somehow induces disentanglement (in whichever way disentanglement is actually defined).
* Following up on the above two arguments, I am a bit worried that the experimental evaluation may be somewhat misleading to the reader. In particular, the fact that the tasks are separated in the way they are does not seem very intuitive to me. For instance, what is the purpose of running an experiment measuring MSE reconstruction in a comparison between an AE, a VAE, and a beta-VAE and then concluding that the beta-VAE is a better model if one simply turns down the divergence term? Similarly, task 3 does seem to be somewhat oddly formulated and catered heavily towards the AE again. If we are interested in classification, why first do unsupervised pre-training and then train a linear classifier on top? Naturally the space of auto-encoders is much less constrained than that of any VAE, but what we would actually be interested in seems to be learning the joint p(x,y), rather than freezing the architecture (like in the semi-supervised variants of Kingma et al.). As a third point, there should be a discussion around FID and its brittleness with respect to how useful it is as a measure (see e.g. the paper "Pros and cons of GAN evaluation measures" as one example). Why would we be interested in measuring an Inception distance, which is ImageNet-based, if we train a model on gray-scale or binarized handwritten digits? Finally, perhaps the individual investigations are not troublesome in the sense that they were conducted, but the way that they are described, in a seemingly constant attempt to select "a clear winner per category", is rather misleading to the reader. Evaluation of generative models has been an ongoing debate for several years now and it is clear that it is complex and challenging. It would be great to have some of this flavor depicted in the experimentation and discussion, rather than going for an overly simplified take-away.
* See the related work section below, on the library not being the first of its kind and lacking a comparison. <doc-sep>The authors propose Pythae, an open-source Python library that implements 21 state-of-the-art generative autoencoders (GAEs). In addition, the authors perform benchmarking of 19 GAEs on 5 downstream tasks using the MNIST, CIFAR10 and CELEBA datasets. Throughout these tasks, the authors deduce several conclusions on the benefits of different GAEs over each other for different tasks. The main strength of this work comes from the proposed Python library, which should allow for a unified hub to compare different GAEs and find the best-suited one for one's downstream task. The authors tackle a crucial topic and compare the performance of many GAEs under the same settings. This is often neglected, and hence it is not trivial to compare the empirical performance of different methodologies in the literature. Through comparative experiments, the authors make valuable observations for the different tasks, especially for the image generation task.
In short, there are two major weaknesses: (i) it is not clarified whether the authors' implementations can replicate the performance of the original works, and (ii) benchmarks are conducted on simple, low-resolution datasets, rendering the conclusions of the manuscript unconvincing. It has been seen in the computer vision community that findings in simple image domains (content and resolution) often do not translate well to more complex vision domains with higher-resolution data. The authors conduct more than half of the experiments exclusively on the MNIST and CIFAR10 datasets and derive conclusions based on their observations. Accordingly, these findings can be misleading and outright false for more complex datasets. In Sec. 4.2.1, Task 2: Image generation, the authors themselves also emphasize that using more advanced density estimation models as opposed to a GMM was perhaps not useful because of the simplicity of the dataset they experimented with. It would be vital to see if the same results can be concluded after experimenting on ImageNet or other higher-resolution natural datasets. In Sec. 4.2.1, it is not clear how the latent dimensions are set for each dataset. For example, especially for CIFAR10, a latent dimension of 256 for input images of size 32x32 seems unnecessarily high. In Sec. 4.2.2, the authors find that often 16 or 512 latent dimensions were optimal depending on the task and/or GAEs. Unfortunately, these are boundary values of the search space the authors have used. Therefore, it is not right to conclude that 16 or 512 is optimal unless they observe that the findings do not change after trying <16 and >512 dimensions.
Accept (Poster). The reviews recognized the importance of the contributions made by the paper. The rebuttal by the authors addressed a number of concerns; it would be helpful if the authors could address the pending concerns in the camera-ready version.
SUMMARY: This paper addresses the problem of vertex classification using a new Graph Convolutional Neural Network (GCNN) architecture. The linear operator within each of the layers of the GCNN is formed by a polynomial graph filter (i.e., a matrix polynomial of either the adjacency or the Laplacian matrix). Rather than working in the frequency domain, the paper focuses on learning the polynomial coefficients of the filter in the vertex domain. The key novelty is the consideration of a stacked architecture in which the polynomial filter is formed by the successive application (i.e., matrix multiplication) of filters of order one. Numerical experiments with real datasets showcase the merits, including superior classification performance, of the proposed architecture.
STRONG POINTS: The paper is timely and fits nicely within the scope of the conference. The numerical experiments are convincing, offering insights, and demonstrating some of the advantages of the proposed architecture. The writing is clear, making the paper easy to follow.
WEAK POINTS: Except for the numerical experiments, I find that the contribution is quite limited. The postulation of GCNN architectures based on polynomial graph filters, where the focus is on learning the polynomial coefficients, has been studied thoroughly in the literature. In general, the paper does a good job listing relevant works in that area, although some are missing (e.g., Gama - Ribeiro). Some of the existing works look at ARMA structures and recursive order-one filter implementations. I acknowledge that the architecture considered in those papers may not be exactly the same as the one proposed by the authors in this paper. I also appreciate that the application at hand (vertex classification) was not the goal of many of those papers. However, I still feel that the contribution falls short, especially for a top conference such as ICLR. In any case, I am open to changing my mind if the authors are able to strengthen their theoretical claims or address my concerns in their rebuttal. I believe that the title should be changed. GCNNs are not mentioned. The current title places the focus on Stacked Graph Filters. My first concern is that, within the linear paradigm (i.e., as polynomials of the adjacency/Laplacian matrix), this type of architecture has already been investigated. More importantly, the paper focuses on NN architectures, so I think it is reasonable to have that in the title.
OVERALL RECOMMENDATION: Marginal reject. The paper is topical, timely, and nicely written. It addresses a problem of interest and does so with contemporary machine learning tools. The results on real-world datasets are convincing. However, the contribution and novelty are limited, falling short of the average contribution at ICLR.
ADDITIONAL RECOMMENDATIONS: Being able to obtain additional theoretical results would make the contribution more solid. Further elaborating on the robustness of the architecture is another change that would strengthen the manuscript. <doc-sep>Adaptive stacked graph filter
The paper proposes a relatively simple formulation for a graph convolutional filter that has the advantage of providing useful insights on the characteristics of the considered datasets. Many points of the paper are however not convincing in their present form, mainly regarding the novelty of the proposed formulation. The paper proposes a graph convolution operator that is inspired by the well-known approximation of a graph filter using polynomials of the graph Laplacian.
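For concreteness, the construction described above — a product of learnable order-one factors rather than a single fixed K-th order polynomial — amounts to something like the sketch below (the scalar per-hop parameters, their initialization, and the placement of the feature transform are this reviewer's assumptions, not the authors' exact model):

```python
import torch
import torch.nn as nn

class StackedGraphFilter(nn.Module):
    # K learnable order-one factors applied in sequence, i.e. a K-th order
    # polynomial of the (normalized) adjacency built as a product of
    # (alpha_k * I + beta_k * A_hat), followed by a feature transform.
    def __init__(self, in_dim, out_dim, K):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((K,), 0.5))
        self.beta = nn.Parameter(torch.full((K,), 0.5))
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):                 # x: (n, in_dim), a_hat: (n, n)
        h = x
        for k in range(self.alpha.numel()):
            h = self.alpha[k] * h + self.beta[k] * (a_hat @ h)
        return self.lin(h)

n = 5
a_hat = torch.eye(n)                             # stand-in for a normalized adjacency matrix
model = StackedGraphFilter(in_dim=8, out_dim=3, K=16)
logits = model(torch.randn(n, 8), a_hat)
```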
Pros:
- The paper proposes a simple filter formulation that allows one to study the dependency on the neighborhood radius for different datasets.
- The visualisation of the filters is interesting.
- The reported experimental results are positive, even though in many cases the improvement does not seem significant.
Cons:
- The proposed model is very similar to GCNII: the graph convolution of Kipf and Welling with a single scalar parameter instead of a parameter matrix, plus skip connections. The main difference with GCNII is the lack of the identity mapping. In fact, the equation for H^l on page 4 is very similar to Eq. 5 in https://arxiv.org/pdf/2007.02133.pdf. The authors should discuss in depth the differences between their proposal and other works in the literature, clarifying their novel contribution.
Comments about specific sections follow.
Experimental section:
- On page 6, the authors state that they fix the $\theta$ hyper-parameter of GCNII to 0.5, even though the recommended values are around 1.5. Can you justify this choice? Also, since you run the experiments on GCNII, it would be interesting to see its performance on the bipartite dataset with $\theta = 1.5$.
- In Table 3, the results from the literature do not report the variance. In general, it seems like the results of the proposed method and the baselines are pretty close, and in many cases within the variance range.
Appendix A: the horizontal stacking variant is not explained in detail. From the figure it looks like several stacked layers with an aggregation that sums the weighted representations computed at each layer. I don't see why this should be "horizontal". Probably writing down the equations of this model would help.
B.2. While the authors state that for each dataset and for each run they select the hyper-parameters using the validation set, later in the same section they state that the results in the main paper refer to the hyper-parameters in bold. I don't understand how the hyper-parameter selection procedure is applied.
Minor:
Table 3, Chameleon dataset: missing bold on SGC. Texas: MLP is in bold while it shouldn't be.
Page 6: "Note that we also the extact" -> we use the
-----REBUTTAL
I acknowledge having checked the authors' rebuttal and the revised version of the manuscript.<doc-sep>This paper proposes to stack graph filters with learnable polynomial parameters to construct a new graph neural network model. Generally, this paper is well organized and easy to read. Here are my concerns.
1. Essentially, this paper argues that the approximation by Chebyshev polynomials in GCN can only capture the low-frequency features in the spectral domain, and proposes a more general approximation scheme by stacking the graph filter in the spatial domain. However, the low-frequency property of GCN is highly related to the localized first-order approximation of graph convolutions. Without this first-order approximation, a GCN model can capture the high-frequency information in graphs, e.g., ChebyNet [2] with a large enough order K. It would be better to add more discussions/comparisons with this kind of GCN. Moreover, my core concern is why the proposed polynomial approximation (in Equation 7) is better than the previous Chebyshev approximation, from both theoretical and practical perspectives. In graph signal processing, using a polynomial series to approximate the graph filter has been well studied in the literature. As pointed out by [1], the Chebyshev polynomial is a good approximator for graph filters.
It would be better to add more justification (e.g., numerical analysis) for the proposed approximation scheme.
2. Another concern is the experiments.
Dataset splitting: It seems that this paper adopts a new splitting plan (stratified 0.6/0.2/0.2 splits) for all datasets. Meanwhile, the paper also reports the best results reported in the literature. However, I think it's improper to put them in the same table since we can't make a fair comparison under different data splittings. Moreover, I would like to see the results of SGF on the public splits of these datasets.
Hyperparameters: In Appendix B.4, the authors claim that they follow the hyperparameter recommendations in the original papers of the baselines. However, it seems that some of the given hyperparameters are not the best ones. For example, for Cora, $\alpha$ of GCNII is set to 0.2, while in Appendix B.4, $\alpha=0.5$, which is inconsistent with the original paper [3]. On the other hand, in Appendix B.2, the authors adopt a random strategy to search the hyperparameters of SGF. Since the authors re-run all the experiments of the baselines on the new splits, it would be better to conduct the same hyper-parameter search process for each baseline to ensure a fair comparison.
Filter parameter visualization: From the model construction perspective, the only difference between SGF and GCNII/APPNP is the trainable filter parameters. Therefore, I'm curious about the values of $\alpha$ and $\beta$ after training. Could you visualize the values of the two parameters in each layer of SGF?
Overall, I think this paper is marginally below the acceptance threshold.
[1] David K. Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.
[2] Defferrard, Michaël, Xavier Bresson, and Pierre Vandergheynst. "Convolutional neural networks on graphs with fast localized spectral filtering." Advances in neural information processing systems. 2016.
[3] Chen, M., Wei, Z., Huang, Z., Ding, B., & Li, Y. (2020). Simple and deep graph convolutional networks. arXiv preprint arXiv:2007.02133. <doc-sep>Summary: The authors propose to learn the polynomial graph filter in their model. It can be viewed as adaptively learning the propagation part of APPNP, followed by a linear transformation (in features). They show the proposed model can perform well on both homophilic and heterophilic graphs.
Pros:
1. The idea of adaptively learning the polynomial filter seems correct and reasonable.
2. The results on filter visualization and structural noise are interesting.
Cons:
1. The proposed methodology is not novel. A very similar idea has been proposed previously. (See detailed comments.)
2. Problems with over-smoothing.
3. The results in the experiment section (Tables 2 and 3) are questionable.
Detailed comments: While the proposed idea of adaptively learning the polynomial graph filter is interesting, it has been proposed previously not only in the GNN literature [2] but also in PageRank-based methods [3]. Both of them proposed the idea of adaptively learning the polynomial graph filter, or equivalently the generalized PageRank weights. Hence, I do not think the current paper is completely novel. Nevertheless, the proposed methodology seems to be the correct answer for GNNs to adapt to both homophilic and heterophilic graphs. One problem with the current proposed method is why it can avoid over-smoothing when stacking many layers.
The authors use a fixed initialization $\alpha = 0.5$, which is the same as APPNP, so at least at the very beginning it won't suffer from over-smoothing. However, it is unclear how the coefficients behave during and after training. Also, it is not clear how $\beta$ is initialized in the model. Furthermore, if the proposed model can indeed adaptively learn a good polynomial graph filter, why doesn't random initialization work? Does that mean the implicit bias of the specific initialization proposed in the paper is necessary? If that is the case, then I do not see why the claim of "adaptive learning" is correct, since the model is actually sensitive to the initialization.
Besides the methodology and novelty, I also find the experiment section questionable. Firstly, since the main theme of the paper is learning the polynomial filter, the authors should at least compare their method with ChebNet (GCN-Cheby) [5], which also uses a polynomial filter. Note that both [4] and [2] show that ChebNet can adapt better to heterophilic graphs compared to GCN and GAT. On the other hand, according to Appendix B.4, the authors use $K=2$ (propagation steps) for APPNP. This is *NOT* the suggested hyperparameter reported in [1] ($K=10$). Note that the authors of [1] even show that choosing a larger $K\geq 10$ can slightly improve the performance on Cora, Citeseer and PubMed. In contrast, SGF uses $K=16$, which is not a fair comparison to APPNP. There should be an experiment that compares APPNP with SGF under the same $K$. Finally, the authors claim the performance numbers of most baseline methods are taken from the literature. However, this is also problematic to me. Note that in the original GCN and GAT papers, the data split is much sparser than the $0.6/0.2/0.2$ split proposed by the authors. Also, the Geom-GCN paper does test its model on Chameleon with the $0.6/0.2/0.2$ split. Why is it stated as not available? Even if we assume all the problems above can be well explained, the improvement of the proposed model does not seem statistically significant. For example, on Wisconsin, Cornell and Texas, although SGF has the highest accuracy on average, the standard deviation is very large. MLP is within one standard deviation. Please report confidence intervals to show that the gain of SGF is indeed statistically significant. On the other hand, SGF is worse than not only SGC but also GCNII by a large margin on Chameleon. If SGF can indeed learn a near-optimal polynomial filter, then why is this the case? Lastly, the original Geom-GCN paper also includes the Actor dataset. I think it would be great if the authors could include this result at least in the Appendix. Besides these weaknesses, I still find the paper well written. Also, the experiments on filter visualization and structural noise are quite interesting. I believe the paper can be greatly improved if all the concerns above are addressed.
Minor comments: On page 2, the authors state that the normalized adjacency matrix with added self-loops is $\tilde{A} = I - D^{-1/2} A D^{-1/2} + c$, where $c$ is some diagonal matrix. This is incorrect. Note that when we add self-loops, the degree matrix $D$ has to change accordingly (the standard form is $\tilde{A} = \tilde{D}^{-1/2}(A+I)\tilde{D}^{-1/2}$ with $\tilde{D} = D + I$); please see the correct expression in [1] for example. On page 2, the Rayleigh quotient $r(\mathcal{L},x)$ is defined with two input arguments, but later the authors drop $\mathcal{L}$. While it is clear from the context, the notation is not rigorous.
In the introduction on page 1, the authors mention that the model does not need hyper-parameter tuning. However, in the contribution section on the same page, the authors mention that they use one hyper-parameter setting. According to their experiment section, I think what they mean is the former. It would be great to clarify the ambiguity here.
Reference:
[1] "Predict then Propagate: Graph Neural Networks meet Personalized PageRank," Klicpera et al., ICLR 2018.
[2] "Adaptive Universal Generalized PageRank Graph Neural Network," Chien et al., arXiv:2006.07988.
[3] "Adaptive diffusions for scalable learning over graphs," Berberidis et al., In Mining and Learning with Graphs Workshop @ ACM KDD 2018, pp. 1, 8 2018.
[4] "Generalizing Graph Neural Networks Beyond Homophily," Zhu et al., NeurIPS 2020. (arXiv:2006.11468)
[5] "Convolutional neural networks on graphs with fast localized spectral filtering," Defferrard et al., NeurIPS 2016.
The topic covered by the paper is timely, and the way the authors have addressed the problem seems correct. The provided empirical evidence seems sufficient to support the main claim of the paper. The presentation is well structured and clear. Notwithstanding the above merits, the proposed approach seems to confirm other similar proposals presented in the literature, so the contribution of the paper seems limited. Although the presentation is good, it does not sufficiently highlight the differences w.r.t. those proposals and the basic approximation result given by Chebyshev polynomials. In particular, a better theoretical characterisation w.r.t. the approximation capabilities of Chebyshev polynomials (with no truncation) would have helped to better understand the merits of the proposed approach. Finally, some of the experimental results do not seem to have a statistically significant difference w.r.t. the baselines, so it would have helped to include the result of a statistical test.
The authors propose a method that allows training of UV models without sharing any user (exemplar or class) embeddings with the server or other users. Models are trained using gradient averaging on the server, so any leakage through that channel is not addressed in this work. The paper shows experimental results on speaker identification, face and handwriting verification tasks. The authors argue that this is the first work that considers secure training in a federated setup, with neither raw inputs nor exemplar or class embeddings being shared with the server or other users.
#### Pros
* The paper is clearly written and the derivations are sound (for the most part, see questions below).
* The idea appears to be novel and a significant delta compared to the SotA, both in terms of security and in the novelty of a secure embedding learning protocol in a federated setup where only (one) positive classes are available for training.
* The experimental results are promising, albeit they can't compete with existing, less secure methods.
#### Cons
- Clarity of experiments
- Especially for the face verification task the code length seems to play a major role. Any discussion giving an understanding of this would be appreciated. Specifically, how and why does $d_{min}$ affect the accuracy? The bottom of page 5 mentions that increasing the code words and presumably $d_{min}$ increases the performance, but no reasoning is provided.
- Additional insight into how the baselines (softmax, FedAws) were trained and what the embedding sizes are would be helpful. Is the embedding size ~64 in all cases?
#### Questions & Comments
- The assumption of $||z|| = \sqrt{c}$ should be put into context. What are the practical implications of this assumption? Is it merely there for the math to work out?
- The theorems show that $l_{neg}$ is redundant when $l_{pos}=0$; however, it is not clear to me that minimizing $l_{pos}$ also corresponds to minimizing $l_{neg}$. In practice, $l_{pos}$ will likely never reach $0$, and a negative loss term could have a significant contribution to the loss surface.
- Page 6 mentions that increasing $l_r$ reduces the minimum distance of the code for a given code length. Why is this the case? Is it because $r_u$ is sampled by the clients and no guarantees can be made? A more detailed discussion would be helpful.
This work proposes a new idea that allows training embeddings for verification with only positive classes in a federated setting, while ensuring security. Some areas could be clarified in the paper, especially why it is sufficient to prove the redundancy of the negative loss term only at the global minimum where $l_{pos}=0$. Assuming the authors can provide a satisfying explanation, I recommend accepting this work.<doc-sep>The paper leverages federated learning to train user verification models. The authors claim that their new federated learning approach addresses the security and privacy issues of previous methods. In particular, for privacy, the users do not need to send their class embedding vectors to the server or other users. For security, the paper claims that the proposed method is secure against poisoning attacks and evasion attacks.
Strengths
I think the major strength of the paper is the design of a loss function and a way of modeling embedding vectors for users such that the embedding models can be learnt without sharing the embedding vectors with the server or other users.
Weaknesses
The paper is weak on its security and privacy claims.
1.
For privacy, can you quantify the privacy leakage of sharing embedding vectors with the server? Without a formal quantification, it is hard to claim your method is more private.
2. Poisoning attacks. I don't think the paper addresses poisoning attacks. The paper considers that the server may poison the learnt model. However, in the proposed method, the server can still poison the model. In particular, the server can send an arbitrary new model to each user. In general, it is hard to defend against a malicious server that performs poisoning attacks. Also, malicious users can poison the model training, which are more realistic poisoning attacks. But such poisoning attacks are not considered. I don't see how the proposed method can address these poisoning attacks. Some references on poisoning attacks:
https://arxiv.org/abs/1807.00459
https://arxiv.org/abs/1911.11815
https://openreview.net/forum?id=rkgyS0VFvr
3. Evasion attacks. The proposed method cannot address evasion attacks at all.
4. Experimental details. Can you add more experimental details, e.g., the learning rate? How is the experiment with the softmax loss function implemented?
5. Can you also report AUC to compare the different methods, since you already show the true positive rate vs. false positive rate curves?<doc-sep>**Summary**
Federated learning takes advantage of the fact that private user data does not need to be transferred and shared across devices or servers. This makes FL particularly attractive for the user verification scenario, where privacy-sensitive biometric data are used to train verification models. One crucial hurdle in this scenario is that, per device, only positive data are present, potentially turning the device-wise training objective ill-posed (all embeddings are likely to collapse to a single point). As a way to introduce negative examples, FEDAWS was developed and presented at ICML 2020. This paper recognizes a crucial security risk in the FEDAWS system, namely that embeddings of user data are transferred to the server, and proposes a more secure training methodology, FEDUV, that involves error-correcting codes. FEDUV enjoys stronger security guarantees while showing ROC curves comparable to FEDAWS at nearly identical computational cost (though I am not entirely sure about the computational cost bit ;) ).
**Pros**
The motivation is spot on. Having to see any form of negative samples is the itchy point of FL-based user verification systems. FEDUV magically solves this issue by pre-defining a unique prototype vector for each user; these vectors are not shared across users and are by design far apart from each other (this is the crucial trick!), thanks to a technique from error-correcting codes (ECC). As a result, each user's endeavour to get closer to their own prototype vector ensures the maximisation of the distance from the others' prototype vectors. Three experiments that are quite close to real-world scenarios (speaker, face, and handwriting-based verification) show that the performance of FEDUV is comparable to FEDAWS, the state-of-the-art framework from ICML 2020 with weaker security guarantees. The writing is nearly flawless. Highly enjoyable paper.
**Cons**
No major cons. Perhaps explain the BCH code in a bit more depth to illustrate (at least with a high-level, hand-wavy description) how it assigns the codes in a distance-maximizing manner. Section 2.3 only explains the desiderata for BCH, rather than *how* BCH achieves them. Please also confirm that FEDUV incurs nearly identical computational cost to FEDAWS.
Somehow I got this from the paper, but have not found a solid reference that confirms it (if not, please explain, too).
Nits: Please add grid lines and row titles (training set, test set with known users, test set with unknown users) to the Figure 2 plots. Baslines --> Baselines. Flatten the last part of Section 1 into paragraphs rather than an itemized list? Yu et al. 2020 (FEDAWS) is an ICML paper, not arXiv - please fix the reference.
**Key reasons for the rating**
I don't find any major rationale to reject this paper. However, its novelty is also eclipsed by the Yu et al. 2020 (FEDAWS) paper. Though I really like this paper, I believe the best scores should be reserved for more innovative papers.
**After rebuttal & discussion**
I still tend to think that the paper's scope can be adjusted relatively easily (it is not too difficult to insert more disclaimers and change the title), and we could enforce the adjustment by conferring a conditional acceptance. But I'm sold on the point that there is a lack of argumentation on whether withholding the user-specific embedding will improve the privacy guarantee. I had taken this argument for granted, but it is indeed not so obvious, given that there exist many attacks that are applicable in this kind of scenario, as R4 has argued. It would be great if the authors could quantify the improved privacy guarantee. I'm okay with rejecting the paper then. I still like the paper quite a lot, but rejecting it will also give the authors a good chance to assimilate more points of view in the paper.<doc-sep>In this paper, the authors focus on designing a federated user verification solution. Specifically, the authors address two fundamental challenges associated with user verification, i.e., one-class data (positive data only) and privacy protection (i.e., of the raw data and the embeddings of the users and classes). Technically, the authors extend a very recent work called FedAWS by (Yu et al., 2020) and introduce user-specific codewords, which not only protect users' privacy (i.e., the embedding is not shared with other users or the server) but also remove the need for negative samples (i.e., the two loss functions in Eq.(5) reduce to one due to the equivalence shown in Theorem 1). We can see that the main ideas of re-writing Eq.(2) into the two loss functions in Eq.(4) and Eq.(5) and of introducing codewords are novel and effective, and they address the two challenges well. Empirical studies on three user verification cases show the effectiveness of the proposed solution, FedUV. Overall, the technique is novel (and I like this idea) and the paper is well presented. I recommend acceptance.
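To make the codeword idea discussed in these reviews concrete — each user trains only toward its own pre-assigned, well-separated codeword, so no embeddings need to be exchanged — here is a rough sketch; random codewords with a minimum-distance check stand in for the BCH construction, and the loss is only a schematic of the positive term, not the paper's Eq.(4)–(5):

```python
import numpy as np

def sample_codewords(num_users, code_len, d_min, seed=0):
    # Stand-in for the BCH construction: rejection-sample binary codewords with
    # pairwise Hamming distance >= d_min. Illustrative only.
    rng = np.random.default_rng(seed)
    codes = []
    while len(codes) < num_users:
        c = rng.integers(0, 2, code_len)
        if all(np.sum(c != prev) >= d_min for prev in codes):
            codes.append(c)
    return np.stack(codes)

def positive_loss(z, codeword):
    # Schematic hinge-style positive term pushing the embedding z toward the
    # user's own +/-1 codeword; not the paper's exact formulation.
    target = 2.0 * codeword - 1.0
    return np.maximum(0.0, 1.0 - z * target).mean()

codes = sample_codewords(num_users=8, code_len=32, d_min=12)
z = np.random.randn(32)          # embedding produced by the user's local model
print(positive_loss(z, codes[0]))
```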
In this paper, the authors propose to adapt the approach of the recent paper by Yu et al. (ICML 2020), namely FedAwS. In that paper, the authors solved a potential failure mode in federated learning that arises when all the users only have access to one class on their devices. In this paper, the authors extend FedAwS to a setting in which federated learning is used for User Verification (UV), namely FedUV. The authors argue that the previous paper could not be the solution to learning UV because FedAwS shares the embedding vectors with the server. The authors then show a procedure in which they can learn a classifier without the embedding vectors needing to be shared. They use error-correcting codes to make the mappings sufficiently different, which allows the training to succeed without sharing the embeddings. The proposed change is only marginally worse than FedAwS and centralized learning. This is the part of the paper that has attracted positive comments and is praised by all the reviewers. The authors take it as given that by not sharing the embedding vectors and by using randomly generated error-correcting codes, the whole procedure is privacy-preserving and secure. The 4th reviewer indicates that these guarantees need to be proven and points out several references that hint toward flaws in the authors' argument. Reviewer 4 does say that not sharing the embeddings might not be enough, but that self-evident arguments are not enough. This paper provides a significant improvement for a federated machine learning algorithm that deserves publication, but the rationale of the paper is flawed from a privacy and security viewpoint. I think if the paper is published as is, especially with the proposed title, it will create a negative reaction from the security and privacy community for not adhering to their standards. We cannot lower those standards. I suggest to the authors that they can follow two potential paths for publishing this work:
1. Change the scope of their algorithm. For example, I can imagine that by not sharing the embedding, the communication load with the server might be significantly reduced, or that adding new users with new classes can be easier.
2. Follow the recommendation from Reviewer 4 and show that the proposed method is robust against the different attacks.
Minor comments: For a paper that is trying to solve the UV problem, I would expect a discussion about why learning is better than a private algorithm. In a way, learning is sharing, and that increases the risk of mischief by malicious users. The discussion about error-correcting codes and the minimum distance is quite old-fashioned. In high dimensions, the minimum distance is not the whole story. LDPC codes make sense when we stop focusing on minimum-distance codes and minimum-distance decoding. I would recommend having a look at the Berlekamp's Bat discussion in David MacKay's book (Chapter 13).
This paper proposes to use a hypernetwork to personalize the federated model by encoding new user data and using this embedding as a parametrization argument. The results demonstrate a significant average improvement for new clients. Furthermore, the authors evaluate the possible use of DP to encode the user embedding vectors.
Strength: The proposed idea contributes to hypernetwork research and outperforms current state-of-the-art results.
Weaknesses: It's clear how the proposed paper is different from [1], but I would dedicate more space to the comparison; specifically, Section 5.3 of [1] suggests using the nearest client (as you describe in Table 1). It's clear that CIFAR-100 or iNaturalist have more diverse data where your method excels, but I wonder if, in a real scenario with millions of users participating (e.g., a language model for a Reddit/StackOverflow dataset), the same conclusion would persist.
It would be useful to see the distribution and not only the average accuracy values for new-user performance. It might be that the proposed method benefits (or harms) the users with diverse data, which is an essential question for FL fairness. This leads to the following point: evaluating the method on text data is important for FL, as language modeling is one of the few industry-deployed use cases and is widely used in the FL literature. The personalization of LM models is also an important problem.
For DP, please clarify what $d \in D$ is. Is it a single input of the user?
Lastly, other personalization works that use local adaptation [2], such as fine-tuning the global model on local data, should be considered as baselines, and I wonder what their performance is w.r.t. the proposed method.
[1] Shamsian, A., Navon, A., Fetaya, E., & Chechik, G. "Personalized Federated Learning using Hypernetworks." In ICML'21
[2] Li, Tian, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. "Ditto: Fair and robust federated learning through personalization." In ICML'21.
The paper is well-written and presents a new algorithm that uses an encoder to adapt to new clients. The paper lacks a few baselines and explanations but can be beneficial for the FL community. <doc-sep>This paper studies the problem that, in personalized federated learning, the current paradigm does not allow new clients to join at inference time. Federated learning does not generalize well to new data distributions that are very different from those seen in training, and personalized FL methods are not designed to apply to a client that was not part of training. This paper defines a new task, inference-time personalized federated learning (ITPFL), to close the gap. The proposed approach, IT-PFL-HN, first trains a client encoder that maps a full client dataset to a descriptor. A hypernetwork then maps the descriptor to a personalized model. At inference time, the client descriptor can be computed locally and used to request the personalized model on demand. The paper demonstrates the effectiveness of the proposed approach on both CIFAR and real-world datasets.
Pros:
1. The paper is overall clearly written and easy to follow. The overall flow of the paper and the descriptions of the proposed approach are clear.
2. The problem being studied is well-motivated and can be useful.
3. It also provides a differential privacy analysis to give insights into the privacy-protection aspects of the proposed approach.
Cons:
1. Some of the experimental results are not explained. For example, in Table 1, on the CIFAR-100 pathological split, the proposed approach does not outperform the baseline approaches.
However, there is no explanation or analysis of why this is the case.
2. Question: for the real-world dataset, the target model, client encoder, and hypernetwork are all simple fully connected networks with global operations. Are these models realistic for real-world use cases? If these base models change, will the framework's performance differ? It would be good to demonstrate the effectiveness of the proposed approach with different types of neural networks.
3. In the experiments, Section 6.4 mentions that the pFedHN-ensemble suffers from large communication and computation costs. However, there are no numbers supporting the claim. It would be good to provide numbers to see if these are real concerns.
4. I was wondering how much difference it would make compared with having the new client participate in training. It would be good to add experiments for that comparison.
Minors:
1. In Section 4, Meta mechanism, the quotation marks are not right; the same holds for many other quotes.
Overall, the problem being studied in this paper is interesting and well-motivated. The paper is clearly written. The novelty of this paper could be highlighted better; it seems that many of its components, such as the hypernetwork, were proposed by other approaches. Some other major concerns are about the experiments: some results are not explained, and some claims are not supported by numbers. Please see the main review for details.
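To fix the pipeline the reviews describe — a client encoder turns the client's unlabeled data into a descriptor, and a hypernetwork turns that descriptor into the weights of a personalized target model, so a new client can obtain a model at inference time without retraining — a minimal sketch follows (the single-linear target network and all shapes and names are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ClientEncoder(nn.Module):
    # Averages per-example encodings of a client's (unlabeled) data into a
    # fixed-size, permutation-invariant descriptor.
    def __init__(self, in_dim, desc_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, desc_dim))

    def forward(self, data):                      # data: (num_examples, in_dim)
        return self.net(data).mean(dim=0)

class HyperNet(nn.Module):
    # Maps the descriptor to the weights of a small personalized target network
    # (here a single linear classifier for simplicity).
    def __init__(self, desc_dim, in_dim, num_classes):
        super().__init__()
        self.in_dim, self.num_classes = in_dim, num_classes
        self.gen = nn.Linear(desc_dim, in_dim * num_classes + num_classes)

    def forward(self, desc):
        flat = self.gen(desc)
        w = flat[: self.in_dim * self.num_classes].view(self.num_classes, self.in_dim)
        b = flat[self.in_dim * self.num_classes:]
        return w, b

encoder, hyper = ClientEncoder(in_dim=32, desc_dim=16), HyperNet(16, 32, 10)
client_data = torch.randn(200, 32)                # unlabeled data held by a new client
w, b = hyper(encoder(client_data))                # personalized classifier, no labels needed
logits = client_data @ w.t() + b
```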
<doc-sep>The paper proposes a new task named Inference-Time Personalized Federated Learning (IT-PFL). Specifically, given a new client with unlabeled data that joins the federated learning system after the training process has finished, IT-PFL aims to deploy a personalized FL model to it. The strategy is based on recent works on hypernetworks, and the training is done in an end-to-end framework. While the authors claim that IT-PFL is a novel problem, the solution that the authors give is similar to [1], which utilizes hypernetworks for the personalized federated learning task. At least to me, the difference between [1] and this paper is marginal, which makes me question the novelty of the paper. I think the authors need to make clear what the novel contribution is given the existence of [1]. This paper does give some results on a generalization bound and differential privacy, but I believe they are just results of simple applications of existing theorems. The theoretical contributions from these two sections are not strong enough, or at least the authors did not make clear what the theoretical challenges are. Besides the novelty concern, I have two additional suggestions. First, I think it would be interesting to compare hypernetworks and meta-learning. These two approaches solve essentially the same problem in the end, and they are both applied to personalized federated learning. A deep understanding of the connections and differences between the two approaches would be beneficial to the community. Second, I think that in the related work the authors should summarize the literature on hypernetworks (HNs), and especially their recent application in personalized federated learning ([1]), because there is obviously an intimate connection between that line of work and this paper.
[1] Aviv Shamsian, Aviv Navon, Ethan Fetaya, and Gal Chechik. Personalized federated learning using hypernetworks. arXiv preprint arXiv:2103.04628, 2021.
Given the major novelty concern, I do not think the paper is appropriate for acceptance in its current shape. The authors should try to make clear what the novel part of the paper is, how challenging it is to obtain those results, and add more literature review to the related work. <doc-sep>The authors propose a new personalized federated learning paradigm composed of a hypernetwork module and an encoder module on the server, and an extra novel client with unlabeled data. The encoder module is enhanced thanks to the unlabeled data. Preliminary experiments demonstrate the efficacy of the proposed algorithm. This paper is clearly written. The authors adopt an additional novel client to enhance the accuracy of personalized FL. However, the reviewer still has several concerns:
1. The proposed generalization bound is established for the novel client. How does this generalization bound reflect the efficacy of the proposed personalized federated learning framework?
2. The compared baselines are not sufficient to demonstrate the efficacy of the proposed algorithm. In fact, there exist several other PFL algorithms, such as Ditto, Sub-FedAvg, etc.
3. The authors should do an additional ablation study to evaluate the influence of the size of the unlabeled dataset. It seems that the novel client merely has limited data.
The proposed algorithm in this work reduces to a simple variant of pFedHN. See the comment above.
This paper proposes a personalized federated learning method using a hyper-network to encode unlabeled data from new clients. At inference time, new clients can use unlabeled data as input to this hyper-network in order to obtain a personalized version of the model. The key strength of the paper is that the idea is interesting and timely. Personalization has been studied for clients that participate from the beginning of training, but personalization of models for new clients that join later on has not been considered in most previous works. The experimental results also show a reasonable improvement over the baselines. However, the following concerns remain: 1) Novelty in comparison with reference [1]. Please add a detailed comparison when you revise the paper. 2) Explanation of the experimental results and comparison with baselines was deemed insufficient by some of the reviewers. 3) The generalization bound and the DP results seem standard extensions of existing works and do not add much novelty to the paper. There wasn't much post-rebuttal discussion and the reviewers decided to stick to their original scores. Therefore, I recommend rejection of the paper. I hope that the authors will take the reviewers' constructive comments into account when revising the paper for a future resubmission.
This work introduces a new perceptual super resolution method for 3D brain image segmentation. The method uses a carefully designed up-sampling and a novel loss function to obtain the desired performance, and the method is evaluated based on a clinically relevant metric due to the lack of high resolution ground truth data. The authors find that the proposed method consistently improves the ability to detect regional atrophy both longitudinally and cross-sectionally in five relevant diseases. Strength: - This work addresses an important problem in the field of medical imaging, and the authors have demonstrated deep understanding and expertise in this field. - The method is discussed in a thorough manner, and important decisions are well justified. Truly quality work! - Rigorous experiments to evaluate the proposed method's performance (it's great to see CI evaluations) Weakness: - This work would stand on stronger footing if its originality were better justified: 1) the paper writes that "...three perceptual loss functions for PSR, two of which are new", while the neural network and the loss functions used (reconstruction error, TV, perceptual loss, dice) are not unheard of in the field of medical imaging. The authors might elaborate on which parts of the loss functions are new. 2) While the proposed evaluation paradigm is, in a sense, new, I would like to see a deeper discussion of why this is new (e.g., is this a novel way of seeing the problem of lacking SR ground-truth labels?) - The contribution bullet states a "demonstration of the impact of loss choice on performance differences in improving detection power in [population studies] of neurodegenerative disease". The presented results haven't directly shown the power of the proposed method at a population scale. The authors might elaborate more on the stated population-level effect of the proposed method. Yes. <doc-sep>This paper outlines a new method for enhancing 3D neuroimaging resolution and improving image segmentation using "perception super resolution" in a technique labelled "concurrent super resolution and segmentation" (CSRS). The authors evaluate the effectiveness of the technique on publicly available clinical datasets from the Human Connectome Project, the Parkinson's Progression Markers Initiative, and the Frontotemporal Lobar Degeneration Neuroimaging Initiative (NIFD) & 4-Repeat Tauopathy Neuroimaging Initiative (4RTNI). They test this method in quantifying changes across cross-sectional and longitudinal MRI datasets for different neurodegenerative diseases. The results, however, only report comparisons of the new technique to the original-resolution model using bootstrapped t-tests, which are not corrected for multiple testing. The implications are very briefly discussed. I do not comment on paper originality as it is a flawed measure of paper quality. The overall quality of the paper is moderate; while the technique is interesting, there is insufficient detail about preprocessing of the MRI data to replicate the results, and the methods are not communicated very clearly. In addition, more of a clinical and/or neuroscience perspective in the experimental design would have been useful. The Discussion is overly brief and lacking in depth or detail. The Supplementary Materials are quite poorly organized and do not meet NeurIPS communication standards. However, the technique itself does seem very promising, and with clearer communication, this could be an interesting and important development.
The authors do not address the limitations or potential negative social impact of their work. <doc-sep>The paper presents a new methodology for concurrent super resolution and segmentation (CSRS) on 3-D volumetric MRI data to consistently upsample both an image intensity channel and associated segmentation labels. To this end, a primary contribution is in adapting perceptual super resolution frameworks designed for 2-D data to 3-D data within the 2D deep back projection network (DBPN). Since the ground truth high resolution images may not be present for a given dataset, the authors propose an indirect evaluation via the quantification of cross-sectional and longitudinal change in diseased cohorts. Specifically, they choose a set of phenotypically heterogeneous but related disorders that are associated with known patterns of brain atrophy. Their experiments examine the effect of various choices of loss function in terms of identification of neurodegenerative diseases within the Human Connectome Project (HCP), Parkinson's Progression Markers Initiative (PPMI), Frontotemporal Lobar Degeneration Neuroimaging Initiative (NIFD) & 4-Repeat Tauopathy Neuroimaging Initiative (4RTNI) datasets. STRENGTHS: The clinical problem that the paper examines, concurrent super resolution and segmentation, is interesting, as is the use-case of tracking brain atrophy within neurodegenerative diseases. The proposed approach to extending existing Perceptual Super Resolution techniques is interesting, though a bit incremental on the technical front. WEAKNESSES: 1. The evaluation section is rather poorly explained and hard to follow. Several points are unclear and the main arguments are not very convincing in light of the quantitative results: (a) Some key details such as dataset splits for training, testing, and validation are not clearly mentioned. (b) It is unclear which dataset/disorder (all datasets?) the results in Tables 1 and 2 and the figures correspond to. (c) The differences across different loss functions and configurations in Table 1 are rather minor (third decimal place in terms of effect sizes, dice, psnr). It is unclear whether these improvements are consistent across dataset splits. (d) Table 2 is really hard to parse, lacking in description and in general confusing. As per my understanding, CSRS.R.TV.VGG vs the proposed framework should be compared across the same anatomical class. If so, the differences in effect sizes appear very minor (third decimal place) for several comparisons. 2. If my understanding is correct, the baseline comparisons in the paper correspond to evaluation against ablated versions of the framework (various loss functions), evaluation in the original resolution, and linear interpolation. Given that the authors listed a couple of recent approaches proposed in the literature in Section 1, it is not obvious why they did not include these as baseline comparisons. The main limitations discussed in Section 6 include computation and quality of annotations. The fact that the framework is so computationally expensive that one epoch takes 12 hours seems to greatly limit its practical utility. <doc-sep>This work developed a super resolution method for 3D neuroimaging and evaluated its performance in detecting brain changes due to neurodegenerative disease. It trained on 3D brain data to upsample both the raw intensity image and associated segmentation labels.
The method was mainly based on the 2D deep back projection network (DBPN) [5], which was extended to three dimensions and multiple outputs. The method was evaluated on a downstream clinically relevant signal detection problem: quantifying cross-sectional and longitudinal change across a set of phenotypically heterogeneous but related disorders that exhibit known and differentiable patterns of brain atrophy. Strength: -- This work conducted the evaluation of super resolution in downstream clinically relevant tasks, e.g., quantifying cross-sectional and longitudinal change across a set of phenotypically heterogeneous but related disorders. Weakness: -- This work overall has a limited methodological contribution. -- The organization of this work is not clear, and some sections are hard to follow. -- This work fits better in a dedicated medical imaging conference or journal. No. <doc-sep>The authors present an extension of a 2D perceptual super-resolution architecture to 3D within neuroimaging. The authors additionally extend the network to simultaneously produce up-sampled segmentation maps. The upsampled segmentation maps are evaluated against the ground truth, as well as against detection power across a variety of frontotemporal disorders. The effect of different loss functions used in training is also evaluated against prediction power. Whilst the super-resolution techniques improved standard evaluation metrics (PSNR & SSIM) when compared against the low resolution data, these metrics did not significantly improve when compared against models trained with the different loss functions used. Prediction power for frontotemporal disorders was improved in many regions when using the super-resolution models, however some regions suffered in prediction power when super-resolved. Overall this paper is clear and concise, and forms a well constructed manuscript. The background information and methods are extensive, and are presented in logical, clearly defined, sections. The overall study is fairly in depth for the given research question, and presents a nice analysis of the downstream effect of segmentation super-resolution within a disease model. The authors include the relevant previous work done in this area, and appropriately outline their contribution to this field. The biggest limitation of this work, I feel, is the limited impact of some of the novel contributions outlined. Specifically, extending a model from 2D to 3D, whilst relevant within neuroimaging data, is not particularly impactful. Additionally, adding segmentation maps as an additional output channel does improve user convenience and likely yields a computational efficiency gain, but is ultimately not overly impactful as the segmentation maps could be re-calculated with the inferred high-resolution dataset. Overall however, it is my opinion that the strengths do clearly outweigh the weaknesses in this work. The authors have addressed the limitations of their work in the conclusions section. <doc-sep>This paper proposed a 2x (from 2mm to 1mm) super-resolution method for 3D medical image volumes and the corresponding segmentation mask for better brain diagnosis. The proposed method is a natural extension of the 2D super-resolution model called deep back-projection network (DBPN). Compared with linear upsampling and nearest-neighbor upsampling, the proposed method achieved a better dice score as well as a better visual quality.
Strength: This paper provides good implementation/technical details to illustrate the problem, the method, and the experiments. It also provides a good visual comparison to interpret the method's performance. It is interesting to consider upsampling both the image and the segmentation mask. Weakness: The method proposed in the paper doesn't seem to be novel given the fact that it is a natural extension from 2D to 3D based on the deep back-projection network (DBPN). Though the paper proposed a few loss functions to adjust the model's objective, the performance does not show a significant difference between different combinations of loss functions. Moreover, the paper doesn't provide a good description of the related work, and the main comparison is done between the variants of the proposed method and trivial methods (linear/nearest-neighbor upsampling). As stated in the weakness, the method is not novel to me and the experimental comparison is not satisfactory for a NeurIPS paper. The paper lacks a thorough description of related work and a comparison with existing methods other than linear/nearest-neighbor upsampling.
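Several of the reviews above refer to the combination of reconstruction, total variation, perceptual, and Dice terms. As a point of reference, here is a minimal sketch of such a composite super-resolution-plus-segmentation objective; the weights and the omission of the perceptual term are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn.functional as F

def total_variation_3d(x: torch.Tensor) -> torch.Tensor:
    """Anisotropic TV penalty on a batch of 3-D volumes x: (B, C, D, H, W)."""
    dz = (x[:, :, 1:] - x[:, :, :-1]).abs().mean()
    dy = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    dx = (x[:, :, :, :, 1:] - x[:, :, :, :, :-1]).abs().mean()
    return dz + dy + dx

def soft_dice_loss(pred_probs: torch.Tensor, target_onehot: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice over (B, C, D, H, W) class probabilities and one-hot targets."""
    dims = (0, 2, 3, 4)
    inter = (pred_probs * target_onehot).sum(dims)
    denom = pred_probs.sum(dims) + target_onehot.sum(dims)
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def csrs_style_loss(sr_img, hr_img, seg_probs, seg_onehot,
                    w_rec=1.0, w_tv=0.01, w_dice=1.0):
    """Hedged sketch of a composite SR + segmentation objective
    (reconstruction + TV + Dice); the perceptual term and the exact
    weights used in the paper are omitted here."""
    rec = F.l1_loss(sr_img, hr_img)
    tv = total_variation_3d(sr_img)
    dice = soft_dice_loss(seg_probs, seg_onehot)
    return w_rec * rec + w_tv * tv + w_dice * dice
```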
This paper has mixed evaluations, with two reviewers recommending accept and three recommending reject. After carefully reading the paper and the discussion, I agree with reviewers hYgi, Ho1b, aKcw, Uzm9 in their main criticisms. The paper still requires major revisions before it can be accepted, including, but not limited to, an improvement in the clarity of the presentation and more experimental comparisons against other, perhaps, even simpler approaches.
This paper tackles the issue of segmentation uncertainty. Using state-of-the-art SSNs, the authors view these as factor models. They derive flow probabilities on these factors to visualize and quantify uncertainty associated with them, while looking for a "minimal rotation" of factors through orthogonal rotations. They show that this technique is suited to derive fine-grained maps for assessing uncertainty in segmentation results, as well as to tweak computed segmentations - which could prove useful for experts using such tools. I find this piece of work sound and interesting. Flow probabilities (FP), after factors have been rotated in a meaningful way, make for a nice object to intuitively visualize uncertainty, and the authors show a convincing piece of code allowing users to update segmentations thanks to these FP. **Significance** The described method could prove very useful if it can indeed help domain experts perform fast segmentation tasks while providing them with intuitive uncertainty measures. However, it is yet unclear to me whether it can help discard entire segmented zones when the number of classes is higher than 2 or 3 (see Question 1). **Originality** This paper could be considered mildly original as it mostly combines existing results from SSNs, factor analysis and flow probabilities. However, I think the idea to view low-rank models underlying SSNs as factor models sheds a nice perspective on these models, and that this aspect is more important than pure originality. **Quality** I find this piece of work to be well written and illustrated. I appreciate that the authors bundled a repo that I could use out of the box (I tested the notebook and read a great deal of the code). As is, the codebase is hard to grasp and use though, and I think it would greatly benefit from being documented more extensively (docstrings for all methods would be nice) and tested (tests make for a nice way to understand the overall structure of the codebase and usage of each method). Please cite used packages in the main document (numpy, scipy, sklearn, torch, einops to name a few). **Clarity** Although this work calls for very visual and easily understood experiments, I found the article a bit difficult to dive into. I think it would benefit from having a figure describing the overall procedure, training steps, and connection between concepts (loadings, factors, latent variables, FPs, rotations and rotation criteria). It could be included in the supp. mat. I think intuitions leading to Proposition 1 would also benefit from being illustrated in a Figure. The authors mention the computation time of the flow-probability-based rotation, but I think question 1 could be an important limitation of this work. <doc-sep>The manuscript reinterprets stochastic segmentation networks (SSN 2020) as a factor model and thus adds latent factors governing the noise components within the single covariance of SSNs. Additionally, rotation of the factors with imposed sparsity leads to a parsimonious, and supposedly more interpretable, representation of the factors. The manuscript provides derivations of the reasoning behind the proposed representation and performs a rigorous empirical comparison of already available rotation approaches. The results in the main manuscript and the supplement, including the video, demonstrate that the approach works in providing uncertainty factors that could be individually manipulated.
## Strengths - A simple and pragmatic approach to extending SSN with an ability to control uncertainty components independently - A needed take on sparsity and uncertainty to address interpretability of uncertainty representation. ## Weaknesses - The major weakness, in my view, is that besides meeting the goals of factorization and sparsification of uncertainty, the manuscript is not convincing that the created tool is interpretable and thus useful. The value of the method for interpretation of the model is not coming through in any of the examples of the manuscript, supplement, and the video. The most intuitively interpretable example, from the CamVid dataset, does not add any information and looks as if the uncertainty of all classes except, possibly, the cars is simply mixed around all the objects. Other demonstrations are also not helpful, and it is unclear how a user of the model would benefit from the new approach in either of the remaining examples. I do not look at satellite images every day, and that may be the reason the flagship example in the main manuscript and the supplied video does not convey much information, since the segmentation seems to be very poor (the DICE score would be really low). - The value of the proposed approach is not clear from the paper; this limits the further impact of the paper since practitioners won't be able to appreciate the need for this method. - Minor: computational complexity, as noted by the authors. <doc-sep>The paper proposes a novel method for structuring uncertainty in the context of stochastic segmentation networks (SSNs). They use a low-rank multivariate Gaussian distribution to model uncertainty in SSNs. They also develop a tool for the analysis of factor models in SSNs and apply rotation criteria to provide simple and well-separated control variables. In the experimental part, the proposed method outperforms the current state of the art on several datasets. The idea of the proposed method appears to be a novel combination built on stochastic segmentation networks. The authors address segmentation uncertainty by using a smaller set of latent factor variables based on the recent work called SSNs. The model proposed by the paper is clearly explained, and the paper includes sufficient and detailed experiments in the supplement. One possible weakness is the somewhat missing explanation of the so-called 'significant' effect on output segmentations; another is the lack of a more detailed description of the rotation criteria. It would be great if the authors could explain more about the interface for fine-grained sampling. In the supplement, the proposed method seems to have a heavy computation cost; I expect the authors to better address this issue if possible. One limitation of this work is the heavy computation of the flow-probability rotation.
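The reviews above repeatedly mention the rotation criteria used to sparsify the factor loadings. The paper's exact criterion is not reproduced here; as an illustration of the general idea, below is a hedged numpy sketch of the classic varimax rotation, which orthogonally rotates a loading matrix so that each factor loads strongly on only a few outputs.

```python
import numpy as np

def varimax(loadings: np.ndarray, gamma: float = 1.0,
            max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Orthogonal varimax rotation of a (p x k) loading matrix, via the
    standard SVD-based iteration for the rotation matrix R."""
    p, k = loadings.shape
    R = np.eye(k)
    prev_obj = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Target matrix for the orthogonal Procrustes step of the varimax criterion
        target = loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0)))
        u, s, vt = np.linalg.svd(target)
        R = u @ vt
        obj = s.sum()
        if obj < prev_obj * (1 + tol):
            break
        prev_obj = obj
    return loadings @ R  # rotated (sparser) loadings

# Toy usage: rotate the loadings of a rank-3 factor model over 10 "pixels"
rng = np.random.default_rng(0)
rotated = varimax(rng.normal(size=(10, 3)))
```

In the paper's setting, a rotation of this kind would presumably be applied to the low-rank component of the SSN covariance before computing flow probabilities, which is why the reviewers ask for a clearer description of the criteria actually used.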
The manuscript interprets stochastic segmentation networks (SSN 2020) as a factor model and thus adds latent factors governing the noise components within the single covariance of SSNs. Additionally, well-chosen rotations of the factors with imposed sparsity lead to a parsimonious, and supposedly more interpretable, representation of the factors. The manuscript provides derivations of the reasoning behind the proposed representation and performs a rigorous empirical comparison of already available rotation approaches. The results in the main manuscript and the supplement, including the video, demonstrate that the approach works in providing uncertainty factors that could be individually manipulated. According to all reviewers, the paper is well written and the technical work looks solid. The proposed solution is pragmatic. Experiments are sufficient. The main issue of the paper is that the value of the method for interpretation of the model does not come through in the examples of the manuscript, supplement, and the video. It is unclear how a user of the model would benefit from the new approach in either of the examples. The issue is that, according to the authors, the new feature for fine-grained sampling and fine adjustments of segmentations should be integrated alongside classic manual techniques for editing segmentations in a full user application. However, the present work only focused on the new technique and not on building a full user application. Given this, there remains a doubt whether the proposed technique is actually useful or not. Other issues related to presentation clarity and quality of the visual material have been well addressed. The consensus is that the paper should be accepted at NeurIPS 2022.
This paper provides a theoretical study of the popular self-supervised learning (SSL) techniques VICReg, SimCLR and BarlowTwins. The main results are closed-form optimal representations for each method as a function of the training data and the sample-relation matrix (which indicates which samples are from the same class, e.g. this could be constructed as the matrix that indicates which samples are augmentations of the same image). The paper further provides simplified versions of these expressions in linear settings and uses them to show equivalence between the SSL methods and various spectral methods, and provides a study on downstream task performances. Strengths 1. The paper studies the important problem of theoretically understanding increasingly popular SSL methods. 2. To my knowledge, no other work has derived such expressions concerning SSL optimal solutions, or made the connections between SSL and spectral embedding methods. 3. The scope of the paper is comprehensive: all of the most popular SSL techniques are analyzed. Clearly a lot of work went into writing this paper. Weaknesses 1. It is difficult for me to understand the insights of this work because the writing is so dense. Often expressions are introduced without sufficient background and conclusions drawn without sufficient explanation and/or motivation. E.g. $\\mathcal{L}\\_{var}$ and $\\mathcal{L}\\_{cov}$ in Theorem 2 and equations (13) and (14) - intuitively how do these equations arise and why are they useful? The figures are especially information-overloaded and difficult to parse. The claims relating to downstream task performance are not clear. Overall, it should be abundantly clear what the key messages are from this paper, but they are lost in the density of the work. 2. There is insufficient discussion of other theoretical studies of SSL (only a brief mention in the introduction). 3. Some of the statements and proofs are not rigorous, as it is not formally defined what it means for one method to recover another. The proof of Theorem 3 requires an assumption that is not made in the theorem statement. Other proofs can use more formality (only English text is given and the reader is left to fill in the details) and the proof of Lemma 1 is missing. The limitations of the theory are not discussed, and an assumption is missing (see above). There are no potential negative societal impacts. <doc-sep>This paper introduces a general framework that leverages spectral methods to unify the representative self-supervised learning (SSL) methods such as VICReg, SimCLR, BarlowTwins. The authors provide a detailed theoretical analysis of different SSL frameworks and demonstrate the properties of their learned representations. ### Strength: Unifying existing self-supervised learning methods with spectral methods looks interesting. It also provides new insights into the community to help understand how SSL works. Both the motivation and the technical details in the experiment designs seem sound. The authors have cited most of the relevant papers and well summarized previous works. ### Weakness: Although the paper provides an insightful view of understanding the mechanism of SSL, most of the analysis is only valid in the pre-assumptions, some of which might have a gap in practical usage. For example, the properties of VICReg are based on the linear network assumption, while in practice, the encoder backbones are often non-linear. 
As the authors claim this paper aims to provide some guidelines to practitioners, these gaps could limit the value of its insights for them. The example in Eq (4) and (5) is a little bit hard to follow. It is not obvious how to see the logical relation between these two formulations. The presentation quality is fairly good but needs further polishing. It feels like the paper was finished in a rush, and some paragraphs are not clear to the readers. Some abbreviations are not explained with the full name or corresponding citation when first appearing in the text, e.g., DA, DN, which might confuse the readers. The experiment/simulation settings are not specified in the paper, although most validations are conducted on toy examples. For example, what datasets are used for these proofs of concept? Most of the discussions are based on the assumption that the downstream task is classification, while in practice, SSL methods are widely used in various settings, including detection, segmentation, generative modeling, etc. The insight to practitioners might be limited to the scope of classification. Minors: The notation $h$ in Eq (3) is not used; simCLR -> SimCLR; Line 91: loose -> lose; Line 98: variosu -> various. The authors have discussed the limitations of the paper. <doc-sep>This paper explores the links between self-supervised losses, metric learning and spectral embedding methods. It specifically investigates which representations are learnt when the embedding is chosen to be linear, and carefully investigates how the choice of loss affects their rank. **Strengths** Though many connections between metric learning, SSL and spectral embedding methods have been previously explored [1, 2, 3], this paper is a welcome addition. Explicitly writing out the optimal representations in the linear regime is useful. Explicitly showing how the choice of SSL loss impacts the rank of the learnt representation (in comparison to the rank of the downstream information encoded in G) is a welcome contribution to recent investigations on the "dimensional collapse" of SSL representations. **Weaknesses: better referencing known terminology and problems** Given the authors are connecting SSL to known Spectral Embedding methods, it would add significant clarity to draw on known terminology as well. For example, VICReg is composed of three terms that are eponymously described as 'variance-invariance-covariance' with a rather new vocabulary of 'dimensional collapse' and 'semantic' similarity. On a simpler level, the VICReg loss is simply Robust Nonlinear PCA, where Terms 1 and 2 'whiten the signal' and Term 3 encourages robustness to perturbations (e.g. jitter or rotation) encoded in G. Specifically, - Term 1 removes the scaling ambiguity of PCA (by setting the variances to 1) - Term 2 encourages orthogonal representations - Term 3 encourages robustness to user-specified transforms (e.g. rotation) Relating this to known terminology (PCA) is useful because it *explains* the observations made by the authors, when they say - "the optimum is not unique" or "there exist many local minima": this is due to well-known indeterminacies for PCA (e.g. permutation, offset) and is not a new problem - Fig 2, gamma = 0 vs. gamma = 0.1 is essentially PCA vs. Robust PCA. Expectedly, PCA has known indeterminacies (Panel 3) and Robust PCA blurs the landscape (Panel 3) and smooths the latent space (Panel 2) as it desensitizes the loss to local perturbations.
It seems like these observations are framed as a new problem of VICReg when they in fact inherit from PCA. Abbreviation: SSL = Self-Supervised Learning. [1] Agrawal, Ali, Boyd, 2021. Minimum Distortion Embedding. [2] Lee, Lei, Saunshi, Zhuo, 2021. Predicting What You Already Know Helps: Provable Self-Supervised Learning. [3] Pfau, Petersen, Agarwal, Barrett, Stachenfeld, 2019. Spectral Inference Networks: Unifying Deep and Spectral Learning. I have no potential negative societal impact to raise. <doc-sep>This paper proposes a unified framework for self-supervised learning with the help of spectral manifold learning. The authors show that VICReg, SimCLR, and BarlowTwins are special cases of their proposed framework. They point out the prerequisites for producing optimal self-supervised representations for downstream tasks. Strengths 1. This paper is well written and easy to follow. 2. The idea of spectral manifold learning for unifying SimCLR (NNCLR), BarlowTwins, and VICReg seems interesting to me. 3. This paper is technically sound. Weakness 1. I think the authors overclaim the contributions of this paper. From my view, they only consider VICReg, BarlowTwins, and SimCLR in this paper. Although they have mentioned DINO, they do not reveal the connection between DINO and the others. No
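To make the three-term decomposition discussed in the reviews above concrete, here is a minimal sketch of a VICReg-style objective on two augmented views. The loss weights are illustrative defaults, and the term numbering follows the reviewer's PCA analogy (variance, covariance, invariance), not necessarily the paper's notation.

```python
import torch
import torch.nn.functional as F

def vicreg_style_loss(z_a, z_b, inv_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """Sketch of a VICReg-style loss on two (N, D) batches of embeddings of
    two augmented views of the same samples."""
    # Term 3 (invariance): robustness to the augmentations relating z_a and z_b.
    inv = F.mse_loss(z_a, z_b)

    # Term 1 (variance): hinge on per-dimension std, removing the scaling ambiguity.
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var = torch.mean(F.relu(1.0 - std_a)) + torch.mean(F.relu(1.0 - std_b))

    # Term 2 (covariance): penalize off-diagonal covariance, encouraging
    # decorrelated (near-orthogonal) representation dimensions.
    def off_diag_cov(z):
        z = z - z.mean(dim=0)
        cov = (z.T @ z) / (z.shape[0] - 1)
        off = cov - torch.diag(torch.diag(cov))
        return (off ** 2).sum() / z.shape[1]

    cov = off_diag_cov(z_a) + off_diag_cov(z_b)
    return inv_w * inv + var_w * var + cov_w * cov

# Toy usage on random embeddings of two views
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
loss = vicreg_style_loss(z1, z2)
```

Under the PCA reading above, the variance and covariance terms play the whitening role, while the invariance term encodes the augmentation relation captured by G.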
This paper focuses on providing some theoretical intuition/understandings of popular self-supervised learning (SSL) methods. The authors develop closed-form optimal representations for various method as a function of the training data and the sample-relation matrix. The authors also provide further intuition by developing simplified versions of these expressions in linear settings which they use to show an equivalence of sorts between SSL and various spectral methods and how it affects downstream tasks. Overall the reviewers were positive and thought the paper had nice insights. They did raise some concerns about the quality of exposition and various detailed technical issues. Most of the technical issues seems to have been addressed by the authors in their response. I concur with the reviewers. The paper has nice insights and therefore I recommend acceptance. I do however recommend that the authors further polish the paper for the camera ready version by addressing the issues raised by the reviewers especially about the exposition.
In this paper, the authors propose an extension to the non-autoregressive translation model by Gu et al., to improve the accuracy of non-autoregressive models relative to autoregressive translation models. The authors propose using hints of two kinds: 1. Hidden-output matching, by incurring a penalty if the cosine distances between the representations differ according to a threshold; the authors state that this reduces repeated output words, which are common for NART models. 2. Reducing the KL divergence between the attention distributions of the teacher and the student model in the encoder-decoder attention part of the model. We see experimental evidence from 3 tasks showing the effectiveness of this technique. The strengths of this paper are the speedup improvements of using these techniques on the student model while also improving BLEU scores. The paper is easy to read and the visualisations are useful. The main issue with this paper is the delta contribution compared to the NART model of Gu et al. The two techniques, although simple, do not add up to much technical novelty. It would also be good to see more quantitative analysis on how much the word repetition is reduced by these techniques, and on performance, especially for longer sequences. Another issue is the comparison of latency measurements for decoding. The authors state that the hardware and the setting under which the latency measurements are done might be different from those used for previous numbers. Though the speedup improvements are still impressive, it becomes somewhat fuzzy to understand the actual gains. <doc-sep>This paper proposes to distill knowledge from intermediary hidden states and attention weights to improve non-autoregressive neural machine translation. Strengths: Results are sufficiently strong. Inference is much faster than for auto-regressive models, while BLEU scores are reasonably close. The approach is simple, only necessitating two auxiliary loss functions during training, and rescoring for inference. Weaknesses: The discussion of related work is deficient. Learning from hints is a variant of knowledge distillation (KD). Another form of KD, using the auto-regressive model output instead of the reference, was shown to be useful for non-autoregressive neural machine translation (Gu et al., 2017, already cited). The authors mention using that technique in section 4.1, but don't discuss how it relates to their work. [1] should also probably be cited. Hu et al. [2] apply a slightly different form of attention weight distillation. However, the preprint of that paper was available just over one month before the ICLR submission deadline. Questions and other remarks: Do the baselines use greedy or beam search? Why batch size 1 for decoding? With larger batch sizes, the speed-up may be limited by how many candidates fit in memory for rescoring. Please fix "are not commonly appeared" on page 4, section 3.1. [1] Kim, Yoon and Alexander M. Rush. "Sequence-Level Knowledge Distillation" EMNLP. 2016. [2] Hu, Minghao et al. "Attention-Guided Answer Distillation for Machine Reading Comprehension" EMNLP. 2018 <doc-sep>This work proposes a non-autoregressive Neural Machine Translation model which the authors call NART, as opposed to an autoregressive model which is referred to as an ART model. The main idea behind this work is to leverage a well-trained ART model to inform the hidden states and the word alignment of NART models.
The joint distribution of the targets y given the inputs x is factorized into two components as in previous works on non-autoregressive MT: an intermediate z which is first predicted from x, which captures the autoregressive part, while the prediction of y given z is non-autoregressive. This is the approach taken, e.g., in Gu et al., Kaiser et al., Roy et al., and this also seems to be the approach of this work. The authors argue that improving the expressiveness of z (as was done in Kaiser et al., Roy et al.) is expensive, and so the authors propose a simple formulation for z. In particular, z is a sequence of the same length as the targets, where the j^{th} entry z_j is a weighted sum of the embeddings of the inputs x (the weights depend in a deterministic fashion on j). Given this z, the model predicts the targets completely non-autoregressively. However, this by itself is not entirely sufficient, and so the authors also utilize "hints": 1) If the pairwise cosine similarity between two successive hidden states in the student NART model is above a certain threshold, while the similarity is lower than another threshold in the ART model, then the NART model incurs a cost proportional to this similarity. 2) A KL term is used to encourage the distribution of attention weights of the student ART model to match that of the teacher NART model. These two loss terms are used in different proportions (using additional hyperparameters) together with maximizing the likelihood term. Quality: The paper is not very well written and is often hard to follow in parts. Here are some examples of the writing that feel awkward: -- Consequently, people start to develop Non-AutoRegressive neural machine Translation (NART) models to speed up the inference process (Gu et al., 2017; Kaiser et al., 2018; Lee et al., 2018). -- In order to speed up to the inference process, a line of works begin to develop non-autoregressive translation models. Originality: The idea of using an autoregressive teacher model to improve a non-autoregressive translation model has been used in Gu et al., Roy et al., where knowledge distillation is used. So the knowledge distillation paper by Hinton et al. should be cited. Moreover, the authors have missed comparing their work to that of Roy et al. (https://arxiv.org/abs/1805.11063), which greatly improves on the work of Kaiser et al., and almost closes the gap between a non-autoregressive model and an autoregressive model (26.7 BLEU vs 27 BLEU on En-De) while being orders of magnitude faster. So it is not true that: -- "While the NART models achieve significant speedup during inference (Gu et al., 2017), their accuracy is considerably lower than their ART counterpart." -- "Non-autoregressive translation (NART) models have suffered from low-quality translation results" Significance: The work introduces the idea of using hints for non-autoregressive machine translation. However, I have a technical concern: It seems that the authors complain that previous works like Kaiser et al., Roy et al., use sophisticated submodules to help the expressiveness of z and that this was the cause of slowness. However, the way the authors define z seems to have some problems: - z_j does not depend on z_1, ..., z_{j-1}, so where are the autoregressive dependencies being captured? - z_1, z_2, ..., z_{T_y} depend only on the length of y, and do not depend on y in any other way. Given x, predicting z is trivial, and I don't see why it should help the model f(y | z, x) at all.
- Given such a trivial z, one can just assume that the model is completely factorial, i.e. $P(y|x) = \prod_{i} P(y_i|x)$, since the intermediate z has no information about the y's except its length. This is quite suspicious to me, and it seems that if this works, then a completely factorial model should work as well if we only use the "hints" from the ART teacher model. This is a red flag to me, and I am finding this hard to believe.
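For readers trying to follow the reviewer's concern, here is a hedged sketch of the kind of deterministic, position-based z the review describes; the exact weighting used in the paper is not reproduced, and the softmax-over-distance weights below are an illustrative assumption.

```python
import torch

def soft_copy_decoder_input(src_emb: torch.Tensor, tgt_len: int,
                            temperature: float = 0.3) -> torch.Tensor:
    """Build z_1..z_{T_y} as position-based weighted sums of source embeddings.
    The weights depend only on the relative positions i and j, so z carries no
    information about y beyond its length.

    src_emb: (src_len, d) embeddings of the source sentence.
    Returns: (tgt_len, d) non-autoregressive decoder inputs.
    """
    src_len, d = src_emb.shape
    src_pos = torch.arange(src_len, dtype=torch.float32) / max(src_len - 1, 1)
    tgt_pos = torch.arange(tgt_len, dtype=torch.float32) / max(tgt_len - 1, 1)
    dist = (tgt_pos[:, None] - src_pos[None, :]).abs()       # (tgt_len, src_len)
    weights = torch.softmax(-dist / temperature, dim=-1)      # deterministic in positions
    return weights @ src_emb

src = torch.randn(7, 512)                      # embeddings of a 7-token source sentence
z = soft_copy_decoder_input(src, tgt_len=9)    # (9, 512) decoder inputs
```

Because every z_j is a fixed function of x and the target length, the reviewer's point stands: any modeling of dependencies among the y's has to come from the decoder and the distilled "hints", not from z.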
+ sufficiently strong results + a fast / parallelizable model - Novelty with respect to previous work is not as great (see AnonReviewer1 and AnonReviewer2's comments) - The same reviewers raised concerns about the discussion of related work (e.g., positioning with respect to work on knowledge distillation). I agree that the very related work of Roy et al should be mentioned, even though it has not been published it has been on arxiv since May. - Ablation studies are only on smaller IWSLT datasets, confirming that the hints from an auto-regressive model are beneficial (whereas the main results are on WMT) - I agree with R1 that the important modeling details (e.g., describing how the latent structure is generated) should not be described only in the appendix, esp given non-standard modeling choices. R1 is concerned that a model which does not have any autoregressive components (i.e. not even for the latent state) may have trouble representing multiple modes. I do find it surprising that the model with non-autoregressive latent state works well however I do not find this a sufficient ground for rejection on its own. However, emphasizing this point and discussing the implication in the paper makes a lot of sense, and should have been done. As of now, it is downplayed. R1 is concerned that such model may be gaming BLEU: as BLEU is less sensitive to long-distance dependencies, they may get damaged for the model which does not have any autoregressive components. Again, given the standards in the field, I do not think it is fair to require human evaluation, but I agree that including it would strengthen the paper and the arguments. Overall, I do believe that the paper is sufficiently interesting and should get published but I also believe that it needs further revisions / further experiments.
This paper proposes an activation quantization method, AC-SGD, for the fine-tuning of language models in a slow network setup. AC-SGD compresses the change of the activations instead of the activation values directly. This paper shows that this compression method can converge well and can lead to throughput improvements under slow network systems. ### Strengths - This paper doesn't aim at compressing activations directly. Instead, by compressing the changes of the activations, the proposed algorithm can achieve efficient convergence. - With a slow network system (i.e., the portion of communication overhead is quite big during the end-to-end distributed fine-tuning process), this algorithm can achieve higher throughput with smaller communication overhead. ### Weaknesses As weaknesses of this paper, I have two concerns, as below: - I'm very confused about what slow networks are. It is hard for me to imagine which scenario this paper assumes: V100 multi-GPU machines with a 100 Mbps network. The only setting I can imagine is one where single GPUs are distributed and interconnected with slow Ethernet. This seems very odd to me, and I don't understand why we should assume extremely slow interconnects between high-end GPUs for fine-tuning large PLMs. - For the GPT results in Figure 4, I think showing training loss is not sufficient to prove that the compression method works well for generative language models, especially for fine-tuning. Since the pre-trained model consists of an enormous number of parameters and is trained with a large dataset, it is hard to trust the training and validation loss on the fine-tuning dataset. The model tends to overfit easily, and there is rarely a correlation between loss values and measured task scores. Since there is a limit to the scale of BERT-like models, this paper should show inference results on generation tasks and large generative models. - As I mentioned, this paper should describe in detail why they assume this slow network system. As far as I know, it does not seem to be common. - The experimental results are not sufficient to show that this method works well. - Showing profiling results of the fine-tuning process with this method and various network speeds would help understanding. <doc-sep>The authors proposed a differential activation quantization scheme with proven convergence, for distributed model-parallel training. [+] Clear presentation in general [+] Real-world demonstration [-] Limitation of practical usefulness due to the storage requirement [-] Lack of stability study See above. <doc-sep>This work examines the feasibility of compressing the activations for models trained with pipeline parallelism. To this end, the scheme AC-SGD is proposed, which aims to make pipeline-parallel training more communication-efficient over slow networks. Different from previous efforts in activation compression, instead of compressing activation values directly, AC-SGD compresses the changes of the activations. The most novel insight is that one can still achieve an $O(1/\sqrt{T})$ convergence rate for non-convex objectives under activation compression, without making assumptions on gradient unbiasedness that do not hold for deep learning models with non-linear activation functions. This work also shows that AC-SGD can be optimized and implemented efficiently, without additional end-to-end runtime overhead. The evaluations of AC-SGD are performed on fine-tuning language models with up to 1.5 billion parameters, compressing activations to 2-4 bits.
AC-SGD provides up to 4.3x end-to-end speed-up in slower networks, without sacrificing model quality. Strengths: - An interesting insight on pipeline-parallel training. - The solution is effective and the optimization space is covered. - Extensive evaluation results. - Source code available. Weaknesses: - The work may need more rationale upfront to motivate the problem setting (i.e., slow networks). - More breakdown analysis to showcase the effectiveness of the approach. - I think the paper has good potential, and it can be strengthened by addressing the two points raised in the weakness part. - I assume the geo-distributed network is a more realistic scenario for this proposed approach. Hence, I would suggest the authors gather more literature on this point and back up both their motivation and simulation setup better (both qualitatively and quantitatively). - Also, I believe breaking down the optimizations in AC-SGD can add more value to this work, so that readers can better understand how essential the optimizations are. <doc-sep>Large language models are trained or fine-tuned using a combination of pipeline and data parallelism methods. Pipeline parallelism leads to activations and activation gradients being communicated between devices, while data parallelism leads to weight gradients being communicated between devices. A lot of work has focused on the latter, but the former has received less attention. Compressing activations is non-trivial, as compressing them in a stochastically unbiased way will still lead to biases in the gradient and break the unbiasedness assumption made by most gradient compression results. Previous work has failed to provide theoretical guarantees. This paper proposes AC-SGD, an activation compression algorithm for communication-efficient pipeline parallelism. This method helps accelerate pipeline parallelism over slow networks. They provide theoretical guarantees to prove convergence and show that the method can be implemented with minimum runtime overhead (albeit at a huge memory cost). They compare their method with recent work and show better accuracy at lower precision. Depending on the communication bandwidth used as the baseline, they can achieve a 4.3x end-to-end speedup. Strengths: 1. The claims in the paper are evaluated against strong baselines 2. The method comes with a theoretical guarantee of convergence Weakness: 1. The idea relies on data being repeated over multiple epochs. This can limit the applicability of this technique to more important workloads like pre-training and restrict the idea to fine-tuning of LLMs only. This reduces the potential impact of the work. 2. The requirement to store activations for new samples in the fine-tuning datasets makes it harder to scale this technique to larger fine-tuning datasets. Have the authors adequately addressed the limitations and potential negative societal impact of their work? Yes
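The reviews above describe the core mechanism, compressing the change of each sample's activation rather than the activation itself, and the storage cost this implies. Below is a minimal, hedged sketch of that idea; the stochastic quantizer and the per-sample cache layout are illustrative assumptions, not the paper's exact scheme.

```python
import torch

def stochastic_quantize(x: torch.Tensor, bits: int = 2) -> torch.Tensor:
    """Unbiased low-bit quantizer via stochastic rounding (illustrative only)."""
    levels = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / max(levels, 1)
    scaled = x / scale
    lower = scaled.floor()
    q = lower + torch.bernoulli(scaled - lower)       # E[q] == scaled
    return q.clamp(-levels - 1, levels) * scale

class DeltaActivationCompressor:
    """Send a quantized *delta* of each sample's activation; both sender and
    receiver keep a copy of the previously reconstructed activation so the
    low-bit messages accumulate back into the full value."""
    def __init__(self, bits: int = 2):
        self.bits = bits
        self.cache = {}                               # sample_id -> last reconstruction

    def compress(self, sample_id, activation):
        prev = self.cache.get(sample_id, torch.zeros_like(activation))
        delta_q = stochastic_quantize(activation - prev, self.bits)
        self.cache[sample_id] = prev + delta_q        # mirrors the receiver's state
        return delta_q                                # the only tensor sent over the network

    def decompress(self, sample_id, delta_q, receiver_cache):
        prev = receiver_cache.get(sample_id, torch.zeros_like(delta_q))
        rec = prev + delta_q
        receiver_cache[sample_id] = rec
        return rec
```

Because the cache holds one activation per training sample, its size grows with the fine-tuning set, which is exactly the scalability and epoch-repetition concern raised in the weaknesses above.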
In this paper, the authors propose to speed up the fine-tuning of large models over slow networks by compressing *deltas* of activations (vs. the activations themselves), so as to reduce the communication cost. Original reviews were mixed, but at the end of the discussion period, all reviewers are leaning towards acceptance. The main issues that were raised are: * The motivation for training very large models over slow networks * The limited amount of metrics to validate the quality and robustness of the optimization process * Concerns about the scalability of the method (storage requirements) and its applicability to the online setting I consider that these concerns have been mostly addressed during the discussion period by the authors, who also remained honest about some of the limitations of their method. In my opinion, the pros of this work (a practically useful idea) outweigh the cons (it may only be useful in somewhat niche settings), and I thus recommend acceptance.
In this work, the authors propose the WARM method to help conduct iterative and interactive weakly-supervised learning. Active learning is used here to refine the labeling functions by focusing on data points once they are labeled. The authors further incorporated gradient propagation to alternately update the LF parameters and the DP model. Experimental results show that the WARM method can improve the quality of training data. Strength: The proposed method is clearly described and the writing is easy to follow. Weakness: 1. The theoretical analysis is insufficient and I recommend the authors provide more analyses of their main contributions. 2. The general idea of active learning with weak supervision is not novel, as can be seen in [1]. The methods proposed in this work are combinations of existing ones, and the authors also fail to contribute novel theoretical results. 3. The layout should be improved. I recommend the authors separate the key notation, e.g., labeling functions, from the surrounding text for better presentation. There are also some typos, e.g., 'dependant' should be 'dependent' in the last paragraph of page 2. [1]. Chicheng Zhang and Kamalika Chaudhuri, Active Learning from Weak and Strong Labelers. NIPS 2015: 703-711 The authors should compare with more related works and provide some theoretical analyses to make this work more convincing. The ideas are totally heuristic and the experimental results are also not satisfying. <doc-sep>This paper proposes WARM, an active learning approach to weakly/programmatically supervised learning. In the WARM approach, which builds off of the data programming/Snorkel paradigm for weak supervision, users write labeling functions (LFs) to programmatically label training data; these labeling functions are then modeled by the Snorkel framework for weak supervision and used to train downstream models. In the WARM setup, these LFs are assumed to be, or cast as, differentiable. The paper then proposes an active learning approach to sampling labeled data points to tune the parameters of these LFs, and validates this approach on several medical datasets. Strengths: - (S1) This paper tackles an important problem with an intuitive approach of complementing recent programmatic/weak supervision approaches with expert feedback via an active learning-style approach. - (S2) This paper introduces a clean formulation of / argument for LFs being cast as differentiable functions, whereas to date most LFs have been non-differentiable - (S3) The paper shows some strong results relative to recent approaches. - (S4) The paper includes a range of datasets from synthetic (data + LFs), to 'semi-synthetic' (real data + synthetically generated LFs), to a real EEG task/dataset + LFs developed in conjunction with medical SMEs- an impressive and real world-relevant contribution. Weaknesses: - (W1) Simple method: While not a major drawback in isolation, it is worth noting that the proposed approach is fairly simple and standard from a methodological/algorithmic standpoint. From an active learning perspective: the query function is just based on model uncertainty, which is the most basic type of active learning query function (the only tweak being that it is the label model, i.e. model over LFs, but this does not change anything from an algorithm perspective).
Then, these data points are used to tune the parameters of the LFs in a manner that is also straightforward (the only tweak here being that the approach alternates between two formulations of the label model objective which is not necessarily needed... see W#1.a). - (W1.a) The authors state that the data programming label model is not differentiable with respect to the tunable LF parameters introduced in WARM, which is not true. - (W2) Lack of exploration of effect of differentiable LFs: Given the above lack of methodological novelty, this reviewer at least saw one of the main points of novelty and overall contribution of the paper being around the differentiable LFs themselves, and the overall setup here. However, unfortunately this contribution was not explored in any depth (e.g. what are the tradeoffs of "softening" LFs to make them differentiable? How should we think about this more broadly beyond the medical settings treated? etc), which would have been interesting and significantly strengthened this aspect of the overall contribution. - (W3) Lack of relevant ablations: In general, there were a range of ablations of the overall approach I would have thought natural- for example, what is the impact of an active learning setup vs. just using some randomly sampled labeled data to tune the LF's internal parameters? Could these internal parameters also be learned without labeled data, following the basic Snorkel modeling approach, and how would that do? How would the approach do without tuning the internal LF params? Etc. - (W4) Weak/improper comparisons: Since two of the other approaches compared to have no access to tune the internal params of the LFs and WARM does, this seems like a somewhat handicapped comparison... Overall, the proposed approach introduces some interesting and practical ideas with some exciting experimental applications, however does not sufficiently explore the most novel elements of the contribution, ablate them sufficiently, or compare them to prior methods appropriately. <doc-sep>This paper proposes a new method for data programming, i.e., using weak supervision to generate probabilistic training labels for unlabelled points using heuristics devised by domain experts. In particular, the authors propose WARM, a framework for iteratively improving these weakly supervised models by modifying the parameters of labeling functions and directing users to a subset of data points that, when labelled, would most improve the model. Pros: 1. The proposed method would be of great use in many real-world scenarios where labelled data is scarce (e.g. medicine). 2. To my knowledge, the authors’ proposed method (i.e., actively refining the voting weights of each labeling function and the parameters of the labeling function) is novel. 3. Generally I found the writing in the manuscript to be of high quality. As someone who is not an expert in data programming I found the manuscript easy to follow with some minor exceptions (see point 3 in “Cons”). 4. The authors experiment with their method on a variety of datasets, including one for which they use human domain experts to craft labeling functions. I greatly appreciate the application to a real-world scenario! Cons (listed in order of importance to my score): 1. I found it difficult to assess the significance of the knowledge shift experiment results presented in Figure 2 and Figure 3 due to a lack of any results from baseline models. 
As such, I would appreciate it if the authors could add results from their baseline models to these Figures to (1) illustrate the severity of the knowledge shift problem and (2) (potentially) better illustrate the advantages of WARM over previous work. 2. As mentioned by the authors in Section 5, the active learning baseline outperforms WARM on half of the tested datasets. The authors say that this is due to some datasets being “simpler” than others, though I was not clear as to what “simpler” meant here. I would be willing to raise my score if the authors could provide experimental results that clearly illustrate the reasons behind the poor performance of WARM on these “simpler” datasets. For example, if “simpler” means having fewer data points, an experiment that assesses WARM's performance on simulated data with varying dataset size would be helpful to better understand when WARM may outperform existing methods vs. when it may underperform them. 3. Some of the terminology relating to the problem formulation/labeling functions is not precisely defined within the manuscript and can be confusing for readers not already familiar with data programming (e.g. I had to look outside the manuscript for a definition of the “polarity” of a labeling function). 4. The literature review in Section 2 is very data-programming specific, and does not discuss other recent approaches to active learning (e.g. [1,2,3]). Expanding the related works section would help put WARM in a broader context and assist the reviewer in assessing the significance of WARM. [1]: “Learning Active Learning from Data” (NeurIPS 2017) [2]: “Learning Algorithms for Active Learning” (ICML 2017) [3]: “Active Learning with Partial Feedback” (ICLR 2019) Overall I enjoyed reading this paper. The authors’ method appears methodologically sound and it seems to provide considerable benefits when applied to complex datasets. However, there are some weaknesses in the manuscript that somewhat undermine the paper’s story. For now I am recommending a weak accept, and I am willing to revise my score upwards if the authors address my concerns. <doc-sep>This paper proposes an algorithm for choosing a small set of labels to improve labeling function model performance both directly and for downstream tasks. Additionally, the authors provide a general method to convert standard labeling functions to "soft" labeling functions which are differentiable with respect to some parameters (e.g. a threshold). If the labeling functions are differentiable, this paper provides a method to update the labeling function parameters. Finally, experimental results show that the method introduced outperforms other active labeling approaches for weak supervision. In equation (1), the \\propto seems like the wrong relation since the left hand side is actually the softmax of the right hand side. Why are the labels not used in estimating the labeling function accuracies? In figure 3.(i), why is the performance not monotonic? In particular, the curve without noise peaks with 20 labels and then starts to deteriorate. It's interesting to note that WARM's downstream performance never improves over the accuracy of the label model. It seems that a piece of the weak supervision pipeline is broken here. Perhaps using a more expressive model (random forests?) would be more appropriate for the downstream model. 
I'm surprised that the active learning baseline generally outperformed the weak supervision methods even though the weak supervision methods have access to extra information (the labeling functions trained on the whole dataset). Minor things: I think the arguments of $p_\theta$ are swapped throughout the paper. Look at eq (2), eq (3), and line 4 of the algorithm. The method proposed in this paper does not contain any particularly novel ideas and seems to be based on heuristics (maybe the proposed quantities could be derived from more general principles?). Additionally, it appears that weak supervision is not appropriate for the paper's empirical settings as seen by the lack of improvement from the downstream model and the stronger performance of non-weakly-supervised methods (active learning). <doc-sep>The paper gives a method to iteratively and interactively improve the label model in weak supervision. The approach consists of two steps: the first is the standard weak-supervision step of taking a weighted combination of labelling functions to generate labels. The novelty and improvement mainly come from the second step, where the true label of the most uncertain data point is queried and used to improve the parameters of the labelling functions, which in turn leads to a more accurate weak supervision model. A key requirement and assumption in this paper's setup is that labelling functions are given by some learnable parameters (i.e. they can be differentiated w.r.t. their parameters), which allows parameter updates using the true labels acquired. Empirical results on various real-world datasets in the medical domain show that in some cases this approach can yield a more accurate model in comparison to a pure active learning approach and some recent baselines which combine weak supervision with active learning. These results also show that the paper's approach can reach accuracy comparable to a fully supervised model as well. The main strength of the paper lies in effectively combining active learning and weak supervision to generate high-quality labels. The proposed approach incrementally improves the labeling functions by using the true labels obtained in each round. To achieve this they need LFs to be differentiable, which can be a drawback in some cases. Apart from these few changes, it is still using most of the existing weak-supervision machinery (i.e. label model, accuracy estimation, and weighted combination of labeling functions to generate final labels, etc.). So while the novelty might seem limited, I think it is useful in the sense that it may require minimal changes in existing weak-supervision setups to enhance them using active learning. I don't see major problems with this work, except a few. Firstly, interpreting the accuracy results in Table 2 is slightly difficult since class distributions in the datasets are not provided, i.e., are the datasets imbalanced? From the results it looks like this approach works better than active learning in cases where one can obtain good labeling functions from experts, and otherwise it does not. There might be cases where obtaining expert labeling functions is more costly than obtaining labels (which are probably easier than writing LFs). It might be worthwhile to discuss these limitations in the paper. Overall, I think it's a nice paper with a sound and simple approach to improve weak supervision's label quality using active learning. The contributions are novel and useful in practice. I am inclined towards accepting this paper.
I need some clarifications in the experiments section to be more confident in this assessment.
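As a side note on the query step this review refers to (the true label of the most uncertain point is requested in each round), the sketch below illustrates generic entropy-based uncertainty sampling; it is an assumed, minimal example, not the paper's actual selection rule.

```python
# Minimal, assumed sketch of entropy-based uncertainty sampling (not the paper's code):
# pick the unlabeled point whose label-model posterior is closest to uniform.
import numpy as np

def most_uncertain(posteriors: np.ndarray) -> int:
    """posteriors: (n_points, n_classes) probabilistic labels from the label model."""
    eps = 1e-12
    entropy = -(posteriors * np.log(posteriors + eps)).sum(axis=1)
    return int(entropy.argmax())

probs = np.array([[0.90, 0.10], [0.55, 0.45], [0.70, 0.30]])
query_idx = most_uncertain(probs)  # -> 1: the most uncertain point gets a true label
```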
The authors propose WARM, a novel method that actively queries a small set of true labels to improve the labeling functions in weak supervision. In particular, the authors propose a methodology that converts the labeling functions to "soft" versions that are differentiable, which are in turn learnable from true labels using proper parameter updates. Empirical results on several real-world datasets demonstrate that the method yields strong performance. The reviewers generally agree that the idea of making the labeling functions differentiable is conceptually interesting. They are also positive about the simplicity and the promising performance. They share concerns about whether the idea has been sufficiently studied in terms of the design choices and completeness of the experiments. For instance, the authors could conduct a deeper exploration of the trade-offs of differentiable LFs. They could also study active learning strategies beyond basic uncertainty sampling. While the authors provided additional exploration and ablation studies during the rebuttal, the results are generally not sufficient to convince most of the reviewers. In future revisions, the authors are encouraged to clarify the paper's position with respect to existing works that combine active learning and weakly-supervised learning. The authors position the paper as more empirical than theoretical, so the suggestion from some reviewers about more theoretical study is viewed as nice-to-have but not a must.
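To make the "soft", differentiable labeling functions discussed above more concrete, here is a minimal sketch under assumed details (the feature-threshold form, the temperature, and the training loop are illustrative choices, not WARM's actual implementation): a hard threshold vote is relaxed with a sigmoid so it becomes differentiable with respect to the threshold and can be tuned on a handful of actively queried true labels.

```python
# Illustrative sketch only -- the threshold form and update rule are assumptions,
# not WARM's actual implementation.
import torch

class SoftThresholdLF(torch.nn.Module):
    """A hard LF "vote +1 if x[idx] > threshold" relaxed into a differentiable one."""
    def __init__(self, feature_idx: int, init_threshold: float, temperature: float = 10.0):
        super().__init__()
        self.feature_idx = feature_idx
        self.threshold = torch.nn.Parameter(torch.tensor(init_threshold))
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output in (0, 1), interpreted as P(vote = +1); differentiable w.r.t. threshold.
        return torch.sigmoid(self.temperature * (x[:, self.feature_idx] - self.threshold))

# Tuning the threshold on a few queried true labels y in {0, 1}:
lf = SoftThresholdLF(feature_idx=0, init_threshold=0.5)
opt = torch.optim.SGD(lf.parameters(), lr=0.1)
x, y = torch.rand(8, 3), torch.randint(0, 2, (8,)).float()
loss = torch.nn.functional.binary_cross_entropy(lf(x), y)
loss.backward()
opt.step()
```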
The paper introduces an environment called Honor of Kings Arena, which is derived from the popular mobile game Honor of Kings. The environment can be used for evaluating the generalization ability of agents in competitive games. The authors publish the environment engine and provide easy-to-use interfaces along with detailed specifications. By comparing the performance of two different RL-based methods (i.e. PPO and DQN) and one rule-based method (i.e. BT), the results show that the environment is efficient for training learning algorithms. Finally, the paper demonstrates that the environment imposes generalization challenges across opponents and targets on the agents. The Honor of Kings Arena environment focuses on generalization ability evaluation in competitive games, which is a research hotspot in the reinforcement learning domain. The authors conducted several experiments to show that the environment is of high performance, and is feasible for comparing the generalization ability between different agents. The engine is accessible and the APIs are well-designed. RL research related to generalization is expected to benefit from the environment. The generalization challenges of Honor of Kings Arena come from different opponents and different targets. Actually, more challenges can be manufactured in this environment, simply by introducing various goals or small tasks, such as "killing the opponent hero as fast as possible", "gaining as much money as possible", or "cooperating with other heroes on some small tasks". Different goals can make the agents more generalizable, and small tasks can lower the bar for training and learning. <doc-sep>This paper presents a benchmark based on the game Honor of Kings, where a player can take on one of many different roles and compete against a similarly wide selection of opponents. The motivation is that generalization across these two axes (targets and opponents) is equivalent to generalization across several game dynamics including action control. The key contributions are: - An optimized and simplified 1v1 Honor of Kings game engine and interface for RL research. - A series of baselines and experiments. - Speed and accessibility of implementation. - Existing interest in this game environment from the community. - Comparison of direct transfer learning, multi-task, and distillation setups was interesting. - The generalization framing is not very well supported, motivated, or contextualized with regards to prior works. It's not really explained or justified why the type of variation captured by this benchmark (aka certain gameplay elements via the choice of target and opponent) is significant or interesting compared to existing benchmarks. - It is unclear if this is a challenging enough benchmark to drive forward research progress, given the performance of the baselines. <doc-sep> This paper introduces a new RL simulation environment called "Honor of Kings Arena" for training agents to play **1 vs 1** Honor of Kings, a popular multi-player game worldwide. *Honor of Kings Arena* has an efficient game engine, and combined with the authors' training pipeline, researchers can train a viable agent in just 7 hours with 128 CPU cores and (presumably) a GPU. Further, the authors suggest Honor of Kings Arena can be an excellent testbed for measuring generalization because the agent needs to learn to 1) control different heroes with different skills and 2) play against different heroes. 
They have conducted a series of experiments, showing that the agents could overfit and lose to the heroes they were not trained against. To mitigate this issue, the authors have experimented with more diverse training heroes and model distillation that showed promising results. The authors clearly put tremendous effort into open-sourcing this testbed, and this work has great potential. However, I can't give out a high rating given the accessibility and documentation issues described below. - The paper presents the first open-source RL interface to a commercially successful MOBA game. - The RL interface is well-designed and easy to follow. - The authors show detailed experiments on the effects of computational resources - The authors have conducted preliminary experiments examining the RL agent’s generalizability and explored ways to improve generalization. - **Accessibility is a big issue**. While Honor of Kings Arena (the RL interface) is licensed under Apache 2.0, running it requires downloading the "hok gamecore”, which requires users to fill out a questionnaire to request access and sign a non-commercial agreement. This practice may violate the key criteria listed in the dataset and benchmark track submission instructions: > “datasets should be available and accessible, i.e. the data can be found and obtained without a personal request to the PI, and any required code should be open source.” While I understand the “hok gamecore” can be difficult to open source, I think the authors should consider dropping the access request and the non-commercial agreement, such as the case for PySC2, StarCraft II Learning Environment ([https://github.com/deepmind/pysc2](https://github.com/deepmind/pysc2)). - If eventually an access request and signing a non-commercial agreement are required, this requirement should be clearly communicated in the paper, which is not done in Section 3. - **The documentation and setup seem poorly organized.** See comments below. <doc-sep>The paper presents a benchmark for a competitive 1v1 digital game (Honor of Kings) oriented toward testing RL algorithm generalization when facing different opponents and using different characters with differing action spaces. The benchmark provides access to run a performant game simulator and a gym-like reinforcement learning environment API to the simulator. Included are baseline algorithms including human authored rule-based behavior and baseline RL algorithm implementations stemming from prior research on the game. The paper contributes a scalable game simulator that can stress RL agent generalization across action spaces and different opponent types. Initial examples demonstrate the failures of baseline RL techniques (DQN, PPO) to generalize in either sense on this benchmark, showing it is an open problem. **"Real world" game simulator**. The game provided is a popular and highly competitive game. Prior work has not provided access to this level of game complexity, instead focusing on more "toy" tasks. Contributing access to this type of game can spur further research in RL in general for these types of environments, including the generalization topic the paper emphasizes. While practitioners have worked on these domains in partnership with companies (OpenAI with Valve for DOTA2, DeepMind with Blizzard for StarCraft II), there has not been access for other researchers. **Performant and scalable benchmark**. A single research machine can run experiments in a reasonable amount of time (< 1 day). 
With a cluster the benchmark can scale to much larger training. RL research is typically limited by one of these problems: being too slow for a single researcher to experiment on a local machine or being too limited in infrastructure to leverage more resources when needed. This makes the benchmark valuable to a wide range of research applications and practitioners. **Important open problem**. Despite successes in RL for targeted problems in games, generalizing well across opponents (in the multiagent setting) remains an open challenge. To date no publicly available systems have exposed this problem, limiting research on the topic. The topic is important in games in particular and RL in general. **Unclear relevance for action space generalization**. The paper does not articulate why using different action spaces would be relevant outside the games context. Why is it helpful to have robotic agents that can control different arms (the example given in the paper)? Of the two types of generalization exposed, this form seems more niche to the games RL community. **(currently) Requires Windows**. For better or worse most research in ML favors using linux. The requirement for the simulator to use Windows will be a (minor) impediment to adoption. The forthcoming linux release mentioned will alleviate this, but it is not currently available and has no timeline for release stated. <doc-sep>Reasonably useful dataset for RL. Unfortunately, I am not an expert in this field, but the author seems to provide suitable tools to efficiently explore the action space. Benchmarks demonstrate that basic hardware is capable of utilizing this data generator and training a RL model. Generalization across heroes can be of interest for transfer learning. Dataset appears to be useful for future research. Isn't exactly innovative. Other sources for RL data generation are available. Generalization across heroes is not particularly innovative as starcraft 2 provides generalization across races. Missed potential for multi agent team based RL.
This paper introduces a novel RL benchmark, "Honor of Kings Arena", based on the popular mobile game Honor of Kings. There are two modes of evaluation: a single-agent task of beating the built-in computer AI, and a competitive two-player zero-sum setting. The game offers interesting challenges from a generalisation and transfer point of view and will make a valuable contribution to the research landscape. The authors have included basic baselines in the evaluation. One suggestion I have is to implement the training on the GPU to reduce computational overhead for scientists.
While there is an extensive literature on both multi-agent reinforcement learning (MARL) and offline RL separately, this work explores a setting that combines the two. In the MARL setting, the size of the joint action space grows exponentially with the number of agents, which is a key challenge in this subarea. This paper investigates how to tackle this problem and whether we can find a Nash equilibrium strategy in the offline MARL setting given unilateral coverage. Specifically, to construct a confidence interval for each state-action pair in the offline MARL setting, the authors propose to estimate each strategy separately to circumvent the dependence on the joint action space, which grows exponentially with the number of agents. Based on this, the authors also develop two different algorithmic frameworks in the tabular case, one for offline two-player zero-sum Markov games and one for offline multi-player general-sum Markov games. This paper helps to improve our current understanding of algorithms for offline MARL in the tabular setting. The major contribution is the proposed strategy-wise bonus, which first estimates the state value functions for a strategy and then adds the bonus to them. However, if the strategies are complicated and the strategy class is therefore not well structured, computing the state value functions for a strategy could be challenging. Given this, for two-player zero-sum Markov games, the proposed framework utilizes a strategy-wise bonus as well as a maximin-optimization-type algorithm. For the multi-player general-sum Markov game, as there is no saddle point structure, this paper proposes a surrogate function to minimize the performance gap, which, however, requires enumerating the strategy class in the worst case and is thus not computationally efficient. Another concern regarding the current version is that the data assumption adopted in this work (Assumption 1) assumes that all the tuples in the logged dataset are independent, which is not true in most real-world scenarios. In many real-world applications, this assumption is very restrictive on the dataset and usually does not hold. To derive practical algorithms based on the theoretical analysis and test the efficacy of the proposed methods in practice, the work should consider the impact of violating this assumption on algorithm performance. The authors have addressed some potential limitations. <doc-sep>In this paper, the authors study offline multi-agent RL. They propose the strategy-wise concentration principle in contrast to the point-wise concentration principle in existing works. Based on the new concentration principle, they propose novel algorithms with strategy-wise bonuses for both zero-sum and general-sum games, which achieve better sample complexities than existing works. The sample complexity of the new method does not scale with the size of the joint action space. Moreover, the proposed algorithms can take a pre-specified strategy class as input and output a strategy that is close to the best strategy in that class. This paper proposes novel algorithms with the strategy-wise bonus, which leads to better sample complexities than existing works. I think it is an important contribution to the theoretical analysis of offline Markov games, as the new sample complexity result does not scale with the size of the joint action space.
In addition, the writing of this work is good in general and the paper is well structured. However, I was not able to find a discussion of the limitations in the paper although the authors answered yes in the checklist. It would be much better if the authors could explicitly summarize the limitations of their work in the conclusion section or in the checklist. Besides, is it possible to apply such a strategy-wise bonus to the online Markov game setting? <doc-sep>This paper considers offline multi-agent reinforcement learning. The sample complexity for multi-agent general-sum MDPs is at least $N = \prod_{i=1}^m A_i$, where the $A_i$'s denote the effective action space size for each agent (i.e., the number of actions obeying $\pi_i(a) > 0$). Specifically, to ensure $\hat{C} < \infty$, one needs at least $N$ samples such that $\hat{d}_h(s, a) > 0$ whenever $d_h^{\pi}(s, a) > 0$. In addition, the assumption $C < \infty$ implies there are at least $N$ actions satisfying $d_h(s, a) > 0$, which leads to $1/p_{\min} \ge N$. Hence, all results for the multi-agent case in Section 3.2 exhibit an exponential dependence on the number of agents, except in the case where almost all agents choose only one action. On the other hand, it is well known that finding the NE in the multi-agent case is computationally hard. Is it possible, then, to compute a CCE efficiently for offline multi-agent RL? The sample complexity for multi-agent general-sum MDPs is exponential.
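For a sense of scale, the following is a small illustrative calculation of the joint-action-space blow-up underlying the argument above (the numbers are my own, not taken from the paper):

```latex
\[
  |\mathcal{A}_{\mathrm{joint}}| \;=\; \prod_{i=1}^{m} A_i ,
  \qquad\text{e.g.}\quad m = 10,\; A_i = 5 \;\;\Longrightarrow\;\;
  5^{10} = 9{,}765{,}625 \text{ joint actions.}
\]
```

Any point-wise coverage condition of the form $\hat{d}_h(s,a) > 0$ over all relevant joint actions $a$ therefore requires a dataset whose size grows exponentially in the number of agents $m$.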
Reviewers appreciate the paper's contribution to a novel intersection of fields: offline and multi-agent RL. While feasibility of the results is limited to cases where prior knowledge allows strategy-wise decomposition, it is nonetheless an interesting step in this field. Reviewers are concerned that the above substantial limitation of the work has not been sufficiently discussed in the paper, and the authors are asked to clarify this aspect in a subsequent revision.
The authors introduce a physical systems evaluation dataset framework that focuses on evaluating machine learning algorithms for simulation problems. They provide four systems of increasing difficulty to evaluate baseline methods (spring, wave, Navier-Stokes, and spring mesh), explore the trade-offs between derivative-based prediction and step prediction, and advocate the use of k-nearest neighbors to better understand the complexity of different simulation tasks. The paper is very concise, easy to follow, and well illustrated. The authors do a great job motivating the four representative benchmark physical systems, and provide a comprehensive array of baseline data-driven methods. While neither of these is exhaustive, the flexibility of their framework allows for the integration of other learning tasks or machine learning methods. All in all, their contribution promisingly lays the groundwork for future research in the field of scientific computing. While the graphs are typically easy to follow, I was slightly confused by repeated colors in Figures 3 and 4. My understanding is that same-color marks are different architectures of the baseline methods. However, different architectures could have significantly different strengths and weaknesses. For instance, shallow MLPs are typically more robust to noisy datasets than deep MLPs. Do you believe that readers would benefit from having more fine-grained labels for the methods (e.g. shallow vs deep MLP/CNN) in Figures 3 and 4? Further, due to the non-deterministic nature of some NN-based approaches, it would make sense to average results over multiple runs. In the main paper there doesn't seem to be any indication of this. Were the results presented as averages over multiple runs? <doc-sep>The present work is motivated by the need for a) thorough evaluation of data-driven approaches in scientific computing pipelines and b) the lack of standardized benchmarks in the literature. The authors focus on physical simulation benchmarks that a) map a high-dimensional state space into another high-dimensional space (as in temporal integration schemes, mapping the state of the system at one time step to the next), or b) map a high-dimensional input space to a lower-dimensional output (as in surrogate models, mapping the initial conditions to a functional of the solution). In addition, the paper addresses the narrow data regime, where initial conditions are sampled from a low-dimensional manifold (even within a high-dimensional state space), and the wide regime, where initial conditions span a truly high-dimensional space. The main contribution of the paper is the presentation of a suite of simple, representative physics problems along with reference numerical solutions for traditional time integration schemes to benchmark data-driven methods (MLPs, CNNs, kernel machines, nearest neighbors). The key conclusion of the paper is that, even in the simplest physical models, current data-driven pipelines, while providing qualitatively acceptable solutions, are quantitatively far from directly numerically integrating physical models, and this performance gap appears infeasible to close by merely scaling up the models and/or the dataset size. Another finding is that a simple L2-based nearest neighbor regressor outperforms most deep learning models in the narrow regime for complex systems such as incompressible Navier-Stokes systems.
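The following is a minimal sketch of the kind of L2 nearest-neighbor baseline referred to above; the data shapes and rollout loop are illustrative assumptions, not the benchmark's actual code.

```python
# Illustrative sketch of an L2 nearest-neighbor next-step predictor (assumed setup,
# not the benchmark's actual implementation).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 64))       # placeholder for flattened simulation states s_t
next_states = states + 0.01 * rng.normal(size=states.shape)  # corresponding s_{t+1}

knn = KNeighborsRegressor(n_neighbors=1, metric="euclidean")
knn.fit(states, next_states)

# Rollout: repeatedly predict the next state from the nearest training state.
s = states[0]
trajectory = [s]
for _ in range(20):
    s = knn.predict(s[None, :])[0]
    trajectory.append(s)
```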
-- Provides a standard benchmark for contrasting traditional physics-based numerical solvers against data-driven methods. -- I like the progression of complexity of the benchmarks and the exploration of narrow and wide data regimes. Such a standard benchmark allows for systematic evaluation and tradeoff analysis. -- The experimental results provide sufficient insights. -- The code and data are made accessible and are extensible. -- No ethical/social implications exist. -- Apart from highlighting the deviations in results between data-driven and traditional methods, it is not entirely clear how the benchmark results point to ML algorithm improvements. (The authors state this as a point of strength of the paper, but it is not obvious what they imply by this.) -- While I like the organization of the benchmark in the paper, I would have assumed that a simpler differential system with a well-known closed-form solution may be an interesting option as a baseline. -- The paper would be strengthened significantly if other aspects not covered by the authors (described in the limitations section) were included, such as the missing timing analysis. <doc-sep>This paper introduces a benchmark for data-driven physical system simulation methods. The authors provide a dataset of four physical systems (spring, wave, spring mesh, Navier-Stokes) including data, data generation methods, code, and documentation. They then conduct a variety of experiments using kNN, NN kernels, MLPs, and CNNs. This work is interesting and important for learning physical systems. The descriptions are also detailed. However, the value of this benchmark needs to be explained further; please see Weaknesses. - This is hard work (850h on GPU + 2180h on CPU). It is really difficult to process such a large amount of data and keep it unified and correct. - The data, source code, and documents are all well organized. - The generation methods, documents, and experiments are described in detail. For a dataset and benchmark paper, the contributions should be novel/original data, comparison and analysis of state-of-the-art methods, and the impact on the future of related fields. I hope the authors could give more explanation about these points. - The paper claims that the proposed benchmark takes a step towards unified evaluation protocols and metrics, but this is unclear. - Data. Complex systems are more important for simulation problems. The value of the simple-system data needs to be explained. The data of the four systems are also generated in existing papers, such as spring in [7] and Navier-Stokes in [48]. What is the difference between the provided data and the previous data? - Benchmarks. Only simple baseline models are tested (except for a U-Net on Navier-Stokes), and the paper claims that these models cannot solve the problem well. However, advanced models have been proposed and compared with the baseline models in existing papers. SOTA models should be tested and analyzed.
This paper introduces a physical evaluation dataset framework for scientific computing pipelines that map one high-dimensional state space into another high- or lower-dimensional one, providing a suite of simple, representative physics problems. Reviewers appreciated its motivation, clarity, comprehensiveness, and overall contribution to the space.
This paper proposes a deep state space model for videos. The dynamics are defined by linear Hamiltonian dynamics, and the motion matrix is further assumed to be block diagonal in order to separate different categories of actions. As in previous works, a latent variable z is introduced for explaining content and kept fixed for all frames. Experiments are carried out on Sprites and MUG to demonstrate the efficacy. Strengths: The paper is well written and easy to follow. The idea of introducing Hamiltonian dynamics as an inductive bias for explaining repetitive or cyclic motions in videos is reasonable and natural. The theoretical derivation is technically sound. Weaknesses: My main concern is that the proposed method is not well supported by the experimental results: - I don't understand how SSIM and PSNR can be used for evaluating "generation quality", as generated samples are supposed to be different from the training datasets. I can only imagine that the numbers in Table 1 are reported for reconstruction, in which case they do not measure generation quality as described by the paper. Also, no baseline methods are compared in terms of reconstruction. - No commonly used metrics designed for actually evaluating sample quality, such as FVD or Inception scores, are reported. - For disentanglement evaluation, it is not fair to compare the conditional Halo model with the baselines, which are trained unconditionally. The only fair way is to compare unconditional Halo models with the baselines, in which case the performance of the proposed model does not stand out. - In the ablation study section, I am confused that the paper mentions the RNN or linear dynamics model cannot make the image move, yet it also shows results of swapping the motions of the two baselines where the video sequences change over time. It also sounds strange to me that the linear/RNN dynamics cannot do image-to-sequence generation, as such models have been applied in many classical state space models. In the experiments, two operators H and skew-H are compared, but there is no formal definition of skew-H in the preceding sections. The model assumes that the action space can be divided into subspaces where each subspace represents a unique action. This representation can be highly ineffective if the number of actions grows large. Yes. <doc-sep>The paper proposes Halo, a novel type of variational autoencoder with a structured latent space, and demonstrates its applications to different types of (controlled) video generation tasks. The main contribution is a principled decomposition of the latent space into a content space and a motion space, where the motion space is modeled using Hamiltonian dynamics. The structural constraints (e.g., symmetries) imposed by these dynamics induce desirable properties like reversibility and volume preservation, enabling the conservation of (learned) quantities. **Strengths** + Video generation is an important and challenging task with a rich history. The proposed approach takes a fresh perspective on this topic and explores the benefits of inductive bias based on principles rooted in the physics community. + Sections 1-3 (introduction, related work, method) are well-organized and easy to follow: the main contributions are clearly formulated, the figures are helpful in understanding architectural details, and the mathematical notation is (mostly) consistent. The related work section is commendable and provides a comprehensive overview of the field.
The paper does have a fairly strong physics flavour and I would recommend to provide stronger guidance for an audience which may not be familiar with topics that are not part of core ML, such as group action, phase space, conservation law, and symplectic geometry. + The Hamiltonian design of the latent space is interesting and novel, and the advantages of reversible and symplectic latent dynamics make intuitive sense. I also appreciate the principled derivation of the dynamical model $f$ from a constant-energy perspective (l.206-l.214). The variational inference section is less clear and I would encourage the authors to move at least some intuition about the ELBO from the Appendix into the main paper. **Weaknesses** - The weakest part of the paper are its experiments, both in terms of their presentation and design. - Presentation: - The structure of the experiments is confusing throughout section 4. For example, the description of the Sprites and MUG datasets starts in the middle of the “Rotating Balls” paragraph (l.262). Likewise, the description of the baseline comparison starts in the middle of the “Quantitative Evaluation” section (l.302). Grammar and text flow in the experiment section also feel unpolished. Finally, none of the figures in this section have proper axes/labels and the reader needs to count rows and infer the content from the caption or even the main text (l.348-350 for Figure 5 (left)). - Design: - Since Table 1 does not include a comparison to other baselines it is not possible to assess whether the presented SSIM/PSNR/MSE scores are competitive or not. Why not use the same metrics as in Table 2? - Table 3 is not mentioned in the text and seems to be based on the single example of Figure 5 (right), which is not enough to make any general statements. The positional encoding mentioned in this table is not explained and not supported by any qualitative evidence. - In Figure 4 (left) the reconstructed and generated sequences look fairly similar, which can be an indication of low diversity. - It is unclear how the sequences of the rotating balls dataset were generated as the mentioned constraint does not specify any temporal pattern. What is the dynamic model used here? - The sequences are very short (8/16 frames) and small (64 x 64). What is the main bottleneck that prevents application to high-fidelity image sequences? **Minor comments** - Typos: Figure 1 (“alongwith”), l.224 (“long term term”), l.234 (“(6))”), l.258 (as”blue”), l.283 (“EvaluationWe”) - The Appendix provides valuable information about the ELBO objective, terminology, and network structures, but the main paper does not refer to it often enough (e.g., content/position/momentum network). - The paper follows a top-down approach, first introducing high-level structures and then filling in the details. While that is a reasonable approach, it does mean that readers will have to read the paper twice (or go back to previous paragraphs), because the motivation for some design choices remains initially unclear. One example is the structure of the phase space. **Summary**. I appreciate the technical formulation of this paper but am on the fence due to the weak and unconvincing experiments. I encourage the authors to address the concerns above as well as the questions below. - The paper flags potential misuse in the area of fake video data generation. - The paper does not contain a limitations section. <doc-sep>This paper deals with the task of generating image sequences. 
Specifically, the authors propose a method called Halo that disentangles content from motion in image sequences within the VAE framework. They do so by separating the latent space into two spaces: 1) the content space, a global content vector that summarizes the image sequence; 2) the motion space, a sequence of time-dependent vectors that capture the dynamics of the sequence. The main contribution of the authors is to model the motion space with Hamiltonian dynamics. The authors claim that Hamiltonian dynamics have good inductive biases for sequence generation, such as reversibility of the motion. Experiments on simple image sequences are performed to demonstrate the quality of their model. Strengths * 1) The latent Hamiltonian operator is quite generic. It could be extended and used with other families of deep generative models for sequences, and thus be of great interest for practitioners. * 2) Halo achieves SOTA scores on motion/content disentanglement metrics. Ablations with similar architecture and other sequence models are convincing (especially Table 1 of the Supplementary Material). Weaknesses * 1) Stochasticity is only allowed by the Gaussian sampling, but there is no stochasticity in the Hamiltonian operator. Thus, Halo can only generate one motion vector given input frames. However, trajectory prediction is a highly stochastic process. This could be a limitation that makes scaling to more complex environments difficult. * 2) How does the method guarantee that no motion information leaks into the global content vector? Since the encoder + LSTM that generates the global vector sees the whole input sequence, it could also capture some information about the motion. * 3) Limited evaluation: the model is tested on three simple datasets. It could be interesting to see how it performs on more complex datasets, with different/moving backgrounds. It also lacks comparisons with recent works, e.g.: a) Franceschi et al., "Stochastic latent residual video prediction". ICML 2020. b) Wang et al., "G3AN: Disentangling appearance and motion for video generation". CVPR 2020. * 4) Clarity. While the paper is well written, some implementation details are missing for the main component of the paper: the Hamiltonian operator (see questions). This affects understandability. The authors addressed the limitations.
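For readers less familiar with the terminology, the sketch below illustrates a generic linear Hamiltonian flow in a latent phase space and the properties the reviews mention (energy conservation, volume preservation, reversibility); it is an assumed, simplified example, not Halo's exact parameterization.

```python
# Generic linear Hamiltonian flow in a latent phase space z = (q, p).
# Simplified illustration only -- not Halo's actual architecture.
import numpy as np
from scipy.linalg import expm

d = 4                                                # dim of positions q and momenta p
J = np.block([[np.zeros((d, d)), np.eye(d)],
              [-np.eye(d), np.zeros((d, d))]])       # canonical symplectic matrix

A = np.random.randn(2 * d, 2 * d)
S = A + A.T                                          # symmetric -> H(z) = 0.5 * z^T S z
M = J @ S                                            # linear Hamiltonian field: dz/dt = M z

dt = 0.1
Phi = expm(dt * M)                                   # one-step flow map z_{t+1} = Phi @ z_t

H = lambda z: 0.5 * z @ S @ z
z = np.random.randn(2 * d)
z_next = Phi @ z

# The flow conserves H, is volume preserving (det = 1), and is reversed by expm(-dt * M).
print(np.isclose(H(z), H(z_next)), np.isclose(np.linalg.det(Phi), 1.0))
```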
This paper proposes a novel type of variational autoencoder, referred to as Halo. The latent space is decomposed into a content space and a motion space, and the main contribution is the proposal to model the motion space using Hamiltonian dynamics. All reviewers agree that the idea of using Hamiltonian dynamics is interesting and novel. One main critique, which the authors agreed with, was that the operator does not contain any stochasticity and that this might be a limitation when applying the idea to model more complex data. Another remark was that the experiments are limited and experiments on less constrained data are missing. A quick look at the baseline methods revealed that they also use the same kind of datasets to evaluate their methods, so this latter concern might be of minor importance. All in all, the potential positive outcomes of this paper outweigh its current limitations, so we recommend acceptance at this point, while urging the authors to address the remaining concerns in the final version.
In this paper, the authors propose an enhanced GAT model named SuperGAT by adding link prediction as an auxiliary task when conducting node classification, and compare different attention forms. The authors have also conducted evaluations on both synthetic datasets and real-world datasets to analyze how different attention methods perform on various data and task types. In general, the paper is well written and easy to follow. However, there are still several issues the authors need to address: First, the authors need to better justify the novelty of the proposed method. The claimed self-supervision task can be considered a general link prediction task where the attention weights are used as features. Though the authors propose two new attention forms, i.e., 1) scaled dot-product and 2) mixed GO and DP, these are normalized and combined versions of existing attention mechanisms. I suggest the authors better justify the novelty of the proposed technique and how it differs from existing work. Another major concern is the experiment settings. It is good to see that the authors raise several research questions to guide experiment design. However, some assumptions in these research questions are questionable. For example, in RQ1, the authors claim that “ideal node representation can be generated by aggregating only neighbors with the same label”. As the neighborhood information of a node can also be informative when predicting node labels in certain cases, a good node representation does not necessarily need to aggregate only neighbors with the same labels, which makes the proposed method that uses KL divergence to compare label-agreement and graph attention questionable. I suggest the authors provide more justification for this assumption. RQ2’s primary goal is to understand how different graph attention methods perform for the link prediction task; it would be better if the authors could justify why they didn’t conduct experiments where only the link prediction (self-supervised) loss is used and the node classification task is discarded. In RQ3, the authors hypothesize that “different graph attention will have different abilities to model graphs under various homophily and average degree”. This is probably true. However, given the many graph properties (e.g., degree distribution, graph diameter, and average clustering coefficient) and model configurations (e.g., # of layers and task type), it is unclear why the authors choose these two controlled variables and why they believe they are the most important ones. I suggest the authors provide more rationale on how they chose the controlled variables and how other factors may impact model performance. Another minor question is why the authors add an activation function to $e_{ij, DP}$ in Eqn. 4, given that $e_{ij, DP}$ is already a dot product that indicates the weight of a link. It would be better if the authors could elaborate more on the design rationale. In summary, I think the authors focus on an interesting problem but need to further address the issues listed above. <doc-sep>Summary: =========== The paper provides an interesting direction for improving Graph Attention Networks. More specifically, the authors propose a self-supervised graph attention network (SuperGAT) designed for noisy graphs. They encode positive and negative edges so that SuperGAT learns more expressive attention.
They focus on two characteristics that influence the effectiveness of attention and self-supervision: homophily and average degree. They show the superiority of their method (4 variations) by comparing it with many state-of-the-art methods, on 144 synthetic datasets (with varying homophily and average node degree) and 17 real-world datasets (again with varying homophily and average degree). Reasons for score: =========== Overall, I vote for accepting. I find the idea of using self-supervision to improve graph attention networks very interesting, and the experiments are nicely done and convincing. The authors do impressive work to include as much information and as many results as they can in the given space. Strengths: =========== - The paper is about an interesting problem in the ICLR community. Graph Attention Networks have gained a lot of attention in recent years from researchers in the field of graph and node representations, with applications in node classification and link prediction. The idea of adding recent advances in self-supervised learning to improve the learnt representations seems very promising. - The authors have done a great job with the structure and the presentation of the paper. The paper is well-written, and especially the Experiments and Results sections are well-structured and contain a lot of packed detail on the design and the outcome of the experiments. More specifically, Figure 4 stands out, as it contains, in very limited space, the information for the best-performing model on all 144+17 graphs! - The contributions of this work include the proposed method (GATs with self-supervision), but also an analysis for selecting the best model depending on two important features of the graph (homophily and average degree). - The authors compare their method with all state-of-the-art methods and also four variations of their own model, and make their evaluation exhaustive by testing performance on 144 synthetic graphs and 17 real-world datasets (including the benchmark datasets usually used in this domain). Weaknesses: =========== - The proposed method uses two known graph attention mechanisms as building blocks; it uses negative sampling and adds a cross-entropy loss for all node labels and self-supervised graph attention losses for all layers. These building blocks and mechanisms are known in the literature, and as a result the proposed method adds incremental novelty compared to the related works. - In Appendix A.3, the description and discussion of the t-SNE plots is limited or absent. It would be better to add more details to it, for example why this is a good representation and how the representations improve or not based on the hyperparameters. Also, how the results in the subfigures differ in terms of representations. It is difficult to get any insights from these plots. Questions during rebuttal: =========== - Overall, my recommendations for more analysis and insights into the results are all addressed by the Appendices. I would like a comment and clarification from the authors regarding Appendix A.3, Figure 5 (t-SNE plots), even though it is not in the main paper submission. - My understanding is that the authors are going to release the code upon acceptance; is this correct? In the repository where the code will be released, it would be useful to also add links to all 17 public datasets to ease research in the field. <doc-sep>This paper proposes a new attention mechanism, SuperGAT (with various flavours), for graph neural networks that is self-supervised.
They exploit the presence/absence of an edge between a pair of nodes to guide the attention. The authors then make the observation that the homophily and average degree of a graph influence the design of the attention mechanism. Extensive experiments are shown, where the various versions of SuperGAT are tested on 17 real-world datasets and many synthetic ones, and these results are compared against other state-of-the-art models (including the original GAT work). The paper is well-written and it is clear that the authors have done extensive experimentation to test their hypothesis. However, the paper has some weaknesses that I try to summarize below. - To obtain SuperGAT, the authors have made some tweaks to the original GAT formulation. These tweaks are minor and are not surprising or inspired by a deep/novel insight. - The choice of studying the two graph properties, homophily and average degree, seems arbitrary. What is the reasoning behind these properties? Were there other properties (e.g., diameter, degree sequence, etc.) that the authors studied that did not yield good results? This is particularly of interest due to the fact that experiments do not entirely support/explain the importance of these two properties, such as in the case of the Flickr and Crocodile datasets. - The above reasons would still not be a major disadvantage had the experimental results shown strong superiority of SuperGAT over previous models. In all the experiments that the authors perform, SuperGAT’s performance is only slightly better than older models (e.g. in Table 2, a difference between an accuracy of 72.5 and 72.6 can hardly be considered superior). And even then, SuperGAT does not have the best performance across all datasets. Proposition 1 and its proof are interesting contributions of this work, but they may not be enough. Perhaps instead of concentrating on performance superiority, the authors could look at the explainability aspect of their proposed architecture and look deeper into other graph properties that may guide the design/use of self-supervision. Another minor comment: on Page 3, the authors say “if the number of nodes is large, it is not efficient to use all possible negative cases”. The number of negative cases is a function of the number of edges and not the number of nodes. If a graph contains all possible edges, then the number of all possible negative cases is zero, no matter how many nodes there are. <doc-sep>********Summary In this paper, the authors introduce the self-supervised graph attention network (SuperGAT), which is claimed to perform well on noisy graphs. They use edge information as an indicator of the importance of relations in the graph, and then learn the relational importance using self-supervised attention. After learning the attention values using their self-supervised method, they can predict the likelihood of an edge between nodes. They worked on two popular attention mechanisms, GO and DP, and showed in their experiments that DP has better performance than GO for the link prediction task, while GO has better performance on label-agreement between nodes. The other question they answered in their experiments was which attention model to choose. They introduce a recipe based on two graph characteristics: homophily and average degree. ********Positives - One thing I liked about this paper was the thorough and neat experiments. I enjoyed the way they designed their experiments by posing several important questions followed by their answers, backed up with experiments.
They used the two attention mechanisms as the base, then applied their method to both for the link prediction and label-agreement tasks and compared the results. - I also liked that they examined their recommendation for the choice of attention model on real-world datasets, and their answer for real-world data was similar to that for synthetic data. - The paper was well-organized and well-written. They clearly explained their method. ********Notes - I would recommend adding a figure showing their architecture and comparing it with the other two attention models. - I sort of understand the reasoning as to why "GO learns label-agreement better than DP," based on the argument on page 6. A stronger argument would be helpful to explain why "DP predicts edge presence better than GO." - (minor note:) No need for the parentheses in this sentence on line 8: "Interestingly, for datasets (CS, Physics, Cora-ML, and Flickr) in which" ********Reason to accept I am in general positive about this paper. The innovation is not significant; however, their experiments were interesting, and they show empirically how well their method works. I think this research will be useful for people in this area. ******* After Rebuttal I have read the author's response
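For concreteness, here is a small sketch of the two attention forms discussed in these reviews and of how an edge probability can be read off the attention logit for the self-supervised link-prediction signal; it follows the common GAT-style definitions and is an illustrative assumption, not necessarily SuperGAT's exact equations.

```python
# Illustrative sketch of the GO and DP attention logits and the self-supervised edge
# probability (common GAT-style forms; not necessarily SuperGAT's exact equations).
import torch
import torch.nn.functional as F

F_in, F_out = 16, 8
W = torch.nn.Linear(F_in, F_out, bias=False)      # shared feature transform
a = torch.nn.Parameter(torch.randn(2 * F_out))    # GO ("original") attention vector

def go_logit(h_i, h_j):
    # GO: LeakyReLU(a^T [W h_i || W h_j])
    return F.leaky_relu(torch.dot(a, torch.cat([W(h_i), W(h_j)])), negative_slope=0.2)

def dp_logit(h_i, h_j):
    # DP: (scaled) dot product of the transformed features
    return torch.dot(W(h_i), W(h_j)) / (F_out ** 0.5)

h_i, h_j = torch.randn(F_in), torch.randn(F_in)
e_go, e_dp = go_logit(h_i, h_j), dp_logit(h_i, h_j)

# Self-supervision: treat sigmoid(e_ij) as the probability that edge (i, j) exists and
# train it against observed edges and sampled negative (non-edge) pairs.
p_edge = torch.sigmoid(e_dp)
```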
Two reviewers are very positive about this paper and recommend acceptance, one indicates rejection and one is on the fence. Although all referees appreciate the extensive experiments and analysis presented in the paper, their main concerns are related to the limited superiority of the method wrt state of the art [R1], seemingly arbitrary choices and questionable assumptions [R4]. The rebuttal adequately addresses R1's concerns by highlighting statistical significance of the results, and partially covers R4's concerns. Although the proposed approach may be perceived as incremental [R1, R2, R3, R4], the authors argue that introducing self-supervision to graph attention is not trivial, and emphasize their findings on how/when this is beneficial. Moreover, R2 and R3 acknowledge that the contribution of the paper holds promise, is worth exploring, and may be useful to the research community. Most reviewers are satisfied with the answers in the rebuttal. After discussion, three referees lean towards acceptance and the fourth reviewer does not oppose the decision. I agree with their assessment and therefore recommend acceptance. Please do include your comments regarding the choice of average degree and homophily in the final version of paper.
In this paper, the authors propose a meta-learning method for few-shot learning. The proposed approach, MLTI, creates new (artificial) tasks by interpolating two (existing) tasks from the training set during (meta-)training. The new tasks are generated by interpolating features/labels of two sampled tasks from the training set. The authors show that the proposed approach achieves good results on multiple datasets (both in regression and classification), multiple settings (“label sharing” and “non-label sharing”), and different algorithms (MAML and ProtoNets). ### Pros + The paper is well written and easy to follow. + The idea of interpolating tasks in a meta-learning setting is novel, intuitive and simple. Although previous work exists that augments the number of tasks, this is the first approach that augments across tasks (rather than within a task). + The authors show good results on different datasets, settings and backbones. ### Cons - I feel like results on more “traditional” (and larger) FSL datasets are missing. For example, it would be nice to see results on tieredImageNet or Meta-Dataset. - I also feel that the authors introduce the method as being a general meta-learning approach, but only show results on image classification/regression. It would be nice to see results in other domains such as RL/NLP/etc. tasks. - I find the theoretical analysis difficult to follow and potentially not very informative to the rest of the paper (that being said, I am not an expert on generalization theory/Rademacher complexity and cannot properly validate it). I recommend this paper for acceptance. The proposed idea is simple and novel, the paper is well written and the empirical evaluation is well executed. --- **Post-rebuttal update** I thank the authors for the rebuttal and I keep my rating of 8. Congratulations on the nice work! <doc-sep>[Summary] This work tackles a scenario where there may not be a large number of training tasks available, which increases the susceptibility of meta-learning algorithms to the meta-level overfitting/memorization problem. In particular, to cope with the scarcity of tasks, the paper proposes to augment the given task set through interpolation of tasks. The paper reports better performance than other methods on benchmarks that have fewer training tasks. [Strengths] The work provides an extensive theoretical analysis with guarantees as to how the proposed MLTI task interpolation method achieves better generalization. In contrast, previous methods have employed common augmentation methods (e.g., label noise, CutMix, MixUp) without theoretical guarantees. The work introduces scenarios that are more challenging than standard benchmarks by limiting the number of meta-training tasks. The work provides extensive experiments across various datasets under such challenging scenarios and demonstrates better performance than previous methods, providing empirical support for the effectiveness of the proposed task interpolation method. [Weaknesses] I believe the work has minor technical novelty compared to the related work by Ni et al. [1]. In particular, Ni et al. [1] perform several augmentations for meta-learning, one of which is MixUp for tasks. Using MixUp between any given pair of classes, Ni et al. [1] also create new tasks. [Comments] In the Related Work section, the work states that compared to the work by Ni et al. [1], the proposed method directly densifies the task distribution.
But, doesn’t [1] effectively densify the task distribution, where a new task can consist of new classes that are constructed by using MixUp on pairs of classes? As such, I believe more discussions on this issue should help better differentiate the work from the related work. Why does the proposed method randomly sample a location where features are to be interpolated? Is there an ablation study on the sampled location? I wonder if this technique is what makes the proposed method perform better than other works. I’m curious as to whether the proposed method, without this technique, still performs better than other works. The ablation study on this would be helpful for better understanding of differences from other works. Also, how does it compare with related works on standard benchmarks, such as miniImageNet. I think that the proposed method should still work with a larger number of tasks and believe that these experimental comparisons can strengthen the contributions of the proposed method. [1] Ni et al. Data Augmentation for Meta-Learning. [Recommendation] Despite strong experimental results and analysis, at this point, I believe the technical novelties are not significantly different from the work by Ni et al. Thus, I believe the work is marginally below the acceptance threshold. If the above comments are addressed, I’m willing to increase the score. ----------------------------------------- [Post-rebuttal] I thank the authors for the response, along with clarifications and updates in the manuscript. As the authors have addressed most of my concerns, I'm happy to increase the score accordingly. <doc-sep>The paper proposes an interpolation strategy for meta-learning to improve the learned model’s generalizability. The interpolation strategy is quite simple - interpolate between a pair of tasks, in contrast to existing methods such as adding label noise or data augmentation on each task individually. The authors show that the resulting gradient- or metric-based meta-learning framework (MLTI) induces a data-dependent regularizer that controls the Rademacher complexity leading to better generalization. MLTI is tested on specially curated datasets derived from standard benchmarking datasets. Furthermore, MLTI is also compared against existing data augmentation and interpolation strategies for meta-learning to illustrate its effectiveness. strengths Although nifty, the idea of pair-wise task interpolation is an incremental change over the existing data augmentation approaches. The theoretical results, highlighting the relationship between task interpolation and the Rademacher complexity, are non-trivial extensions of the Zhang et al. ICLR 2021 and Yao et al. ICML2021 to account for pair-wise task interpolation. I view this as the primary contribution of the paper. The comparison against existing data-augmentation baselines for both metric- and gradient-based meta-learning approaches is quite exhaustive. Furthermore, MLTI is tested on a wide variety of datasets. While the improvement on each dataset is only marginal, the consistent improvement in all datasets and across all approaches strengthens the paper’s contribution. The paper is well-written and easy to follow. Concerns Current approaches in meta-learning rely on heavier backbones such as ResNet. As the goal of all the meta-learning methods is to improve the model’s generalizability, I think it is fair to evaluate the effectiveness of MLTI with heavier feature extraction backbones. 
Such a comparison is relevant as the proposed task interpolation is conducted on the features extracted from some intermediate layer of the network. Overall, the paper proposes a simple extension to standard data/task-augmentation methods for meta-learning but justifies it with rigorous theory. The theoretical results are non-trivial extensions/combinations of existing work. The effectiveness of the approach is evident from the extensive empirical evaluation. The contributions are strong, albeit limited to the meta-learning research community. <doc-sep>This paper proposes a task augmentation method via task interpolation for data-efficient meta-learning. While traditional meta-learning methods rely heavily on a large amount of data to retain diverse training tasks, the proposed method, MLTI, generates tasks by interpolating the tasks obtained from the training data. The experimental results on a variety of few-shot learning datasets show that MLTI is effective when the meta-training data for constructing training tasks is not enough, for both gradient-based and metric-based few-shot learners. Strength 1. This paper proposes a novel task-augmentation method, inspired by Manifold Mixup, which can be applied to many existing few-shot learning tasks. 2. The theoretical analysis shows that the proposed MLTI augmentation has a regularization effect and leads the meta-learner to have a better generalization capability. 3. Extensive simulation results on a variety of few-shot learning datasets and two representative few-shot learning methods show that the proposed MLTI is highly effective for meta-learning with less data. Weakness 1. Comparison with the prior methods on large datasets is missing. For example, in Table 3, the comparison results are provided only for small datasets or reduced versions of large datasets. However, the proposed method is not restricted to small datasets. The ablation result in Figure 2 shows that the proposed MLTI is still effective when the full miniImageNet/DermNet dataset is used, although the performance gain becomes small in that case. I suggest the authors include a comparison of MLTI and prior methods with the full size of miniImageNet and DermNet. Question 1. In Section 3, the authors mention that it is intractable to calculate prototypes with mixed labels. However, in prior work on semi-supervised few-shot learning [1], the prototypes are computed using soft labels. What happens if we compute prototypes using soft labels as done in [1]? 2. Some additional studies on the interpolation layer would be helpful for understanding the proposed method. In Algorithms 1 and 3, the interpolation layer $l$ is randomly chosen in step 7. What happens if we fix $l$ instead of randomly sampling $l$ for every iteration? In that case, how is interpolating at a lower layer different from interpolating at a higher layer? Typo: In the last line of page 4, there is a typo (regularizaiton -> regularization) [1] Ren, Mengye, et al. "Meta-Learning for Semi-Supervised Few-Shot Classification." International Conference on Learning Representations. 2018. This paper proposes a novel task-augmentation method, MLTI, and shows its effectiveness through extensive simulation results and theoretical analysis. The proposed MLTI can be applied to both optimization-based and metric-based few-shot learning methods. Adding some experimental results would help readers better understand the proposed method.
However, I believe the idea of this paper is valuable for few-shot learning field, and I recommend to accept this paper. <doc-sep>This paper describes a method for augmenting task selection in meta-learning, by interpolating support and query sets between two random tasks from the base dataset. This is examined in two scenarios, label-shared LS and non-label-shared NLS, differing in whether the label space is the same between tasks (e.g. pose estimation) or different (classification to different discrete class sets). In the former, label targets are interpolated as well as support set inputs, while in the latter, new classes are constructed by random cross-task pairings. Comparisons are made to other interpolation augmentation approaches, including MetaMix, which interpolates in query set but not the support set. The approach results in significant performance gains on multiple benchmarks in both settings. I found the approach to be simple and relatively well explained, including ablations studies on large-point questions I had while reading, including its behavior and effectiveness for different sizes and number of classes in the original base dataset, as well as effects of inter- and intra- task interpolations. The key difference between this work and MetaMix (Yao et al 2020) is incremental but important: MetaMix will run the inner loop on the unmodified support set only, and use a mix of support+query in outer loop comparison optimization, whereas this work interpolates support set in inner loop as well. This difference enables between-task interpolation which adds additional augmentation particularly in settings where few tasks can be drawn from the base data. I didn't follow much of the theoretical sections in detail, and had to look at the appendix proofs to even understand some of the notation in the main text. In my somewhat limited understanding they seem reasonable. These claim to show a theoretical generalization improvement in simplified settings (binary classification of single layer model, linear protonet feautres). Additional questions: NLS: In addition to a single set of correspondence pairs, the input examples for each class can be mixed with all-pairwise-combinations. How many combinations are used? That is, for two sets of k examples {xs_i} and {xq_j} (i,j in 1..k), one can form k^2 interpolated examples {a xs_i + (1-a) xq_j} using each i,j combination. Are all of these combinations formed or just a single set of k pairings? If using more than k pairings, this would change the task from k-shot to k^2-shot; but the l-layer features for each of the k^2 combinations could be computed, and then up to (k^2 choose k) tasks could be selected from these and used in the upper layer loss comparisons. For k=5 that would increase interpolated pairs from 5 to 25, but potentially get up to 53130 upper layer loss comparisons from each task pair sample -- would this get even more benefit from this task augmentation technique? eq 5: what does the name of the subscript "cr" mean (does it stand for something)? It could be useful to have a more explicit explanation of differences with MetaMix. MetaMix will run the inner loop on the unmodified support set only, and use a mix of support+query in outer loop comparison optimization, whereas this work interpolates support set in inner loop as well. 
This is already mentioned at a high level (fig 1 caption and sec 5 last paragraph), but I think it could be even clearer to point out the difference in the discussion around eq 5: the support set H^s, Y^s in the inner loop is mixed between tasks, whereas in MetaMix only the H^q, Y^q are replaced by mixing. Overall, the approach is described well enough to understand, and is empirically shown to result in decent performance gains in the low-task-data settings for which it is intended. The theoretical sections corroborate this, but I found them hard to follow.
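To make the support-set mixing discussed around eq 5 concrete, here is a minimal sketch of the cross-task interpolation as I understand it from the reviews above; the function names, tensor shapes, and Beta-distributed mixing coefficient are illustrative assumptions of mine, not the authors' code.

```python
import numpy as np
import torch

def cross_task_interpolate(h_a, y_a, h_b, y_b, alpha=0.5):
    """Mix intermediate-layer support features (and soft labels, in the
    label-sharing setting) of two sampled tasks to form an artificial task.
    Illustrative sketch only."""
    lam = float(np.random.beta(alpha, alpha))   # mixing coefficient, Manifold-Mixup style
    h_mix = lam * h_a + (1.0 - lam) * h_b       # interpolated support features
    y_mix = lam * y_a + (1.0 - lam) * y_b       # interpolated (one-hot/soft) labels
    return h_mix, y_mix

# Example with dummy 5-shot tasks whose features live in a 64-d space.
h_a, h_b = torch.randn(5, 64), torch.randn(5, 64)
y_a, y_b = torch.eye(5), torch.eye(5)           # one-hot labels for illustration
h_mix, y_mix = cross_task_interpolate(h_a, y_a, h_b, y_b)
```

In the non-label-sharing setting the mixed samples would instead define a new class, so only the features are interpolated; the important point for the MetaMix comparison is that this mixed support set is what the inner loop adapts on.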
Current meta-learning algorithms suffer from the requirement of a large number of tasks in the meta-training phase, which may not be accessible in real-world environments. This paper addresses this bottleneck, introducing a cross-task interpolation in addition to the existing intra-task interpolation. The main idea is very simple and can be viewed as an incremental addition to existing augmentation methods. However, the method is well supported by nice theoretical results which highlight the relation between task interpolation and the Rademacher complexity; in fact, this is not a trivial extension of existing work. The authors did a good job in the rebuttal phase, resolving most of the concerns raised by reviewers, which led two of the reviewers to raise their scores. All reviewers agree to champion this paper. Congratulations on a nice piece of work.
Quality of the paper: The paper is quite clear on the background literature on adversarial examples, physics based rendering, and the core idea of generating adversarial perturbations as a function of illumination and geometric changes. Originality and Significance: The idea of using differential renderers to produce physically consistent adversarial perturbations is novel. References: The references in the paper given its scope is fine. It is recommended to explore references to other recent papers that use simulation for performance enhancement in the context of transfer learning, performance characterization (e.g. veerasavarappu et al in arxiv, WACV, CVPR (2015 - 17)) Pros: Good paper , illustrates the utility of differentiable rendering and simulations to generate adversarial examples and to use them for improving robustness. Cons: The experimental section needs to be extended and the results are limited to simulations on CIFAR-100 and evaluation on lab experimental data. Inclusion of images showing CIFAR-100 images augmented with random lighting, adversarial lighting would have been good. The details of the image generation process for that experiment is vague and not reproducible. <doc-sep>The paper demonstrates a method for constructing adversarial examples by modifications or perturbations to physical parameters in the scene itself---specifically scene lighting and object geometry---such that images taken of that scene are able to fool a classifier. It achieves this through a novel differentiable rendering engine, which allows the proposed method to back-propagate gradients to the desired physical parameters. Also interesting in the paper is the use of spherical harmonics, which restrict the algorithm to plausible lighting. The method is computationally efficient and appears to work well, generating plausible scenes that fool a classifier when imaged from different viewpoints. Overall, I have a positive view of the paper. However, there are certain issues below that the authors should address in the rebuttal for me to remain with my score of accept (especially the first one): - The paper has no discussion of or comparisons to the work of Athalye and Sutskever, 2017 and Zeng et al., 2017, except for a brief mention in Sec 2 that these methods also use differentiable renderers for adversarial attacks. These works address the same problem as this paper---computing physically plausible adversarial attacks---and by very similar means---back-propagation through a rendering engine. Therefore it is critical that the paper clarifies its novelty over these methods, and if appropriate, include comparisons. - While the goal of finding physically plausible adversarial examples is indeed important, I disagree with the claim that image-level attacks are "primarily tools of basic research, and not models of real-world security scenarios". In many applications, an attacker may have access to and be able to modify images after they've been captured and prior to sending them through a classifier (e.g., those attempting to detect transmission of spam or sensitive images). I believe the paper can make its case about the importance of physical adversarial perturbations without dismissing image-level perturbations as entirely impractical. - The Athalye 18 reference noted in Fig 1 is missing (the references section includes the reference to Athalye and Sutskever '17). ===Post-rebuttal Thanks for addressing my questions. 
With the new comparisons and discussions wrt the most relevant methods, I believe the contributions of the paper are clearer. I'm revising my score from 6 to 7. <doc-sep>Summary: This work presents a method to generate adversarial examples capable of fooling a neural network classifier. Szegedy et al. (2013) were the first to expose the weakness of neural networks against adversarial attacks, by adding human-imperceptible noise to images to induce misclassification. Since then, several works have tackled this problem by modifying the image directly in pixel space: the norm-balls convention. The authors argue that this leads to unrealistic attacks and that a network would not benefit from training with these adversarial images when operating in the real world. Their solution and contribution are parametric norm-balls: unlike state-of-the-art methods, they perform perturbations in the image-formation space, namely the geometry and the lighting, which are indeed perturbations that could happen in real life. For that, they define a differentiable renderer by making some assumptions to simplify its expression compared to solving the full light transport equation. The main simplifications are direct illumination, to gain computational efficiency, and the distant-illumination and diffuse-material assumptions, which allow lighting to be represented in terms of spherical harmonics as in Ramamoorthi et al. (2001), requiring only 9 parameters to approximate the lighting. This allows them to analytically differentiate their loss function with respect to the geometry and lighting and therefore generate adversarial examples via gradient descent. They show that their adversarial images generalize to classifiers other than the one used (ResNet). They then show that injecting these images into the training set increases the robustness of WideResNet against real attacks. These real attack images were taken by the authors in a laboratory with varying illumination. Strength: - The proposed perturbations in the image-formation space simulate real-life attacks. - The presented results show that the generated adversarial images fool not only the classifier used to compute the loss but also new classifiers (different from the one used to compute the loss). As a consequence, the generated adversarial images increase the robustness of the considered classifier. - Flexibility in their cost function allows for diverse types of attacks: the same modified geometry can fool a classifier in several views, either into detecting the same object or into detecting different false objects under different views. Major comments: - The method can only compute synthetic adversarial examples, unlike the state of the art. - The main contribution claimed by the authors is that their perturbations are realistic and that this would help increase the robustness of classifiers against real attacks. However, they do not give any comparison to state-of-the-art methods, as would be expected. Minor comments: - Even if the paper is well written, there are still some typos.
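For readers less familiar with the image-formation attack described above, the following is a minimal sketch of how an adversarial spherical-harmonic lighting perturbation could be computed through a differentiable renderer. `render` and `classifier` are placeholders I am assuming, not the authors' actual interfaces, and the signed-gradient step is just one plausible choice rather than the paper's exact optimizer.

```python
import torch
import torch.nn.functional as F

def adversarial_lighting(render, classifier, sh_coeffs, label, step=0.01, iters=10):
    """Ascend the classification loss w.r.t. the 9 spherical-harmonic lighting
    coefficients so the rendered image becomes misclassified. Illustrative sketch."""
    sh = sh_coeffs.clone().detach().requires_grad_(True)
    for _ in range(iters):
        image = render(sh)                          # differentiable rendering step
        loss = F.cross_entropy(classifier(image), label)
        (grad,) = torch.autograd.grad(loss, sh)
        with torch.no_grad():
            sh += step * grad.sign()                # increase the loss -> fool the classifier
    return sh.detach()
```

Because the lighting is restricted to the 9-coefficient spherical-harmonic parameterization, the perturbed scene stays physically plausible by construction.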
The paper describes the use of differentiable physics-based rendering schemes to generate adversarial perturbations that are constrained by the physics of image formation. The paper puts forth a fairly novel approach to tackle an interesting question. However, some of the claims made regarding the "believability" of the adversarial examples produced by existing techniques are not fully supported. Also, the adversarial examples produced by the proposed techniques are not fully "physical", at least compared to how "physical" the adversarial examples presented in some of the prior work were. Overall, though, this paper constitutes a valuable contribution.
The authors propose a hierarchical Bayesian approach for lifelong RL. The global world-model posterior models the world model shared across tasks, and the task-specific model learns the dynamics within a specific task. The task-specific model achieves forward transfer by initializing from the global world model. The authors use a mean-field variational approximation to scale the proposed model. Also, the authors introduce a sample complexity analysis. The method is evaluated on two toy tasks (grid-world and box jumping) and on the MuJoCo simulator, and shows superior performance to previous works. - Clarity of the paper should be improved significantly. The current version makes understanding hard. - The paper does not define much of its notation (e.g., the indices j and k in Figure 1) and denotes some distributions in short form, e.g., q_e or q_\theta^{m_i}, without introducing their full form. These issues are not just a few but are observed broadly across the entire paper; Section 3 perhaps requires the most significant improvement. - The paper also explains the proposed method by directly explaining lines of the pseudo-code without discussing the big picture or the rationale behind the design. - It also skips describing the full joint distribution of the model, which is essential in describing such probabilistic models. - The key idea of the proposed method is quite simple. It is like applying Variational Continual Learning to world-model learning in the RL setting. It can also be understood as Bayesian meta-learning but with sequential task exposure. - The experiments are also quite simple. The first two toy tasks are too toyish, with very low dimensionality and low task complexity. The last MuJoCo experiments show the superiority of the proposed model. - The algorithms (e.g., variational continual learning, Bayesian meta-learning, and mean-field variational inference) on which the proposed method is based are known not to scale to more complex settings such as high-dimensional data (e.g., images) or many sequential tasks (e.g., 1000 or more tasks), and the mean-field approximation has significant limitations in its expressiveness. So, I'm somewhat doubtful that this method can be an important milestone toward more realistic and complex settings. The writing clarity requires significant improvement. It is currently a major drawback hindering the understanding of the proposed model (I understood it in the later part of the paper, but it was hard going until then). The evaluation is quite simple and uses toy tasks. I'm doubtful about the potential of the proposed model to extend to more realistic and complex settings such as image inputs. <doc-sep>This submission presents an approach for Bayesian model-based exploration in a lifelong RL setting, building upon existing approaches for Bayesian exploration (BOSS) and Bayesian multi-task modeling (HiP-MDPs). The approach keeps separate models for sampling transitions and rewards for each task, and each task model is drawn from a shared prior that models the distribution over tasks. The method continually updates the shared model with data from all observed tasks and the task-specific model with data from the current task. To achieve backward transfer, the approach replaces the task model with the shared model whenever the task model has not been sufficiently trained on a particular state-action pair. ############## Strengths ############## 1.
The overall idea of model-based lifelong RL is very relevant, particularly since lifelong RL precisely seeks to reduce sample complexity 2. The high-level idea of replacing the task model with the world model whenever the task model is uncertain is intuitively appealing 3. The use of the task model permits deriving reduced sample complexity bounds thanks to the Bayesian formulation ############## Weaknesses ############## 1. The low performance of the agent in the more complex MuJoCo evaluations makes the results unconvincing 2. The technical approach is very closely tied to existing works, in particular HiP-MDPs and variants and BOSS 3. There are no comparisons to existing model-based lifelong RL methods ############## Arguments ############## The primary contribution of this work is the introduction of one of the first approaches for model-based lifelong RL. Since one of the key desiderata of lifelong RL is to decrease the amount of experience needed to learn new tasks (forward transfer), using model-based techniques, which are inherently more data efficient than model-free techniques, seems to be a promising direction. However, the proposed method itself is fairly incremental. In particular, it heavily hinges on BOSS (as an exploration technique) and HiP-MDPs (as a multi-task model-based model). While the use of these models permits relatively simple adaptations of the proofs of BOSS to demonstrate decreased sample complexity (which is a nice plus), it does make the work contain very little technical insights. In particular, when adapting a multi-task method (HiP-MDPs) into a lifelong method, I would have expected to see considerable effort in designing a technique for updating the shared model with knowledge from new tasks. However, the authors simply train this shared model in a multi-task fashion with data from all tasks. This is not only costly in terms of memory footprint (requiring to store vast amounts of data for each task), but is also computationally inefficient, since the multi-task training step is executed iteratively _for each task_ (O(n^2) cost). The one piece of technical insight offered in the submission that I found interesting was the idea of replacing the task model with the world model whenever the task model is not yet well trained. Unfortunately, the approach itself is very heuristic (relying on thresholds on the uncertainty across sampled models) and the heuristic choices are not discussed in detail or validated empirically (e.g., via ablative tests or sensitivity analysis on the thresholds). I intuit that the method is quite sensitive to these thresholds: if they are too high, the task model will almost always be used, even if it's uncertain; if they are too low, the world model will be used too frequently and task performance will likely degrade by reverting to the "average" world model. Would it not be possible to do some sort of soft combination (of the task and world model) weighted by the uncertainty, instead of a hard selection of one or the other? However, my main concerns with the submission lie on the empirical evaluation. The first major concern is that all the agents evaluated on MuJoCo domains achieve very low performance, as compared to the results in the papers cited in the submission for the experimental design of these tasks (Mendez et al. and Wang et al.). 
While it is clear that the reduced performance stems from the choice of using fewer environment interactions (which itself is a nice choice, given the use of model-based techniques), this does raise questions about the conclusions drawn from these results. In particular, how useful or informative are forward and backward transfer if the agent hasn't really learned any meaningful behaviors? It would be useful to include videos of the learned behaviors to assess whether the transfer results are in any way significant from a behavioral perspective, or if they're simply minor reward increases across poor behaviors. My second major concern is the choice of baselines for these evaluations. On one hand, the authors chose to only compare against model-free lifelong RL techniques. In this setting, it should certainly be expected that the model-based approach outperforms those baselines simply by the nature of the underlying techniques. This is not novel insight. Yet the authors claim these as general improvements over existing lifelong RL methods, which seems like a stretch. Instead, the authors should have considered existing model-based lifelong RL methods (e.g., [1]) for a more apples-to-apples comparison. Note that [1] is a task-agnostic method, so this would require some adaptation to handle the setting where the agent is given access to task indicators. On the other hand, the authors make claims about forward transfer, but in the MuJoCo domains there is no comparison to a single-task or no-transfer baseline, like used in the box-jumping task. Such a baseline is critical for assessing whether the approach is actually achieving transfer across tasks, since improvements w.r.t. single-task training are precisely what demonstrate transfer. As one additional comment regarding the evaluations, there is no information about the implementation details of any of the baselines, including their model architectures and hyper-parameters. How were these chosen to guarantee a fair comparison? ############## Additional feedback ############## The following points are provided as feedback to hopefully help better shape the submitted manuscript, but did not impact my recommendation in a major way. Intro - I wonder if the example of different houses and toothbrushes matches the HiP-MDP formulation introduced immediately after - The intro is fairly clear and describes the solution approach well. - I'd suggest including an example that more closely matches the HiP-MDP formulation, since this is the formulation adopted throughout the paper. Sec 4 - The ideas seem to be very closely related to the original HiP-MDP papers, especially the BNN extension of Killian et al. (2017) - The notation for the BNN needs a fair bit of work. The authors never explain what the "particles" are. Are these the (s,a,r,s') tuples sampled from the sequences given by the combination of CEM and the BNN? This becomes increasingly relevant in 4.1 where the authors define their approach to backward transfer. My understanding is that "aleatory variability" is modeled as the BNN's internal variance, whereas the epistemic uncertainty is measured as the variance in the (mu, sigma) output by the BNN across sampled particles. Is this understanding correct? Sec 5 - The gird-world evaluation shows nothing about forgetting/backward transfer. - Box-jumping: Why no comparison to a single-task variant of the solution? This is required to assess forward transfer. Also no backward transfer measure. 
- MuJoCo: again, no single-task learner, so it's unclear if there's forward transfer. Plus, the rewards are very low, so it seems that even VBLRL is not solving the tasks. How useful are these results then? Even though VBLRL is the best, it's not really achieving meaningful behaviors. - I disagree with the claim that the model "cannot" suffer from forgetting, since certainly the wrong choice of threshold for backward transfer could lead to forgetting. - How was this hyper-parameter chosen? Typos - Sec 2, second paragraph: the task facing a single agent -> the agent facing a single task? [1] Nagabandi et al. Deep online learning via meta-learning: Continual adaptation for model-based RL. ICLR 2019. Unfortunately, I recommend the rejection of this work. While I agree with the premise of the submission that model-based lifelong RL is a relevant area of research, with potential implications for real-world applications of lifelong RL, the submission as it stands appears not to be ready for publication. On the technical side, the approach seems to add just a few incremental changes to multi-task HiP-MDPs to adapt them to the lifelong setting. This on its own is perhaps relatively minor, since the novelty comes from adapting it to a new problem setting. However, such technically incremental contributions should generally be accompanied by strong empirical evaluations, which is not the case in this work. In particular, the low overall performance of all agents on the MuJoCo domains suggests that none of the agents are learning to achieve meaningful behaviors, which raises questions about the conclusions reached by the authors. Moreover, the authors should have compared (at least qualitatively, but ideally also empirically) to existing work in lifelong model-based RL. On the flip side, the submission does include an interesting insight: replacing the task-specific model with the shared model whenever the task model is uncertain. <doc-sep>The paper deals with the problem of lifelong RL, also referred to as meta-RL, where an agent attempts to solve a sequence of tasks in order to facilitate the solution of a novel task. The framework follows that of Baxter 2000 (albeit that paper deals with supervised learning), and has been widely studied in recent years. The basic assumption is that the tasks are drawn from an underlying task distribution, and each task (an MDP) is stochastically selected from a task-specific distribution. The authors work with a Bayesian framework, assuming a hierarchical distribution over the two levels, and learn the two levels separately. This framework has the advantage of providing both estimates and uncertainty estimates. For the discrete case they present a sample complexity analysis, and suggest a variational approach for practical learning. Finally, experiments are provided supporting the utility of the approach. The formal framework is that of hidden-parameter MDPs (HiP-MDPs) from Doshi-Velez 2016, where each MDP is modeled by a transition model and a reward model that depend on a hidden parameter. As more tasks are encountered the posterior over world models sharpens and, when used to learn new tasks, is expected to facilitate learning. The learning of each new task is as in BOSS, and takes place by sampling from the learned MDP distribution, creating a mixed MDP, and using standard model-based approaches to solve it. The main theoretical contribution suggested in the paper is a PAC-MDP bound (Theorem 1) for a single task.
This theorem is based on Lemma 2, for which a full proof is not provided in the main text or appendix, so its veracity cannot be verified. Moreover, it is based on the assumption that the posterior is consistent, which I believe is what needs to be shown in a meta-learning setup and cannot simply be assumed. The form of the bound is also strange, as it depends on \delta rather than on \ln(1/\delta) as in previous bounds (e.g., Strehl 2006 and Asmuth 2009), and its dependence on \gamma and \epsilon is also worse than in previous bounds. As far as I understand, this is a bound for the single-instance setting rather than for meta-learning. Following the theoretical part, the authors develop a variational approach based on probabilistic networks that model the posterior distribution. As far as I am aware, their variational approach is rather standard, although the authors do not refer to previous work. Also, their use of model-predictive control is common in current ML applications, but, again, this is not mentioned or discussed (e.g., the length of the future horizon and how it is selected). The authors conclude by presenting numerical simulations for a grid-world and some MuJoCo problems for continuous control. While the method compares well to simple baselines, it is hard to assess performance relative to more recent work such as Liu et al., "Taming MAML: Efficient Unbiased Meta-Reinforcement Learning", and the other baselines measured in it. The paper is phrased within a sequential approach to meta-learning that has been widely studied within the supervised learning community (e.g., Baxter 2000; Pentina and Lampert, "A PAC-Bayesian Bound for Lifelong Learning", 2014; and much later work), with explicit performance bounds. It would be nice to acknowledge these roots. The present approach is plausible and combines previous work, such as HiP-MDPs, BOSS, and variational Bayes, in a sensible manner. However, I do not find that the level of innovation in this combination of approaches suffices for publication at ICLR, nor did I find the theoretical or experimental results of sufficient interest (see comments above). Following rebuttal: Following the authors' response and my response to their rebuttal, I have lowered my assigned grade due to my dissatisfaction with their replies, which served to reinforce my existing concerns about the paper.
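As a concrete illustration of the soft combination suggested in the second review (blending the task-specific and shared world models by their epistemic uncertainty instead of a hard threshold), here is one possible precision-weighted variant. This is my own sketch, under the assumption that both models expose a predictive mean and an epistemic variance; it is not something proposed in the paper.

```python
import torch

def soft_model_blend(task_mean, task_var, world_mean, world_var, eps=1e-8):
    """Precision-weighted mixture of the task-specific and shared world-model
    predictions: the more uncertain a model is, the less it contributes."""
    w_task = 1.0 / (task_var + eps)
    w_world = 1.0 / (world_var + eps)
    gate = w_task / (w_task + w_world)              # in (0, 1), so no hard threshold
    mean = gate * task_mean + (1.0 - gate) * world_mean
    var = 1.0 / (w_task + w_world)                  # combined predictive variance
    return mean, var
```

Such a gate would remove the two uncertainty thresholds entirely, at the cost of always querying both models.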
The topic of this paper is timely and important. However, ultimately the reviewers remained unconvinced that this paper provides a sufficiently clear and sufficiently significant advance to lifelong RL. As an additional note, the setting under investigation here is not the full lifelong learning setting. E.g., several of the challenges outlined by Schaul et al. [1] are not treated, and this work is, instead, situated in a somewhat typical multi-task setting with substantial structure. That is not bad, but it would be good if this were reflected clearly in all the statements and, e.g., in the title of the work. The authors are encouraged to carefully take the provided feedback into account and see how they can use it to improve their work. This is an important research direction. It was just felt that the current submission was not quite ready for publication yet. [1] https://arxiv.org/abs/1811.07004
This paper proposes a clever and sensible approach to using the structure learned by the auxiliary variational method to accelerate random-walk MCMC. The idea is to learn a low-dimensional latent space that explains much of the variation in the original parameter space, then do random-walk sampling in that space (while also updating a state variable in the original space, which is necessary to ensure correctness). I like this idea and think the paper merits acceptance, although there are some important unanswered questions. For example: - How does the method work on higher-dimensional target distributions? I would think it would be hard for a low-dimensional auxiliary space to have high mutual information with a much higher-dimensional space. In principle neural networks can do all sorts of crazy things, but phenomena like VAEs with low-dimensional latent spaces generating blurry samples make me suspect that the auxiliary dimension should be important. - How does the method work with hierarchical models, heavy-tailed models, etc.? Rings, MoGs, and flat logistic regressions are already pretty easy targets. - Is it really so valuable to not need gradients? High-quality automatic differentiation systems are widely available, and variational inference on discrete parameters with neural nets remains a pretty hard problem in general. Some other comments: * It's probably worth citing Ranganath et al. (2015; "Hierarchical Variational Models"), who combine the auxiliary variational method with modern stochastic VI. Also, I wonder if there are connections to approximate Bayesian computation (ABC). * I think you could prove the validity of the procedure in section 2.1 more succinctly by interpreting it as alternating a Gibbs sampling update for "a" with a Metropolis-Hastings update for "x". If we treat "a" as an auxiliary variable such that p(a | x) = \tilde q(a | x) and p(x | a) \propto p(x) \tilde q(a | x), then equation (2) is the correct M-H acceptance probability for the proposal \tilde q(a', x') = δ(a'-a) \tilde q(x' | a). Alternating between this proposal and a Gibbs update for "a" yields the mixture proposal in section 2.1. * It's also possibly worth noting that this procedure will have a strictly lower acceptance rate than the ideal procedure of using the marginal \tilde q(x'|x) as an M-H proposal directly. Unfortunately that marginal density usually can't be computed, which makes this ideal procedure impractical. It might be interesting to try to say something about how large this gap is for the proposed method. * "We choose not to investigate burn-in since AVS is initialized by the variational distribution and therefore has negligible if any burn-in time." This claim seems unjustified to me. It's only true insofar as the variational distribution is an excellent approximation to the posterior (in which case why use MCMC at all?). It's easy to find examples where an MCMC chain initialized with a sample from a variational distribution takes quite a while to burn in.<doc-sep>In my opinion, the paper contains very interesting novel ideas. However, some parts need further clarification and the state-of-the-art discussion must be improved. - First of all, Sections 2.3.1 and 2.3.2 can be improved and clarified. For instance, I believe you can create a single section titled "Choice of Proposal Density" and then schematically describe each proposal from the simplest to the most sophisticated one.
- At the beginning of Section 2, please devote a few more sentences to explaining why extending the space and applying variational inference is good for finding a suitable proposal density. - Related to Section 2 (the Mixture Proposal MCMC contribution), the authors should discuss (in the introduction and also in the related works section) the Multiple Try Metropolis schemes with correlated candidates where, for instance, a path of candidates is generated and one of them is selected and tested with an MH-type acceptance probability, in a proper way. This is more general than your scheme but very related. Please see Qin, Z.S., Liu, J.S., 2001. Multi-point Metropolis method with application to hybrid Monte Carlo. Journal of Computational Physics 172, 827–840. L. Martino, V. P. Del Olmo, J. Read, "A multi-point Metropolis scheme with generic weight functions", Statistics and Probability Letters, Volume 82, Issue 7, Pages: 1445-1453, 2012. L. Martino, "A Review of Multiple Try MCMC algorithms for Signal Processing", Digital Signal Processing, Volume 75, Pages: 134-152, 2018. - Related again to the state-of-the-art description, the references regarding Adaptive Mixture Metropolis methods are completely missing. If I have understood properly, you also adapt a mixture via variational inference. Please, in Section 4, consider the different works that consider an adaptive mixture proposal for a Metropolis-type algorithm: P. Giordani and R. Kohn, "Adaptive independent Metropolis-Hastings by fast estimation of mixtures of normals," Journal of Computational and Graphical Statistics, vol. 19, no. 2, pp. 243–259, September 2010. Tran, M.-N., M. K. Pitt, and R. Kohn. Adaptive Metropolis–Hastings sampling using reversible dependent mixture proposals. Statistics and Computing, 26, 1–21, 2014. D. Luengo, L. Martino, "Fully Adaptive Gaussian Mixture Metropolis-Hastings Algorithm", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Vancouver (Canada), 2013. Roberts, G. O. and J. S. Rosenthal (2009). Examples of adaptive MCMC. Journal of Computational and Graphical Statistics 18, 349–367. <doc-sep>This paper proposes an auxiliary variable MCMC scheme involving variational inference for efficient MCMC. Given a target distribution p(x), the authors introduce an auxiliary variable a, and learn conditional distributions p(a|x) and q(x|a) by minimizing the KL divergence between p(x)p(a|x) and q(a)q(x|a), with q(a) something simple (the authors use a Gaussian). An MH proposal step involves, given the current MCMC sample x, simulating a (from p(a|x)), taking a step in the A-space, and then returning to the X-space (using q(x|a)). The authors show how to calculate the acceptance probability. I think the idea is nice and useful (I'm surprised people haven't thought of this before), though I think the paper presents this in a less clear way (as an extension of ideas from Agakov and Barber's "Auxiliary variational method"). While this is correct and perhaps more general, in my mind it slightly obscures the main idea, as well as the strong ties with variational autoencoders: express a complex distribution as a (learnt) transformation of a simple distribution (this is the actual approach taken in the experiments). The motivation of the approach is that the nonlinear encoding network can transform the complex p(x) into a simpler q(a).
For this reason, I think an important baseline is the independent MH sampler from equation 8 (I think this essentially uses a trained VAE generative model as a proposal distribution). The authors talk about how producing independent proposals can be sub-optimal, yet it seems to me that if the encoder and decoder neural networks are powerful enough, this should do a good job. I think excluding this baseline hurts the paper a bit. The proof of correctness, while correct, is a bit unclear; it can perhaps be simplified if you view the MCMC algorithm as operating on an augmented space (x,a,x') with stationary distribution p(x)q(a|x)q(x'|a) (writing q for \tilde q). This clearly has the right distribution over x. Each MCMC iteration starts with x and proceeds as follows: 1) Given x, sample a and x' from q(a|x) and q(x'|a) 2) Make a deterministic proposal on the augmented space to swap (x,x'). The acceptance probability is now equation 2. 3) Discard a,x'. In figure 4, the authors use HMC as an "improved MCMC algorithm", yet this is not an algorithm that deals with multimodality well. More useful would be to include some tempering algorithm like serial or parallel tempering. While I like the idea, I unfortunately don't think the experiments are very convincing (and the authors barely discuss their results). Other than the mixture of Gaussians, HMC (which involves no training) appears to be superior. With some tempering, I expect it to outperform the proposed method for the MoG case as well. Table 2 left: since HMC involves no training, does this mean that, taking training time into account, HMC is 5-6 orders of magnitude more efficient? Like I mentioned earlier, these results need more discussion. It would also help to provide absolute training and run times, so the reader can better understand whether the proposed method or ANICE is better. Figure 3: why don't the authors also plot the histogram of values in the auxiliary space, p(a)? It would be interesting to see how Gaussian this is (this is what variational inference is trying to achieve). Also, does Figure 3(a) mean that, conditioned on x, p(a|x) is basically a delta function? This would suggest that the encoder is basically learning a deterministic transformation to a simpler low-dimensional space. There is some work in this direction in the statistics literature, e.g. "Variable transformation to obtain geometric ergodicity in the random-walk Metropolis algorithm". The authors sometimes refer to the distribution of a|x as q(a|x) (in section 2.1) and sometimes as p(a|x), which is a bit confusing. Figure 2: the labels are wrong.
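To spell out the augmented-space argument made in the reviews above, here is a small numerical sketch of one iteration of the sampler as described: draw a from the encoder, propose x' from the decoder, and accept by comparing p(x) q(a|x) q(x'|a) with the roles of x and x' swapped. The Gaussian conditionals and the `enc_mean`/`dec_mean` functions are my illustrative assumptions; `log_p` is the unnormalized log-target.

```python
import numpy as np

rng = np.random.default_rng(0)

def aux_mh_step(x, log_p, enc_mean, dec_mean, sigma_a=1.0, sigma_x=1.0):
    """One auxiliary-variable MH iteration on the augmented space (x, a, x')."""
    def log_q_a(a, x_):   # log q(a | x_), Gaussian with mean enc_mean(x_)
        return -0.5 * np.sum((a - enc_mean(x_)) ** 2) / sigma_a ** 2
    def log_q_x(x_, a):   # log q(x_ | a), Gaussian with mean dec_mean(a)
        return -0.5 * np.sum((x_ - dec_mean(a)) ** 2) / sigma_x ** 2

    a = enc_mean(x) + sigma_a * rng.standard_normal(np.shape(enc_mean(x)))
    x_prop = dec_mean(a) + sigma_x * rng.standard_normal(np.shape(x))

    # Acceptance ratio for the deterministic swap of (x, x') under p(x) q(a|x) q(x'|a).
    log_ratio = (log_p(x_prop) + log_q_a(a, x_prop) + log_q_x(x, a)
                 - log_p(x) - log_q_a(a, x) - log_q_x(x_prop, a))
    return x_prop if np.log(rng.uniform()) < log_ratio else x
```

With `enc_mean` and `dec_mean` taken as the trained encoder and decoder means, this is exactly the swap-proposal construction the review describes; discarding a and x' afterwards leaves a chain with the correct marginal over x.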
The reviewers all argued for acceptance citing the novelty and potential of the work as strengths. They all found the experiments a little underwhelming and asked for more exciting empirical evaluation. The authors have addressed this somewhat by including multi-modal experiments in the discussion period. The paper would be more impactful if the authors could demonstrate significant improvements on really challenging problems where MCMC is currently prohibitively expensive, such as improving over HMC for highly parameterized deep neural networks. Overall, however, this is a very nice paper and warrants acceptance to the conference.
This paper applies two existing techniques, derived from the adversarial example literature and MSGAN, to improve the quality and diversity of samples generated by GANs. The former idea, from [Goodfellow et al., 2014b], is used to shift a latent vector z originally sampled from the Gaussian prior; this differs from the original adversarial attack setting, which transforms the input image. The direction in which to move the latent vector is calculated using I-FGSM with the standard generator loss proposed in the vanilla GAN paper. The experiments show that this technique can improve the quality of generated samples. Furthermore, this paper also considers how to improve the diversity of generated samples by transforming the latent vectors before putting them into the generator network. The approach for that is based on MSGAN's idea and mines hard latent vectors for the discriminator. The authors combine these two techniques and achieve better results compared to DCGAN, WGAN, WGAN-GP, and SNGAN models on several relatively small-scale datasets such as CIFAR-10, STL-10, etc. The paper introduces its motivation convincingly with some easily comprehensible figures. However, the experiments showing the effectiveness of the proposed method are basically performed only on small-scale datasets, which makes it difficult to figure out how complex a manifold in the target space can be handled by this approach; in particular, it is unclear how it can scale to the ImageNet dataset. The discussion comparing the proposed method with other work also seems insufficient, because there is more research that sheds light on the importance of latent vector transformation. For example, the series of StyleGAN works has continuously improved performance by precisely analyzing the relationship between the latent space and the pixel space. They were also the first to transform the latent vector z to another latent space W using a mapping network. That should be discussed as one of the possible approaches to transforming the latent vector, i.e., using a trainable function instead of I-FGSM. Some figures meant to show the effectiveness of the proposed method seem difficult to interpret as the captions claim, e.g., Figure 5. It was difficult to find a clear difference between the results of "Anti-diverse" and "Original", and the "Diverse" results look worse than the other two in terms of quality (this may be intended, since the figure tries to show the increase in diversity, but it is still unclear that the "Diverse" row in Figure 5 has higher diversity compared to the other two). The paper is well organized and the authors' claims are easy to follow, but the qualitative comparison results seem difficult to interpret as the captions state. And from the perspective of the transformation of the latent vector z, there could be more discussion in the comparison with other methods, such as StyleGAN, that transform z before putting it into the generator. <doc-sep>The paper looks at the problem of improving the generative quality of GANs. The paper makes improvements along two dimensions: a) The paper adjusts the sampling distribution of GANs using adversarial attacks (I-FGSM), and thus effectively samples from a potentially multi-modal distribution. b) The paper improves the diversity of generated samples using adversarial attacks on the mode-seeking objective of Mao et al. 2019. + The paper is generally well written and quite easy to understand. It does, however, mix preliminary work (I-FGSM) with the proposed contribution in 3.2.
+ The paper is able to improve vanilla GAN models using the presented objectives. - The paper does not fully compare to prior work. Yes, Table 8 highlights some of the similarities to prior work, but the main evaluation does not compare to alternative approaches that consider the interplay between GANs and adversarial attacks or latent space exploration. In order to see the efficacy of the presented method, the paper should experimentally compare to the majority of methods highlighted in Appendix C. The paper would be a lot stronger if it could show that the design choices made here are better than other attacks (i.e. perturbations on the generated or real images, instead of latent features, etc.) - The visual quality of the presented examples is somewhat underwhelming. Does the presented method work on larger GAN architectures such as StyleGAN2 or BigGAN? Does the presented method actually address the issues highlighted in Fig 3? The paper explores an interesting idea of adding adversarial robustness into GAN training to improve latent distribution sampling and diversification. Unfortunately, the paper falls a bit short in the experimental validation of the approach and in the comparison to prior approaches. --- Post rebuttal. The rebuttal makes a good case for the final algorithm using additional results. However, I still do not see what the paper adds on top of baselines, or how the problem setup in Figure 3 (interpolation artifacts) is actually addressed. The rebuttal mentions some experimental evidence that seems to indicate latent-space sampling can help. However, I would need to see these results in an actual paper submission for review to feel comfortable about accepting it. As is, the paper seems interesting but not ready for publication. <doc-sep>This work proposes a sample-shifting method for GANs, formulated as adding an intermediate latent space before the generated pixel space. The method is based on an observation about the limits of continuous mappings: image quality in pixel space is not as continuous as the latent space, and a limited latent space will incur mode collapse and thus poor image diversity. The main contributions are: a new optimization problem, used as a sampling method, to improve image generation in terms of quality and diversity, and the proposal to use the I-FGSM optimization method to solve this sampling optimization problem. The experiments show improvements on the public STL-10 and CIFAR-10 datasets. Pros: 1. The paper is well organized, with clear sub-titles and a clear logic flow; 2. An ablation study on various baseline GAN architectures (DCGAN, WGAN, WGAN-GP, SNGAN) is conducted to show the generalizability of the sampling method. Cons: 1. Comparison with the baseline method MSGAN: (1) why does it only compare the div+ variant with MSGAN; (2) the improvement over the baseline method MSGAN is very limited. 2. If the generator is trained with better quality regularization, should the latent space after mapping have better continuity? A comparison like Fig 3 after training would be needed to prove that. Some minor issues: 1. The paper is a bit redundant on algorithms and figures. For example, it seems lengthy to include both Algorithms 1 and 2 in the main paper. 2. Fig 1: it's not obvious what the latent space does in this figure. The paper is overall well written, and the idea is very clear and elegant. Meanwhile, I have some doubts about the sufficiency of the experiments to show the improvement in quality. Also, the improvement over the baseline method MSGAN seems minor to me. I expect the rebuttal to clear up my doubts.
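Since several of the reviews above refer to the I-FGSM latent shift without spelling it out, here is a minimal sketch of what such a quality-oriented shift could look like. `G`, `D`, the non-saturating loss, and the step sizes are placeholders I am assuming rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def shift_latent_for_quality(G, D, z, step=0.01, iters=5):
    """Iteratively move latent codes in the signed-gradient direction that makes
    the discriminator score the generated samples as more realistic."""
    z = z.clone().detach().requires_grad_(True)
    for _ in range(iters):
        logits = D(G(z))
        loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
        (grad,) = torch.autograd.grad(loss, z)
        with torch.no_grad():
            z -= step * grad.sign()     # lower the generator loss, I-FGSM style
    return z.detach()
```

The shifted codes can then either be used directly at sampling time or mixed back into training, which is the bi-level view the reviews discuss.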
<doc-sep>The authors demonstrate that the generator in a GAN is a continuous function: two latent codes that are close in the latent space are mapped to two images that are close in the pixel space. However, the quality of the generated images is not preserved, as quality is not a continuous function in pixel space. To address this issue, the authors propose to transform the original latent codes and demonstrate that this results in better generation quality and diversity. 1. Related works have not been cited. For example, the following paper performs a similar optimization technique with a different objective. a. Yan Wu, Jeff Donahue, David Balduzzi, Karen Simonyan, Timothy Lillicrap. LOGAN: Latent Optimisation for Generative Adversarial Networks. (https://arxiv.org/pdf/1912.00953.pdf) b. https://arxiv.org/abs/2005.02435 c. https://arxiv.org/abs/1809.03627 Please do a thorough literature review to incorporate any such missing work. 2. The authors propose five methods, viz. (i) AdvLatGAN-z, (ii) AdvLatGAN-qua, (iii) AdvLatGAN-qua+, (iv) AdvLatGAN-div, and (v) AdvLatGAN-div+. Each method either focuses on improving quality or aims to enhance diversity. It would be interesting to see what happens when the proposed objectives in equation (5) and equation (8) are combined. Does the hybrid method outperform both AdvLatGAN-qua+ and AdvLatGAN-div+? 3. Another limitation of the work is that they compute only FID and JSD. FID, although a widely used metric, is not able to separately quantify quality and diversity, as it is a unidimensional score. Therefore, it would be nice to quantitatively verify the claims of enhanced quality and diversity in AdvLatGAN-qua+ and AdvLatGAN-div+, respectively. Comparing other metrics such as precision/recall or density/coverage would be more meaningful toward such goals. a. Density/Coverage: Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, and Jaejun Yoo. Reliable fidelity and diversity metrics for generative models. In International Conference on Machine Learning, 2020. (https://arxiv.org/abs/2002.09797) b. Precision/Recall: Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, Sylvain Gelly. Assessing Generative Models via Precision and Recall. In NIPS, 2018. (https://arxiv.org/abs/1806.00035) 4. The number of baselines used in the experiment section is too small for the CIFAR-10 case. 5. It would be good to have experiments on large-scale datasets such as FFHQ. 6. I wonder if FGSM is the only method this can be applied with, or whether others can be used as well? If so, how does the method depend on the adversarial training method employed? 7. I am not sure why MSGAN was chosen as the baseline for regularization. 8. There is no theoretical justification of why the proposed method should work. There is some empirical evidence, but it would be better to have some theoretical backing on why this method should aid in avoiding mode collapse. Please refer to the above comments.
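For the diversity side discussed in the reviews (the -div variants and the question about combining objectives (5) and (8)), the relevant building block is the mode-seeking ratio popularized by MSGAN. The sketch below shows that term in isolation; `G` and the use of L1 distances are assumptions on my part, not code from the paper.

```python
import torch

def mode_seeking_ratio(G, z1, z2, eps=1e-5):
    """Ratio of image-space distance to latent-space distance for a pair of codes.
    MSGAN-style training maximizes this ratio; a 'hard' pair is one where the
    ratio is small, i.e. two distinct codes collapse to similar images."""
    image_dist = torch.mean(torch.abs(G(z1) - G(z2)))
    latent_dist = torch.mean(torch.abs(z1 - z2))
    return image_dist / (latent_dist + eps)
```

A hybrid of the quality and diversity objectives, as suggested in point 2 above, would presumably weight this term against the realism-oriented latent shift.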
To improve generative adversarial nets, the paper proposes to add an implicit transformation of the Gaussian latent variables before the top-down generator. To further obtain better generations with respect to quality and diversity, the paper introduces targeted latent transforms into a bi-level optimization of the GAN. Experiments are conducted to verify the effectiveness of the proposed method. The paper is highly motivated and well written, but the experimental part still needs to be strengthened: because the goal of the paper is to improve GAN training, a comprehensive and thorough evaluation of the proposed method is necessary. After the first round of review, in addition to the clarification and missing-reference issues, two reviewers point out that the method is only tested on small-scale datasets and suggest that the authors evaluate the performance of the proposed method on more complex datasets. Two reviewers point out that the experimental validation and comparison to prior approaches are insufficient. During the rebuttal, the authors provide extra experimental results to partially address some issues. However, most of the major concerns from the reviewers, such as (i) how the method performs on large-scale datasets with complex latent-space manifolds, and (ii) the unconvincing performance gain and unclear problem setup, still remain. After an internal discussion, the AC agrees with all reviewers that the current paper is not ready for publication, and thus recommends rejecting the paper. The AC urges the authors to improve their paper by taking into account all the suggestions provided by the reviewers, and then resubmit it to the next venue.
The authors propose a layerwise pruning method that formulates the problem of eliminating neurons as a weakly submodular optimization problem for which the well-known greedy algorithm gives an approximation guarantee. They illustrate the practical performance of three different variants of their strategy, when extended to prune the entire network, on a variety of tasks. Originality: The idea of formulating the pruning problem to take advantage of weak submodularity is novel to me, although it does build somewhat crucially on existing work. Quality: The technical and experimental results seem to be well executed, to the best of my assessment. One can always add more competitors and try a wider variety of architectures, but I found the selected experiments to be illustrative and compelling. Clarity: The exposition was clear overall, though I found Figure 1 tough to read even after significant zooming. Significance: While it may be more expensive than some other approaches, the cost of this procedure is typically only borne once. So, I can see this being a very useful tool in practice, and it may spur pruning research in novel directions. Some limitations were discussed, though I don't necessarily see the focus on the data-limited regime as a limitation of the methodology. Or am I missing something important? <doc-sep>This paper proposes a network pruning approach via submodular optimization. The proposed method greedily selects, layer by layer, the neurons that improve the performance most. The paper shows that the solution returned by the greedy algorithm is able to closely approximate the optimal solution (of an NP-hard problem). To reduce the cost, a computation-saving approach is also proposed. Empirical results show that the approach is able to achieve good performance when the available training data at pruning time is small. Overall I find this paper quite interesting, especially its performance when there is a limited amount of training data at pruning time. The theoretical guarantee is also stronger and does not require stronger assumptions. However, I do have some concerns: 1. It seems the proposed method in general is very similar to Ye et al. Both methods are greedy forward selection and the difference seems subtle. Could you give more discussion in terms of methodology? 2. Following up on Q1, it seems that a main improvement over Ye et al. is the reduced computation cost. However, [1] also proposes a new technique to reduce the cost, and I was wondering how your method compares with [1]? 3. What would max_{|S|\le k} F(S) look like? I understand that it is NP-hard, but it would be interesting to show what this quantity looks like. 4. Can we empirically verify \gamma_{U,k}, as this is an important quantity? [1] Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough Yes <doc-sep>The paper proposes a neural network node pruning method and shows that the objective is essentially a form of weakly-submodular function optimization. Therefore, the pruning of a single layer can be solved using the greedy algorithm with a theoretical guarantee. The paper also shows that, using a limited amount of data, the proposed method is able to achieve the best performance compared to baselines. Strengths: 1. The paper draws a very intriguing and solid connection between neural network pruning and submodular optimization. More specifically, the weakly-submodular optimization factor is closely related to the activation matrix. 2.
The paper studies the problem in a comprehensive manner, including pruning of regular regions of neurons, strategies of pruning multiple layers, and speed-up tricks for the submodular optimization. 3. The proposed method empirically achieves the best performance compared to baselines on some network structures under a limited number of samples. Weaknesses: 1. The proposed approach has relatively high computational complexity. It seems that scaling would be a problem for scenarios such as larger network structures or utilizing a large number of data samples for pruning. 2. Given the limitation of the complexity, I guess that the empirical experiments could be hardly extended to more complex datasets (e.g., ones with larger image dimensions) or larger network structures. 3. The proposed framework only works for one layer. For pruning of multiple layers the paper proposes some heuristics. Yes. <doc-sep>Goal: effective and efficient structured pruning of a pre-trained NN-network if only small amount of unlabeled training data is available Contributions: 1) The authors propose a new technique (called "principled data-efficient structured pruning") that alters the existing "reweighting" method [Mariet and Sra, 2015]. Unlike [Mariet and Sra,2015]: * submodular optimization ... they formulate the subset selection problem of structured pruning as a weakly submodular maximization problem and solve it approximately by greedy search * extended pruning: pruning of regular regions of neurons (e.g., channels) and three strategies for pruning of multliple layers * limited number of training data (cca. 1% of the original training data) with no labels and one-shot pruning (without fine-tuning) 2) Theoretical justification of the method and its performance (in the Supplementary material). 3) Experimental evaluation of a solid scope with promising results. Strengths (significance and quality): 1) The problem of effective structured pruning of a pre-trained NN-network in the presented limited-data regime is important. Most of the existing structured pruning techniques require greater amount of training data and fine-tuning to work well. The proposed method (or even its particular parts) appears to be a valuable contribution with this respect. 2) The experimental evaluation of a solid scope offers promising (outstanding and stable) results. 3) Theoretical performance guarantee. 4) The authors try to be fair in their comparisons with concurrent techniques (e.g. the application of reweighting, various parameter settings...) and analyze also the weaknesses of their method. Clarity: 1) The submission is relatively clearly written and easy to read (except the theoretical parts and the figures). If there was enough place, I would prefer to move more experimental results from the supplementary into the main paper. The figures (e.g. Figure 1) are small and less comprehensible. It would maybe help to scale the graphs differently, to highlight the variants of the proposed method or to change the used colors. Novelty: 1) The novelty of the contribution is slightly limited. The proposed method is based on previous work by [Mariet and Sra,2015]. The original method is altered using the principles of submodular optimization to be advantageous in the new context of limited-data. 2) The related work is cited and addressed adequately. OK: The authors try to analyze both strengths and limitations of the paper in the experimental part.
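To make the layerwise greedy selection discussed in the reviews above concrete, here is a minimal sketch of greedy forward selection for pruning one layer. It is an illustration only, not the authors' exact procedure: the objective `reconstruction_gain` (a least-squares fit of the next layer's input from a subset of neuron activations, i.e. a simple "reweighting" step) and all names are assumptions made for the example.

```python
import numpy as np

def reconstruction_gain(A, Z, subset):
    """Negative least-squares error when the next layer's input Z is re-fitted
    from the activations of the neurons in `subset`.
    A: (n_samples, n_neurons) activations of the layer being pruned.
    Z: (n_samples, d_next) original inputs to the next layer."""
    if not subset:
        return -np.linalg.norm(Z) ** 2
    A_s = A[:, sorted(subset)]
    # Re-fitted next-layer weights for the kept neurons (the "reweighting" step).
    W_new, *_ = np.linalg.lstsq(A_s, Z, rcond=None)
    return -np.linalg.norm(A_s @ W_new - Z) ** 2

def greedy_prune(A, Z, k):
    """Greedy forward selection of k neurons to keep in one layer."""
    kept, candidates = set(), set(range(A.shape[1]))
    while len(kept) < k:
        # Add the neuron whose inclusion most improves reconstruction.
        best = max(candidates, key=lambda j: reconstruction_gain(A, Z, kept | {j}))
        kept.add(best)
        candidates.remove(best)
    return sorted(kept)

# Toy usage: 256 samples, a 64-neuron layer, 32-dimensional next-layer input.
rng = np.random.default_rng(0)
A = rng.normal(size=(256, 64))
Z = A @ rng.normal(size=(64, 32))
print(greedy_prune(A, Z, k=8))
```

The actual method additionally handles regular regions of neurons (e.g. channels), multi-layer pruning strategies, and speed-up tricks, but a greedy loop of this form is the object to which the weak-submodularity approximation guarantee applies.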
The paper proposes a data-efficient structured pruning method that, for a given layer, finds neurons/channels to prune, together with corresponding new weights for the next layer, so as to minimize the change in the next layer's input induced by pruning. This selection problem is formulated as a weakly submodular maximization problem, so it can be provably approximated using the greedy algorithm. The proposed solution is interesting and practical, as it requires only a limited number of training samples and no labels. The reviewers found the authors' response convincing; however, the authors are strongly encouraged to incorporate the clarifications provided in the rebuttal into the final version.
This paper proposed to relax fixed table structures by introducing a Transferable Tabular Transformer (TransTab) for tables. They basically convert each row into a feature vector and then apply stacked transformers for feature encoding. There are several advantages of this encoding: (1) it can deal with the tables that have different number of columns; (2) it is easier to transfer the knowledge learned from different columns. They conduct experiments on one clinical dataset and several public datasets under four different settings: supervised learning, feature incremental learning, transfer learning, and zero-shot learning. The empirical results show that the proposed approach outperform the baselines in the literature. They also showed that in the zero-shot learning scenario, they can almost match the performance of pretraining plus fine-tuning. - Originality: The main idea in this paper is a good combination of several ideas proposed in the literature. With modifications and adaptations, it worked and yield promising results on several datasets. - Quality: The proposed approach is technically sound and the experimental results showed that it outperformed several strong baselines for tabular data prediction. Although the results are impressive, I have several comments: - One of the main advantages advertised in the paper is that the proposed method could easily extend to feature incremental learning, pretraining+finetuning, and zero-shot inference. In the paper "existing works only cover vanilla supervised learning and fixed-table pretraining due to the fixed-column assumption." In my point of view, this is overclaiming. Not all existing works only cover vanilla supervised learning. For example, those transformer-based architectures like TabTrans, FT-Trans, can be easily adapted to those settings. - The zero-shot performance in Table 5 seems surprising to me. How do you split the table into three distinct sets? Do you do random split and how many random seeds have you tried? I would imagine a split that during the training, the model mostly sees Categorical and Binary features while during test it mainly sees Numerical features. In this way, I don't think the model is able to do zero-shot transfer. Moreover, can you try a setting that you manually control the number of categorical, binary, and numerical feature in both training and testing and see how does the model generalize? - In addition to the quantitative results, I would also like to see some qualitative analysis of the transferability of the model. What does the data look like and why the model is able to do the transfer? - How does the self-supervised pretraining affect the performance? I would like to see an ablation study where you only training the model with the direct supervision signals. This could help understand how much of the improvement is from the architecture design and how much improvement is from the self-supervised pretraining. - Clarity: In general, this paper is well-organized and easy to follow. I think section 2.4 could be better explained: what's the definition of $v_i^k$ in line 133? How do you compute $\\psi$ in equation 4? - Significance: This paper achieved strong results across a range of different datasets. Although the experiments are not comprehensive enough for the readers to understand every aspect of the system, I think it still sets a strong baseline and a good reference for the future work in this direction. This paper has sufficiently addressed the limitations. 
<doc-sep>This paper presents a tabular learning framework that covers transfer learning across tables, zero-shot inference, feature incremental learning, pre-training, and finetuning. This approach does not assume that columns in the table are fixed and work even with variable column tables. The authors propose two Contrastive Learning-based pre-training approaches by vertically partitioning the tables. This pre-training approach is feasible since the columns can vary across tables, making self-supervised and supervised pre-training possible. The transformer model proposed performs significantly better in all the claimed settings (transfer learning across tables, zero-shot inference, feature incremental learning, pre-training, and fine-tuning). In addition, the authors also introduce *clinical trial mortality prediction* tabular dataset. **Pros** * The proposed contrastive learning methods are computationally cheaper. * This variable column approach is really useful when the tables have too many columns and encoding them will be difficult in current existing transformers for tabular data (e.g. TaBERT). **Cons** * Setting for *Feature incremental learning* and *Transfer learning* seems very similar. (Dividing the dataset into three sets containing an equal number of columns and first training on set 1&2 then train on set 3 vs the transfer learning setting in the paper) * line 214-215 is confusing (incomplete) * In Feature incremental learning, no comparisons on how the performance on set1 after training on set1+set2; set1, set2, set1+set2 after training on set1+set2+set3. Will the performance on previous sets decrease? If yes, How to mitigate that? NA <doc-sep>This paper focuses on the transferability of tabular data classification methods. It proposes three novel settings to evaluate the model transferability in terms of columns: column overlapping, column increment, and zero-shot. It also proposed a novel method combining self-supervised and supervised pre-training. ### Strength * Three novel settings to evaluate the model transferability on tabular data classification. Transferability is an important research topic. * A novel method based on (self-)supervised pre-training for tabular data classification which is more accurate and transferable. ### Weakness * Incorrect claim in line 109: $E$ is not contextualized. To get contextualized embedding, the input embeddings should interact with each other, but not simply concatenization. * Feature incremental learning setting is unclear. * No baseline results (e.g. VIME and SCARF) for zero-shot setting. * This paper assumes tables are matrix-like and column types are given, which hiders its transferability. Many papers in the NLP community have explored to process tables under a more flexible setting: * Wang et al. [Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning](https://arxiv.org/pdf/2205.03972). NAACL 2022 * Yang et al. [TableFormer: Robust Transformer Modeling for Table-Text Encoding](https://arxiv.org/pdf/2203.00274.pdf). ACL 2022 * Wang et al. [Retrieving complex tables with multi-granular graph representation learning](https://arxiv.org/pdf/2105.01736). SIGIR 2021 * It would be meaningful to compare the methods focusing on numerical tables with those focusing on text tables.
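As a rough illustration of the variable-column idea discussed in the reviews above, a table row can be serialized by pairing column names with values so that tables with different schemas map into one shared token space. This is only a sketch of the general recipe described in the reviews (column-name/value pairing fed to a stacked transformer), not TransTab's actual implementation; the tokenization and the handling of the three column types here are assumptions.

```python
from typing import Any, Dict, List

def featurize_row(row: Dict[str, Any], col_types: Dict[str, str]) -> List[str]:
    """Serialize one table row into tokens that pair column names with values,
    so rows from tables with different columns share one vocabulary space."""
    tokens: List[str] = []
    for col, val in row.items():
        kind = col_types.get(col, "cat")
        if kind == "num":
            # Numerical: keep the column name as text and the value as a scalar marker.
            tokens += col.lower().split() + [f"<num:{float(val)}>"]
        elif kind == "bin":
            # Binary: include the column name only when the value is true.
            if bool(val):
                tokens += col.lower().split()
        else:
            # Categorical: column name plus the category string.
            tokens += col.lower().split() + str(val).lower().split()
    return tokens

# Two tables with different schemas still produce comparable token sequences.
print(featurize_row({"age": 63, "smoker": 1, "diagnosis": "Type II Diabetes"},
                    {"age": "num", "smoker": "bin", "diagnosis": "cat"}))
print(featurize_row({"gender": "female", "weight kg": 71.3},
                    {"gender": "cat", "weight kg": "num"}))
```

Because the encoding is keyed on column names rather than column positions, feature-incremental, transfer, and zero-shot settings all reduce to feeding differently shaped rows through the same encoder.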
This work introduces and evaluates a general scheme to featurize tabular data, along with methods for (self-supervised) pre-training over the same, with a focus on learning transferable representations. Reviewers were unanimous that the proposed method constitutes a flexible, practical approach that borrows and brings together existing SOTA techniques. Some questions about the specific settings considered in the evaluation (and the distinctions between them) were sufficiently addressed during the response period. Empirical results show consistent gains over the baselines on the tasks considered. An additional suggestion: one might naively anticipate that transfer learning for tables is not particularly promising, given the very different semantics two arbitrary tables might have. However, the scenarios considered here involve settings in which transfer seems a priori reasonable; I might suggest the authors address this upfront and explicitly outline the conditions under which transfer learning for tables is anticipated to work (and what assumptions are necessary for such cases), and where it is not.
This paper proposes a new method called GenTD for off-policy evaluation of GVFs. The algorithm can further estimate multiple intercorrelated GVFs at once under some reasonable (causal) assumptions. The contributions are mainly theoretical (convergence proofs). Some empirical evaluation on a small MDP is provided. # Strengths Overall, the paper tackles an interesting problem, identifies a shortcoming of previous algorithms and proposes a solution with theoretical guarantees. I found it overall pretty clear. # Weaknesses ## Scalability I have some concerns about the scalability of the method and whether it could extent beyond the linear regime. In particular, Alg 1 requires some projections step that may not make sense in the non-linear regime. Could authors comment on whether they think it is an issue? ## Experiments The experimental validation is weak in my opinion. I understand this is a theory paper and the main contribution is the algorithm and its guarantees, however I believe it is also in the authors interest to showcase that the algorithm performs well on environments more complex. In particular, GenDICE which this work builds upon had some empirical evaluation on non-trivial gridworlds as well as a simple control (Half-Cheetah) environment. As far as I know, methods based on min-max problems (DualDICE etc) are known to be quite unstable, so I think it is important that you showcase the robustness of your algorithm. Limitations are mentioned above. Beyond the questions on the theory, the limitations appear most on the practical side. In the paper the empirical evaluation may be lacking and thus making it harder to convince the community that this algorithm would be a safe and efficient one. More broadly, it is not clear to me how this algorithm would be extended to the non-linear setting. <doc-sep>The problem formulation this paper is concerned with is the General Value Function (GVF) approximation task, where the algorithm's task is to estimate a vector which is defined as an expectation of some algebraic manipulation of the signals received. This vector is named a GVF. On this task, the authors mainly compared their approach with another existing method called Gradient Temporal Difference (GTD). The main observation of this paper is that the goal of GTD is to minimize the empirical Mean Squared Projected Bellman Error (MSPBE), whereas a more legit goal would be the expected mean-squared projected Bellman error which they name as the Mean Squared Projected General Bellman Error (MSPGBE). To account for the shift from the empirical (data-dependent) mean to the expected mean, they applied an existing density ratio estimation method. Strengths:\\ Originality: The motivation is legit.\\ Quality: The logic chain works through, it is clear what they are doing. \\ Clarity: The story is easy to read. (I didn't check any proof.)\\ Weakness:\\ Significance: \\ 1. Seems A+B, where A = GTD, B = density ratio estimation.\\ 2. More precisely, I put this as a weakness because personally, I don't quite get the logic why learning density ratio would be more accurate. The authors put as a counter-example where GTD fails for Complete F_\\Phi. However, if for the same example, the density ratio is not learned very well, why would GenTD work? I guess assumptions count for limitations. A better limitation from my point of view would be to specify for which interesting classes of RL tasks, the algorithm GenTD wouldn't work. 
<doc-sep>The paper considers evaluating multiple interrelated general value functions using offline data. A generalized TD learning algorithm is developed and its theoretical properties are analyzied in detail. The strengths and weaknesses of the paper are given in the next section. The contributions of the paper are mainly theoretical and include (1) establishment of the contraction property for both forward and backward value functions; (2) development of a generalized TD learning algorithm to overcome the limitations of previous baseline methods; (3) convergence analysis of the proposed algorithm; (4) identification of sufficient conditions under which the proposed algorithm would converge. Strengths: 1. The paper evaluates both forward and backward general value functions. Despite the richness of the literature on off-policy evaluation, these general value functions are less studied. In that sense, the paper targets an interesting research problem. 2. The proposed method considers multiple interrelated general value functions jointly rather than on a case-by-case basis. 3. A generalized TD learning algorithm is proposed to address the limitations of the gradient TD methods. 4. Convergence of the model parameter and the estimated value functions are investigated in detail. Some comments: 1. There is a typo in the title of the pdf file. In addition, there is a question mark in the checklist. 2. The linearity assumption is quite strong. The algorithm might perform poorly in nonlinear systems with high-dimensional state information. 3. It remains unclear to me if the variance of "the reward to go" (mentioned on L23) can be represented in the form of the general value function. In particular, the variance of the cumulative reward would involve the interaction term that measures the covariance of the rewards at different time points. 4. Can you allow $B_j$ to be unobservable? The gradient of the value function (mentioned on L26) does not seem to be an observable quantity. It might be more useful to cover cases where $B_j$ needs to be estimated from the data as well. 5. The literature on off-policy evaluation with a general scalar value function is not thoroughly reviewed. In addition, it was mentioned on P2, L92 that these methods are not directly comparable. However, I do not agree with this argument. Suppose the initial state-action distribution concentrates on a particular state-action pair, then the value function is reduced to the state-action value. Combined with a kernel-type estimator, existing OPE methods such as (marginal) importance sampling, double robust estimation (double reinforcement learning) can be potentially applied to this setting. It remains unclear to me whether the proposed estimator is better. 6. The numerical study is oversimplified. It considers a simple toy example with 7 states and 2 actions. More extensive simulations based on e.g., OpenAI gym environments are needed to test the empirical performance of the proposed algorithm. The main limitation includes the linearity assumption as well as a lack of extensive empirical studies. The former was partly discussed in the discussion where nonlinear function approximation was mentioned. <doc-sep>The paper proposes a new off-policy algorithm to evaluate forward and backward general value functions that have the property of causal filtering. They show that the existing algorithm (GTD) fails in this (for example, failing to compute the ground truth GVF) and propose an algorithm to overcome its shortcomings. 
Thanks to the authors for putting in the effort to do this work! Strengths: - I think this is an important area to research. GVFs are relatively under-researched and it's important to see this type of work. - 1) The use of the concrete example on line 140 was helpful to communicate the idea of GVFs; 2) the comparisons between GenTD and GTD were also nice. Weaknesses: - Given that the concept of GVFs is unique and somewhat niche, I think the authors should be clearer on what GVFs exactly are (see suggestions below). - The experimental section seems rather limited. The cited paper on GVFs, General Value Function Networks (2021), seems to have a pretty detailed experimental section. It would be interesting to see how the proposed algorithm scales to harder domains. No, they do not address it, but this seems to be more fundamental work with no direct societal impact.
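To ground the discussion of distribution correction in the reviews above, here is a heavily simplified sketch of a density-ratio-weighted linear TD update for a single scalar value function. It is not the GenTD algorithm itself (which handles vectors of interrelated forward and backward GVFs and learns the ratio jointly); the feature map, the assumed pre-estimated ratio `rho`, and all names are illustrative assumptions.

```python
import numpy as np

def weighted_linear_td(transitions, phi, rho, dim, alpha=0.05, gamma=0.95):
    """One pass of TD(0) with linear features, where each off-policy transition is
    reweighted by an (assumed pre-estimated) state density ratio rho(s) and an
    importance ratio for the action, pi(a|s)/mu(a|s)."""
    w = np.zeros(dim)
    for s, a, r, s_next, pi_over_mu in transitions:
        x, x_next = phi(s), phi(s_next)
        delta = r + gamma * w @ x_next - w @ x          # TD error
        w += alpha * rho(s) * pi_over_mu * delta * x    # distribution-corrected update
    return w

# Toy usage on a 3-state chain with one-hot features.
phi = lambda s: np.eye(3)[s]
rho = lambda s: 1.0                                     # pretend the estimated ratio is flat
data = [(0, 0, 0.0, 1, 1.0), (1, 0, 0.0, 2, 1.0), (2, 0, 1.0, 0, 1.0)] * 200
print(weighted_linear_td(data, phi, rho, dim=3))
```

The point of the reweighting is that the fixed point targets the expected projected Bellman error under the target distribution rather than the empirical, data-dependent one, which is the gap between GTD and GenTD that the reviews debate.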
The paper proposes a new algorithm called GenTD for the estimation of multiple general value functions (predictive and retrospective) from off-policy data. The paper shows convergence guarantees for this algorithm to the ground truth for a certain class of general value functions with causal filtering. The initial reviews were mixed. On the positive side, the reviewers found the writing to be clear overall, found the studied problem important and appreciated the theoretical results. On the negative side, several reviewers voiced concerns regarding the experimental evaluation. Other concerns are the limitation of the linear setting and possible extensions to the non-linear setting as well as the significance, specifically, whether this work is merely a combination GTD and density ratio estimation. The authors' response could alleviate these concerns, further clarifying the contributions of the paper as well as adding additional experimental results. After the discussion with the authors, all reviewers view the paper positively and the AC agrees. All in all, this paper is recommended to be accepted.
The major result of this paper is to provide a general framework for analyzing compressed gradient descent: an algorithm with an unbiased gradient estimate whose variance is bounded with respect to a *shift* vector. This general framework allows one to recover several existing algorithms and provides some improvements. In particular, it improves the previous rate of $\max\{\kappa(1+\frac{\omega}{n}), \omega\}$ to $\kappa(1 + \frac{\omega}{n})$. There are no major weaknesses in this paper (provided I am not missing important literature). One minor issue I can think of is that the algorithm only works for strongly convex and smooth functions, which is relatively narrow. If there are simple adaptations to other settings (as usual in the optimization literature), please point them out explicitly in the paper. The writing in general is good, with a few minor issues: 1. Page 5, "Our first approach is based on the celebrated DIANA...": it is unclear to me why DIANA is a celebrated method; please consider rewording. 2. There is no explanation of the parameter $\omega$: how large could it be in practice, and how is it viewed in the literature? <doc-sep>1. The paper widely covers several distributed communication compression algorithms and combines the existing work with the proposed shifted compressor. It provides a new perspective for analyzing compression algorithms. 2. This paper has a clear structure, with sufficient discussion of related work and several comparisons. See detailed comments. 1. The experimental results seem a little weak. The authors only provided the classical ridge regression optimization problem, which is too limited to show the advantage of the proposed methods. Although the authors mentioned that the benefits have been clearly demonstrated by other related work, it would be better to show results on common deep learning baselines in this paper. 2. I am also a bit concerned about the novelty of the shifted compressor. In particular, I feel it is quite similar to the EF21 compressor proposed in *"EF21: A new, simpler, theoretically better, and practically faster error feedback." Advances in Neural Information Processing Systems 34 (2021).* Of course, this paper provides a general view of such compressors, yet it still seems highly related and the intuition seems quite similar. The authors might want to clarify the differences with the above work. 3. The summary in the contribution part claims "improved rates", yet the authors mention later that "the results … can have the same complexity as compressed gradient methods". From this claim it seems that the shifted compressor maintains the convergence rate instead of really "improving" it? 4. The authors could explain more about the shifted compressor and the meta-algorithm. Eq. (3) shows that it subtracts a shift $h$, compresses, and then adds back the shift $h$; this involves two more steps and adds computational cost, but the advantage of such a shifted compressor is not clear from the theoretical analysis. 5. It seems that the choice of shifts is crucial for the proposed shifted compressor. However, one may not know which shift is optimal. 6. The theory is all about convex cases; I wonder if the authors could extend it to nonconvex cases. Also, it seems to only work under gradient descent; can it be extended to the stochastic gradient case? These would largely improve the contribution of the proposed work. Minor: there is a typo in the right plot of Figure 1; the first legend of R-DIANA should be s = 8. <doc-sep>1.
The idea of shifted compression itself is neat and clean, and could easily be applied to many existing distributed optimization algorithms. 2. The theoretical analysis shows that applying shifted compressors to some existing distributed optimization algorithms such as GDCI can result in better convergence. 3. The experiments on ridge regression and logistic regression show good performance compared to DIANA. 1. It is unclear why DIANA is used in Rand-DIANA to compress the communication of the shifts. Actually, I cannot even understand how DIANA is used in Rand-DIANA, since Rand-DIANA seems to be simply a random masking. 2. My main concern is that the experiments are too simple. Both the optimization problems (ridge regression, logistic regression) and the datasets (synthetic, w2a) are very simple and small for modern computing hardware. These problems could be easily and quickly solved on a single-node CPU machine, which can hardly be used to justify the results for distributed optimization algorithms. Distributed training is only necessary when the model and the datasets are both extremely large. 3. The experiments only show relative error vs. number of bits. For distributed training, what users really care about is whether the overall training time can be reduced. However, the wall-clock time for training is not reported. 4. The uncompressed baseline is not reported in the experiments. 1. In the experiments, is the algorithm called "DIANA" DIANA itself, or Algorithm 1 with DIANA applied to the shifts? 2. How is DIANA used in Rand-DIANA? I cannot see any connection between DIANA and Rand-DIANA, since Rand-DIANA seems to be simply a random masking in the communication. 3. For distributed training, what users really care about is whether the overall training time can be reduced. I strongly recommend reporting the wall-clock training time and comparing it with the baselines (including the uncompressed training baseline). 4. I strongly recommend reporting the uncompressed baseline in the experiments, so that the reader can see the gap between compressed and uncompressed training.
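Since several comments above ask how the shifted compressor actually operates, a minimal sketch may help: compress the difference to a shift, then add the shift back, so the compression error is measured relative to the shift rather than to zero. The random-sparsification base compressor and the variable names are assumptions made for illustration, not the paper's exact construction.

```python
import numpy as np

def rand_k_sparsify(x, k, rng):
    """Unbiased random-k sparsification: keep k coordinates, rescale by d/k."""
    d = x.size
    mask = np.zeros(d)
    mask[rng.choice(d, size=k, replace=False)] = d / k
    return mask * x

def shifted_compress(x, h, k, rng):
    """Shifted compression: C_h(x) = h + C(x - h).
    If x is already close to the shift h, the compressed message carries little error."""
    return h + rand_k_sparsify(x - h, k, rng)

rng = np.random.default_rng(0)
x = np.arange(10.0)
h = x + rng.normal(scale=0.1, size=10)   # a shift that already approximates x well
print(np.linalg.norm(shifted_compress(x, np.zeros(10), 3, rng) - x))  # plain compression
print(np.linalg.norm(shifted_compress(x, h, 3, rng) - x))             # shifted: much smaller error
```

This also makes the trade-off raised in the reviews visible: the two extra vector operations are cheap, but the benefit hinges entirely on maintaining shifts that track the gradients (or iterates) being compressed.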
Meta Review: This paper studies a distributed/federated optimization setting, with a focus on communication as the bottleneck (specifically, they focus on the case where compressed, noisy estimates of the gradient are being used, in a suitable formalization). The paper's most significant contributions are theoretical. First, they provide a framework which generalizes several extant algorithms for such settings (namely, access to an unbiased gradient and bounded variance with respect to a shift vector). Second, they provide a convergence rate analysis for their framework, along with ways to pick optimal shifts which gives an improved algorithm. Overall, the paper makes nice theoretical contributions, though several reviewers noted the empirical evaluation is somewhat restricted (and the settings apply to the convex setting only, which might be restrictive for some practical applications).
This work focuses on the update regression problem for structured prediction. The authors explore two standard structured prediction approaches (graph-based and transition-based). They conduct a set of experiments and introduce a novel approach to mitigate the update regression problem. **EDIT after discussion with the authors:** I still disagree with the authors, but they made a good point in our discussion, so I upgraded my grades. I strongly recommend that the authors soften some claims made in the paper, note that in some cases the KL-divergence technique for KD can be used, and give proper citations about how one could do that. **Strengths:** - this work focuses on an important and under-studied problem - experimental results are interesting **Weaknesses** Unfortunately, this paper has many weaknesses. First, and importantly, the main contribution of this paper (section 5.3) is not well motivated and the explanation of the contribution itself is really handwavy. Second, the explanation of the different baseline approaches is also too handwavy, making the overall paper difficult to understand and not self-contained. On the motivations: the authors argue that previous approaches cannot be applied to structured prediction. I strongly disagree with this. - l. 227, "the output distribution of global prediction is often intractable": in the non-projective dependency parsing case, the distribution can be computed using the Matrix-Tree Theorem [1, 2, 3]. If we restrict the problem to projective trees, it can be computed via dynamic programming [4, 5]. For the transition-based model, approximate approaches have been explored in the literature [6]. - l. 222, "the instantiations of KD are usually based on a loss function computing the cross-entropy of output distributions [...] however existing methods are not directly applicable to structure prediction tasks": I don't understand why. First, the next sentence is false (see the previous point), but also the KL divergence between structured distributions has been studied in the literature. For non-projective trees, see [7, Section 6.3]; for methods based on dynamic programming, see [8]. - l. 263, "furthermore, it is unclear how to adapt existing sampling methods to solutions that are not formalized as sequence generation": recent work has considered sampling from non-projective tree distributions, including sampling without replacement [9, 10]. Moreover, previous work has also considered perturb-and-MAP approaches [11, 12, 13]. Finally, in the case of dynamic programming algorithms, it is well known that it is possible to sample from the associated exponential family distributions; see e.g. [14]. **Related work** As suggested by the comments above, the literature is not properly explored or cited by the authors. There are similar problems in the introduction and related work section. For example: - l. 54: the authors cite Dozat and Manning (2017) for graph-based parsers, whereas the correct citation is more than 10 years older [15]. - l. 21: for transition-based parsers, they cite Ma et al. (2018); better citations would be [16, 17]. [1] Structured Prediction Models via the Matrix-Tree Theorem (Koo et al.) [2] Probabilistic Models of Nonprojective Dependency Trees (Smith and Smith) [3] On the complexity of non-projective data-driven dependency parsing (McDonald and Satta) [4] Semiring parsing (Goodman) [5] Differentiable Dynamic Programming for Structured Prediction and Attention (Mensch and Blondel) [6] Globally Normalized Transition-Based Neural Networks (Andor et al.)
[7] Efficient Computation of Expectations under Spanning Tree Distributions (Zmigrod et al.) [8] First- and Second-Order Expectation Semirings with Applications to Minimum-Risk Training on Translation Forests (Li and Eisner) [9] Efficient Sampling of Dependency Structures (Zmigrod et al.) [10] Unbiased and Efficient Sampling of Dependency Trees (Stanojevi) [11] Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models (Papandreou and Yuille) [12] Differentiable Perturb-and-Parse: Semi-Supervised Parsing with a Structured Variational Autoencoder (Corro and Titov) [13] Learning Latent Trees with Stochastic Perturbations and Differentiable Dynamic Programming (Corro and Titov) [14] See section 17.4.5 of Machine Learning: A Probabilistic Perspective (Muprhy) for the idea and Latent Template Induction with Gumbel-CRFs (Fu et al.) for application to CRF-like distribution [15] Non-projective Dependency Parsing using Spanning Tree Algorithms (McDonald et al.) [16] A Classifier-Based Parser with Linear Run-Time Complexity (Sagae, Lavie) [17] Algorithms for Deterministic Incremental Dependency Parsing (Nivre) nothing to report <doc-sep>The paper examines model update regression in NLP structured prediction tasks. Model update regression is an issue appearing when a model is updated and the new model classifies part of the test examples negatively, while the old model classifies them positively. The novelty of the paper is that it explores the problem in structured prediction, while it has been previously explored for classification. The work proposes a new method for correcting the new model from the old one - backward-congruent re-ranking and experimentally shows that it performs better than other previously used methods, such as model ensemble and knowledge distillation. The work also defines a new measure related to the model update regression problem in structured prediction - negative flip impact. Strengths: - Studying the model update regression in relation to structured prediction is interesting and important. The contributions are novel enough. - The paper is very well-written and clear. The problem, previous work, and the proposed solution are clearly described. - The proposed improvements are meaningful and the experiments show that the proposed method for correction - backward-congruent re-ranking works better than the alternatives on the explored tasks. Weaknesses: I did not find any major issues with this work. I am suggesting some things I found that could improve it further. - Line 120: maybe add a citation for UD 2.2. On the next line only the POS-tagging dataset is cited. - Line 182: an example could be beneficial for understanding how the global prediction is thus decomposed into a sequence of next-word predictions. - Line 217: from the description it is not really clear how checkpoint averaging and prediction ensembling differ. - Table 5: It is not clear what is the difference between rows 5 and 8 (for dropout-p=0.3) n/a <doc-sep>The paper studied a problem called model update regression in the field of structured prediction for NLP, which means a new model that achieves higher scores may perform worse than its baselines on some cases. The authors proposed an approach, BCR, to handle this issue. The basic idea is to use an old model to filter predicted candidates from the new model. A trick called dropout-p sampling is also applied to improve the diversity and quality of produced candidates. 
The experiments had been done on Dependency Parsing and Conversational Semantic Parsing, which were aimed to show the effectiveness of BCR. Pros: 1) the writing is good and clear; 2) the problem is well defined and the solution is straightforward. Cons: 1) syntactic parsing is indeed very classical for computational linguistics but only stands for a tiny part of tasks in the field of structured prediction. Some other tasks like NER and POS Tagging are far more common and have much more applications in the NLP industry. Plus, a lot of space in the paper is used for introducing syntactic parsing, which causes a bar even for researchers from NLP and is not necessary for its future applications to common scenarios. Therefore, The authors should investigate more representative tasks in structured prediction such that the formulation is more clear and the work has a broad impact on the whole community. 2) knowledge distillation for structured prediction is a mature field in NLP. Please at least see this work, Structure-Level Knowledge Distillation For Multilingual Sequence Labeling, and check the recent literature. Hence, the authors shouldn't say like 'existing knowledge distillation-based solutions cannot be directly applied to structured prediction models' in the paper, and more solid experiments are expected for comparisons. 3) the problem mentioned in the paper is formulated in a quite intuitive manner, which might lead to many explanations that can't be proved false. For example, from my view, the problem may be a cause of randomness in the premise that neural networks can't fit the whole dataset, which may explain why the ensemble model works. Therefore, I believe a theoretical foundation is expected. Besides my concerns raised in 'Strengths And Weaknesses', I think the authors should do more experiments to empirically confirm the problem and describe it in a more acceptable way. I have seen that the authors took a lot of space to show the performance numbers of their models, which I believe should not be the focus of this paper. <doc-sep>This paper tackles the problem of model update regression: when a model with better aggregate performance is deployed, it may make mistakes that a previous model didn't make. How can we minimize these "regressions"? Specifically, this paper is the first to look at this problem in a structured prediction setting. On both dependency parsing and on a conversational dialogue / semantic parsing task, the paper evaluates new models to see how many examples they regress on. The paper describes several techniques to mitigate regression, including a new one, backward-congruent reranking, that reranks the new model's output using the old model. This is combined with a form of dropout at test time to yield diverse samples. The paper shows that BCR achieves good task accuracy while minimizing the number of model regressions. STRENGTHS - The problem setting in this paper is novel. While a somewhat niche problem, I am convinced of the core argument that this could be useful (with some reservations; see below under "Weaknesses"). - The exploration of "new" models is quite thorough, and it's nice to see the contrasts between some different types of model updates, including changing the mode paradigm. - The BCR technique is simple and I could see it being widely adopted for this task - The dropout-p sampling technique is interesting and a novel way of getting diverse samples from these kinds of generation models. 
WEAKNESSES The main conceptual weakness I see in this paper is the notion of a regression as it ties to model confidence. It seems to me that if you take a model achieving 90% accuracy and perturb its weights slightly, it may still get 90% accuracy but make different predictions. Many of the flipped predictions will be those where the 2nd-best option had nearly the same probability as the 1-best option; that is, cases where the confidence was low / entropy of the label distribution was high. It's not clear to me in these cases that the first model is getting things "right" and the second is getting them "wrong", more that the model had high uncertainty and it broke one way (correct) for some examples and another way (incorrect) for other examples. Considering that BCR's performance is always lower than that of the ensemble, it seems to me that what's happening here is that this is really an ensemble that weights more heavily towards the earlier model. We aren't ensembling the two models symmetrically, but by reranking outputs from the second using the first, we're really only making changes where the second model is significantly more confident and doesn't even return what the first model initially preferred, resulting in a smaller performance gain but also fewer changes. So basically this is a clever way of combining the models but mostly has the effect of preferring the first. I also wonder about the NFR/NFI metrics as they relate to the motivation of this paper. The idea of a system changing and then breaking UX or downstream modules that were depending on certain things working is a problem that resonates with me. However, do NFI and NFR really correlate with this motivation? Again, many of the differences seem like they could most heavily depend on examples that were basically half-working to begin with, which were probably not consistent features a user was very attached to. As a result, while this paper explores a different point in the design space of how to ensemble these two models, I'm not sure what it presents is really groundbreaking in terms of results. There is little discussion of limitations and societal impact, but I don't see this as a big problem for this paper.
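For readers less familiar with the regression metrics and the reranking idea debated above, here is a compact sketch of (i) a negative-flip count between an old and a new model and (ii) backward-congruent reranking, where candidates sampled from the new model are rescored by the old one. The scoring interface and the sampling routine are placeholders rather than the paper's implementation; in particular `sample_candidates` stands in for dropout-p sampling.

```python
from typing import Callable, List, Sequence

def negative_flip_rate(old_preds: Sequence, new_preds: Sequence, gold: Sequence) -> float:
    """Fraction of examples the old model got right but the new model gets wrong."""
    flips = sum(1 for o, n, g in zip(old_preds, new_preds, gold) if o == g and n != g)
    return flips / len(gold)

def backward_congruent_rerank(x,
                              sample_candidates: Callable[[object, int], List[object]],
                              old_model_score: Callable[[object, object], float],
                              k: int = 8):
    """Draw k candidate structures from the NEW model (e.g. via test-time dropout
    sampling), then return the one the OLD model scores highest, biasing the update
    toward outputs the previous model already agreed with."""
    candidates = sample_candidates(x, k)
    return max(candidates, key=lambda y: old_model_score(x, y))

# Toy usage with trivial stand-ins.
print(negative_flip_rate(["a", "b", "c"], ["a", "x", "c"], ["a", "b", "c"]))  # 1/3
print(backward_congruent_rerank("input",
                                sample_candidates=lambda x, k: [f"cand{i}" for i in range(k)],
                                old_model_score=lambda x, y: -len(y)))
```

Framed this way, the reviewer's point above becomes easy to state: because the old model arbitrates among the new model's candidates, the procedure behaves like an asymmetric ensemble that trades some accuracy gain for fewer behavioral changes.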
This is a good paper on a topic that is very important in practical scenarios but does not have many off-the-shelf solutions, and I find that this paper makes a solid attempt to address it. In addition, I am happy to see the discussion with the reviewers, most of whom suggest acceptance.
This paper introduces a new model for performing multi-label classification which is particularly well-suited to situations where there are elements of structure (hierarchy, mutual exclusion) in the label space. The fundamental idea is to use hyperplanes in hyperbolic space as the decision boundaries for label assignment. The main benefit of hyperbolic space over Euclidean space is that there are infinitely many hyperplanes in hyperbolic space which are non-parallel and do not intersect, thus allowing for the regions corresponding to label assignment to capture hierarchical and exclusionary patterns. The previous SOTA model used axis-aligned hyperrectangles, or "boxes", and the authors argue that such a constrained geometric region is not in alignment with the natural decision boundaries provided by neural network architectures, whereas using the hyperplanes in hyperbolic space are more amenable to the output of such encoders. A thorough analysis on 12 datasets, along with additional ablation studies are provided in support of these claims. Originality: The method is novel, and the reasons for this approach are well-motivated. Quality: The submission is technically sound. The experimental results, while only a minor improvement quantitatively, are statistically significant. One caveat, which will be brought up in the questions, is that it is not necessarily clear that the full benefit comes from the use of hyperbolic hyperplanes, but rather the injection of the mutual exclusion constraints, which other models could also potentially easily exploit. Still, even if the model is only on-par with SOTA results when taking this into account, the underlying idea is sound. Clarity: The submission is mostly clear, but there are a number of improvements to the presentation I would recommend in the Questions section. Even so, I feel I would be able to reproduce the model from the description provided in the paper. Significance: The task is of interest to a number of researchers, and more broadly the idea of the incorporation of structure in Hyperbolic space would be of interest to the community of researchers working on such representations. The authors mention that a limitation of the work is that it currently only considers implication and exclusion, and they suggest logical equivalence ($l_a \\wedge l_b \\iff l_c$) as another potential useful constraint to include, however I think that more can be said on this point. The fact that the current model does not include such a constraint is not merely that it has yet to be implemented, there is a representational limitation in that the intersection of two balls is not, generally speaking, another ball, and therefore the existing model does not lend itself to easily encoding this constraint. On the other hand, models which use box regions can very easily incorporate such a constraint, as the intersection of two boxes is also a box. <doc-sep>This paper proposes to model hierarchical structure as an embedding inference using Poincare balls. Hierarchical inclusion and exclusion are used to construct training losses in Poincare space and experiments show the proposed model generally outperforms box-based baselines across multiple datasets. 1. The proposed Poincare hyperplane has some nice attributes on inclusion and exclusion. The construction of the training loss is intuitive and sense-making. 2. Experiments on hierarchical multilabel classification suggest promising results over box-based baselines. 
The amount of novelty over the NeurIPS 17 paper is unclear. This paper has a substantial amount of analysis, which is a bonus to have. But the technical crux is in the model. <doc-sep>This study explores a structured multi-label prediction problem. To this end, the authors propose to convert logical constraints into soft geometric constraints in the hyperbolic embedding space, where the hyperplanes are viewed as convex areas, with insideness and disjointness of these regions representing logical linkages (implication and exclusion). Extensive tests on 12 multi-label classification problems demonstrate the model's capacity to boost performance. Strengths: This study presents a novel translation that converts logical constraints into soft geometric constraints in the hyperbolic embedding space. Besides, there is a clear geometric intuition, where implication is modeled by geometric insideness while mutual exclusion is modeled by geometric disjointness. Weakness: The experiments lack case studies that would faithfully show that the obtained results reflect the initial motivation. As mentioned by the authors, other logical constraints can exist in these datasets, and the authors do not currently consider these relationships. <doc-sep>This paper introduces a method for multi-label classification when the class labels have known dependency structures, namely implication and mutual exclusion. Hyperbolic geometry is employed to jointly learn parameters for an encoder that embeds points in a Poincare ball and Poincare hyperplanes for class labels. If the embedded point lies on one side ("inside") of the Poincare hyperplane, it is predicted to have that class label. Due to the curvature of the space, the Poincare hyperplanes can be contained completely inside one another, overlap somewhat, or not overlap at all. This work presents a joint training objective that not only correctly classifies the examples, but also encourages known constraints to be satisfied by the learned Poincare hyperplanes for each class: implication <-> complete containment and mutual exclusion <-> non-overlapping. The method is compared on standard benchmark datasets against reasonable baselines. Strengths: - Paper is overall well-written - Motivation is clear - Figures 1 & 2 make the method very clear and easy to understand! - Experimental evaluation is adequate and convincing with several reasonable ablations Weaknesses: - It seems like the method is narrowly applicable: it only seems to beat the best baseline methods when constraints are available. How often are these constraints actually available in practice? - As presented, only two types of logical constraints can be (softly) enforced by the objective. These constraints are arguably the most common types of constraints for these problems, though. - The claim that their method uses a lower dimensionality than baselines is somewhat misleading (see Appendix G of Patel et al.). - The authors don't compare their method to the best-performing baseline MBM without explicit constraint modeling. The authors have adequately addressed the limitations of their work. The potential negative societal impact of their work is not addressed, but their work has no more potential negative societal impact than any other paper submitted to NeurIPS.
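Because the geometric intuition above (containment = implication, disjointness = exclusion) hinges on how hyperbolic hyperplanes are defined, a small sketch of the underlying operations may help. It uses the standard Mobius addition and the convention that a Poincare hyperplane through a point p with normal a splits the ball according to the sign of the inner product of (-p) Mobius-added to x with a; this is generic hyperbolic machinery at curvature -1, not the paper's exact parameterization or loss.

```python
import numpy as np

def mobius_add(x, y):
    """Mobius addition on the Poincare ball (curvature -1)."""
    xy, xx, yy = x @ y, x @ x, y @ y
    num = (1 + 2 * xy + yy) * x + (1 - xx) * y
    return num / (1 + 2 * xy + xx * yy)

def poincare_dist(x, y):
    """Geodesic distance between two points of the open unit ball."""
    sq = np.sum((x - y) ** 2)
    return np.arccosh(1 + 2 * sq / ((1 - x @ x) * (1 - y @ y)))

def hyperplane_side(x, p, a):
    """Sign of <(-p) (+) x, a>: which side of the Poincare hyperplane through p
    with normal a the embedded point x falls on (>0 can be read as 'inside')."""
    return float(np.sign(mobius_add(-p, x) @ a))

x = np.array([0.3, 0.1])   # an embedded example
p = np.array([0.5, 0.0])   # a point on a label's decision hyperplane
a = np.array([1.0, 0.0])   # the hyperplane's normal direction
print(poincare_dist(x, p), hyperplane_side(x, p, a))
```

The property the reviews highlight is that, unlike in Euclidean space, infinitely many such hyperplanes can be non-parallel yet non-intersecting, so the "inside" regions of two labels can nest (implication) or be fully disjoint (exclusion).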
The reviews of this paper are uniformly positive. The novelty is the handling of exclusion edges, which expands on previous work. On the negative side, the improvements seem small and do not solidly establish the value of the hyperbolic hyperplanes. But the reviewers liked the paper and I recommend acceptance.
The paper is an extension of [1]. The task in [1] is Language Modeling, while this paper is doing machine translation with the similar idea. The authors propose a non-parametric method for machine translation via a k-nearest-neighbor (KNN) classifier. Specifically, it predicts tokens with a KNN classifier over examples cached in a so-called datastore and this method can be applied to any pre-trained neural machine translation model without further training. The experiments show that it improves results across a range of settings (in-domain, out-of-domain, and multi-lingual evaluations). Strengths: + The method is simple and can be applied to pre-trained neural machine translation model without further training. + The experimental results across a range of settings are effective. Weaknesses: - Although the method is simple and does not add trainable parameters, it add the computational cost. The authors mentioned the computational cost briefly but there are no detailed experiments. It would be good to see the authors add more analysis on the computational cost, for example, how it varies with k. - Technical novelty over [1] seems to be incremental, where a large portion of the work is essentially regarding machine translation as a language modeling and applying the method in [1] to machine translation. [1] Khandelwal, Urvashi, et al. "Generalization through memorization: Nearest neighbor language models." arXiv preprint arXiv:1911.00172 (2019). <doc-sep>This paper describes a nearest-neighbor enhancement to NMT, where internal token-level context representations are used to index into a large data store to find relevant (source, target prefix) pairs. Since the index representation is taken from a pre-softmax representation in the decoder network, no additional training of the NMT model is required. The authors show a diverse range of strong results, from improvements using a data store over the model’s own training data, to improvement from using a collection of domain-specific corpora not present during training used for domain adaptation, to language specific collections to improve capacity of multilingual models. They are also able to show by example how the model makes MT more interpretable. This is a very strong paper. It's well-written and easy to read, the method is very novel to MT, and the results are great. The method isn’t practical right now (decoding is two orders of magnitude slower), but it’s very interesting and thought-provoking. I can imagine it influencing a lot of work, even if the actual method doesn’t see a lot of use. The only complaint that I could imagine raising against this paper is that the method is not particularly novel in light of recent work on nearest-neighbor language modeling, but in this day and age, with so many papers available, I think it’s actually very important to make these incremental stops in neighboring fields to make the connections explicitly clear. All the great experiments on multilingual MT and domain adaptation also help a lot. To their credit, the authors provide a concise section discussing the changes that needed to be made for the conversion to conditional language modeling (MT). Small concerns: The exp(d) in Figure 1 is missing a negative: exp(-d). Table 1: what does the bolding indicate? 
It looks like statistical significance, but if so, please be clear about what test was used. <doc-sep>Summary: This submission introduces the kNN-MT approach for neural machine translation, which brings a memorization flavor to decoding by querying a nearest-neighbor classifier over a large datastore when generating target sentences, using the neural machine translation model's representations for similarity search. No additional parameters are needed, but the inference cost increases. The authors conduct experiments in different settings: single-language-pair translation, multilingual machine translation, and domain adaptation. Results show that kNN-MT can improve translation performance by non-trivial margins by retrieving related examples at test time. Comments: Generally speaking, the submission is okay and the proposed approach has no big flaws; however, I find it hard to recommend this submission for acceptance. The main reasons or concerns are: 1. It is clear that this submission is a direct and straightforward extension of the previously published ICLR 2020 paper on kNN-LM, as the authors also clearly state in the abstract. Therefore, the contributions and differences are quite limited. The technique is almost the same, except that the key additionally includes the source-language sentence. The presentation of this paper is also similar to kNN-LM. This direct extension of the kNN approach from language modeling to neural machine translation reads more like a technical report on a method extension, which makes it hard for me to recommend. 2. Regarding the approach, I acknowledge that the method is effective, as the authors have demonstrated with multiple experiments. However, the computational cost is also high. The authors discuss this in Section 3. It is hard for real-time systems to afford the increased inference cost that this approach incurs. Improved results at a small additional cost are acceptable, but too much cost is not a good choice. The authors mention that there is a trade-off, and I acknowledge this, but it is still not clear what a good trade-off is. 3. Also, this method depends heavily on the scale of the datastore, as well as on the similarity between the training and test data, if I understand correctly. This assumption can hold for high-resource translation, but for low-resource translation this would be limited. This is another drawback of such search-based algorithms. Minor question: What is the effect of varying $\lambda$? Therefore, in short, I feel this paper is a straightforward extension of the previous paper (indeed, this was listed as future work in the previous work's review responses), and this concerns me a lot for another submission at ICLR 2021. --------------- Update: I thank the authors for their responses to my points, especially the discussion about novelty. But I still feel the success of kNN for NMT mirrors that for LM, which is why many techniques studied for NMT also apply to LM, since this kNN method only targets the decoder side, the same as a language model. Therefore, I still feel it is not novel enough. <doc-sep>This work presents an approach that exploits a very large translation memory at decoding time to improve NMT. An extensive evaluation is performed along with some detailed analysis of the important parameters.
Strengths: - the idea is very simple, very easy to understand, and intuitive - can be added to existing pre-trained NMT model - many interesting applications (domain adaptation, multilingual model specialization for instance) are presented and are mainly the reason why I think paper can be accepted for publication. - the paper is easy to read and well-written Weaknesses: - exploiting a translation memory at test time is not novel (exploitation of billions of tokens is rather impressive but in my opinion making this possible is more an engineering problem) - the approach is described within one page, the remainder of the paper is about evaluation and analysis. For ICLR, the paper lacks of substance. - the improvements over SOTA English-German are very small considering that billions of tokens are exploited and the high decoding cost. - the experiments presented in this paper are not reproducible since unpublishable data are exploited to train the system (eg. CCMatrix) - computational cost at test-time is extremely high, as expected. This is probably why nobody tried it before. I do not see how it could be used for real-world applications. Focusing on reducing the computational cost would greatly improve the paper. Questions/suggestions: - "we also provide scores from Aharoni & Goldberg (2020)": did you check that these scores are comparable with yours? It is unclear in the paper whether they also used sacreBLEU (insert the sacreBLEU signature in a footnote in your paper to help future work reusing your scores) - I recommend to add the decoding time in the tables and a description of the hardware used. Since the major issue of the proposed approach is its computational cost, adding the decoding time would probably encourage future work to try to improve it.
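To make the retrieval step discussed throughout these reviews concrete, here is a minimal sketch of the token-level interpolation: the decoder state queries a datastore of (representation, next-token) pairs, neighbor distances are turned into a softmax with exp(-d), and the result is mixed with the base model's distribution via a weight lambda. The brute-force search and toy datastore are simplifications of what the reviews describe (billions of cached entries would require an approximate nearest-neighbor index), and the temperature value is an assumption.

```python
import numpy as np

def knn_mt_probs(query, base_probs, keys, values, vocab_size, k=4, temperature=10.0, lam=0.5):
    """Interpolate the NMT model's next-token distribution with a kNN distribution
    built from the k closest datastore entries, weighted by exp(-distance)."""
    d = np.linalg.norm(keys - query, axis=1)          # distances to all datastore keys
    nearest = np.argsort(d)[:k]                       # brute force; large stores need ANN search
    weights = np.exp(-d[nearest] / temperature)
    knn_probs = np.zeros(vocab_size)
    np.add.at(knn_probs, values[nearest], weights)    # aggregate neighbors voting for the same token
    knn_probs /= knn_probs.sum()
    return lam * knn_probs + (1 - lam) * base_probs

# Toy datastore: 100 cached decoder states, each paired with the token that followed it.
rng = np.random.default_rng(0)
keys = rng.normal(size=(100, 16))
values = rng.integers(0, 8, size=100)
query = keys[3] + 0.01 * rng.normal(size=16)          # a test-time decoder state near entry 3
base = np.full(8, 1 / 8)
print(knn_mt_probs(query, base, keys, values, vocab_size=8).round(3))
```

The decoding-cost concern raised above is visible here: every generated token triggers a nearest-neighbor query over the full datastore, which is where the two-orders-of-magnitude slowdown comes from.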
This paper extends past work on kNN-augmentation for language modeling to the task of machine translation: a classic parametric NMT model is augmented with kNN retrieval from an external datastore. Decoder-internal token-level representations are used to index and retrieve relevant contexts (source + target prefix) that weigh-in during the final probability calculation for the next target word. Results are extremely positive across a range of MT setups including both in-domain evaluation and domain transfer. Reviews are thorough, but quite divergent. There is general agreement that the proposed approach is reasonable, well-motivated, and clearly described -- and further, that experimental results are both solid and relatively extensive. However, the strongest criticism concerns the paper's relationship with past work. In terms of ML novelty, everyone agrees (including the paper itself) that the proposed methodology is a relatively simple extension of past work on non-conditional language modeling. However, two of the four reviewers strongly feel that, in light of the potentially prohibitive decoding costs, the positive experimental results are not sufficient to make this paper relevant to an ICLR audience given the lack of ML novelty. In contrast, another reviewer strongly takes an opposite stand-point: rather, that the results will be extremely impactful to the MT subcommunity at ICLR since they are unexpected (i.e. that a non-parametric model might compete with highly-tuned NMT systems) and very positive across a range of domains and settings (i.e. in-domain, out-of-domain, multilingual) -- further, that the approach has substantial novelty in the context of MT where parametric models are the norm and that it might inspire substantial future work (e.g. on efficient decoding techniques and further non-parametric techniques) given that it so drastically breaks the current MT mold. The final reviewer shares the concern of the former two about novelty, but is swayed by the experimental results and potential uses for the model (given kNN augmentation is possible without further training) and therefore votes for a marginal accept. After thorough, well-reasoned, and well-intentioned discussion between all four reviewers, the reviews land just barely in favor of acceptance, but with substantial divide. After considering the paper, reviews, rebuttal, and discussion I am swayed by the argument that (a) these experimental results are largely unexpected, (b) they are both extremely positive and offer a new trade-off between test and train compute in MT, and (c) that the paper may therefore inspire substantial discussion and follow-up work in the community. Thus I lean in favor of acceptance overall.
The authors demonstrate a method to perform unsupervised amortized optimization of game solvers that seemingly leads to sharp speedups. They also design an architecture that respects the equivariances of the problem and perform ablations demonstrating that this is useful.
Strengths:
- The technique seems useful and I could imagine it being helpful in MARL settings.
- That the objective is "unsupervised" (i.e. you don't need the actual optimal strategy) is very neat.
- The invariant architectures should be useful for other attempts at this problem.
The main weaknesses are in the writing of the paper; I highlight some problems below:
- The superscripts in Line 133 come out of nowhere; for example, why does $\hat{\sigma}$ have the $L_1$ norm as a superscript? I'm not sure what these superscripts mean on a first read.
- Someone from an adjacent field might not know what an NxN game is; I think it'd be worth explaining.
- The citation links are broken and link back to the title instead of to the actual citation.
- Are Tables 1 and 2 actually included in this paper? I see them in the supplement; are those the same tables you're referring to in the paper body?
- In Figure 4 it's not clear what "left" and "right" mean.
- In the Figure 4 caption it's worth pointing out (as you do in the text) that the arms correspond to worst, mean, best.
- The term MECCE is introduced but not defined anywhere in the main paper.
- The stack function in Equation 12 does not appear to be defined in the main text.
- The equivariant architectures section was quite hard to follow; it is unclear what the underlying logic behind each of the transformations is. I would love to see a more expanded version of this section in the appendix or with the extra page if the paper is accepted.
- "Equivariant" is misspelled on line 193.
- Figure 4 is somewhat hard to read where the bars overlap; you might consider using different colors and making the arm sizes of one of the bars different than the other.
Yes. <doc-sep>This paper aims to create a method for training NNs to solve for NE, CE, and CCE across a set of games with a fixed action space for each player. This is achieved by considering the dual formulation of the LP resulting from each of these equilibrium concepts, and performing gradient descent on the NN given random games with that action space size so that the strategies the network produces satisfy the constraints of the LP.
**Strengths:** The strengths of this paper are:
* The novel (to my knowledge) idea to train solvers of games in the dual LP space rather than the primal. This is an interesting approach, which could be promising and should be explored.
* The idea to train one solver for all games of a given shape rather than solving each game one-by-one.
* Equivariant networks and payoff invariances, which are shown to improve performance.
**Weaknesses:** The main weaknesses I see are:
* The motivation that other approaches in the space cannot solve for CE seems weak, since it seems easy to adapt PSRO or CFRM to such a class.
* The scaling argument seems weak, as the approach of training on all games of a given size seems much more difficult computationally than solving a particular game in that class.
* If the argument is that this method is scalable, then I would expect experiments that run on games much larger than 8 by 8.
More broadly, since this paper takes the approach of solving all N by N games at once, and most of the payoff matrices in this space are ones we don't use in practice, I would expect an evaluation on a "transfer set" of games which we do care about, to show that it works to solve them correctly. Otherwise it is difficult to evaluate the method, as it could have "low error on average" but not low error on the problems we care about. In the same vein, since we usually care about getting the CE for a particular game, it would be useful to compare to finding the CE for that particular game directly rather than solving all games of that shape first.
**Minor Comments:**
Figure 4 is hard to understand, given that the difference between the gray and black error bars is never described.
The appeal to "Occam's razor" on line 93 is very suspect. It appears this is alluding to something specific (which should be cited), but regardless of what the citation is, there has to be a miscommunication somewhere, because "maximum entropy" isn't well justified by "Occam's razor" there. In some sense they could be seen to be the opposite, as "Occam's razor" is often phrased as "preferring the simplest solution" and "maximum entropy" is the most-random solution (under some measure) and thus the least-compressible (under that measure), which could be seen as the "most complicated" solution! Regardless, none of this is necessary, because any method of choosing between equilibria is fine for the purposes of this paper.
The limitations were adequately addressed except to the extent mentioned above. <doc-sep>Games are a useful formalism in machine learning. There are multiple notions of solution to a game; this paper focuses on Correlated Equilibria (CE) and Coarse Correlated Equilibria (CCE). Given the payoffs of a game, the set of all CE (resp. CCE) forms a polytope, with a number of constraints exponential in the number of players and actions. Thus, solving for a CE (CCE) can be computationally intractable. This paper proposes to use a neural network to predict approximate CE (resp. CCE) given the payoffs of all players, as well as some auxiliary data. The authors propose several innovations to make the proposed neural equilibrium solver more efficient and flexible:
- several choices of secondary criterion used to select a particular CE (resp. CCE) from within the polytope of all CE (resp. CCE);
- a permutation-invariant network architecture, capable of exploiting symmetries in the game (e.g. exchangeability of players, permutations of a player's action set);
- focusing on solving the dual problem, which has fewer optimization variables compared to the primal problem.
## Strengths
- Predicting/computing NE/CE/CCE is certainly a worthy topic.
- I appreciated the use of strong duality for convex optimization!
- Experiments are thorough.
## Weaknesses
- Overlooks certain important aspects of the literature.
- I found it hard to follow how the proposed network is actually trained.
- Scalability of the proposed approach to even moderately sized games is unclear.
## Edit
Changed score to 7 after rebuttal. These are adequately addressed.
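Because both reviews single out the equivariant architecture as hard to follow, one way to pin the property down is a numerical check. The sketch below assumes a hypothetical `solver(payoffs)` mapping a two-player payoff tensor of shape (2, A, A) to a joint distribution of shape (A, A); the function name and shapes are assumptions for illustration only, not the paper's interface.

```python
import numpy as np

def check_action_equivariance(solver, payoffs, rng=np.random.default_rng(0), atol=1e-5):
    """Check that relabelling player 1's actions permutes the solver output
    in the same way (equivariance), rather than changing the solution.

    payoffs : (2, A, A) array; payoffs[i, a1, a2] is player i's payoff.
    solver  : callable mapping payoffs -> (A, A) joint distribution sigma.
    """
    n_actions = payoffs.shape[1]
    perm = rng.permutation(n_actions)

    sigma = solver(payoffs)                     # solve the original game
    permuted_payoffs = payoffs[:, perm, :]      # relabel player 1's actions
    sigma_perm = solver(permuted_payoffs)

    # Equivariance: solving the relabelled game should give the relabelled
    # solution, i.e. sigma_perm[a1', a2] == sigma[perm[a1'], a2].
    return np.allclose(sigma_perm, sigma[perm, :], atol=atol)
```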
This paper introduces a Neural Network based Equilibrium Solver which utilizes a special equivariant neural network architecture to approximately predict NEs, CEs, and CCEs of normal-form games. Experiments show the effectiveness of the proposed methods across multiple datasets. All reviewers support the acceptance of this paper. While I agree that the merit of this paper warrants acceptance, I'd also recommend that the authors revise the final version regarding the theoretical complexity of finding equilibria. (1) In line 18, "solving for an equilibrium of a game can be computationally complex [9, 8]": in fact, the cited intractability results only apply to finding Nash equilibria in multiplayer general-sum games. Finding CE/CCE can always be done by LP, which is tractable and guaranteed to finish in polynomial time. (2) This paper emphasizes that prior methods may take a non-deterministic time to converge, while the method proposed in this paper gives determinism. However, it appears that the proposed method is also not provided with guarantees to converge within a certain time (and thus does not offer determinism either). It would be better if the authors could clarify or modify the corresponding arguments.
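To make the meta-review's remark concrete that a CE/CCE can always be found by a linear program: for a small two-player normal-form game, the CCE constraints are linear in the joint distribution, so an off-the-shelf LP solver suffices. The sketch below is a generic illustration (a feasibility LP with a trivial objective), not the paper's neural solver or its maximum-entropy selection.

```python
import numpy as np
from scipy.optimize import linprog

def coarse_correlated_equilibrium(u1, u2):
    """Find a CCE of a two-player normal-form game by linear programming.

    u1, u2 : (A1, A2) payoff matrices for players 1 and 2.
    Returns a joint distribution sigma of shape (A1, A2).
    """
    a1, a2 = u1.shape
    n = a1 * a2                                # one variable per joint action

    rows = []
    # Player 1: no unilateral deviation to a fixed action b should help.
    for b in range(a1):
        gain = u1[b, :][None, :] - u1          # (A1, A2): u1(b, a2) - u1(a1, a2)
        rows.append(gain.reshape(-1))
    # Player 2: same for each deviation b.
    for b in range(a2):
        gain = u2[:, b][:, None] - u2          # (A1, A2): u2(a1, b) - u2(a1, a2)
        rows.append(gain.reshape(-1))

    A_ub = np.array(rows)
    b_ub = np.zeros(len(rows))
    A_eq = np.ones((1, n))                     # probabilities sum to one
    b_eq = np.array([1.0])

    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.x.reshape(a1, a2)
```

Since the number of incentive constraints grows only linearly in the actions for CCE (and the polytope is never empty), this LP always returns a solution for small games; the hardness the reviews discuss comes from scaling in the number of players and actions.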
The paper shows how to extend the widely-used expected improvement heuristic into the contextual bandit setting to create a new basic type of contextual bandit algorithm. They propose two novel algorithms, and propose a method for choosing an improvement threshold for controlling the exploration/exploitation cutoff which provably achieves an $\tilde{O}(\sqrt{T})$ regret rate, even in settings with adaptive adversaries. The paper then shows that the proposed methods have strong empirical performance.
Strengths:
1) The paper extends the expected improvement heuristic into contextual bandits, building a connection with the best-arm identification and Bayesian optimization literature, providing a novel and significant result.
2) The paper provides proofs for the results, and is able to get competitive regret rates for linear contextual bandits and neural bandits.
3) The paper shows experimental evidence for the value of the method.
4) The paper is clearly written, contextualizes expected improvement in the broader literature, and clearly shows how the analyses of the linear and neural cases differ.
Weaknesses:
1) (minor) I wish the paper explained a bit more how the algorithms are able to work even with an adaptive adversary.
The paper discusses the limitations of using a Gaussian distribution for the reward model, as well as the fact that their usage of the NTK kernel restricts the neural network classes that they can use. The paper does not address potential negative societal impact of this work. <doc-sep>The authors propose a novel contextual bandit algorithm based on expected improvement (EI) and study the corresponding regret analysis. The proposed algorithm contains a modified element from EI for MAB by suggesting a hybrid of EI with pure exploitation. The paper adds different insights to the body of literature in that EI is an understudied technique to handle the tradeoff between exploration and exploitation in contextual bandits. They propose two novel EI-based algorithms for this problem, one for linear payoffs and one for deep neural networks. The authors provide numerical experiments.
Strength: The paper adds different insights to the body of literature in that EI is an understudied technique to handle the tradeoff between exploration and exploitation in contextual bandits. EI can be viewed as a variation of TS, and in that respect, the proposed algorithm improves the upper bound of TS by $\sqrt{d}$.
Weakness: Explanations for modifying EI by mixing with pure exploitation are not convincing. Why should one use the proposed algorithm over LinUCB, since it carries an extra $\sqrt{\log T}$ factor?
Yes <doc-sep>This paper applies the expected improvement (EI) principle to contextual bandits, which contrasts with the more popular approaches of upper-confidence bound (UCB) and Thompson sampling (TS). For adversarially chosen contexts but realizable (conditional) mean rewards, EI is practically applicable with both linear and neural network predictors. In the linear case the proposed technique essentially achieves information-theoretic lower bounds given sub-Gaussian residuals. In the neural case the neural tangent kernel can be used to characterize the regret, matching analogous results for NeuralUCB and NeuralTS. Strengths include the novelty of EI in this setting and the connection to NTK theory.
Weaknesses:
* the lack of clear motivation for EI.
* is it just "another way to do contextual bandits that nobody has done yet?"
* if so this paper is likely to be ignored by practitioners.
* is it statistically distinct from UCB/TS approaches (i.e., works better)?
* in theory no, everybody is achieving essentially the same regret bounds.
* in practice, maybe, but that's where the experiments are too limited for a strong conclusion.
* one advantage of using older datasets is you can provide an exhaustive comparative study as in https://arxiv.org/abs/1802.04064
* is it computationally distinct from UCB/TS approaches (i.e., cheaper to compute or scales better to larger action sets)?
* if so you could exhibit this on a modern dataset such as amazon-3m from https://arxiv.org/abs/2102.07800
I believe this work would benefit from spending more time better differentiating EI before presenting it to the general community; otherwise, its potential impact will be limited. This reviewer is satisfied.
**Update after author discussion period**: Authors have addressed concerns, raising score. <doc-sep>Expected improvement (EI) is a classic approach to selecting the action in Bayesian optimization (BO). This paper extends EI to the contextual bandit setting, including the linear setting and the non-linear setting. Based on the fact that the analysis of EI is not well studied, this paper provides the regret analysis of EI in linear contextual bandits and neural contextual bandits. Moreover, the authors show good empirical results for the proposed algorithm.
Weakness: (1) The two proposed algorithms, LinEI and NeuralEI, look very similar to TS to me. Take LinEI as an example. In a round, $r_t^+$ is the same for every arm. So the selection criterion can be considered as $E_{\mu \sim N}[x^\top \mu]$, which is the expected version of LinTS: just add the expectation to TS. Theoretically, we only need to take the expectation over the normal distribution on top of TS. So, I doubt the novelty of the proposed algorithms and their analysis.
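For context on the expected-improvement criterion debated above: under a Gaussian posterior EI has a closed form, and a generic linear-bandit variant can be sketched with ridge-regression statistics. This is only an illustration of the principle; the paper's LinEI reportedly mixes EI with pure exploitation and chooses the improvement threshold differently, so the incumbent handling below is an assumption.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, incumbent):
    """Closed-form EI of a Gaussian N(mu, sigma^2) over a known incumbent value."""
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - incumbent) / sigma
    return (mu - incumbent) * norm.cdf(z) + sigma * norm.pdf(z)

def select_arm_ei(contexts, A, b, incumbent, alpha=1.0):
    """Pick an arm by EI under a Bayesian linear (ridge) reward model.

    contexts  : (K, d) feature vector of each arm in the current round
    A, b      : ridge statistics, A = lambda*I + sum x x^T, b = sum r x
    incumbent : current reference reward level (e.g. best predicted reward so far)
    """
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b                       # posterior mean of the weights
    mu = contexts @ theta_hat                   # predicted mean reward per arm
    sigma = alpha * np.sqrt(np.einsum('ki,ij,kj->k', contexts, A_inv, contexts))
    return int(np.argmax(expected_improvement(mu, sigma, incumbent)))
```

Compared with LinTS, which samples a weight vector and maximizes a sampled reward, this criterion integrates over the posterior analytically, which is the similarity the reviewer is pointing at.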
This paper proposes and analyzes algorithms based on expected improvement for the contextual bandit setting, and proves that the resulting algorithm can attain $O(d\sqrt{T})$ regret in the linear bandit setting (the result improves over Linear TS). All the reviewers agree that the modified LinEI algorithm and its analysis are novel and important to the community. I agree with the reviewers, and recommend accepting the paper. For the final version of the paper, it would be helpful to add more details on why the pure EI strategy does not work and add the scaling with $d$ experiments to the main paper (the response to Rev. rpMu). If there is space, it would also be helpful to add a proof sketch for the LinEI algorithm and distinguish it from the LinTS analysis which is more standard and known in the community (response to Rev. KPAe).
This paper studies the dynamic (monopolistic) pricing problem under "markdown constraints", which require pricing decisions to be nonincreasing. The decision maker does not know the form of the demand function, which stays constant over time, receives bandit feedback for decisions made, and aims to minimize cumulative regret compared to the maximum achievable revenue under the optimal price. The paper first proposes a categorization for demand functions called markdown dimensionality to describe the complexity of demand functions in the context of markdown pricing. Then, the paper presents dynamic pricing algorithms for each category, as well as matching regret lower bounds demonstrating that the proposed algorithms are tight.
Strength: The paper is well-written, and to the best of my knowledge, the key contributions regarding 1. defining markdown dimensions; 2. developing algorithms to achieve near-optimal regret for each markdown dimension regime (i.e. 0, finite, and infinite markdown dimensions); and 3. presenting regret lower bounds for each regime, are novel. In my opinion, the paper presents a valuable framework to characterize the hardness of learning and dynamic monopolistic pricing under monotone constraints, and may lead to interesting research directions. I also think the paper positions itself well compared to existing papers on non-constrained dynamic pricing, as well as existing work on dynamic pricing with markdown constraints. The paper also presents clear illustrations of technical definitions such as the markdown dimension via concrete examples, and of the proposed algorithms.
Weakness: In my opinion, the main weakness of the paper is that individual algorithms are presented for different demand markdown-dimension regimes, meaning that the decision maker would need to know whether the supposedly unknown demand function's markdown dimension is 0, finite or infinite, in order to deploy the corresponding algorithm. Also, in the finite, non-zero markdown dimension regime, the proposed Algorithm 2 requires the decision maker to know the underlying markdown dimension of demand. The paper would be much stronger if it could either propose a single "best of all worlds" algorithm that can achieve optimal regret under demands with any markdown dimension while being agnostic to the regime, or analyze how the algorithms proposed in the paper would perform if the regime is misspecified. This suggestion may be beyond the scope of the paper, but perhaps it would be helpful to run some simulations to shed light on relevant aspects. NA <doc-sep>The paper deals with learning pricing under markdown constraints, that is, subsequent prices must be non-increasing. The authors investigate this problem in the case where the demand is a parametric function of the price, with known form but unknown values of the parameters - in this case, a tradeoff must be found between exploring to accurately learn the parameters of the model and losing a lot due to exploration (and the markdown constraint). The main contributions of the paper are (i) the introduction of the "markdown dimension", which quantifies how difficult it is to learn the parameters of the model from data, (ii) an algorithm that balances exploration with good performance (under the concept of pessimism in the face of uncertainty), and (iii) an algorithm that matches the regret lower bound when the markdown dimension of the problem is infinity (in which case no algorithms with sublinear regret exist).
Strengths
1. The analysis is nice and rigorous.
2. From a mathematical point of view, the introduction of the markdown dimension to characterize the difficulty of the problem is an interesting idea.
3. Also, the result that knowledge of the demand model can lead to improvements in the regret (and the algorithms to achieve this) can be a good addition to the related literature on pricing with markdown constraints.
Weaknesses
1. The main weakness is that the setting is not convincingly motivated: since the markdown constraint is motivated by pricing problems in markets of some sort, I am not sure it is reasonable to assume that the form of demand (essentially how the market will react to the price) is known. There is no concrete example where a parametric form of the demand model is reasonable/accurate, and no numerical results to illustrate the performance of the proposed algorithms vs standard algorithms in practical problem settings.
No potential negative societal impact foreseen. <doc-sep>The paper considers the monotone (markdown) price constraint in the dynamic pricing problem, where the demand has a parametric form in price. The paper introduces a concept, the markdown dimension $d$, which measures the complexity of the parametric family. When the dimension is $d=0$, $1 \le d < \infty$, or $d=\infty$, the paper proposes algorithms achieving regret $O(\log^2 T)$, $O(T^{d/(d+1)})$, and $O(T^{(2s+1)/(3s+1)})$, respectively. Here, $s$ measures the degree of smoothness of the revenue function at the optimal price. Also, matching lower bounds are provided for each case.
Originality: It's interesting to consider the monotonicity constraint with a parametric demand function. The paper shows some new results which are different from the nonparametric case. However, I find the definition of the markdown dimension $d$ a little unintuitive, and it is hard to see if the concept can be applied to other applications.
Quality: It's technically sound, except for Algorithm 1 and Theorem 1. See Questions.
Clarity: The paper is written clearly.
There are three main results in the paper. First, for the simple case when the dimension $d=0$, the paper shows the minimax regret bound $O(\log^2 T)$, which is different from the unconstrained pricing case $O(\log T)$. The result is not surprising but still adds to the literature. In this case, there's a one-to-one mapping from the parameter to the realized demand. Every price is informative and there's no trade-off between learning and earning. My major concern is whether the prices in the algorithm are actually decreasing. See Questions for details. Second, when $d \ge 1$, the paper shows the minimax regret bound $O(T^{d/(d+1)})$, which is a new result. I think that's the major contribution of the paper. The learning and earning tradeoff shows up in the hyperparameter $h$, i.e., the smallest magnitude by which the price decreases between consecutive periods. Setting $h=O(T^{-1/(d+1)})$, the regret will be $O(T^{d/(d+1)})$. My question is how to connect the parametric result with the existing nonparametric result. See Questions for details. Third, when the function is nonparametric, the paper assumes a higher order of smoothness and proposes an algorithm (Algorithm 3). I think the contribution of this part is marginal because the generalization from Lipschitz continuity to higher-order smoothness has been explored in the literature. NA <doc-sep>This work studies a single-product pricing problem under a monotonic/markdown price constraint.
In this work, the authors introduce a "markdown dimension" $d$ that indicates the hardness of a demand curve to be learned from customers' feedback. For each of the following cases: $d=0$, $d\in\mathbb{Z}^+$, $d=\infty$, the authors propose a pricing algorithm with provable expected regret guarantees. Specifically, these bounds are optimal for $s=2$, as the authors also prove matching lower bounds. In conclusion, this work improves existing results in the related literature and introduces a new method of determining the hardness of a pricing problem.
Strengths:
1. The problem setting and constraints are practical: customers are indeed more sensitive to price raising than discounting.
2. The concept of "markdown dimensionality" is new in the field of pricing research. Also, it helps determine the hardness of pricing problems.
3. Most of the regret upper bounds of the algorithms this paper proposes are proved to be optimal (up to log or loglog factors) by the authors.
Weaknesses:
1. The writing and organization of this paper are not good. Specifically, there are many obvious typos and syntax errors even in the most important definitions. For instance, see Definition 10: I really suspect that the authors made a mistake in this definition, especially (c): all their applications of this definition point at a smoothness condition, i.e., $\exists C>0$ such that $-C\leq R^{(s)}(\cdot)<0$. Based on this definition, a sensitivity can only be used to lower-bound the price perturbation instead of upper-bounding it. E.g., for $s=2$, it is somewhat a strong convexity.
2. Many key definitions and statements are ambiguous or misleading. For example, in Definition 8: Is there a definition of parameterization? And how does $\theta$ relate to the following conditions in (2)? Notice that $\Phi^{-1}_P(y)$ is not necessarily a constant unless $F$ is $d$-identifiable. Also, what does "it" in (1) refer to?
3. For key definitions, propositions and assumptions, there is a lack of explanations or insights that help the readers to understand. For example, in Definition 2: I spent a long time until I realized that $p^*$ is the lowest best price and $p^*(R)$ is an argmax oracle. In contrast, there are redundant explanations of trivial facts. For example, the matrix-form equation under Assumption 5 is not necessary, and it can be clearly described just by the two equations $D = \sum_{j=0}^d \theta_j p^j$ and $\theta = V_p^{-1}y$.
4. The authors did not show the "markdown dimension" of a variety of common distributions except for the $0$-dimensional ones. In other words, there exists no example with $d\geq 1$ or $d=\infty$, which would qualify the application of most of their theoretical results.
5. This is very important to bring to the authors' notice: the authors added more content to the main pages in the supplementary materials. This is unfair to the other authors and their submissions as this actually breaks the 9-page-limit rule (although it is in the supplementary materials). The authors should separate the main pages from the proof details of the upper and lower bounds and put the latter into an Appendix. Besides, the format of this paper looks like a "preprint" instead of a "submission" as an option in the LaTeX template.
From my point of view, this paper has strong technical contributions, but the authors convey these contributions in a careless way. Not sure if it is suitable for publishing in NeurIPS. I'll tentatively give a 6 in support of their theoretical results and see what the other reviewers would say.
No discussion of limitations is found in this paper. I recommend that the authors discuss the limitations of their work at the end of the main pages. There seems to be no negative societal impact or ethical issue, and I also encourage the authors to discuss this in their Appendix. <doc-sep>This paper considers infinite-arm MAB with markdown constraints. Since existing results have shown that $T^{3/4}$ is optimal under minimal assumptions, the authors turn to study how to further improve the regret bound by imposing additional assumptions on the demand functions. To this end, they introduce a general complexity notion called *markdown dimension*. Using this notion, they not only show that a better regret bound is possible under certain complexities, but also that there still exists a separation between MAB with and without markdown constraints.
**Strengths**
- A comprehensive study of MAB with markdown constraints
- Matching upper and lower bounds
**Weaknesses**
- No experimental results
- Writing needs improvement
Yes. The authors adequately addressed the limitations.
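To make the identification step discussed in the reviews tangible: with a degree-$d$ polynomial demand model, the parameters can be recovered from observations at $d+1$ distinct prices via the Vandermonde system $\theta = V_p^{-1} y$ quoted above. The sketch below implements only that fitting step (with a least-squares fallback for noisy data); it is not the submission's pricing algorithm.

```python
import numpy as np

def fit_polynomial_demand(prices, demands, degree):
    """Recover theta for the demand model D(p) = sum_j theta_j * p**j.

    With exactly degree+1 distinct prices and noiseless demands this solves
    the Vandermonde system theta = V_p^{-1} y; with more (noisy) samples it
    falls back to least squares.
    """
    prices = np.asarray(prices, dtype=float)
    demands = np.asarray(demands, dtype=float)
    # Column j of V is p**j, so V @ theta stacks D(p) for every sampled price.
    V = np.vander(prices, N=degree + 1, increasing=True)
    if len(prices) == degree + 1:
        return np.linalg.solve(V, demands)
    theta, *_ = np.linalg.lstsq(V, demands, rcond=None)
    return theta

# Example: a quadratic demand curve is pinned down by three price points.
theta_true = np.array([10.0, -2.0, 0.05])
p = np.array([1.0, 2.0, 3.0])
y = np.vander(p, 3, increasing=True) @ theta_true
print(fit_polynomial_demand(p, y, degree=2))   # ~ [10.  -2.   0.05]
```

The markdown constraint is what makes this hard in the online setting: once a price has been tried, the seller can only explore downwards, which is the exploration/exploitation tension the reviews discuss.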
This paper focuses on an interesting problem, dynamic pricing. The paper brings conceptually new ideas (the markdown dimension) and associated algorithms. I have 2 concerns: 1) it would be better if the algorithms were adaptive to the aforementioned dimension; 2) it would have been better if the authors had followed the instructions (and especially not uploaded the rebuttal as the revised version). Point 1 would be future work, while point 2 is ok since the pdf can still be found in the submission files. As a consequence, I recommend acceptance.
This paper addresses an issue of transformers: sometimes they fail to find solutions that are easily expressible by attention patterns. The issue is argued to be the same as the problem of learning useful control flow. The authors propose two modifications, namely adding a copy gate functionality and a geometric attention module which facilitates focusing on useful local operations. The resulting method achieves near-perfect accuracy on the considered benchmarks for length generalization, simple arithmetic tasks, and computational depth generalization.
## Main strengths:
1) The paper is well written and does a great job of introducing the problem and revealing the flaws of the universal transformer in achieving good performance on the described tasks, as well as conveying the authors' intuition about the properties of a good solution.
2) The main components of the proposed method are explained in sufficient detail to help reproduce the proposed method.
3) The proposed benchmarks and datasets, the empirical approach, and the chosen hyperparameters are provided and discussed in detail.
4) The paper is well positioned with regard to the related work.
## Main Weaknesses:
5) It is not clear if the considered benchmarks cover all required aspects of task generalization, or whether the generalization is only valid for tasks that are to some extent similar to the considered experiments. The authors should further explain which aspects, if any, are missing and are not addressed in this work.
6) It is not clear if the considered assumptions are always necessary and correct. The authors should address the following questions in the paper, either in the form of justified explanations or, if required, with ablation studies:
6.1) Is there any task which would benefit from or require settings that are not covered by the settings described in section 2?
6.2) Regarding point 2 in section 2: What if the data dependency graph were so long that memory complexity would not practically allow such a depth? In other words, to what extent is the proposed depth necessary?
6.3) Regarding point 3 in section 2: Could the gating function result in a shortcut/collapse in optimization? (Considering a far more complex task of the kind generally addressed by transformers could reveal such issues.)
6.4) Regarding the final point in section 2: Could a task prefer non-local operations to local ones? Does the performance of the proposed method degrade in that situation?
7) Some previous works, for example on the ListOps task, consider sequences that are orders of magnitude longer than the ones considered in this paper (a couple of examples are [1], [2]). It is not clear whether the claim that previous results did not achieve perfect accuracy is well supported. It seems that, to be fair, the authors should have considered some of the SOTA methods and adapted their hyperparameters for these tasks with limited sequence length before testing how they would perform.
### Minor points:
8) In section B.2, the set of values or ranges over which the hyperparameters are searched should also be mentioned.
9) First line of page 14: "sample" -> "sampled"
### References:
[1] "Modeling Hierarchical Structures with Continuous Recursive Neural Networks" by Chowdhury, J.R. and Caragea, C. (arXiv:2106.06038v1)
[2] "Nyströmformer: A Nyström-based Algorithm for Approximating Self-Attention" by Xiong et al. (arXiv:2102.03902v3)
The paper is well organized and covers the background knowledge required to follow the discussions.
It motivates the goal, provides justifications for the choices made in the proposed methods, follows a thorough empirical approach, and achieves near-perfect results on the considered benchmarks. There are a few points of concern that need to be cleared up before I can fully support the submission. Regardless, I see many of the properties of a good research project and so I lean towards accepting the paper at this stage. I look forward to the authors' feedback on my concerns before I finalize my decision.
Edit: I thank the authors for responding to my comments in detail. After reviewing their response and the changes made in the paper, many of my concerns are resolved. Therefore, I change my recommendation to accept this paper. <doc-sep>This paper proposes two modifications to provide additional inductive bias to the attention mechanism in the transformer architecture. The first modification adds a copy mechanism to simulate a "no-op" at a given transformer layer, and the second modification is an attention mechanism that is biased towards attending to local context. Both of these modifications are motivated as being useful for algorithmic tasks like compositional table lookup and arithmetic. From experiments that are *mostly* concerned with some kind of length/depth generalization, we see very significant improvements.
Overall, I enjoyed reading this work. The writing is clear and to the point, and the approach itself is very well motivated (see questions for more), and simple to implement without too many tunable hyperparameters. And as such, the experiments are cleanly set up and do suggest improved generalization. From the analysis of the attention maps, we can see that the method is doing exactly what it is supposed to as well (that is, copying previous values until other intermediates have been computed, paying more attention to local hidden states etc). Based on these strengths, I recommend that this paper be accepted to the conference. So now, let me focus on some weaknesses / suggestions / questions:
Overall positioning: Firstly, I think the paper should probably make it more clear that it's only focusing on a very specific notion of systematicity that has to do with length / depth generalization, and not other more traditional notions like generalizing to new compositions (which isn't really something that is evaluated) like SQOOP from Bahdanau 2019 etc.
Evaluation: Secondly, while not a strict requirement, there is no evaluation on language tasks / pseudo-language tasks like SCAN - there is a length generalization benchmark within SCAN itself and it would be good to know how this method does on that.
Analysis: In Figure 2 (bottom), it is unclear what the y-axis is. Isn't the copy gate just a single number for each time step, for each layer? If so, I would've expected the figure to just be a single number for each time step for the various layers, so I don't understand what the grid signifies.
Overall, I think based on the results I recommend that the paper be accepted into the main conference. The problem is well motivated, the comparisons are fair and the results compelling, though there is some scope for improvement that I highlight in the weakness section of the main review. <doc-sep>The authors propose Transformer Control Flow (TCF), a set of improvements to the Universal Transformer (Dehghani et al., ICLR 2019).
They show that, for three compositional problems, TCF allows trained models to generalize to longer sequences, a common problem of many transformer implementations. As in the Universal Transformer (UT), the encoder consists of one shared transformer layer (self-attention + fully connected network) which is iterated through a fixed number of times, by feeding the output of each iteration back into the input of the shared layer. However, whereas the UT uses a sequence-to-sequence model, TCF is an encoder-only architecture, which decodes the last element in the output sequence as the final result. Two new features are introduced:
- a gating mechanism that allows the model to "skip a layer" (the input is then copied to the output), on the basis of the self-attention output,
- a weighting system for the outputs of attention heads, which favors short-range attention (i.e. tokens close to the one currently considered), and can be trained to be biased towards a certain direction (before or after the current token).
Experiments are conducted over three tasks:
- predicting the output of sequences of permutations of 8 elements, in prefix or postfix notation,
- predicting the result of additions and multiplications modulo 10, in infix notation,
- predicting the result of operations on lists of small integers, in prefix notation.
For each task, TCF is shown to be capable of extrapolation to larger problems (i.e. longer sequences) than those seen at training. The paper is very clearly written, and proposes an interesting solution to an important question. The tasks chosen are meaningful, and the experimental results suggest that the proposed architecture can solve the extrapolation problem. The technical aspects are precisely documented, which makes the research easy to reproduce. My main concerns are related to the experimental comparisons, and the impact of certain design choices, such as the use of a fixed model depth at training and test time, the use of the last word in the encoder representation as the basis for model output, and the absence of the Adaptive Computation Time (ACT) in the Universal Transformer implementation that serves as the main baseline. This makes it difficult to judge the impact of the two improvements suggested (copy-gate and geometric attention), and the benefits of the new architecture, compared to an encoder-only state-of-the-art version of the Universal Transformer (with relative positional embedding, and ACT). I believe improving this part of the experimental design and discussion would greatly reinforce the paper. Below are my concerns and questions, split into four themes.
*Computational, and model, depth*
At the beginning of section two, the authors argue that four properties are needed for a network to extrapolate to larger problems:
- shared layers
- depth of the computational graph
- step skipping
- short-range attention
I would disagree with the second point, for two reasons. First, computational depth is a relative notion. In an arithmetic task, I can choose to represent modular addition as one operation, or two (addition and modulo), or even three (digit addition, carry propagation, modulo). On the other hand, some linear algebra packages define "add and mul" as a single operation. There is no doubt that network depth should somehow increase with complexity, but defining it from computational depth seems impractical. Second, since you use shared layers, model depth can be varied without having to retrain.
Specifically, model depth could be adjusted to the complexity of the training examples, and then increased at inference to fit the complexity of the test set. Using the maximum depth in both train and test sets (provided it can be defined from computational trees) is not necessary and might not even be beneficial. In a recent paper (https://arxiv.org/abs/2106.04537), Schwarzschild et al. have shown (using different architectures, and testing on different tasks) that adding iterations during inference could help models extrapolate to larger problems. It would be interesting to test this on TCF (and baselines). *Copy-gating and variable depth* The original Universal Transformers paper proposes a copy-gating mechanism, which uses the Adaptive Computation Time (ACT) mechanism (Graves 2016) at the token level. The gating works differently than in TCF: all gates begin closed, and once opened remain so. However, I believe this (universal transformer plus token-level ACT gating) is the correct baseline for TCF. Can such a comparison be provided? This is all the more important as gating has a large impact on performance, for the three problems considered. ACT-gating has another merit: it adaptively controls the depth of the Universal Transformer, which goes on iterating until all gates are open. This means that the model can adjust to longer sequences by iterating for more "ponder time". This would provide an adaptive solution to the depth adjustment problem discussed in the previous section. Do you think an adaptive control for the number of iterations/layers could be implemented? *Encoder-only output, and geometric attention* The original Universal Transformer is a sequence to sequence model. When decoding a solution, all the output sequence of the encoder is attended to. In your implementation, only the last element in the output is taken into account. As you observe in the results discussion of section 3.1, this creates problems at test time because the position of the last word changes as sequence length increases. It also complicates training, since the output position depends on the sequence length (which varies in all the problems you propose). As you show, this can be alleviated by relative positional embeddings and directional encodings, which can be used to force the result to "move right" as the computation proceeds. It also seems to be the main justification of geometric attention (which seems to bring very little when used alone, cf table 1). But could the original problem, variable output position, be eliminated? What would happen if the output position had a fixed positional embedding? This could be done in many ways: reading the output from the first position instead of the last (since the transformer is bidirectional, this should have no adverse effect), or from some other fixed value (e.g. the fifth output word), or enumerating positions so that the last token has a fixed embedding (e.g. counting backward, or from both ends to the center). Another question is the use of a single output element to decode the solution. Would the model be improved by using a longer part of the output sequence (e.g. the N first output words, with N the minimal size of output sequences, or shorter output padded to this size)? An alternative (and in my opinion much better) solution would be to use a special attention-based layer for the decoder (an attention plus a linear layer working from the output sequence of the encoder). 
This amounts to a minimal seq2seq model, with one non-shared, cross-attention-only layer in the decoder. This would eliminate the variable output position problem, and allow the full output sequence to be taken into account while decoding. I believe these alternative decoders need to be tested. Without them, it is hard to assess the importance of relative positions and geometric attention. *Compositional table lookup: the backward case* Unless their architectures are bidirectional, the backward task is very unfair on LSTM and DNC, which are causal models. To solve the backward task, they would need to memorize all the tables before seeing the value to be operated on, an impossible task given their capacity. Transformers, on the other hand, are bidirectional. Their bad performance on the IID backward case comes as a surprise, and the unusually large error on this observation suggests an experimental problem. Do you have explanations about this high standard deviation of experimental results? On the test data, the fact that relative positional embeddings seem to improve the forward, but not the backward case, might be due to the choice of the last term in the output as the result to be decoded. Would, for instance, the results be inverted if the first output were chosen instead (or some fixed middle position)? Overall, I am not certain this backward case helps the argument about the efficiency of TCF (what it demonstrates, I think, is that causal models like the LSTM need their inputs to be presented in a particular order, which is no new news...). TCF appears to be a promising architecture, and the experimental results seem good. However, some design and experimental choices make them difficult to compare with the original Universal Transformers. In particular, the choice of a fixed depth at train and test time might be an unnecessary constraint, the proposed gated-copy operation should be compared with the ACT-based copy described in the UT paper, and the use of the last element in the output sequence as the basis for decoding might unduly increase the necessity of relative positions and short-range attention (ie the geometric attention and directional encoding recommended by the authors). Hence my note of 6, which I would gladly increase if additional experimental results are provided. Edit: Thank you very much for the detailed response. I have raised my note to 8.
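To make the two mechanisms debated in these reviews concrete, here is a rough sketch of (i) a copy gate that lets a shared layer act as a no-op and (ii) a simple distance-based locality bias on attention logits. Both are hedged illustrations of the ideas as described by the reviewers, not the authors' exact gating or geometric-attention formulations.

```python
import torch

def gated_update(hidden, candidate, gate_proj):
    """Copy gate: blend a layer's candidate update with its input.

    hidden, candidate : (batch, seq, dim)
    gate_proj         : nn.Linear(dim, dim) producing per-element gate logits
    A gate near 0 copies the input unchanged (a "no-op" step), so the shared
    layer can wait until other positions have finished their sub-results.
    """
    gate = torch.sigmoid(gate_proj(hidden))
    return gate * candidate + (1.0 - gate) * hidden

def locality_biased_logits(attn_logits, strength=1.0):
    """Penalize attention logits by the token distance |i - j|, so nearby
    positions are preferred unless the content strongly says otherwise.

    attn_logits : (batch, heads, seq, seq)
    """
    seq = attn_logits.size(-1)
    pos = torch.arange(seq, device=attn_logits.device)
    dist = (pos[None, :] - pos[:, None]).abs().float()   # (seq, seq)
    return attn_logits - strength * dist                 # broadcast over batch/heads

# Usage inside an attention layer (sketch):
#   logits = locality_biased_logits(q @ k.transpose(-2, -1) / dim ** 0.5)
#   attn = logits.softmax(dim=-1)
```

The ACT-style gating discussed in the review differs in that gates, once opened, stay open and also control when iteration stops; the per-step sigmoid gate above does neither, which is exactly the comparison the reviewer asks for.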
This work proposes a novel Transformer Control Flow model that achieves near-perfect accuracy on length generalization, simple arithmetic tasks, and computational depth generalization. All reviewers give positive scores. The AC agrees that this work is very interesting and has great potential. It would be exciting if the authors could extend this framework to more challenging tasks (e.g. visual reasoning [1, 2]). Given the novelty of the proposed model, the AC recommends accepting this paper!
[1] CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning. ICCV 2017.
[2] PTR: A Benchmark for Part-based Conceptual, Relational, and Physical Reasoning. NeurIPS 2021.
This dataset includes daily living action data collected by two cameras and several IMUs from 51 stroke-impaired patients and 20 healthy subjects. Different from previous human action datasets, this dataset focuses on short-duration action (functional primitive) recognition, including reach, reposition, transport, stabilization, and idle. The labeling requires a significant amount of effort from trained annotators. The motivation is to facilitate the rehabilitation process after stroke, which requires repetition of sub-second actions. In addition, a new action sequence identification method is presented that can outperform other approaches, and the method is validated on the presented dataset and other previous video datasets. This dataset is different from other existing datasets and focuses on actions of short duration. This new way of action labeling provides a different view of behavior that can be helpful for rehabilitation after stroke. With this large-scale dataset, a new action-recognition benchmark is presented using several previous segmentation-based approaches and newly developed sequence-to-sequence models inspired by speech recognition. The results also show that the model trained on impaired patients can be applied to healthy subjects, but the opposite does not work. And this action recognition is shown to be useful for quantifying dose by counting these short-duration actions (functional primitives).
The paper presents a dataset including video data and motion data from IMUs. However, the motivation is not clear. What is the advantage of adding IMU sensor data? Setting up two cameras is very easy, but attaching IMUs to a human body is not very easy. It would also be useful to show whether adding multiple IMUs can affect action execution. From Figure 1, it is clear that two IMUs are attached to the human hands, and this setup may have some effect on impaired patients. The paper states that it introduces a multimodal dataset, but the benchmarks are run on each modality separately and the paper doesn't show the advantage of the multimodal data. <doc-sep>In this paper, the authors introduce a new multimodal dataset called StrokeRehab. It includes inertial measurements and video data of several daily activities performed by 51 stroke-impaired patients and 20 healthy subjects. Sequences are labeled at a high temporal resolution, annotating elemental short-duration actions. In addition, a novel approach dedicated to high-resolution action identification is proposed. Experiments on the proposed dataset as well as existing ones suggest that the proposed approach is suitable for the task in comparison to the state of the art.
The proposed dataset is interesting and seems challenging compared to existing ones, in particular for the task of sub-second action identification. As it is multimodal, I think it could be beneficial for the research community in both the fields of computer vision and time series analysis. In addition, the dataset also poses additional challenges like distributional shift and quantification of rehabilitation dose that are very interesting. Moreover, the proposed approach for high-resolution action identification sounds good and promising.
The fact that the authors simultaneously propose a new dataset and a novel approach can also be seen as a weakness. Indeed, it results in partial information for both parts.
While the supplementary material is substantial and provides useful information, the need to go back and forth between the paper and the appendix does not facilitate reading and overall understanding. It would maybe be a good idea to focus the paper on the dataset (with a benchmark using existing methods), as that is the scope of this track, and leave the novel approach for a further paper. <doc-sep>This article proposes a dataset for identifying short-duration actions for physical rehabilitation that contains both video and IMU data. The dataset includes data from stroke-impaired patients and healthy subjects performing activities of daily living, and labels at a high temporal resolution. Along with the dataset, the article proposes a novel sequence-to-sequence approach to identify actions, and presents the performance of two versions of their proposed approach on this dataset and other datasets, compared with state-of-the-art algorithms.
* The dataset proposed contains data from a large number of patients. The dataset has also been annotated by several people, following a precise protocol, with a good inter-annotator agreement score.
* The algorithms proposed are contrasted against state-of-the-art algorithms on both the proposed dataset and other datasets.
* The annotators' training and annotation protocol is very precise, but I have doubts about its adequacy; the process needs to be justified. The training seems to enable all annotators to acquire the same definition of the actions and the labeling process. However, I have concerns that this lengthy process also erases inter-annotator variability: the "coders" could end up emulating the expert's annotation instead of labeling by their own judgement, and this would erase inter-annotator variability. Labeling by experts would be more interesting than by coders, even with a smaller number of labelers.
* The approaches proposed lack a detailed description in the article. A high-level explanation and motivation are given, but the final algorithms themselves are not described in the article, only in the supplementary material.
* The dataset aims at segmenting elementary actions that are quite similar. However, the data acquisition uses wearable sensors that are placed on the hands: they and the straps can be bulky and change the natural motion of the subjects. This change can have an impact especially on similar movements. Has this impact been evaluated? <doc-sep>This paper describes the StrokeRehab Dataset for action sequence recognition. The dataset consists of 3,372 trials (120,891 sub-second functional 'primitives') from 71 subjects. Data was recorded with 9 IMUs on the upper extremities, and featurized using the IMU acceleration, quaternion output, and a joint-angle representation from a proprietary algorithm. Video data was also recorded from two views. The data was labeled in a highly granular manner, under the supervision of a domain expert. Using this dataset, the authors tested different approaches for action sequence recognition, finding that sequence-to-sequence models outperformed segmentation-based models on this data as well as on others in the field. Overall I was impressed by the granularity of action labeling in the paper and the novelty of the application to the community. I think the authors make interesting and likely valid conclusions about the utility of action segmentation vs.
seq2seq approaches for action labeling of short primitives, which speaks to the potential for the reuse of the dataset in the community. So I am open to accepting this manuscript; however, I have reservations below that may affect this decision. While I was impressed by the annotation density, I do have multiple questions that I would like to be resolved before acceptance. The paper contextualizes its results well in the field and is well described. While I believe the dataset is largely sound, I have multiple comments that I would like to see clarified and resolved.
### Data Collection
• Were the video and sensor data synchronized?
• Can you comment on how similar the video and sensor data were from subject to subject and day to day? Knowledge of domain differences across video recordings (e.g. shifts of perspective) will affect their generalizability. Raw IMU data will depend on the subject size.
### Rehabilitation Applications
• I was confused about how AER was chosen as the metric of choice, as opposed to TPR/FDR of actions, or more general task-based quantities (e.g. effector velocity). The description is cursory, and it is important to communicate for non-domain experts. It also affects the conclusions, as segmentation models were more performant on the basis of TPR.
• Is the accuracy of these models sufficient to reach the desired capacity for physical therapy applications? Much of the scope of this application track is aimed at domain-agnostic experts, and it is not clear what level of AER is satisfactory for applications.
• Related to the above: what is the inter-human reliability in the AER?
• L295 "detailed count-based results are provided in Appendix D": I am not sure what this refers to.
### Domain shift:
* Why is there asymmetry in the transfer of models in Table 1? Why does stroke data generalize to healthy subjects but not vice versa? One trivial reason is that because there are more users in the stroke dataset, they may 'cover' domains better, e.g. contain participants with a greater range of sizes, shapes, and video conditions, improving the machine learning transfer. Were the comparisons in Table 1 balanced?
* Can you comment on how labels were made across domains? That is, were different definitions, based e.g. on kinematics, made for labeling stroke primitives, given that they may be overtly different kinematically?
* What is the quality of action recognition models on examples from this dataset, e.g. given individual pre-segmented videos? Is the issue with the seq2seq models and segmentation models that actions are hard to recognize or that they are hard to segment?
* Table 3: Why don't the confusion matrices sum to 1 row-wise or column-wise? Also, it should be clarified whether the matrices are normalized row-wise or column-wise. <doc-sep>A new dataset called StrokeRehab provides action information for multiple patients rehabilitating from strokes as well as healthy subjects performing similar actions. The dataset contains information from multiple modalities (sensors and video data). The paper also proposes a new algorithm to improve action recognition on this dataset.
1. The motivation of the work is clear. Such a dataset can help the medical community in multiple ways and support the design of better rehabilitation programs in the future.
2. While the absolute number of identities present in the dataset (patients and healthy subjects) is small, I believe the dataset is sufficiently large for the medical domain.
3. The authors manually label the data and cross-check the quality of labels.
4.
The authors first show that naively training the current state-of-the-art approaches on this dataset leads to inferior results, and they propose a new approach to tackle the task.
1. I did not find Figure 6 in the paper. I believe it is the figure which has the architectural definition. The figure is in the supplementary material and should be referenced properly in the main paper. I believe this is an important figure and thus should be included in the main paper.
2. I believe transformers could also be tried instead of LSTM-based architectures. Data scarcity might be a valid reason not to use transformers, which are known to be data hungry, but this should be clearly mentioned and discussed in the paper.
3. While a large amount of sensor data is collected, the information provided in the paper regarding it is limited. I believe more information, such as where the sensors are placed, could be provided. <doc-sep>The authors introduce a large-scale action-recognition dataset called StrokeRehab. The contributions of this paper are summarized as follows:
1. This dataset provides a benchmark for sub-second action identification.
2. The dataset consists of 3,372 trials of rehabilitation activities and 120,891 functional primitives.
3. It provides a benchmark for generalization in the presence of realistic distributional shift.
4. It can be used for quantifying dose by counting functional primitives.
Strengths:
1. This dataset consists of a considerable number of short-duration elemental actions, which required more manual effort to label and more time for training.
2. This dataset contains both video and sensor data, so methods can leverage both modalities to perform the task of sequence estimation.
3. This dataset provides a benchmark for generalization in the presence of realistic distributional shift and for data-driven quantification of rehabilitation dose.
Weaknesses:
1. Building a dataset for short-duration action identification does not seem novel, since there are many similar efforts.
2. There are some writing mistakes, e.g.: in Table 2, "on both (HS)" should be "on both (HS + SP)".
3. The experimental part is relatively insufficient. The authors should elaborate on the experimental settings, such as the choice of backbone for the baseline models used, the implementation details of training and testing, etc.
4. Experiments are lacking to verify that this dataset can improve the performance of identifying more elemental motions at high temporal resolution. The models trained on this dataset should be tested on existing datasets and in real scenes. It would also be better to provide visualized results.
5. The authors spend a little too much space on the action sequence identification algorithm, but the proposed dataset is the main theme. To highlight the content of the dataset, the structure of the article should be adjusted appropriately.
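Since several reviews ask how AER should be read: it is naturally interpreted as the action-sequence analogue of word error rate, i.e. an edit distance between predicted and ground-truth primitive sequences normalized by the reference length. The exact definition used in the paper is not given in the reviews, so the sketch below is an assumption shown only for intuition.

```python
def action_error_rate(predicted, reference):
    """Edit-distance-based error rate between two action-label sequences,
    analogous to word error rate in speech recognition.

    predicted, reference : lists of primitive labels, e.g.
        ["reach", "transport", "idle"]
    """
    n, m = len(predicted), len(reference)
    # dp[i][j] = edit distance between predicted[:i] and reference[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if predicted[i - 1] == reference[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[n][m] / max(m, 1)

print(action_error_rate(["reach", "idle", "transport"],
                        ["reach", "transport", "stabilization"]))  # 2/3
```

This also clarifies the reviewers' point about sequence models vs. segmentation models: a metric like this rewards getting the sequence of primitives (and hence the count-based dose estimate) right, even if frame-level boundaries are slightly off.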
This submission received reviews from 6 different reviewers. Most reviewers (5/6) appreciate the contribution of the new dataset. They acknowledge that the problem setup is interesting and that the dataset may be useful for different research communities: computer vision, time series analysis, and medical. On the other hand, reviewer 38gN is concerned about the novelty of the proposed dataset. The AC read all reviews, comments, and discussions, and is convinced that the proposed dataset will provide a useful benchmark for research; thus the AC recommends accepting this submission as a poster. The AC recommends that the authors incorporate all suggestions from the reviewers in the camera-ready version.
The authors propose TESSERACT, an aggregation scheme that is robust to the directed deviation attack (proposed in Fang et al. 2020). Pros: a. The defense is based on an interesting observation that, for a sufficiently small learning rate, as the model approaches an optimum in the benign setting, a large number of gradient coordinates do not flip their direction with a large magnitude; if such behavior is observed, it is indicative of an attack. The paper defines a Flip Score for every received gradient update and uses it to identify and either reward or penalize the update. b. The history of rewards and penalties for a client is maintained as a reputation score. Normalized reputation scores are then used to compute the global model for the next round (Algorithm 1). c. In addition to defending against the directed deviation attack, the paper also proposes two adaptive attacks (Adaptive-Krum and Adaptive-Trim) in which colluding attackers knowing the parameters of TESSERACT adjust their attack vectors to escape the cut-off flip scores. d. The paper provides a convergence analysis sketch and compares the performance of TESSERACT with a set of Byzantine-resilient defenses (Krum, Bulyan, Trimmed Mean and Median, etc.). Weaknesses: 1. The paper proposes a defense against a specific form of attack and does not provide any guarantees or justification about why it should generalize against other more powerful forms of model poisoning attacks. On pg. 2, the paper argues (without justification) that other attacks are less damaging and essentially weaker than the attack in Fang et al. However, one can argue that while the directed deviation attack is designed to prevent model convergence, there are model poisoning attacks that not only insert targeted backdoors but also allow the model to converge (Bhagoji et al., 2019: https://arxiv.org/pdf/1811.12470.pdf, Bagdasaryan et al. 2020: https://arxiv.org/pdf/1807.00459.pdf). These attacks are thus more powerful and can also maintain stealth. 2. While the authors provide two very interesting adaptive attacks, it is hard to generalize the strength of the defense without any formal guarantees. In other words, can there be no other adaptive attack that bypasses this defense? For example, can an attacker target the reputation update mechanism over time? 3. Typically, for reasons of privacy, the gradient updates are encrypted before being sent to the server (secure aggregation scheme by Bonawitz et al. 2017, https://eprint.iacr.org/2017/281.pdf). TESSERACT in its current form will be difficult to implement with such schemes. 4. In Fig. 2, both sub-figures are labeled (a) and there is an inconsistency between the sub-figure caption and the image caption: the figure caption describes (a) as an aggregation of benign updates, while the sub-figure caption describes (a) as an aggregation of malicious updates. Finally, the Flip Scores of the figures (y-axes) are very different, but what is confusing is that there are both benign and malicious clients in both. Please see the detailed comments above. <doc-sep>This submission with the title "Tesseract: Gradient Flip Score to Secure Federated Learning against Model Poisoning Attacks" discusses defenses against data poisoning in federated learning. The authors propose a novel defense against the recently popularized untargeted model poisoning attack of Fang et al. (2020).
This attack reduces model availability by sending malicious updates from compromised clients that maximize sign flips in the global model gradient. This defense then proposes a measure of change in gradient direction that can be evaluated for each local update and used to dynamically down-weight clients with a large number of flips in direction. This submission presents a reasonable and timely defense against a strong data poisoning attack in federated learning, which is nice. The design of the defense is well executed, and the inclusion of a dynamic reputation is a good addition to its robustness. I do have a few comments, mostly regarding the part of the paper concerning adaptive attacks: * The submission discusses *an* adaptive attack against this defense, but I would like to understand and see more discussion on why this is a strong adaptive attack specifically. It is currently not clear to me that this is a strong adaptive attack. It seems that the attacker could mount a stronger adaptive attack by keeping track of its own reputation and sending local updates that are optimized under an additional linear constraint that includes its own reputation? It would be great if the authors could clarify my understanding in this matter. * How does this defense perform compared to the other considered defenses when the number of malicious clients is unknown? For example, assuming c_max is fixed to m/5, but the actual number of malicious clients is decreased from m/5 down to 1. This would be especially interesting to compare to defense algorithms that require no knowledge of c_max. And some minor comments and questions: * GM and LM are used in the text before the variables are formally explained * "We halve the reputation score of every client if any of the values grows so large that the softmax operation causes an overflow" -> I am surprised that this could even happen, given the sizes of m, n_r and mu? * page 7: "here to how the effect on diverse datasets" -> here to show the effect on diverse datasets * "We find empirically (result not shown) that FABA degrades fast" -> it would be better if the authors could include this result in their appendix (or clarify the sentence - I think a partial answer to this statement is contained in Fig3b?) * The caption of Fig.2 says "(a) shows the results where only benign updates were aggregated using FEDSGD, and (b) shows the case where only malicious updates were aggregated", but the figure headings say the opposite * For CIFAR-10 and Shakespeare only two clients are malicious; is the attack too strong to be mitigated if more malicious clients exist? In summary, I think this is a decent submission. I have some questions (the central one is the strength of the adaptive attack) which I would like to discuss with the authors, and I am open to changing my evaluation accordingly. <doc-sep>This paper studied a very important topic in the field of federated learning: how to efficiently resist untargeted model poisoning attacks. In order to defend against such a poisoning attack, the authors developed TESSERACT, an aggregation algorithm that assigns reputation scores to participating clients based on their behavior in the training phase and weights each client's contribution accordingly. Extensive case studies have verified the effectiveness of the algorithm. In particular, the experimental results show that TESSERACT provides robustness against even a white-box version of the attack.
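To make the mechanism summarized above concrete, the following is a minimal sketch of a flip-score-based, reputation-weighted aggregation step. The exact form of the flip score, the reward/penalty values, and the overflow guard are assumptions for illustration and may differ from the paper's Algorithm 1.

```python
import numpy as np

def flip_score(update, prev_global_grad):
    # Coordinates whose sign flips relative to the previous global gradient,
    # weighted by the squared magnitude of the change (one plausible reading
    # of the flip score; the paper's exact definition may differ).
    flipped = np.sign(update) != np.sign(prev_global_grad)
    return float(np.sum(flipped * (update - prev_global_grad) ** 2))

def aggregate(updates, prev_global_grad, reputations, c_max):
    # Penalize the c_max clients with the largest flip scores, reward the
    # rest, then combine updates with softmax-normalized reputations.
    scores = np.array([flip_score(u, prev_global_grad) for u in updates])
    suspicious = set(np.argsort(scores)[-c_max:].tolist())
    for k in range(len(updates)):
        reputations[k] += -1.0 if k in suspicious else 1.0
    if reputations.max() > 50.0:       # crude guard against softmax overflow
        reputations = reputations / 2  # (the paper halves reputations instead)
    weights = np.exp(reputations) / np.sum(np.exp(reputations))
    return sum(w * u for w, u in zip(weights, updates)), reputations
```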
The strengths and weaknesses of this paper are summarized as follows: Strengths: + The problem studied in this paper is important and needs to be solved in federated learning + Good writing Weaknesses: - Need to include more related work that is highly important - Need more justification for the novelty claims - Need to add some experiments under non-IID settings - Need to unify the attack paradigm - Insufficient theoretical analysis Comments: 1. The following important references are missing: [1] Kang J, Xiong Z, Niyato D, et al. Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory[J]. IEEE Internet of Things Journal, 2019, 6(6): 10700-10714. [2] Awan S, Luo B, Li F. CONTRA: Defending against Poisoning Attacks in Federated Learning[C]//European Symposium on Research in Computer Security. Springer, Cham, 2021: 455-475. [3] Zhang J, Wu Y, Pan R. Incentive Mechanism for Horizontal Federated Learning Based on Reputation and Reverse Auction[C]//Proceedings of the Web Conference 2021. 2021: 947-956. 2. The untargeted model poisoning attacks (i.e., Full-Krum attack and Full-Trim attack) designed in this paper are vague. It would be better if the authors could formally define these attacks. Second, the authors need to explain how the attacks in this paper differ from [4]. [4] Minghong Fang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. Local model poisoning attacks to byzantine-robust federated learning. In 29th USENIX Security Symposium (USENIX Security 20), Boston, MA, August 2020. 3. The reputation-based techniques used in this paper are quite common and unsurprising. A lot of previous work (e.g., [1-3]) has used reputation-based techniques to mitigate poisoning attacks. Therefore, readers may think that the authors just applied existing reputation-based methods to federated learning. The authors need to provide more details to justify the novelty of this paper. 4. There are some grammatical errors and inappropriate formula symbols in the text. For example, “k=1,2,…,m” should be “$k=1,2,\\ldots,m$.” 5. The editorial quality of this paper is not always satisfactory. It contains quite a lot of inconsistent/non-precise descriptions, as also reflected in the above comments. 6. Theorem 1 lacks a rigorous proof and complete theoretical analysis. It would be better if the authors could give a complete proof of Theorem 1. 7. The number of attackers has always been a very important hyperparameter (or factor) in model poisoning attacks. Therefore, it would be better if the authors could conduct more case studies to explore the influence of the number of attackers on different defense algorithms. 8. In addition, non-IID data also seems to affect the gradient direction (or value) of the client. Therefore, the authors need to add some experiments to illustrate the effectiveness of the proposed algorithm under non-IID settings. 9. In fact, the poisoning attack defended against by the baselines chosen by the authors is different from the attacks designed in this paper. It would therefore be better if the authors tested the proposed defense method on the poisoning attacks involved in the baseline schemes. 10. In general, the authors need to add more theoretical analysis and verification experiments, since there are still unfair comparisons in the comparative experiments in this paper.
If the author can address the reviewer’s comments, I will consider giving it a score of "6". <doc-sep>The paper tackles the problem of adversarial attacks in federated learning settings. The main proposal is a defensive technique to address the “byzantine generals” problem in federated learning: how to ensure that the general ML model is not affected by “poisonous” attempts made by corrupted clients. The proposed technique is experimentally validated on four datasets, outperforms previous defensive methods, and the evaluation also considers adaptive adversaries with increasing degrees of knowledge. Overall, the presentation of the paper is very good. The quality of the English text is good. Figures are appropriate, Tables require some editing. The topic addressed by the manuscript is trendy and in-line with ICLR’s scope. The references should be improved The contribution is significant STRENGTHS: + Adaptive adversary + Trendy subject (federated learning) + Evaluation on multiple datasets + Technically sound WEAKNESSES - Unclear assumptions and threat model. - Problem or Feature space attacks? - Lack of a concrete use-case - Tradeoff? I enjoyed reading the paper. In contrast to many research efforts on adversarial ML, this paper makes many security assumptions that set it apart with respect to the existing body of literature. I also praise the the consideration of attackers with varying “strength” and the different datasets. All these points make me lean to recommend acceptance. Nonetheless, there are some issues that the authors could solve to further improve their paper. Let me elaborate on the above-mentioned weaknesses, starting from the most significant ones. **Assumptions and Threat Model?** This is probably the only “true” problem of the paper, which should be absolutely rectified. I was not fully able to understand the assumptions made by Tesseract. Does it work “only” against the “directed deviation attack” proposed by Fang et al.? Or does it also protect against different attacks? In general, Section 2.2, Threat Model, is not very comprehensive. The authors should better expand this section by clearly pointing out all the assumptions and requirements of the proposed method. This is especially true because the Fang et al. attack was proposed in 2020, and some of its assumptions are not yet well-known. Specifically, this statement is suspicious: “We assume a full-knowledge (white-box) attack where the attackers have access to the current benign gradients.”. Does it mean that Tesseract only works under this assumption? I.e., the attacker knows, and exploits, the current benign gradients? This is a rather “unrealistic” assumption: I understand the willingness to work against “worst case” scenarios; yet, if such “worst case” scenarios are not realistic in the first place, then what is the purpose of the proposed mechanism? What benefit is there in protecting against an attack that will never happen in the first place? I invite the authors to restructure this section by using the common taxonomies adopted in adversarial ML papers [I]. **Problem or Feature Space attacks?** The authors perform their experiments on four well-known datasets: MNIST, CIFAR, Shakespeare, FEMNIST; for each dataset, a different (deep) ML model is targeted. Three of these datasets are of images, whereas Shakespeare contains text data. There are different ways to create “adversarial examples”, depending on the ‘space’ where the perturbation is applied. 
As far as I am aware, the adversarial examples considered in this paper to perform the poisoned updates are created in the feature space. It would be a lot more interesting if at least one evaluation included adversarial examples generated in the “problem” space [A]—or, at the very least, considered samples generated by “physically realizable” adversarial perturbations [B]. I acknowledge that the method should work even in these circumstances, as the proposed Tesseract defense is agnostic of the process used to apply the perturbation. However, considering the strong relationship with (real) security that permeates the paper, I believe that a more convincing use-case would dramatically improve the quality of the paper. This is also motivated by the current state-of-the-art: after almost a decade of adversarial attacks, more recent efforts are leaning towards evaluation that consider more realistic circumstances, where the attacker is constrained by the limitations of the real world; this is even more true in “distributed system” scenarios, such as Network Intrusion Detection Systems, which bear a strong relationship with federated learning (e.g., [C, D, E, F]). As such, I invite the authors to perform an additional “proof-of-concept” experiment where they consider adversaries with constrained capabilities. This is also motivated by the fact that some perturbations may yield different effects when created in the problem space (as shown in [A]). **Tradeoff?** A common problem in adversarial ML countermeasures is that they may degrade baseline performance [G, H]. Hence, I am interested in knowing how the proposed method responds when there are no “malicious” clients. Even if the baseline performance does not decrease, what is the overhead of the proposed method? For instance, in Table 2 the authors report some results for “Attack=None”, which I assume represent the accuracy when no attack takes place. However, all the rows of these experiments (namely, FedSGD, Tesseract, Faba, FoolsGold, FLTrust) consider hardening FL techniques; for instance, on MNIST the proposed Tesseract has an accuracy of 92.52 when no attack takes place—the best among all other defences. Despite being appreciable, I am interested in knowing the performance when NO defense is applied. Surely, the test accuracy in a “fully trusted” FL setting should be superior than 92.52. Hence, I ask: what is the ‘cost’ of Tesseract? **Lack of a concrete use-case.** I believe that the paper could be further improved with a concrete use-case, where the authors explain, step-by-step, how a (single, or multiple) attacker can compromise a federated learning system, and how the proposed method can help in solving such problem. Hence, I request the description of a concrete use-case explaining the abstract scenario reported in Figure 1. Such use-case can be at the basis of the “constrained” attack that I invite the authors to perform in my "problem space perturbations" suggestion. Some additional issues: • In the Introduction, the authors state: “To counter this threat, a set of approaches has been developed for countering Byzantine clients in FL…”. I believe that “Byzantine Clients” is a wrong term: what is countered by Tesseract are not byzantine clients, but "unloyal" clients, that are “against” the byzantine clients (at least by referring to the well-known problem of the byzantine generals, which should agree on a method to reach consensus in the presence of unloyal generals). 
• The caption of Figure 1 has a typo “c out of m clients maybe be malicious”. • In Figure, the gradient “LM_{c-1}” is out of place. • In Section 2, the authors state “Our simulation of federated learning consists of m clients, each with its own local data, but the same model architecture and SGD optimizer, out of which c are malicious, as shown in Figure 1”. Is there a minimum amount of “m”? • Figure 1 appears before Figure 2, but in the text it is referenced after Figure 2. • Putting Figure 2 so early on is very confusing. The “flip score” is a measure introduced for the first time in this paper. As such, any reader would be thrown off by such graphs before reading the paper, meaning that the findings of Figure 2 are difficult to interpret---during the Introduction---, as the flip score has not been defined yet. As such, such graphs are ultimately meaningless: I have to trust the authors that they correspond to “interesting” observations and “fair” experiments, which is not scientific. • The presentation and notation in the “Flip-score” (page 5) is very ugly and difficult to follow. • Section 5 should be merged in Section 6 • W.r.t. Table 2, the authors state “We see that TESSERACT is the winner or 2nd place finisher in 7 of the 12 cells (benign + two attacks * 4 datasets)”. This should be better highlighted. I only see three bold values for Tesseract in Table 2. • W.r.t. Table 2, the authors state “We have not shown the test loss curve for Krum aggregation because of the large loss values.”. I invite the authors to report such values in Table 2, because the different “formats” of the three subtables (None, Full-Krum, Full-Trim) make this table very hard to interpret. EXTERNAL REFERENCES [A]: "Intriguing properties of adversarial ml attacks in the problem space." 2020 IEEE Symposium on Security and Privacy (SP). IEEE, 2020. [B]: "Improving robustness of ML classifiers against realizable evasion attacks using conserved features." 28th {USENIX} Security Symposium ({USENIX} Security 19). 2019. [C]: "Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems." ACM Digital Threats: Research and Practice. 2021. [D]: "Constrained concealment attacks against reconstruction-based anomaly detectors in industrial control systems." ACM Annual Computer Security Applications Conference. 2020. [E]: "Conaml: Constrained adversarial machine learning for cyber-physical systems." Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security. 2021. [F]: "Resilient networked AC microgrids under unbounded cyber attacks." IEEE Transactions on Smart Grid 11.5 (2020): 3785-3794. [G]: "Adversarial example defense: Ensembles of weak defenses are not strong." 11th {USENIX} workshop on offensive technologies ({WOOT} 17). 2017. [H]: "Deep reinforcement adversarial learning against botnet evasion attacks." IEEE Transactions on Network and Service Management 17.4 (2020): 1975-1987. [I]: "Wild patterns: Ten years after the rise of adversarial machine learning." Pattern Recognition 84 (2018): 317-331. The paper tackles a very interesting problem and the many security considerations as well as the experiments on various datasets and comparisons with existing defenses are commendable. Some issues (unclear threat model, flexbile perturbations in the feature space) still prevent me from recommending complete acceptance. More clarifications are necessary, and by adding some more "realistic" experiments I believe that the paper could be turned into a significant submission of ICLR. 
I am recommending a "5", but my score can be easily increased to 6 by addressing the many clarifications expressed in my review. Further experiments and the concrete use-case would further increase my score. ___ AFTER REBUTTAL: score increased to 6 which I will increase additionally to 8 if the authors are willing to support the claim that Tesseract is a "secure by design" defense. FURTHER UPDATE: score increased to 8, and I stand by my decision unless other reviewers point out that the claim of Tesseract being "secure-by-design" is flawed.
The paper presents a defense against gradient sign-flip attacks on federated learning. The proposed method is novel, technically sound and well evaluated. The crucial issue of the paper is, however, that this defense is specific to gradient-flip attacks. The authors show the robustness of their method against white-box attacks adhering to this threat model and claim that "an adaptive white-box attacker with access to all internals of TESSERACT, including dynamically determined threshold parameters, cannot bypass its defense". The latter statement does not seem to be well justified, and following the extensive discussion of the paper, the reviewers were still not convinced that the proposed method is secure by its design. The AC therefore feels that the specific arguments of the paper should be revised - or the claim of robustness further substantiated - in order for the paper to be accepted. Furthermore, as a comment related to ethical considerations, the AC remarks that the paper's acronym, Tesseract, is used by open-source OCR software (https://tesseract-ocr.github.io/) as well as in a recent paper: Pendlebury et al., TESSERACT: Eliminating Experimental Bias in Malware Classification across Space and Time, USENIX Security 2019. All of the above-mentioned reservations essentially add up to a "major revision" recommendation which, given the decision logic of ICLR, translates into the rejection option.
The paper presents algorithms for optimization using sign-SGD when access is restricted to a zeroth-order oracle only, and provides a detailed analysis and convergence rates. They also run optimization experiments on synthetic data. Additionally, they demonstrate superiority of the algorithm in the number of oracle calls for black-box adversarial attacks on MNIST and CIFAR-10. The provided algorithm has optimal iteration complexity from a theoretical viewpoint. The paper is, overall, very well written and sufficient experiments are presented. The math also seems correct. However, I think they should have better explained the motivation for developing such an algorithm. Section 3 can be improved. I think this is an important paper because it provides a guaranteed algorithm for zeroth-order sign-gradient descent. However, the ideas and the estimators are not novel. They show the applicability of standard gradient estimators for zeroth-order oracles to the signSGD algorithm. <doc-sep>The authors proposed a zero-order version of the recent signSGD algorithm, by replacing the stochastic gradient with a standard function-difference estimate. Similar convergence rates as signSGD were obtained, with an additional sqrt(d) factor which is typical in zero-order methods. Three (typical) gradient estimates based on function values were discussed. Overall, the obtained results are a relatively straightforward combination of signSGD with existing zero-order techniques. Quality: The technical part of this paper seems to be solid. The experiments, on the other hand, are quite ambiguous. First off, why do you choose that peculiar least squares binary classification problem on page 7? Is Assumption A2 satisfied for this problem? Why not use logistic regression? The experimental results are also strange: Why would ZO-signSGD converge faster than ZO-SGD or any other ZO variant? Shouldn't they enjoy similar rates of convergence? Why would taking the sign make the algorithm converge faster? Note that the original motivation for signSGD is not faster convergence but less communication. For the second set of experiments, how do you apply ZO-SGD to generate adversarial examples? Again, why do we expect ZO-signSGD to perform better than ZO-SGD? Clarity: This paper is mostly well-written, but the authors at times largely overclaim their contributions or exaggerate the technical challenges. -- Page 2, 2nd line: the authors claim that "Our analysis removes the impractical assumption of b = O(T)", but in the later examples (page 6, top), they require q = O(T). How is this any different than b = O(T)? Even worse, the former case also requires b = n, i.e., there is no stochasticity at all... -- Assumption A2: how crucial is this assumption for obtaining the convergence results? Note that not many functions have Lipschitz continuous bounded gradients... (logistic regression is an example) -- Page 4, top: "ZO-signSGD has no restriction on the mini-batch size b"? The rates at the end of page 5 suggest otherwise if we want the bound to go to 0 (due to the term sqrt(d/b)). -- Page 4, top: the last two technical challenges do not make sense: once we replace f by f_mu, these difficulties go away immediately, and it is well-known how to relate f_mu with f. Originality: The originality seems to be limited. Contrary to what the authors claimed, I found the established results to be a relatively straightforward combination of signSGD and existing zero-order techniques.
Can the authors elaborate on what additional difficulties they need to overcome in order to extend existing zero-order results to the signSGD case? Significance: The proposed zero-order version of signSGD may potentially be significant in applications where gradient information is not available and yet distributed optimization is needed. This, however, is not demonstrated in the paper, as the authors never considered distributed optimization. ##### added after author response ##### I appreciate the authors' effort in trying to make their contributions precise and appropriate. The connection between ZO-signSGD and adversarial examples is further elaborated, which I agree is an interesting and potentially fruitful direction. I commend the authors for supplying further experiments to explain the pros and cons of the proposed algorithms. Many of the concerns in my original review were largely alleviated/addressed. As such, I have raised my original evaluation.<doc-sep>In this paper, the authors studied zeroth-order sign SGD. Sign SGD is commonly used in adversarial example generation. Compared to sign SGD, zeroth-order sign SGD does not require knowledge of the magnitude of the gradient, which makes it suitable for optimizing black-box systems. The authors studied the convergence rate of zeroth-order sign SGD, and showed that under common assumptions, zeroth-order sign SGD achieves an O(sqrt(d/T)) convergence rate, which is slower than sign SGD by a factor of sqrt(d). However, sign SGD requires an unrealistically large mini-batch size, which zeroth-order sign SGD does not. The authors demonstrated the performance of zeroth-order sign SGD in numerical experiments. Overall, this is a well-written paper. The convergence property of zeroth-order sign SGD is sufficiently studied. The proposal seems to be useful in real-world tasks. Weaknesses: 1) Out of curiosity, can we improve the convergence rate of zeroth-order sign SGD if we assume the mini-batch size is of order O(T)? This could help us better compare zeroth-order sign SGD and sign SGD. 2) Figure 2 is too small to be legible. Also, it seems that the adversarial examples generated by zeroth-order sign SGD have higher distortion than those found by zeroth-order SGD on the CIFAR-10 dataset. Is this true? If so, it would be beneficial to have a qualitative explanation of such behavior.
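For readers less familiar with the algorithm these reviews discuss, here is a minimal sketch of a zeroth-order signSGD step that uses a two-point, random-direction function-difference estimate. The specific estimator, smoothing radius mu, and number of directions q are illustrative assumptions; the paper analyzes three different function-value-based estimators.

```python
import numpy as np

def zo_sign_sgd_step(f, x, lr=0.01, mu=1e-3, q=10, rng=None):
    # Estimate the gradient of a black-box scalar function f at x from
    # function differences along q random directions, then move along the
    # sign of the estimate (the magnitude information is discarded).
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(q):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    g /= q
    return x - lr * np.sign(g)

# Usage example: x = zo_sign_sgd_step(lambda v: np.sum(v ** 2), np.ones(5))
```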
This is a solid paper that proposes and analyzes a sound approach to zeroth-order optimization, covering several variants of a simple base algorithm. After resolving some issues during the response period, the reviewers concluded with a unanimous recommendation of acceptance. Some concerns regarding the necessity for such algorithms persisted, but the connection to adversarial examples provides an interesting motivation.
This paper focuses on designing more effective ways for contrastive learning. The author claims that stronger augmentations are beneficial for better representation learning. Instead of directly applying the stronger augmentations to minimize the contrastive loss, the author proposes to minimize the distribution divergence between the weakly and strongly augmented images. The experimental evaluations are conducted on ImageNet classification and related downstream tasks, and the results are promising. Clarity: 1. The method is very simple and straightforward. My main concern is the experimental comparisons. As we all know, contrastive learning algorithms like MoCo and SimCLR benefit a lot from longer training (for example, training with 800 epochs is much better than with 400 epochs). Thus I think the comparisons in Table 2 are not convincing. From Algorithm 1, we can see that the equivalent batch size of the proposed CLSA method is two times that of the classical MoCo method. Thus I would prefer to check the results of CLSA at epochs 100 and 400 for fair comparisons. 2. What is the value of the balancing coefficient? It would be nice if some ablation results were provided.<doc-sep> This paper proposes the better utilization of strong data augmentations for contrastive loss functions in unsupervised learning. In the MoCo setup, typically, weaker augmentations such as color jittering and cropping are applied to construct positive pairs from the same image. In this study, by proposing a modified objective, the authors leverage stronger data augmentations to construct more challenging positive and negative pairs to improve the quality of the representations. The paper delivers a novel objective together with leveraging existing strong augmentations to improve downstream performance. The authors can find my questions/concerns listed below. 1. The paper is overall well-written; however, it is disappointing to see many typos and grammar mistakes throughout the paper. Some examples are "Thus we proposed the CLSA (Contrastive Learning with Stronger Augementations)", "to train an unsupervised representation", "The contrastive learning (Hadsell et al. (2006)) is a popular self-supervised idea". 2. In Section 3.1, the authors mention that the keys in the memory bank are managed with a first-in-first-out method. Is it not supposed to be first in, last out? I would like to see some clarification on this. 3. The numerator in Equation 3 should be z_i' vs. z_i, not z_k. 4. The authors claim that in He et al. an input image is resized and cropped to 224×224 pixels. It should be "an image is first cropped from an input image and resized to 224x224 pixels." 5. In the experiments section, the authors list other methods including MoCo, SimCLR, MoCo-v2, and BYOL and compare to what they propose. As a baseline, it would be nice to directly use the stronger augmentations in the MoCo-v2 objective and perform a comparison to their method. Throughout the paper, the authors claim that strong augmentations hurt the learned representations due to distorted images. It would be meaningful to show this experimentally as well. 6. The authors explain that they choose a strong transformation randomly from the given 14 transformations and repeat it 5 times to strongly augment an image. Is the sampling done without replacement? In other words, do the authors choose 5 unique transformations with the corresponding magnitude and apply those transformations to a single image? 7.
I like how the authors point out the similarity of their objective to knowledge distillation. In this case, strong augmentations are assigned a probability of being a positive pair based on the positive pair constructed with weak augmentations. It helps to understand the full picture of the proposed method. 8. Finally, I think Figure 3 is confusing rather than helpful. Both weak and strong augmentations go to the memory bank, and it looks like two distributions come out of nowhere in the figure. It would be clearer to point out that there are distributions of the representations from the strong and weak augmentations, and that the predictions on the weak augmentations supervise the assignment for the strong augmentations.<doc-sep>This paper presents a method to incorporate stronger augmentations into the visual representation contrastive learning framework. Specifically, three correlated views of an image are first generated by using two weak and one strong augmentation operations on the same image. Then, the networks are trained to maximize the agreement between the two weak views and also to minimize the distribution divergence between a weak view and the strong view. The method is evaluated on several visual tasks including classification, transfer learning, and object detection, with the standard evaluation protocol for self-supervised learning, and the results are promising. Pros: 1. This paper is well-structured and easy to follow. 2. The idea of utilizing strong augmentations for contrastive learning is interesting and novel to me, and the results are promising. 3. The proposed framework seems general and might be easily incorporated into existing contrastive learning frameworks. Cons: 1. The motivation for using stronger augmentations is not well justified. Specifically, the authors propose to use stronger augmentations based on two reasons: (1) stronger augmentations can expose some novel useful patterns; (2) the effectiveness of stronger augmentations has been shown in the semi-supervised and supervised learning fields. However, no related papers are provided to support the first point, while the papers (Cubuk et al. (2018); Qi et al. (2019); Wang et al. (2019)) that are cited to support the second point do not explicitly make relevant conclusions. Chen et al. (2020a) even demonstrate that when training supervised models, stronger color augmentation hurts their performance. I would like to see a more comprehensive review of related works to clarify the motivation. 2. In addition, some important ablation studies are missing in the experiments. E.g., how does the performance change as the magnitude or the number of applications of stronger augmentations changes? 3. The proposed DDM loss seems general for different contrastive learning frameworks. I would like to see if it still works when applied to other frameworks, e.g., SimCLR, InfoMin? Overall, given the novelty and strong results of the proposed framework, I remain positive towards this paper. I will be happy to increase my rating if my concerns are addressed in the rebuttal period. <doc-sep>Summary: This work investigates the recent popular direction of unsupervised representation learning using a contrastive loss between augmented images. The authors propose to minimize the divergence between the distributions of strongly augmented vs. weakly augmented images. The method reaches competitive performance in recognition and object detection.
----- Strengths: + The main idea is well motivated: strong augmentations reveal useful cues in visual representation learning but have not been successfully exploited in unsupervised learning. + The proposed solution is novel within contrastive learning, to the best of my knowledge. + Results are extremely strong. ----- Concerns: - The divergence between the two conditional distributions can be a moving target since they are trained jointly. It is not clear if this will result in stable learning for the unsupervised setting, and what effect that may have on the performance and quality of the representations. - The evaluation only focuses on the final results and lacks analysis of the proposed method, especially when compared to recent papers of a similar nature published in top conferences. For example, strong augmentation is a focus of this paper, but there are no ablations regarding the augmentations. Is the performance sensitive to the choice of strong augmentation? - The paper could also use some more theoretical analysis to address some of the weaknesses stated above. ----- Recommendation: I like the proposed idea. It is novel and interesting and seems to achieve good results. However, the lack of both theoretical and empirical analysis beyond performance results raises many questions. As a result I am on the fence but leaning towards accept.
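As a rough illustration of the objective described in the reviews above — the weak view's similarity distribution over the memory bank acting as a soft, distillation-style target for the strong view — here is a minimal PyTorch-style sketch. The names, temperature, and normalization are assumptions and may not match the paper's exact DDM loss.

```python
import torch
import torch.nn.functional as F

def ddm_loss(z_weak, z_strong, memory_bank, tau=0.2):
    # z_weak, z_strong: (B, D) L2-normalized embeddings of the weak and
    # strong views; memory_bank: (K, D) normalized keys.
    # The weak view's softmax distribution over the bank is detached and
    # used as a soft target for the strong view's distribution.
    p_weak = F.softmax(z_weak @ memory_bank.t() / tau, dim=1).detach()
    log_p_strong = F.log_softmax(z_strong @ memory_bank.t() / tau, dim=1)
    return -(p_weak * log_p_strong).sum(dim=1).mean()
```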
This paper improves MoCo-based contrastive learning frameworks by enabling stronger views via an additional divergence loss to the standard (weaker) views. Three reviewers suggested acceptance, and one suggested rejection. Positive reviewers found that the proposed method is novel and shows promising empirical results. However, as pointed out by the negative reviewer, the paper should have clarified the computational overhead of the method compared to the baseline (MoCo) in several aspects, e.g., their effective batch sizes or training costs, for the readers’ better understanding. As the concern was not fully resolved during the discussion phase, the AC leans slightly toward rejection. The AC thinks the paper would be stronger if the authors included more ablations (and the respective discussions) regarding this point, e.g., CLSA-multi (and -single) vs. MoCo-v2 under the same training time, both at early epochs (~200, as reported in the author response) and longer epochs (after convergence; ~1000 and even more).
This paper presents a mixed-size CNN training scheme, using several different input image sizes to train a single model. The authors assume the training budget, represented as S_i^2*B_i*D_i (i.e., spatial sample size, the number of batched distinct samples, and the number of duplicates for each distinct sample), to be a fixed constant during training step i. Under such an assumption, two mixed-size training scenarios are considered, one for training acceleration and the other for improved model generalization ability. The authors additionally use step-wise image size sampling, gradient smoothing, and per-size BN calibration to enhance the model performance under the above two mixed-size training scenarios. Experimental validation is performed on CIFAR and ImageNet datasets using diverse CNN structures. Mixed-size training is a critical problem and the methods proposed in this paper are interesting. My main concerns about this paper are as follows. --- Critical related works and comparisons are missing. Mixed-size training of CNNs for image classification is not new. Here are some recent works that are missed by the authors: “Resolution Adaptive Networks for Efficient Inference”, in CVPR 2020; “Resolution Switchable Networks for Runtime Efficient Image Recognition”, in ECCV 2020; “MutualNet: Adaptive ConvNet via Mutual Learning from Network Width and Resolution”, in ECCV 2020. Besides, as described by the authors, the NeurIPS 2019 paper “Fixing the train-test resolution discrepancy” also considers how to enhance the performance of CNN models when applying them with different input image sizes, but a performance comparison is missing. To show the advantages of this paper, a thorough discussion and performance comparison with the above works are necessary. Taking ResNet-50 trained on ImageNet as an instance, I notice that the proposed method shows obviously worse accuracy compared to some of these works. --- Another critical baseline is also missing. On page 8: “We note that for original fixed-size regimes this calibration procedure resulted, with degraded results and so we report accuracy without calibration for these models”. To my understanding, this is somewhat weird. It is not clear why BN calibration does not work on other image sizes when the model is trained with a fixed image size. Furthermore, in the NeurIPS 2019 paper “Fixing the train-test resolution discrepancy”, this line of methods works pretty well. Such a BN calibration should serve as another baseline for a fairer comparison. --- Regarding the B+ design: How about the wall-clock training cost (in hours) instead of the number of iterations/epochs? How about the performance of applying “scale the learning rate linearly” to train the baseline model? --- How about model transfer ability? Only image classification tasks are considered in the experiments. How about the performance of the trained backbone models when transferring them to downstream tasks, such as object detection and semantic segmentation? Can the performance gain be transferred? --- Others: Is there any principled way of choosing the size sampling strategy? The current strategy is based on manual settings, which limits its use in real applications. I suggest the authors also provide precise accuracy numbers, etc. for some figures (e.g., Figure 1, Figure 4) shown in the paper. Generally, I am on the fence about this paper. I encourage the authors to address the questions I raised above.
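To make the budget constraint S_i^2 * B_i * D_i = const referred to above concrete, here is a minimal sketch of a per-step configuration sampler. The candidate sizes, their probabilities, and the rescaling rule are illustrative assumptions, not the paper's tuned values.

```python
import random

def sample_step_config(sizes=(128, 160, 192, 224),
                       probs=(0.4, 0.3, 0.2, 0.1),
                       ref_size=224, ref_batch=256, duplicates=1):
    # Sample the image size for this step, then rescale the batch size so
    # that size^2 * batch * duplicates stays roughly constant. Keeping the
    # batch fixed and increasing the duplicates instead corresponds to the
    # regime that targets accuracy rather than speed.
    size = random.choices(sizes, weights=probs, k=1)[0]
    batch = max(1, int(ref_batch * (ref_size / size) ** 2 / duplicates))
    return size, batch

# Example: sample_step_config() may return (128, 784) for the 224/256 reference.
```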
<doc-sep>== Summary == The paper proposes to use different image resolutions during the training of a deep neural network image classifier, while varying the batch size or the number of data-augmented versions of the images, keeping the computational cost per step roughly constant. The authors apply this approach to several architectures and three datasets, and show that they can either reach the same accuracy as the baselines much faster, or achieve better results with the same computational budget. == Pros == + The authors conducted their experiments using three different datasets (Cifar10, Cifar100 and ImageNet), and six different architectures (ResNet44, ResNet50, WideResNet, DenseNet and EfficientNet). + The proposed approach outperforms the baselines consistently across the 8 pairs of (dataset, architecture) that they have studied. In addition, MixSize can be easily implemented (the authors also provide a PyTorch implementation). + The authors investigate different "tricks" to apply during training when using MixSize to stabilize training or achieve better results. For instance, they compared randomly sampling the size from a distribution (as they propose) versus increasing the image resolution as training progresses, and showed that random sampling yields slightly better results. + Figure 3a seems to suggest that MixSize yields more robust classifiers under a wide range of image sizes. The area under the "Mixed S=144" curve seems to be larger than the area under the "Fixed S=224" one. However, further experimentation is needed to confirm this, since the area of the "Mixed S=208" curve seems closer to "Fixed S=224", and in any case the maximum image size was capped around 415. == Cons == - One of the claimed contributions is: "Faster training vs. high accuracy. We show that reducing the average image size at training leads to a trade-off between the time required to train the model and its final accuracy." However, I would not consider this a novel contribution, since the trade-off between speed and accuracy is well known. In fact, the authors cite Huang et al. (2018) and Szegedy et al. (2016), which already showed this. EfficientNet is another well-known architecture that takes advantage of this fact. - In the intro, the authors claim "Touvron et al. (2019) demonstrated that networks trained on specific image resolutions perform poorly on other image sizes at evaluation time, as confirmed in Figure 1". This is inaccurate, since Touvron et al. (2019) actually show that slightly increasing the test resolution improves accuracy, due to the discrepancy in object sizes introduced by data augmentation (cropping). In fact, Figure 1 shows the same effect (a model trained with 224 resolution achieves its best results with a 284 evaluation image size). The statement in the introduction is again contradicted at the end of the first paragraph in Section 2.1. - The authors do not report any statistical significance metric. Some datasets have very close results, so it's hard to tell whether the improvements are (statistically) significant or not. - Poor captions in figures and tables. For instance, the difference between solid lines and dashed lines is only explained at the very last figure in the appendix (Figure 7). Also, the caption of Figure 3 reads "Test accuracy on validation set", which is ambiguous: Is it a typo, or is it that the authors report the results on the 50k validation set of ImageNet (and use some smaller subset of the training set as validation)?
== Reasons for score == Although the proposed approach is simple and consistently improves the baseline results, I'm not convinced that the originality and significance of the work are enough for it to be accepted. Regarding originality, there is a plethora of works exploring the trade-offs between image size and accuracy. The most similar works are Howard (2018) and Karras et al. (2017), which increase the image size through training. It's not clear that random sampling offers a much better result, judging from Figure 8 in Appendix E, if one compares the accuracy of the "Small->Large" strategy at 125k steps (possibly before the last increase in size). Regarding significance, if one restricts the analysis to the best architectures in each dataset, the increase in accuracy does not seem to be very large. Cifar10: 98.16% -> 98.32% (AmoebaNet), ImageNet: 76.32% -> 76.53% (EfficientNet-B0). Cifar100 shows a larger improvement, but the authors did not use AmoebaNet (which worked best in Cifar10) for some unknown reason. The fact that no statistical significance metrics are reported does not help to discern whether the improvements are meaningful or not.<doc-sep>This paper proposes to increase training costs to compensate for the reduced costs from multi-scale CNN training, by either increasing the batch size (and therefore lowering the number of iterations per epoch) or increasing the number of augmented versions (duplicates) of the same samples within a batch. The former allows for smaller total training costs than conventional single-scale training, while the latter maintains the total training costs but improves the final performance. Several training improvement methods are introduced to improve the multi-scale training. Paper's strengths - The paper is quite well-written. - Code and models are provided for reproducibility. - Gradient smoothing is a nice way to mitigate the variability of gradient statistics resulting from different input sizes. As far as I know, this is quite novel, particularly in the context of multi-scale training. Paper's weaknesses - Multi-scale training is a common practice in many computer vision tasks, especially in object detection** (less common in image classification). This paper also does multi-scale training but only introduces some minor improvements that are neither breakthroughs nor provide any interesting insight. > - Bag of Freebies for Training Object Detection Neural Networks. arXiv. > - YOLO9000: Better, Faster, Stronger. CVPR 2017. > - MMDetection: Open MMLab Detection Toolbox and Benchmark. arXiv. - For "step-wise size sampling", it seems that the decision to use this variant of sampling is heuristic and totally ignores existing practice in other computer vision tasks. One of the straightforward ways to do multi-scale training in object detection is to select different input sizes even for the images within the same batch (by padding zeros for the smaller images). Alternatively, one could sample different input sizes for different GPU batches (all images within a GPU share the same size) while doing multi-GPU training. - The three training improvements (step-wise size sampling, gradient smoothing and batch-norm calibration) are what separate this paper from prior work, but they are not extensively evaluated. Some of them are briefly evaluated in the appendix and some of their effects are just briefly mentioned in the method section. They ought to appear in the experimental section of the main paper.
Gradient smoothing seems like a nice idea but it is unclear how important it is given that there is only one figure (Fig.7) showing its impact on the performance. - This paper strives to increase the number of batch samples given a fixed budget of computational and time resources for per iteration step. I wonder why this should be limited to the cost of an iteration step but not the entire training cost from all epochs/iterations. It makes more sense to measure the cost for the entire training process which accurately tells how much is spent to train the model until convergence. - In Sec. 5.1, the size ratios for different datasets are carefully chosen based on cross-validation. This makes it hard to directly apply MixSize to other datasets or settings without going through this step. It also adds additional computations which defeat the purpose of increasing batch size to maintain the same training cost as conventional single-scale training. - Using separate BatchNorm statistics for multi-scale inputs/features has been explored in the following papers (published at least few months before ICLR deadline). They should be cited and compared against MixSize: > - Learning to Learn Parameterized Classification Networks for Scalable Input Images. ECCV 2020. > - Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks. CVPR 2018. Overall, this paper shows good performance and has some good ideas (e.g., gradient smoothing) for improving multi-scale training. But it fails to give more emphasis to or do a deeper dive into the potentially novel aspects of the work. The current performance improvements may come from doing just trivial multi-scale training which was already widely-explored in prior work.<doc-sep>The paper proposes the use of mixed image sizes during training. They argue empirically that such an approach improves generalization for both fixed image size (e.g. 224 in ImageNet) as well as for variable image size. The proposed training algorithm maintains the same computational budget at each step by either changing the batch size or by using more/less augmentation. They show that adjusting the batch size leads to a faster training but using augmentation leads to a better test accuracy (hence a tradeoff). However, in order for their proposed method to work, the authors also propose modifications to standard training procedures (i.e. smoothing the gradient, adjusting the batchnorm layers) but without carrying out an ablation study that shows the impact with/without each of these steps. My particular concern is in the use of "gradient smoothing". If I understand it correctly, this is not very different from using momentum, which is known to reduce the variance and improve generalization. However, the authors use gradient smoothing only in their proposed method and do not use it in the baseline method (why not?). It is possible that the reported improvements (e.g. for fixed image size) come solely from this step. The other concern is when sampling the image size per step. The authors propose distributions that seem odd in their experiments (e.g. why is p=0.6 for size 128 in ImageNet which is much larger than others, and why not uniform in CIFAR10). It is important to know if the results are sensitive to the choice of the distribution, to make sure that the benefit is not due to random chance. Also, if this distribution needs to be fine-tuned, then the discussion about improving the training time would be meaningless. 
The last issue is the robustness to different image sizes. Figure 3(a) shows that if the average image size during training is small, the network will perform better for small images but not for large images. Conversely, if the average image size during training is large, it will perform better for large images, but not for small images. If the concern here is around using mixed-image sizes at inference time, then the red curve in Figure 3(a) shows that a fixed image size is reasonably robust. If one knows in advance that the average image size would be smaller than 224, one can train with a fixed image size that is smaller. One minor last remark (feel free to ignore) regarding the motivation: the authors study the correlation between the full gradients for the same image with different sizes, on the one hand, and for different images with the same size, on the other hand. They conclude that the first case (different sizes) shows a stronger correlation, which is true according to Table 2, but this statement omits the fact that most correlations were low anyway. For example, for partially trained network, it is 0.08 vs. 0.02. I do not think that one can use such figures to conclude that "smaller image gradients can be used as an estimate to the full image gradients". The improvement in test accuracy is very promising but I believe, some ablation is needed to identify exactly where this improvement comes from and whether it can be obtained using simpler approach (e.g. smoothing the gradient alone or using augmentation alone, etc).
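Since this review compares the paper's gradient smoothing to momentum, a minimal sketch of one way such smoothing could be implemented (an exponential moving average over per-step gradients, which indeed resembles momentum), together with per-size BatchNorm calibration, may make the discussion concrete. Both snippets are assumed formulations for illustration, not necessarily the authors' exact procedures.

```python
import torch

@torch.no_grad()
def smooth_gradients(model, smoothed, beta=0.9):
    # EMA over per-step gradients to damp the variability introduced by
    # switching input sizes between steps; combined with plain SGD this
    # behaves much like momentum, which is the reviewer's point.
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        smoothed[name] = beta * smoothed.get(name, torch.zeros_like(p)) + (1 - beta) * p.grad
        p.grad.copy_(smoothed[name])

@torch.no_grad()
def calibrate_bn(model, loader_at_eval_size, num_batches=100):
    # Re-estimate BatchNorm running statistics at the evaluation resolution
    # before testing (per-size BN calibration).
    model.train()
    for i, (images, _) in enumerate(loader_at_eval_size):
        if i >= num_batches:
            break
        model(images)
    model.eval()
```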
This work proposes to train networks with mixed image sizes to allow for faster inference and also for robustness. The reviewers found the paper was well-written and appreciated that the code was available for reproducibility. However, the paper does not sufficiently compare to related methods. The authors should resubmit once the comparisons suggested by the reviewers have been added to the paper.
**Background for the paper** Semidefinite programs (SDPs) form a popular class of convex programs but suffer from large input size in most useful settings. To circumvent this issue of size, Burer and Monteiro proposed expressing the problem variable as $X=YY^{\\top},$ for some $Y\\in\\mathbb{R}^{n\\times p}$ and, instead of (SDP), solving the following proxy problem, which we refer to as (BM): \\begin{align*} \\min_{Y\\in\\mathbb{R}^{n\\times p}, YY^\\top \\in\\mathcal{C}}C\\bullet YY^{\\top}, \\end{align*} where $\\mathcal{C}$ is the constraint set for the original SDP with $m$ constraints. The benefit conferred by this reduction is that (BM) uses $O(np)$ memory, whereas (SDP) uses $O(n^2)$. (BM) clearly lies in a lower rank space than (SDP), and therefore it shouldn't necessarily be the case that solving (BM) solves (SDP). However, it turns out that when $p$ (in the size of $Y$ above) satisfies $p = O(\\sqrt{m})$, then the solution set of (BM) also contains that o (SDP), following the celebrated result of Barvinok and Pataki (independently obtained) that there exists a rank-$\\sqrt{m}$ solution to (SDP). In opposition to the memory saving, an immediate disadvantage of the formulation of BM is its non-convexity. Standard convex optimization techniques therefore fail to provide convergence guarantees. There has been a flurry of recent work (Boumal-Voroninski-Bandeira, Bhojanapalli-Boumal-Jain-Netrapalli, Cifuentes, and Cifuentes-Moitra) that provide polynomial-time guarantees for the BM formulations of various classes of SDPs, *in the smoothed analysis setting*. **What problem this paper studies** In the line of work mentioned above, ``smoothed analysis'' is central to the theorem statements. In other words, all the guarantees provided by those papers exclude a set of cost matrices. (The reason for this is that those papers prove their guarantees by applying a small perturbation to the cost matrices and showing that the probability of the algorithm ending up in a set of spurious critical points --- matrices that are (second-order) stationary points for BM but do not correspond to solutions to the original SDP --- is vanishingly small.) **The question posed by this paper is if such an exclusion is necessary**. The paper then provides an affirmative answer to this question by an explicit construction of such an SDP. It uses tools from Riemannian calculus to substantiate a point being a second-order stationary point while its corresponding point being suboptimal for the original SDP. Based on my current understanding of the paper's contribution, my preliminary review is rejection. This is because my understanding is that this problem has been solved by Bhojanapalli-Boumal-Jain-Netrapalli (http://proceedings.mlr.press/v75/bhojanapalli18a/bhojanapalli18a.pdf). See the paragraphs after Corollary 4 and also the formal statement in Theorem 5. Could the authors please explain to me how their result differs from this one? It's highly likely that I'm misunderstanding the contribution, in which case I want to learn what I'm missing (and I'll of course update my score accordingly!). I look forward to reading the authors' response. ---------------------------------------------------------------------------------------------------------------------------------------------------- After the rebuttal: it's clear that I am missing the key contribution here. I am going to try to figure this out but, for now, will change my confidence score since I am no longer confident of my original assessment. 
I hope I can better understand this work's contribution, but if that doesn't happen soon enough, I'll request the AC to ignore my input. Please see above.

<doc-sep>This work studies the Burer-Monteiro method, a nonconvex method for solving semidefinite programs (SDPs) that have only equality constraints and whose solution is an $n \times n$ PSD matrix of rank $p$. The solution to the Burer-Monteiro method is known to coincide with the original SDP when the rank is above the Barvinok-Pataki bound $p \gtrsim \sqrt{2n}$. Although the Burer-Monteiro problem is nonconvex, it is known to be solvable in polynomial time in a smoothed analysis setting when $p \gtrsim \sqrt{2n}$, and it is known to be solvable in polynomial time for worst-case instances when $p > n/2$. The main result of this paper constructs worst-case instances for the max-cut SDP for all ranks $\sqrt{2n} \lesssim p \leq n/2$ whose corresponding Burer-Monteiro problem has spurious local minima, and hence gradient-descent type approaches fail to solve this problem. This suggests that the use of beyond-worst-case analysis frameworks is necessary for proving the efficiency of the Burer-Monteiro method.

# Strengths
- *Significance*: The Burer-Monteiro method has attracted interest in recent years, and it has remained open whether or not a smoothed analysis setting was necessary for polynomial time algorithms above the Barvinok-Pataki bound.
- *Quality/originality*: The proof of existence of local minima relies on a notion of (strictly) pseudo-PSD matrices and a careful analysis of Riemannian gradient descent. In my view these ideas are interesting and the technical level is moderately high.
- *Clarity*: The paper is well-written and organized.

# Weaknesses
- I do not see any major weaknesses. I just have a quibble in the next bullet about how the contribution of this paper is phrased.
- *Clarity*: I find the phrase "The Burer-Monteiro method fails..." to be confusing. It's not clear what it means to "fail" since solving the Burer-Monteiro optimization problem solves the original SDP in a certain regime. The real problem is that the Burer-Monteiro solution may not be efficiently computable. This paper gives evidence of computational hardness by demonstrating the presence of spurious local minima in worst-case instances, but it does not strictly rule out some other clever polynomial time algorithm from solving the max-cut Burer-Monteiro problem.

# Minor comments
**1.** I think it would improve the clarity of this paper to first introduce the Burer-Monteiro method in general (i.e., for general SDPs with linear equality constraints and rank-$p$ solutions) and then specialize to the max-cut problem.
**2.** The preliminaries are introduced abruptly, and if the paper is read linearly, it is not clear initially why some of them are needed. Either (i) a high-level sketch of the proof of Theorem 1 before the preliminary section or (ii) a short sentence for each preliminary describing why it is needed (e.g., we use Riemannian gradient descent to prove existence of local minima) could help with this.
**3.** The notation $p \gtrsim \sqrt{2n}$ might be confused with $p = \Omega(\sqrt{2n})$, although the latter is not what is meant here.

**1.** The authors do not foresee negative societal impacts, and I agree with this assessment.

<doc-sep>This paper studies the landscape of the Burer-Monteiro optimization problem corresponding to the classical Goemans-Williamson SDP relaxation of Max-Cut. 
It was known that for generic edge weights, the Burer-Monteiro optimization over n x p matrices has no spurious second-order critical points once the rank constraint exceeds the Barvinok-Pataki bound, p(p+1)/2 >= n. However, it was unknown whether there exist particular edge weight matrices for which spurious second-order critical points or local maximizers can exist. This work provides an explicit construction of a class of (signed) edge weight matrices for which the BM-optimization of size 2p x p has a spurious local maximizer. By zero-padding, this construction extends also to the BM-optimization of size n x p for any n >= 2p. The construction is to posit a specific matrix Y = [I; -I] in R^{2p x p} as the candidate for the spurious local maximizer, and to then characterize those edge weight matrices for which this Y is a first-order critical point, second-order critical point, and spurious local maximizer, respectively. The characterizations of first-order and second-order criticality are simpler and based upon explicit forms of the Riemannian gradient and Hessian. However, as the Hessian is rank-deficient for p = n/2 and there is an entire sub-manifold of local optimizers containing Y, a more involved geometric argument, which explicitly shows convergence of Riemannian gradient descent to this submanifold, is used to argue that Y is indeed a local optimizer. Experiments are provided in the appendix that explore the basin of attraction of this local optimizer. I think this is an interesting paper, and would be supportive of its publication in NeurIPS. The result is a bit limited in scope, in that it studies only the specific Goemans-Williamson SDP, and constructs only a specific type of local optimum for BM-optimization of this SDP. However, I find the identification of this particular type of local optimizer Y (the axial position matrix) to be insightful, and it is nice that the construction works all the way to the sharp threshold p = n/2. The analysis to show that it is a local optimizer, rather than just a second-order critical point, is also non-trivial due to the rank degeneracy of the Hessian, and I think it develops an interesting proof idea. Overall, I think the paper is insightful, well-written, and resolves a question whose answer was previously unknown in this literature. Yes

<doc-sep>Large SDPs are costly to solve via interior point methods. Therefore, there is great interest in techniques that are much faster and achieve nearly optimal results. Burer-Monteiro (BM) is one such method: it seeks an SDP solution -- an $n \times n$ matrix $X$ -- which has rank $k$, and parameterizes $X=V^TV$, where $V$ is $k \times n$. $V$ is then optimized, and the SDP constraints then become restrictions on the columns of $V$. The present paper studies the BM method for the well-known SDP relaxation of MaxCut. Specifically, the main question is whether local maxima of BM correspond to global maxima of the SDP. The focus is on the regime $k(k+1)/2>n$. In this regime, there exist rank-$k$ solutions that are optimal for the (full rank) SDP. However, standard BM is only guaranteed to find local maxima. The question then becomes: is it always the case that a local maximum for BM is a global maximum also (and thus a global optimum for the SDP)? Quite simply, this paper shows that the answer is *no* in general, for a large range of $k$ up to $n/2$. 
This is shown by an explicit construction, and nicely complements a result by Boumal, Voroninski and Bandeira that showed that the answer is *yes* for "generic" instances of the MaxCut SDP. The present paper also shows that these "bad" local maxima are stable for gradient ascent. The main strength of the present paper is that it in some sense finishes the "landscape analysis" of BM for the MaxCut SDP. This is an interesting result that has strong connections with important recent work. The construction in the proof is nice and original. The analysis of how the objective function behaves around the "bad local minimum" is also interesting, and uses the machinery of Riemannian optimization as well as probabilistic arguments. The main weakness is that in some sense we expect local optimization methods to do quite well, as second-order local minima of Burer-Monteiro are nearly optimal for the SDP. This is shown in: Mei et al. "Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality," COLT 2017. Indeed, a rank-k local maximum achieves the SDP maximum up to a (1-O(1/k)) error, whenever the SDP instance comes from a weighted graph (with nonnegative weights on the edges). For $k \geq \sqrt{2n}$ (as considered here) this means that the rank-k solutions are close to the full rank ones. Since MaxCut involves a rounding with an approximation factor anyway, one could argue that the (small) extra error from Mei et al. is not very substantial. Still, to be clear, I think this is a strong paper that in a way finishes an important line of research on how "benign" BM is as an SDP solution method. See the weakness above. My only comment is that the contribution by Mei et al. should be discussed.
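To make the object under discussion concrete, the following is a minimal sketch of the Burer-Monteiro factorization of the Max-Cut SDP, with each column of the factor constrained to the unit sphere and a projected gradient ascent step on the factored objective. This is my own generic illustration, not code from the paper; the step size, iteration count, and random initialization are arbitrary placeholder choices.

```python
import numpy as np

def burer_monteiro_maxcut(A, k, steps=500, lr=1e-2, seed=0):
    """Projected gradient ascent on the Burer-Monteiro factorization of the
    Max-Cut SDP:  max <A, V^T V>  s.t. each column v_i of V has unit norm.
    A is the symmetric weight matrix; V is k x n, so X = V^T V has rank <= k."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    V = rng.standard_normal((k, n))
    V /= np.linalg.norm(V, axis=0, keepdims=True)    # project columns onto the sphere
    for _ in range(steps):
        G = 2.0 * V @ A                               # gradient of <A, V^T V> w.r.t. V
        V = V + lr * G                                # ascent step
        V /= np.linalg.norm(V, axis=0, keepdims=True)
    return V, np.sum(A * (V.T @ V))                   # factor and attained objective

# Such local ascent only finds local optima of this nonconvex problem; the point of the
# paper, as summarized above, is that for some weight matrices these local optima can be
# spurious even above the Barvinok-Pataki bound.
```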
The Burer-Monteiro method is widely used for solving large-scale semidefinite programs. It works by replacing an $n \times n$ positive semidefinite matrix $X$ with $Y Y^T$, where $Y$ is $n \times p$. This has the benefit that it is more space efficient to store $Y$ than $X$, but it transforms a convex optimization problem into a nonconvex one. Above the Barvinok-Pataki bound (an analogue of the notion of a basic feasible solution for semidefinite rather than linear programming) we are at least guaranteed that there is a low-rank optimal solution. But does the nonconvex problem have spurious critical points? Recent works have studied the critical points under a smoothed analysis model, and shown that the Burer-Monteiro method works almost down to the Barvinok-Pataki bound. The main contribution of this paper is to complete the analysis of the landscape, by showing that without smoothing, even for the MAX-CUT SDP, there are spurious critical points even for $p = n/2$. One reviewer had doubts about the relationship to the work of Bhojanapalli-Boumal-Jain-Netrapalli, but I found the author reply to be convincing that the setting and techniques are fundamentally different. The other reviewers were uniformly positive. This is a nice contribution to the literature on the Burer-Monteiro method. As a comment to the authors, I would suggest elaborating on the connection to the work of Mei et al. I agree that showing the global and local optima are close in objective value can be somewhat orthogonal to showing that the SDP recovers e.g. some underlying clustering in community detection. This provides further justification for why it is important to understand the loss landscape, and not just bound the suboptimality of any locally optimal solution. Indeed, from what I remember of the Mei et al. paper, the locally optimal solutions do not achieve non-trivial performance for the associated community detection problem, so I think investigating this further and explaining it would be helpful, since these are subtle distinctions.
This paper proposes an adaptive unsupervised domain adaptation method based on a variational particle filter Bayesian posterior on an encoder, which is then used to form predictions on a non-stationary (time-changing) task. The particle filter is run with respect to a continuous-time differential equation, and some theoretical results are proposed to associate the particle weights with the parameter-space transition distribution, used to form one of the terms of the training objective. Some comparative experimental results are presented.

Pros:
- this paper attempts to address many difficult questions, including an efficient prediction of the Bayesian parameter posterior, doing it in a non-stationary environment (albeit assuming it is stationary, which is weird), and doing it with neural networks and solving differential equations to capture the trajectory of the posterior
- comparative results (although not against all the relevant baselines) seem impressive

Cons:
- The overall system seems very complicated and I cannot say that I was able to get an overall satisfying grasp of it
- I have some concerns about the proof of Theorem 1.
- I have some concerns that the posterior is a unimodal Gaussian over parameters
- in terms of related work and baselines, it seems that this paper has a blind spot w.r.t. the family of conditional neural processes architectures (which also propose an efficient way to directly generate samples from the parameter posterior, conditioned on the past training data).
- there is a bit of a contradiction between the UDA setup where new tasks can appear at any time and the assumption of stationarity of p(theta(t)) assumed for Thm 1.
- a much more transparent analysis of different computational costs is needed
- computational costs are not analyzed
- results do not compare against the family of neural processes (which have similar properties)

<doc-sep>The paper proposes an approach to unsupervised domain adaptation by formulating predictive modeling as a continuous-time Bayesian filtering problem. The work introduces extrapolative continuous-time Bayesian neural networks with a particle filtering differential equation at their core that rely only on historical data for inference and model the change of importance weights. The proposed model is trained in a framework that combines the min-max optimization of unsupervised domain adaptation and the ELBO of a VAE. To apply the model to non-stationary streaming data, an additional temporal domain-invariant loss term was used to encourage the model to generalize to unseen data. Experimental results show strong performance of the proposed ECBNN against other unsupervised domain adaptation models.

Strength: Treating the time-evolving NN parameters as a continuous-time stochastic dynamic system and extending Bayesian neural networks to continuous-time domains is a novel approach to unsupervised domain adaptation. I also appreciate the proposal of PFDE. To the reviewer's best knowledge, it is an original contribution with strong motivation for efficient multi-step-ahead inference. The experiment settings considered by the paper are comprehensive and diverse.

Weakness: A minor concern I have about the paper is the organization of the method section. The traditional particle filtering method introduced in the background section is not part of the original contribution of the work and may be better incorporated in the related works section. The authors address the limitations of the work but do not discuss negative societal impact. 
But I think it might not be required for this work, which is primarily a methodology paper.

<doc-sep>The paper describes a low-latency unsupervised domain adaptation (UDA) framework for non-stationary streaming data. To this end, the method includes a Bayesian neural network, which produces a temporally evolving parameterization of the feature extractor, acting as a meta network with temporally changing output. It furthermore includes a discriminator, allowing adversarial domain adaptation training. Thirdly, a predictive module is fed by the invariant representation. The Bayesian inference is tackled via Variational Inference relying on a continuous-time particle filtering idea.

Domain adaptation on non-stationary streams in real time is unquestionably a problem with both theoretical and practical significance. The described framework is clearly novel and the description of the method is clear. Despite the clarity of the description, the paper is quite dense, as the method is quite complex. It takes 5 or 6 equations to describe its loss function alone. This level of complexity requires further justification. Ablation studies are entirely missing from the paper. The question inevitably arises for the reader: is this complexity really necessary? It is not entirely clear why a Bayesian approach is necessary. It is easy to imagine that it helps; it is just far from trivial that it does. What are the weak points of an evolving point estimate approach versus an evolving Bayesian approach for estimating the encoder parameters?

The problem setup could be described with a bit more clarity, as it is not always entirely clear what the main difference between the source and target domain is. In the growing circles dataset for example, Fig 2A shows that the source corresponds to a green wedge in the input space, and the target is the other part. Fig 2B shows that the Source and the Target are temporally separated (0-39th and 50-89th frames, respectively). Is it both? L210-212 describe an unusual solution: the prior is used more like a target, where the KL term in the variational lower bound is used at training time to force the estimated parameters to follow this target. This use seems at odds with the meaning of a prior in the Bayesian framework. The authors describe that their method is only capable of working with continuously evolving systems at the moment, and they mention handling discrete dynamics as future work. It is not entirely clear how large datasets can be handled. The method is quite complex. The evaluation-time computational complexity seems quite favorable (Table 2). The training-time complexity would be interesting as well.
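For readers who want to anchor the particle-filtering machinery these reviews keep referring to, the following is a minimal bootstrap particle filter sketch of my own: a generic discrete-time predict/weight/resample loop over parameter particles, not the paper's continuous-time PFDE. The random-walk transition and the likelihood interface are placeholder assumptions.

```python
import numpy as np

def bootstrap_particle_filter(observations, log_likelihood, n_particles=256,
                              trans_std=0.05, seed=0):
    """Generic bootstrap particle filter over a 1-D latent parameter theta(t).
    log_likelihood(theta, y) scores observation y under parameter theta."""
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)          # initial particle cloud
    for y in observations:
        # 1) Predict: propagate particles with a simple random-walk transition.
        particles = particles + trans_std * rng.standard_normal(n_particles)
        # 2) Weight: reweight particles by how well they explain the observation.
        log_w = np.array([log_likelihood(th, y) for th in particles])
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # 3) Resample: draw a new equally-weighted cloud to avoid degeneracy.
        particles = rng.choice(particles, size=n_particles, replace=True, p=w)
    return particles   # approximate posterior samples of theta given the observations
```

As described in the reviews, the paper's contribution is to replace this discrete-time recursion with a differential equation governing the importance weights, so that multi-step-ahead inference does not require re-running the filter; the sketch above is only meant to fix ideas.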
The paper combines the use of a particle filtering differential equation (newly proposed in this paper) for sampling posterior parameters of a Bayesian neural network with unsupervised domain adaptation, and achieves strong results on the tasks demonstrated. The reviewers found the method novel and the problem important. Several questions were raised about the motivation for using Bayesian neural networks, about the utility of combining several of the pieces of the loss function proposed, and about the computational cost, but the authors provided satisfactory answers to the questions and provided ablations showing the utility of the different components of the loss function. One downside pointed out was that the method is quite dense and involves several parts that are intertwined. I hope the authors will revise their submission, taking into account the points raised by the reviewers.
The paper analyzes whether low- and mid-level feature information is clearly represented in early CNN layers, but not as clearly represented in later CNN layers, a property that is present in early and late vision processing systems in mice. In particular, the low- and mid-level features of luminosity, contrast, orientation, and corners are investigated. Additionally, the paper shows that the intrinsic dimension of representations is relatively low in early CNN layers, but much higher in later CNN layers. Overall, the paper is reasonably written and effectively conveys what the authors did. However, the originality and significance of the paper are, in my opinion, not very high. While specifically looking at luminosity, contrast, orientation, and corner information is nice, the idea that early feature information is prominent in early CNN layers and is lost in later layers is not new. Nor is the idea that the complexity of layer representations increases from early to late CNN layers. One extremely relevant paper on the topic is Güçlü and van Gerven (2015), which showed that information present in early human visual areas is prominent in early CNN layers, but not in later CNN layers. That paper also showed that the Kolmogorov complexity of layer representations increases with depth. The analysis of mutual information between layer representations and labels is not new, either [35]. The comparison of CNNs to mouse data is, also, not new (Cadena et al., 2019). Cadena et al. (2019) found that CNNs were a poor model of the visual system of mice. If the submitted paper could address, and present evidence for, why their results contradict the findings of Cadena et al. (2019), it would strengthen the paper. The authors should add a discussion on the limitations of their work to the Conclusions and Discussion section. One limitation that potentially could be discussed is that high level statistical properties can be shared by systems that process information differently. <doc-sep>In this work, the authors have studied the similarity in the emergence of certain statistical patterns across neural activity recorded from the rat ventral visual stream and a (trained / randomly initialized) deep convolutional network optimized to perform large-scale object recognition on ImageNet. The authors present 3 interesting observations: 1) similar trend of intrinsic dimensionality of object representations across the rat ventral stream and pretrained deep convolutional networks 2) observed the pruning of low-level and mid-level information which followed their distillation in early layers of the rat ventral visual stream and VGG-16, these effects being a result of training and not merely a byproduct of the hierarchical processing architecture, 3) object categorization decoding accuracy closely tracked the increase of object-specific information across the VGG-16 hierarchy. Strengths: - There has been an explosion of recent studies exploring the similarity between biological and machine visual perception via representation similarity analysis and direct comparison of neural representations on a common stimulus set. This work on the other hand presents a less commonly studied yet important complementary direction of observing commonly emergent statistical trends in the representations of biological and artificial neural networks and highlights the change in these trends as a function of training. 
- While the current study is restricted to comparing object recognition trends of deep networks with the rat ventral stream, the general methodology presented can be extended to primate analyses once the technology to record primate data across the ventral visual stream for a considerably large set of images is available.
- The authors provide adequate mathematical background in the Methods section to clearly follow the key results presented in the subsequent sections in comparing the statistical trends between rat ventral stream and VGG-16 representations. The transparent disclosure of which neurons were discarded from the analyses and the criteria for discarding them (poor selectivity to the stimulus features being analyzed) in Section 2.3 is appreciated.
- I found the paper to be very well-written and organized; the clearly labeled plots (for the most part) along with detailed captions describing the key results are very helpful.

Weaknesses
- While the current analyses are interesting and sufficient to convey the key observations made by the authors, a high-level comment would be to relate the observed statistical trends with those obtained by the RSA studies measuring brain-CNN similarity. For example, it appears that a related finding is shown in https://arxiv.org/abs/1807.00053 in Fig 5, wherein there exists a correspondence between the decoding accuracy of successive layers of a pre-trained deep neural network and successive areas of the primate ventral visual stream.
- Nit: Some of the plots may be hard to read due to the minor difference in the color or linestyle of the various markers. E.g. the authors should please check if Fig 4's coloring scheme is accessible to color-blind readers. It was difficult for me to initially identify the solid vs dashed lines in Fig. 5 as they look quite similar here. Similar to Fig. 3, I would add plot titles "orientation" and "contrast" to Fig 4.A and 4.B for improved readability.
- The authors could potentially discuss the limitations of the current work in more detail and highlight areas of improvement/extension that could be useful to inspire future work.
- As mentioned in the weaknesses, I would like the authors to please discuss limitations and provide directions for extending the current work.

<doc-sep>The paper compares internal representations of what could be called the rat's ventral visual stream and the VGG-16 network trained on ImageNet. The authors find that the intrinsic dimensionality (ID) of the rat ventral stream and VGG-16 follow a similar pattern, with ID first increasing and then decreasing again. In addition, they estimate the mutual information between some low-level image features and internal representations and also find similar trends.

### Strengths
+ Understanding similarities and differences of real and artificial neural networks is an important topic
+ Paper is clearly written and easy to follow
+ Finding that ID follows a similar pattern in brain & ANN might potentially be interesting

### Weaknesses
- The analysis is way too restrictive, considering only one (fairly old) CNN architecture
- Increasing MI values in Fig. 3–5 are implausible (-> data processing inequality)
- Unclear how the analysis advances either ML or neuroscience

Yes
These results are related to the processing of information along each system. To investigate which information is represented in each layer of each system, the paper considers a number of metrics computed on the intermediate representations:
- intrinsic dimensionality
- mutual information with an image metric computed on the input (average luminosity, contrast, orientation, couple of orientations)
- decoding accuracy

By considering the trend of these metrics over the layers of each system, the paper demonstrates a parallel between the two systems. The results also describe the difference in representations between a pretrained CNN and a network with random weights.

**Clarity** The paper is very clearly written in all aspects: exposition of the problem, relation with the literature, proposed method, results, and discussion. It is well structured, and the figures are of top quality. The results seem sufficiently described to be reproducible. Additionally, sharing the code to reproduce all the figures is appreciated (not tested).

**Originality** The investigation is relatively original. A large number of papers have investigated mappings between artificial and biological neural networks, particularly CNNs and the visual cortex (in humans, monkeys, rats, and more), using methods such as encoding models or representation similarity analysis. This paper follows this line of work, but it is original in that it investigates a number of manually computed metrics. These metrics have a somewhat limited complexity, but they can be computed over different layers to investigate explicitly how the information is processed in each system.

**Quality** The experiments are sound, seem well executed, and the results are reasonable. The paper is structured around the replication of figure 1 (ABCD) in the other system:
- figure 1A vs figure 2. The hunchback profile in the visual cortex is not as neat as in the CNN, but the results are somewhat consistent. The consistency assumes we ignore (roughly) layers 9-16.
- figure 1B vs figure 3A/B. The decreasing trends are consistent, but only if we ignore layers 1-3.
- figure 1D vs figure 3C/D. Here the trends are not as consistent as stated. In the visual cortex, the orientation representation decreases with depth, and the corner representation increases with depth. In the CNN, the orientation trend is only decreasing if we ignore layers 3-5, and the corner trend is only increasing if we ignore layers 5-16.
- figure 1C vs figure 5. The normalized mutual information mostly increases in the last layers (say 7-16).

Overall, the trend consistency between the visual cortex and the CNN is a bit overstated. The trends are only consistent if we consider a subset of layers, and this subset is not the same depending on the representation considered.

**Significance** The results are not particularly surprising. A lot of studies have already shown a parallel between CNNs and the visual cortex in mammals. This new study adds to this parallel, describing it with an original approach. The pretrained-vs-random results are all unsurprising, but nicely quantified in the different experiments.

4. The authors briefly discuss the fact that the dataset considered is not capturing the entire visual hierarchy, thus limiting the parallel with the CNN architecture.
5. The authors have not discussed the choice of VGG-16 compared to other existing CNN architectures.
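Since intrinsic dimensionality is the central quantity in these reviews, here is a minimal sketch of the TwoNN estimator (Facco et al., 2017), which is commonly applied to layer activations. This is a generic illustration of ID estimation, not necessarily the exact estimator used in the paper; the synthetic example at the end is only there to show the expected behavior.

```python
import numpy as np
from scipy.spatial.distance import cdist

def twonn_intrinsic_dimension(X):
    """TwoNN estimator: uses the ratio of distances to the two nearest neighbors.
    X is an (n_samples, n_features) array of layer activations."""
    D = cdist(X, X)                      # pairwise Euclidean distances
    np.fill_diagonal(D, np.inf)          # exclude self-distances
    D.sort(axis=1)
    mu = D[:, 1] / D[:, 0]               # ratio of 2nd to 1st nearest-neighbor distance
    # Under the TwoNN model mu follows a Pareto(d) law, so the MLE of d is:
    return len(mu) / np.sum(np.log(mu))

# Example: points near a 5-D linear manifold embedded in a 100-D ambient space.
rng = np.random.default_rng(0)
latent = rng.standard_normal((2000, 5))
X = latent @ rng.standard_normal((5, 100))
print(twonn_intrinsic_dimension(X))      # close to 5 despite the 100-D embedding
```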
This paper examines the intrinsic dimensionality (ID) of representations in the rat brain and CNNs. The authors show that the rat brain, like the CNNs studied, has distinct expansion-contraction phases, and that in the CNNs one also observes a distillation and pruning of low- and mid-level information, similar to what is seen in the brain. The authors also show that in the CNNs high-level object information only emerges after these steps of distillation and pruning. This illustrates potentially interesting parallels between information processing in the real brain and CNNs. The reviewers were split on this paper. The initial reviews identified issues of novelty and insight. But, after the author responses, three out of four agreed that the work was interesting and technically sound, and they found the paper well-written. One reviewer was still concerned that the paper does not do enough to show that these similarities in ID and information retention tell us anything meaningful. They were also concerned that comparisons were not made across enough architectures. Nonetheless, given the balance in the reviews and post-response scores, an accept decision was reached.
The authors present a fully differentiable yet modular pipeline for autonomous vehicle perception and control. It preserves the traditional abstractions such as perception, prediction, planning, and control but implements differentiable versions of these components so that gradients can pass all the way from task performance back to perception. They demonstrate the value of this on a trajectory prediction task.

Strengths:
- The idea is well-motivated and the approach seems well executed. The approach preserves some of the important interpretability characteristics of traditional methods while still enabling end-to-end learning.
- Using task performance to learn perceptual models is a promising area of research, and this paper showcases a possible method for doing so in the case of autonomous vehicles.

Weaknesses / Questions:
- The authors should highlight to what degree the differentiable components are novel. To my understanding they have mostly taken existing components and assembled them together, so the contribution should be mostly considered at the system level. If this is not the case the authors should clarify.
- The cost function in (3) contains a lot of weights. How sensitive is the algorithm to the selection of these weights? Similar question for $\alpha_{1:3}$ in the combined loss function.
- How is the discretization of $s$ chosen for the planner (i.e., what is the dimension of the Categorical distribution and why)?

<doc-sep>Rather than building an AV stack in a modular fashion, this work introduces a differentiable and modular stack for prediction, planning and control. This enables optimization of upstream components such as prediction via backprop through planning and control.

Strengths:
- The method is still interpretable in the sense that it is composed of modules with specific purposes.
- The method is able to be trained end-to-end due to being fully differentiable.
- Comparison to a non-differentiable but modular method is made clearly.

Weakness:
- no real-world or even simulator results.
- not differentiable with respect to all possible parameters
- Comparison to an end-to-end neural network is not made.

The authors note that not all parameters are differentiable. How does this system compare to an end-to-end neural network that doesn't attempt to provide interpretability? How much performance is being sacrificed for interpretability? Can the authors provide quantitative or qualitative examples of how this interpretability is useful?

<doc-sep>This paper presents an end-to-end differentiable stack for autonomous driving. The prediction module is a neural network, and the planning and control modules are hand-designed algorithms. Notably, the hand-designed algorithms are differentiable, which allows training of the upstream prediction module for a downstream control objective by backpropagating gradients through these hand-designed algorithms. The authors make planning differentiable by replacing the argmin operation with sampling from a categorical distribution, and the authors make use of an off-the-shelf differentiable MPC algorithm to make control differentiable. The authors show open-loop, offline results which indicate that this kind of planning-aware training of the prediction module improves performance.

Strengths
- A step towards end-to-end training of prediction modules, while retaining the interpretability / stability of hand-designed control and planning algorithms.
- The paper is well-written and easy to parse. 
Weaknesses
- Only offline / open-loop evaluation, so it is difficult to predict what real-world performance will be like.
- I found the metrics to be somewhat hard to interpret, but this could be because I'm not a researcher in the AV space.

<doc-sep>The authors tackle the task of offering a full autonomous driving software stack. In contrast to classical approaches that focus on modular software consisting of perception, planning and control, the authors present software components that are fully differentiable. The authors focus on prediction, planning and control modules that are made differentiable, train them in a simulation environment (nuScenes) and then show the results in comparison to baselines. The authors display that this approach - in contrast to prior end-to-end methods - gives more modularity and interpretability.

Strengths:
- The paper is well-written and good to read; the technical explanations are exceptional
- The paper tackles a very hot topic in the field, especially for AVs, where we are currently seeing bottlenecks in hand-engineered algorithms
- Honest opinion about the limitations of this approach; it is not a fully differentiable stack
- The results indicate that "there is value in moving from purely prediction-oriented evaluation metrics towards downstream task-oriented metrics" -> This was new to me and is a significant result, especially with all the work in prediction algorithms right now

Weaknesses:
- No hardware experiments were provided
- No training results were provided. Although it is stated that the nuScenes dataset is used, there is nothing explained about training quality and training time
- No comparison to a modular software stack (classical pipeline) or a complete end-to-end pipeline. I do not understand the comparison with the "baseline" (We compare DiffStack to the same prediction-planning-control stack, but with modules trained for different objectives that capture a varying degree of planning-awareness) ??
- The authors claim with the title that it is a full stack, but in the limitations they show that this approach is only performed in open-loop training and evaluation
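One review above notes that the planner is made differentiable by replacing the argmin over candidate trajectories with a categorical distribution. A minimal way to see why this helps is the softmax relaxation below. This is my own generic sketch in NumPy, not the paper's implementation; in practice this would run under an autodiff framework so that gradients flow from the planner cost back into the prediction network, and the temperature is a placeholder.

```python
import numpy as np

def soft_plan(candidate_trajs, costs, temperature=1.0):
    """Differentiable relaxation of 'pick the lowest-cost trajectory'.
    candidate_trajs: (K, T, 2) array of K candidate x-y trajectories.
    costs: (K,) planner costs that depend on upstream predictions."""
    # Hard selection: argmin is piecewise constant, so gradients w.r.t. costs vanish.
    hard_choice = candidate_trajs[np.argmin(costs)]
    # Soft selection: a categorical distribution over candidates whose probabilities
    # vary smoothly with the costs, so downstream losses can be backpropagated
    # through the plan into whatever produced the costs.
    logits = -costs / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    soft_choice = np.tensordot(probs, candidate_trajs, axes=1)   # expected trajectory
    return hard_choice, soft_choice, probs
```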
This paper proposes a modular but fully differentiable stack for self-driving cars. The differentiability enables the gradient to back-propagate from task performance all the way to the prediction module. It demonstrates better prediction accuracy than the traditional modular approach while preserving interpretability and reusability. All the reviewers agree that the paper is well motivated, written and executed. The paper tackles an important problem and contains ideas that will potentially have a major impact in the field of autonomous vehicles. There are also a few areas for improvement brought up by the reviewers: 1) Validation with a closed-loop simulation or real-world experiments would significantly improve the quality and the potential impact of this paper. 2) Comparison with a classical modular approach and/or a non-interpretable end-to-end system may reveal more benefits or limitations of this work. Such comparisons are worth adding and discussing. The authors' response and the additional experiments have sufficiently addressed the main concerns in the original reviews. Thus, we would like to recommend accepting this paper.
This paper provides an NTK analysis for a class of residual networks with a mixture of activation functions at different layers. The authors derive bounds on the minimum eigenvalue for both finite and infinite width settings. They also give a generalization bound for this class of network architecture. The main contribution of this work, in my opinion, is a non-trivial derivation of lower and upper bounds on the minimum eigenvalue for residual networks with a mixture of activation functions. However, the analysis and proof techniques are mainly taken from the well-established NTK literature, such as Huang et al., 2020, Oymak and Soltanolkotabi, 2020, etc. Similar generalization bounds are established by Cao and Gu, 2019, and Arora et al., 2019a. There is an interesting connection to NAS that the authors observe empirically, where NAS always picks up ReLU and/or Leaky ReLU in the optimal search. That coincides with the fact that ReLU and Leaky ReLU imply a larger minimum eigenvalue in the NTK sense. However, deep neural networks in practice do not generally operate in the NTK regime. In short, I see this work purely as an NTK analysis rather than a "unifying framework" for NAS. I don't see how the work studies "optimization and generalization of NAS" or how the derived results guide NAS. In terms of the presentation, the paper is well-written and easy to follow. One thing I would suggest is to instantiate Theorem 1 with a few special cases, such as the standard 2-layer network with ReLU activation. Illustrating the Hermite coefficients of such cases would help the readers to compare with existing results, e.g., Oymak and Soltanolkotabi, 2020.

N/A

<doc-sep>This paper extends the Neural Tangent Kernel limit to different (and mixed) activation functions, as well as a varying proportion of skip-connections. The paper is then able to bound the minimum eigenvalue of the neural tangent kernel matrix, and to link this value to the generalization capability of the model. This makes it possible to guide Neural Architecture Search without training. The authors validate their method numerically by first showing that the derived bounds match empirical results, and that using their bounds to guide NAS improves NAS performance.

# Originality
While I am not very familiar with the field, this is to my knowledge the first time that an NTK approximation has been derived for various and mixed activation functions.

# Quality
The paper's contributions seem sound, though the empirical validation could be more convincing: how well does the criterion rank different architectures?

# Clarity
The paper is clearly written. The title and the abstract, while quite clear, appear to be a little misleading: the paper does not so much help in understanding NAS (for instance, NAS convergence) as provide a criterion which can help guide NAS.

# Significance
While the empirical results don't show a huge improvement, being able to guide NAS across different activation functions and skip-connection proportions seems useful. The result would be much more significant if it applied to CNNs and Transformers, but this is interesting progress. I wish the authors had tested their approach on more challenging tasks, and with more competitive activation functions.

The authors clearly state that a big limitation of their work is that it only works for FC networks, and not the more common CNNs or Transformers. 
<doc-sep>This paper provides the theoretical analysis of a fully-connected neural network with mixed activation functions in both infinite and finite width schemes. Furthermore, the authors provide analysis on both lower and upper bounds of minimum eigenvalues of NTK and generalization error bounds in SGD. Finally, the authors provide empirical experiments to support their theoretical claims on NAS problems. Strengths 1. Building upon Cao et al. and Nguyen et al.'s proof framework, the author provides the theoretical analysis of the minimal eigenvalue of NTK and generalization error on mixed activation neural networks. Weaknesses 1. A considerable gap exists between theoretical analysis and the actual NAS problem. Their theoretical results only provide the minimum eigenvalue of NTK and generalization error of a fully-connected neural network with mixed activation functions. I will elaborate on this in more detail in the Question section. 2. NAS-Bench-201 results in Table 2 are not compelling enough. Latest methods such as $\\beta$-DARTS [3] or DrNAS [4] find the optimal architecture in NAS-Bench-201. 3. General NAS problems contain a larger degree of freedom in choosing the components of a neural network, not only just activation functions and skip connections. Similar to the first point, there is a considerable discrepancy between the analysis and practice. The author provides a thorough theoretical analysis of the minimum eigenvalue on NTK and generalization bounds on a neural network with mixed activation functions. However, the analysis is only for a particular family of neural networks (with mixed activation functions and selective skip-connections). Furthermore, the neural network analysis is on the final architecture, assuming that existing NAS algorithms, such as DARTS, random search WS, or Eigen-NAS, can find the optimal architecture from their search phase. Overall, I see the authors' theoretical analysis is not closely correlated to understanding the actual NAS problem. Furthermore, the empirical performance is far lower than existing state-of-the-art in NAS literature. Therefore, I give a reject. 1. Cao, Yuan, and Quanquan Gu. "Generalization bounds of stochastic gradient descent for wide and deep neural networks." Advances in neural information processing systems 32 (2019). 2. Nguyen, Quynh, Marco Mondelli, and Guido F. Montufar. "Tight bounds on the smallest eigenvalue of the neural tangent kernel for deep relu networks." International Conference on Machine Learning. PMLR, 2021. 3. Ye, Peng, et al. "b-DARTS: Beta-Decay Regularization for Differentiable Architecture Search." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. 4. Chen, Xiangning, et al. "Drnas: Dirichlet neural architecture search." arXiv preprint arXiv:2006.10355 (2020). 5. Liang, Hanwen, et al. "Darts+: Improved differentiable architecture search with early stopping." arXiv preprint arXiv:1909.06035 (2019). 6. Dong, Xuanyi, and Yi Yang. "Nas-bench-201: Extending the scope of reproducible neural architecture search." arXiv preprint arXiv:2001.00326 (2020). 7. Yang, Antoine, Pedro M. Esperança, and Fabio M. Carlucci. "NAS evaluation is frustratingly hard." arXiv preprint arXiv:1912.12522 (2019). <doc-sep>This paper presents theoretical analyses on Neural Architecture Search using the recent theory of NTK. 
In particular, it provides lower and upper bounds on the minimum eigenvalue of the NTK matrix, as well as the generalization bound induced by the NTK matrix, for the architectures in their pre-defined search space. Finally, based on these theoretical analyses, this paper develops a training-free NAS algorithm, namely Eigen-NAS. Eigen-NAS shows competitive performance in training-free NAS, which in turn also supports the theoretical analyses in this paper.

*Strengths*
1. This paper provides non-trivial theoretical analyses of NAS using the recent theory of NTK, which may inspire more theoretical studies in the NAS area.
2. Eigen-NAS, inspired by their theoretical results, shows competitive performance on different NAS benchmarks.

*Weaknesses*
1. This paper does not really study the convergence or optimization of NAS as claimed in its title and abstract. Instead, it mainly focuses on the generalization property of its restricted search space (i.e., a small search space compared with the standard NAS search space).
2. The motivation for why the minimum eigenvalue of NTK needs to be studied, and how it is explicitly related to the generalization bound (i.e., Theorem 3), are not well clarified. As a result, the study of the minimum eigenvalue of NTK in this paper may be less relevant. Using the minimum eigenvalue of NTK to bound the generalization performance of the DNN in Theorem 3 would make the theoretical results in this paper more coherent.
3. This paper does not provide a clear or explicit interpretation of how the theoretical results can help us understand NAS (as the title suggests). Instead, from my own perspective, this paper has mainly developed a generalization bound for NAS and then proposed a training-free NAS algorithm (i.e., Eigen-NAS) inspired by their theoretical results. Such a training-free NAS algorithm is similar to the one in [R1] using the trace norm of the NTK matrix.

This paper has provided a valuable discussion about limitations and future work.
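To make the quantity at the center of these reviews concrete, here is a minimal sketch of computing the empirical NTK Gram matrix and its minimum eigenvalue for a two-layer network at random initialization. This is my own generic illustration of the kind of training-free score Eigen-NAS is described as using, not the paper's exact estimator; the width and the ReLU choice are placeholders, and other candidate activations could be swapped in.

```python
import numpy as np

def ntk_min_eigenvalue(X, width=1024, seed=0):
    """Minimum eigenvalue of the empirical NTK Gram matrix of a two-layer net
    f(x) = a^T relu(W x) / sqrt(width) at random initialization.
    K[i, j] is the inner product of parameter gradients at inputs x_i and x_j."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((width, d))
    a = rng.standard_normal(width)
    Z = X @ W.T                                  # (n, width) pre-activations
    H = np.maximum(Z, 0.0)                       # relu activations
    G = (Z > 0).astype(float) * a                # a_k * relu'(w_k . x)
    K = (H @ H.T + G @ G.T * (X @ X.T)) / width  # second-layer + first-layer terms
    return float(np.linalg.eigvalsh(K).min())

# A larger minimum eigenvalue is the property the reviews say favors ReLU-like
# activations; ranking candidate activations by such a score at initialization is
# the training-free idea being discussed.
```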
This work relates NAS to the conditioning of a DNN through the NTK framework. The work is well supported theoretically and empirically, and the connection it draws is surprising. Given the potential interest for the NAS community, I recommend accepting this paper.
Clarity: The work is a clear introduction/overview of this area of research. The reviewer enjoyed the connections to Multiple-Gradient Descent and the clear distinctions/contrasts with previous approaches to weighting the outputs of multiple discriminators. All in all, the paper is quite clear in what its contributions are and how it differs from previous approaches. The details and motivations of the Hypervolume Maximization (HVM) method (especially as it relates to and interacts with the slack method of picking the nadir point) were a bit harder to follow intuitively given the standalone information in the paper.

Originality: Adapts a technique to approximate MGD called HVM (Miranda 2016) and applies it to multi-discriminator training in GANs. As far as the reviewer is aware, this is a novel application of HVM to this task and well motivated under the MGD interpretation of the problem.

Significance: Unclear. This work in isolation appears to present an improvement over prior work in this sub-field, but it is not obvious that the findings in these experiments will continue to be robust in more competitive settings. For instance, WGAN-GP, the worst-performing model on CIFAR10 according to the experiments run, also holds near-SOTA Inception scores on CIFAR10 when appropriately tuned. Without any experimental results extending beyond toy datasets like MNIST and CIFAR10, the reviewer is not confident whether fundamental issues with GAN training are being addressed or just artifacts of small-scale setups. Closely related previous work (Neyshabur 2017) scaled to 128x128 resolution on a much more difficult dataset -- ImageNet Dogs -- but the authors did not compare in this case.

Quality: Some concerns about details of experiments (see cons list and significance section for further discussion).

Pros:
+ The work provides a clear overview of previous work on approaches using multiple discriminators.
+ The connections of this line of work to MGD and the re-interpretation of various other approaches in this framework are valuable.
+ The authors provide direct comparisons to similar methods, which increases confidence in the results.
+ On the experiments run, the HVM method appears to be an improvement over the two previous approaches of softmax weighting and straightforward averaging for multiple discriminators.

Cons:
- Performance of GANs is highly dependent on both model size and compute expended for a given experiment (see Miyato 2018 for model size and training iterations and Brock 2018 for batch size). Training multiple discriminators (in this paper up to 24) significantly increases compute cost and effective model size. No baselines controlling for the effects of larger models and batch sizes are done.
- The paper lacks experiments beyond toy-ish tasks like MNIST and CIFAR10 and does not do a good job comparing to the broader established literature and contextualizing its results on certain tasks such as CIFAR10 (reporting ratios to a baseline instead of absolute values, for instance). The absolute Inception score of the baseline DCGAN needs to be reported to allow for this. Is the Inception Score of the authors' DCGAN implementation similar to the 6 to 6.5 reported in the literature?
- Figure 3 is slightly strange in that the x axis is time to best result instead of just overall wallclock time. Without additional information I cannot determine whether it is admissible. Do all models achieve their best FID scores at similar points in training? 
Why is this not just a visualization of FID score as a function of wallclock time? A method which has lower variance or continues to make progress for longer than methods which begin to diverge would be unfairly represented by the current figure.

Additional comments: In Section 3.1, Eq 5 appears to be wrong. The loss of the discriminator is presented in a form to be minimized, so exponentiating the negative loss in the softmax weighting term as presented will do the opposite of what is desired and assign lower weight to higher-loss discriminators. In Fig 6, FID scores computed on a set of 10K samples are shown. The authors appear to draw the line for the FID score of real data at 0. But since it is being estimated with only 10K samples, there will be sampling error resulting in a non-zero FID score. The authors should update this figure to show the box-plot for FID scores computed on random draws of 10K real samples. I have only worked with FID on Imagenet, where FID scores for random batches of 10K samples are much higher than 0. I admit there is some chance the value is extremely low on CIFAR10 to make this point irrelevant, however.

<doc-sep>This paper studies the problem of training Generative Adversarial Networks employing a set of discriminators, as opposed to the traditional game involving one generator against a single model. Specifically, this paper claims two contributions: 1. We offer a new perspective on multiple-discriminator GAN training by framing it in the context of multi-objective optimization, and draw similarities between previous research in GANs variations and MGD, commonly employed as a general solver for multi-objective optimization. 2. We propose a new method for training multiple-discriminator GANs: Hypervolume maximization, which weighs the gradient contributions of each discriminator by its loss.

Overall, the proposed method is empirical and the authors show its performance by experiments. First, I want to discuss the significance of this work (or this kind of work). As surveyed in the paper, the idea of training Generative Adversarial Networks employing a set of discriminators has been explored by several previous works and showed some performance improvement. However, this idea (methods along this line) is not popular in GAN applications, like image-to-image translation. I guess that the reason may be that the significant increase in computational cost (both in FLOPS and memory consumption) due to multiple discriminators destroys the benefit from the small performance improvement. Maybe I'm wrong. In Appendix C Figure 10, the authors compare the wall-clock time between DCGAN, WGAN-GP and the multiple-discriminator approach, and claim that the proposed approach is cheaper than WGAN-GP. However, WGAN-GP is more expensive because its loss function involves gradients, while the proposed method's does not. If directly compared with DCGAN, we can see an obvious increase in wall-clock time (FLOPS). In addition, the additional memory consumption is hidden there, which is a bigger problem in practice when the discriminators are large. SN-GAN has roughly the same computational cost and memory consumption as DCGAN, but its Inception score and FID are much better. From my perspective, a fair comparison is under roughly the same FLOPS and memory consumption. The paper is well-written. The method is well-motivated by the multi-objective optimization perspective.
Although the presentation of the Hypervolume maximization method (Section 3.2) is not clear, the resulting loss function (Equation 10) is simple, and shares the same form as other previous methods. The hyperparameter $\eta$ is problematic in the new formulation. The authors propose Nadir Point Adaptation to set this parameter. The authors conduct extensive experiments to compare different methods. The authors emphasize that the performance is improved with more discriminators, but it would be good to include a comparison of the computational cost (FLOPS and memory consumption) at the same time. There are some small questions for the experiments. The reported FID is computed from a pretrained classifier that is specific to the dataset, instead of the commonly used Inception model. I recommend the authors also measure the FID with the Inception model, so that we have a direct comparison with existing reported scores. Overall, I found that this work is empirical, and I'm not convinced by its experiments about the advantage of multiple-discriminator training, due to the lack of a fair computational cost comparison with single-discriminator training.

<doc-sep>The paper investigates the use of multi-objective optimization techniques in GAN setups where there are multiple discriminators. Using multiple discriminators was proposed in Durugkar et al., Arora et al., Neyshabur et al. and others. The twist here is to focus on the Pareto front and to import multiple gradient descent and hypervolume-maximization based methods into GANs. The results are decent. The authors find that optimizing with respect to multiple discriminators increases diversity of samples at a computational cost. However, just scaling up (and carefully optimizing) can yield extremely impressive samples, https://arxiv.org/abs/1809.11096. It is unclear how the tradeoffs in optimizing against multiple discriminators stack up against bigger GANs. From my perspective, the paper is interesting because it introduces new methods into GANs from another community. However, the results themselves are not sufficient for publication.
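Since the discussion above centers on how the losses of multiple discriminators are aggregated into a single generator update, here is a small sketch, based only on the descriptions in these reviews, contrasting plain averaging, softmax weighting, and a hypervolume-style weighting in which higher-loss discriminators receive more weight via a nadir point. The nadir slack value and temperature are placeholder assumptions, and this is not the paper's exact Equation 10.

```python
import numpy as np

def generator_loss_weights(disc_losses, temperature=1.0, nadir_slack=0.1):
    """Three ways to turn K per-discriminator generator losses into weights.
    disc_losses: array of K losses l_k for the generator against each discriminator."""
    l = np.asarray(disc_losses, dtype=float)
    K = len(l)
    # 1) Plain averaging: every discriminator contributes equally.
    w_avg = np.full(K, 1.0 / K)
    # 2) Softmax weighting: weight grows with the loss (note the +l, not -l;
    #    the review above argues the paper's Eq. 5 accidentally uses the wrong sign).
    e = np.exp((l - l.max()) / temperature)
    w_soft = e / e.sum()
    # 3) Hypervolume-style weighting: minimizing -sum_k log(eta - l_k) gives each
    #    discriminator a gradient weight proportional to 1 / (eta - l_k), so the
    #    discriminators the generator is currently losing against dominate the update.
    eta = l.max() + nadir_slack            # nadir point slightly above the worst loss
    w_hv = 1.0 / (eta - l)
    w_hv /= w_hv.sum()
    return w_avg, w_soft, w_hv
```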
The reviewers found that the paper is well written and clear, and that the authors did a good job placing the work in the relevant literature. The proposed method for using multiple discriminators in a multi-objective setting to train GANs seems interesting and compelling. However, all the reviewers found the paper to be on the borderline. The main concern was the significance of the work in the context of the existing literature. Specifically, the reviewers did not find the experimental results significant enough to be convinced that this work presents a major advance in GAN training.
The authors begin by discussing the discrepancy in association between normal and anomalous points and then suggest an Anomaly Transformer based on an anomaly-attention mechanism that is further improved using a minimax technique. On six different datasets, empirical analysis demonstrates that the proposed method outperforms state-of-the-art anomaly detection methods.

### Strength:
1) The paper is well written and easy to follow
2) Figures 1 and 2 are very intuitive and easy to understand.
3) Detailed empirical analysis.

### Weakness:
1) If I am not wrong, the SMAP and MSL datasets are from [a], but the authors cite Su et al. 2019b.
2) There are repetitive entries in the references. For example Su et al. 2019a and Su et al. 2019b. I suggest that the authors recheck all entries in the bibliography carefully.
3) I think it is important to at least provide the reader with the different types of methods used for anomaly detection. The authors have done a good job at it, but in my humble opinion the literature review is still missing some important papers. For example, the LSTM-based method and SMAP & MSL dataset [a], a dimensionality reduction & clustering method [b], a spatiotemporal method using Convolutional LSTM [c], and a tensor decomposition based method [d].
4) I have several questions/concerns/suggestions about the experiment section.
- How is the threshold δ set?
- Can you add more details and provide a solid reason for why r=0.5% for SMD, 0.1% for SWAT, and 1% for the other datasets?
- The authors should discuss the false-positive rate in more detail in the main text, as minimizing false alarms is really important in practical scenarios.
- What is the reason behind selecting 3 layers for the Anomaly Transformer?
- The channel dimension of the hidden states $d_{model}$ is set to 512. Can you provide a reason for selecting this number, and can you also discuss the impact of increasing or reducing this number on performance, efficiency, memory, etc.?
- In my humble opinion, the term robustness is very loosely used in the paper. I suggest the authors tone down the sentences about robustness.
- Building on the previous point, the robustness of the Anomaly Transformer is not evaluated against adversarial attacks. Recent research has shown that almost all anomaly detectors, such as MSCRED, fail against simple FGSM and PGD attacks. It would be interesting to see the robustness of the proposed method in those scenarios.
- Figures 5 and 6 are hard to understand. I suggest that the authors add some background shading for anomaly regions and also add the threshold line, so that the reader can easily understand how the method outperforms the other methods. Also, Figure 6 needs more context and detail in the main text.
5) I was unable to locate the link to the code repository. It is critical to validate the paper's claims, and one of the simplest ways to do so is to access and run the code. I believe that the authors should consider making the code for their method and empirical experiments available to the reviewers and later release it publicly.

[a] Hundman, Kyle, et al. "Detecting spacecraft anomalies using lstms and nonparametric dynamic thresholding." Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018.
[b] Yairi, Takehisa, et al. "A data-driven health monitoring method for satellite housekeeping data based on probabilistic clustering and dimensionality reduction." IEEE Transactions on Aerospace and Electronic Systems 53.3 (2017): 1384-1401.
[c] Tariq, Shahroz, et al. 
"Detecting anomalies in space using multivariate convolutional LSTM with mixtures of probabilistic PCA." Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019. [d] Shin, Youjin, et al. "ITAD: Integrative Tensor-based Anomaly Detection System for Reducing False Positives of Satellite Systems." Proceedings of the 29th ACM International Conference on Information & Knowledge Management. 2020. Well written. Good empirical Analysis. But in some places the paper lacks the justifications and proper reasoning. Experiment/results section need some better explanations. <doc-sep>This paper introduces an Anomaly Transformer for detecting anomalies in time series with association discrepancy. This paper introduces two discrepancies: the series association (e.g., period and trend) and the prior association (e.g., the local smoothness or continuity). For abnormal points, these two associations have a small discrepancy and for normal points, there is a large discrepancy between these two associations. This is because abnormal points have a strong local association while the normal points have a global association. In the model, a two-branch strategy is adopted to model the two associations separately. The model is trained by minimizing the reconstruction loss and maximizing the discrepancy. The experimental results demonstrate the effectiveness of the proposed model. Strengths: 1. The observation that the abnormal points have a strong local association (or prior association) and a weak global association is interesting. 2. To better model the association discrepancy, the paper proposes a novel min-max association learning strategy to avoid the Gaussian prior reduction problem. 3. Comprehensive evaluation on a variety of anomaly detection datasets demonstrate the effectiveness of the proposed association discrepancy learning strategy. Weaknesses: 1. What's the convergence property of the min-max strategy? 2. Some details are not very clear. - For example, in equation (2), what's the shape of $W^l$, and what does the operation $*$ mean? - Equation (6) is ambigous, $||\\mathcal{X}-\\hat{\\mathcal{X}}||^2_2$ is a scalar, while the AssDis score after Softmax is a N-by-d matrix. - In the implementation details, for the $r$, why not set $r$ similar to AR of the datasets as shown in Table 1? - In table 3, for the "Recon" of Anomaly Transformer, if you only use the reconstruction error, then there should be no optimization strategy for the association discrepancy. Why the optimization strategy is "minmax"? - In figure 5, what are the labels for the y-axis? For the reconstruction and association criteria, are they the values of loss? In general, the observation that abnormal points have a large discrepancy between the prior association and the series association is very interesting. A novel Transformer based model is proposed to model this discrepancy. Experiments demonstrate the effectiveness of the proposed method. However, it is unclear about the convergence property for the proposed min-max strategy. Besides, there are many ambiguous details that hinder the readability of the paper. <doc-sep>Summary: This paper is looking at anomalies in time series data. In particular they define anomalies as ones that lie outside some learned Gaussian-like distribution. 
They propose a minimax objective function that tries to minimize and maximize the discrepancy between a Gaussian distribution with a learned variance parameter and an "empirical" one learned directly through self-attention on the data. They measure discrepancy using the symmetric KL divergence between the two distributions and also use this for their anomaly score.
Major Comments:
- Is this approach a generalization / extension of change point detection? That is not mentioned at all.
- The prior association is learned as the one that minimizes the difference with the series association, and the series association is the one that is learned to maximize the difference with the prior association? This will promote the series association to learn as non-Gaussian-like a distribution as possible, while the prior association is finding the closest Gaussian distribution to the series association? Will this minimax strategy not lead to a degeneration where both distributions become extremely wide and almost uniform-like?
- "The reconstruction loss will guide the series-association to find the most informative associations, such as the adjacent time points of anomalies." Why are adjacent time points of anomalies most informative? Is this because this "window" of time can altogether be considered anomalous, as opposed to noise from a single out-of-distribution point?
- The anomaly score just indicates whether or not there are anomalies within the N-time-point window? In order to narrow down where the anomalies are within the time points, do you need to then test smaller windows of time? Wouldn't smaller windows cause there to be a shortage of data samples needed for learning?
- KL divergence is not very good at measuring differences in the tails of distributions (it does not put enough importance on that area). But shouldn't the anomalies be in the tails of the distribution, as they are rare?
Minor Comments: Some of the sentences are grammatically strange in the abstract and introduction sections.
Overall the paper has good results and seems sufficiently interesting and novel. <doc-sep>This paper proposes a new anomaly detection approach based on a Transformer architecture. The main idea is to leverage self-attention to capture the temporal dependency structure of the observations in a sliding window as a measure of anomaly. Since anomalies can be defined as a sequence that is inconsistent with regular ones, the attention matrix is expected to reflect some major aspects of anomalies. The authors propose a two-branch attention architecture to handle multi-dimensional real-valued time-series data, where prior- and series-attention matrices are computed and utilized in a certain min-max competitive learning framework. The final anomaly score is defined as the product between the reconstruction error and the KL distance between the two attention matrices. Update after the discussion with the authors. The authors have addressed all of my immediate concerns. The paper now looks very strong. I recommend acceptance. ---- The key idea of using Transformer's self-attention as a measure of anomalousness sounds novel. It is an excellent idea. The proposed architecture featuring a two-branch attention mechanism also looks novel. The novelty seems undisputable, to the best of my knowledge. The problem is, however, that the paper mostly ignores almost all the existing anomaly detection approaches for **time series**.
It obviously does not make sense to use point-wise anomaly detection methods such as SVDD to capture sequential anomalies. Many researchers in this domain may agree that the baseline model in the present context can be the vector autoregressive model, which naturally realizes a particular type of self-attention in the form of the lag-dependent covariance matrices. You can find many works that leverage a dependency graph for anomaly or change detection. I'd also suggest looking at the literature on time-series segmentation. Another issue is that the paper lacks sound justifications for the proposed approach. Given much theoretical/empirical work with VAR or state-space models (or their neural extensions), we expect a much more understandable derivation of the proposed model. Many "functions" such as AssDiss lack a proper definition. (For example, I didn't understand how the probability distributions had been defined from P^l and S^l --- just writing SoftMax or just showing Eq.(2) does not mean you have defined a distribution for a **matrix**. The Gaussian distribution in Eq.(2) is defined in the entire real domain. But probably, you are on a regular time grid. Apparently, some mathematical inconsistency exists.) That's not a problem if this were part of API documentation of a software library. But as a technical paper submitted to a top machine learning conference, there may be different expectations. I think the main idea deserves much more careful and deep thoughts. I encourage the authors to re-do the text to perfect it. I'm sure that the new version will be a great piece of work in the community. Updated after the discussion - Novel idea of using the degree of self-attention as a metric of anomalousness. - New approach of positional encoding based on Gaussian kernels - Comprehensive empirical comparison with alternative methods ----- Original summary ----- - Novel idea of using the degree of self-attention as a metric of anomalousness. - Unclear and unjustified descriptions. - Lack of the critical baseline (major issue in this domain).
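The reviewers above repeatedly ask how the detection threshold δ is set and how the anomaly ratio r and the association discrepancy enter the final decision. The following is a minimal NumPy sketch of one plausible reading of that scoring and thresholding pipeline; the function names, the exact normalisation, and the quantile-based choice of δ are our own assumptions, not details taken from the paper.

```python
import numpy as np

def association_discrepancy(prior, series, eps=1e-12):
    """Symmetric KL between the prior- and series-association distributions,
    one value per time point (rows of the N x N association matrices)."""
    p = prior / (prior.sum(axis=-1, keepdims=True) + eps)
    s = series / (series.sum(axis=-1, keepdims=True) + eps)
    kl_ps = np.sum(p * np.log((p + eps) / (s + eps)), axis=-1)
    kl_sp = np.sum(s * np.log((s + eps) / (p + eps)), axis=-1)
    return kl_ps + kl_sp

def anomaly_score(recon_error, prior, series):
    """Per-point reconstruction error re-weighted by the softmax of the
    negative association discrepancy (small discrepancy -> larger weight)."""
    disc = association_discrepancy(prior, series)
    weights = np.exp(-disc) / np.exp(-disc).sum()
    return weights * recon_error

def select_threshold(val_scores, r):
    """Pick delta so that a fraction r of held-out points is flagged."""
    return np.quantile(val_scores, 1.0 - r)

# toy usage: r = 1% anomaly ratio, as reported for several datasets
scores = anomaly_score(np.random.rand(100),
                       np.random.rand(100, 100), np.random.rand(100, 100))
delta = select_threshold(scores, r=0.01)
flags = scores > delta
```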
The paper proposes a novel approach that leverages the discrepancies between the (global) series association and the (local) prior association for detecting anomalies in time series. The authors provided detailed empirical support to motivate the above detection criterion, and introduced a two-branch attention architecture for modeling the discrepancies and establishing an anomaly score. All reviewers acknowledge the technical novelty of this work (including the key insight of modeling anomalousness with Transformer's self-attention and the concrete training mechanism via a minimax optimization process) as well as the comprehensiveness of the empirical study. Meanwhile, there were some concerns about the positioning of the work, in particular the clarity of its connection to related work, and some reviewers raised concerns about the clarity of the presentation (e.g., missing details in the experimental results) and of the exposition of the training process. The authors provided effective feedback during the discussion phase, which helped clarify many of the above concerns. All reviewers agree that the revision makes a solid paper and unanimously recommend acceptance of this work. The authors are strongly encouraged to take into account the feedback from the discussion phase to further improve the clarity concerning the technical details as well as the reproducibility of the results.
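Relatedly, the minimax association learning discussed in the reviews and the meta-review above can be sketched as an alternating pair of losses built around stop-gradients. This is only a rough PyTorch illustration of the idea; the sign conventions, the weight `lam`, and the alternation schedule are our assumptions and may differ from the paper's actual training procedure.

```python
import torch

def sym_kl(p, s, eps=1e-12):
    """Symmetric KL between prior- and series-association matrices, per time
    point; p and s are assumed row-normalised (each row sums to 1)."""
    kl_ps = (p * ((p + eps) / (s + eps)).log()).sum(-1)
    kl_sp = (s * ((s + eps) / (p + eps)).log()).sum(-1)
    return kl_ps + kl_sp

def minimax_losses(x, x_hat, prior_assoc, series_assoc, lam=3.0):
    recon = ((x - x_hat) ** 2).mean()
    # minimise phase: pull the prior association towards the (frozen) series association
    loss_min = recon + lam * sym_kl(prior_assoc, series_assoc.detach()).mean()
    # maximise phase: push the series association away from the (frozen) prior association
    loss_max = recon - lam * sym_kl(prior_assoc.detach(), series_assoc).mean()
    return loss_min, loss_max
```

In this reading, the two losses are backpropagated in alternating steps, which is what the "minimax" description and the reviewers' degeneration question refer to.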
This paper designs sublinear algorithms for hierarchical clustering over very large graphs in the dynamic streaming model, the query model, and the MPC model. Minimum-cost hierarchical partitioning is the optimization objective (Dasgupta's cost function). On the theoretical side, this paper proves both lower and upper bounds for the three given models. They prove a general structural result showing that a cut sparsifier can be used to recover a (1+o(1))-approximation to the underlying HC instance. Although this research problem is not novel, sublinear algorithms for hierarchical clustering are one of the fundamental issues in the study of graphs. Many applications need sublinear algorithms, especially for massive graphs. The paper is well written with a good presentation. It supplies sufficient theoretical proofs for the complexity, and the (1+o(1))-approximation is a good general structural result in the three models. However, I have some concerns about the approach and applications. 1. Some recent works prove a seemingly better bound for the same problem. The authors should compare to them. For example, Assadi S, Chatziafratis V, Mirrokni V, et al. Hierarchical Clustering in Graph Streams: Single-Pass Algorithms and Space Lower Bounds[C]//Conference on Learning Theory. PMLR, 2022: 4643-4702. 2. Although proofs of bounds are good evidence for showing the performance, it would be better and more attractive to give some practical experimental studies to show the real effectiveness on massive graphs. In some cases, a good theoretical complexity is still not enough for really huge graphs. Minor problem: although the presentation is relatively good, some necessary examples are needed for the readers to follow clearly. 1. Some recent works prove a seemingly better bound for the same problem. The authors should compare to them. For example, Assadi S, Chatziafratis V, Mirrokni V, et al. Hierarchical Clustering in Graph Streams: Single-Pass Algorithms and Space Lower Bounds[C]//Conference on Learning Theory. PMLR, 2022: 4643-4702. 2. Although proofs of bounds are good evidence for showing the performance, it would be better and more attractive to give some practical experimental studies to show the real effectiveness on massive graphs. In some cases, a good theoretical complexity is still not enough for really huge graphs. <doc-sep>This paper considers developing resource-efficient algorithms for hierarchical clustering of weighted graphs in several settings (streaming, graph query model, MPC model). Following recent work which developed an objective function for hierarchical clustering (Dasgupta 2016), the goal is to develop algorithms with sublinear resource usage (in the number of edges) in each model while producing an $O(\phi)$-approximation to the optimal tree for Dasgupta's objective function. The main results of this paper achieve exactly this. By making a key observation about the structure of the objective function, the authors show that this problem reduces to efficiently constructing a cut sparsifier. In some settings (e.g. streaming using $\tilde{O}(n)$ space), this can be done immediately via known techniques for constructing cut sparsifiers. For the other settings (the graph query model and the MPC model), a more relaxed notion of cut sparsifier is proposed (which allows for some additive error in addition to the multiplicative error), and the authors give sublinear algorithms for constructing such a relaxed cut sparsifier in the remaining settings.
Lower bounds showing near optimality of the results are also given. There has been significant interest in hierarchical clustering recently, and one issue that many algorithms suffer from is scalability. This paper helps to address this problem in several settings from the perspective of Dasgupta's objective function. The techniques are natural and are well explained. One thing that is missing from this paper (but probably not necessary) is an experimental evaluation to demonstrate the practical effectiveness of the proposed algorithms. N/A <doc-sep>Hierarchical clustering over graphs has been mathematically formalized with a natural objective function introduced by Dasgupta [STOC2016]. Unfortunately, this function is hard to optimize, and approximation algorithms have been proposed in the TCS literature. This paper studies this problem in the regime of sublinear computational resources, specifically for three models of computation:
- streaming model = sublinear space
- query model = sublinear time
- MPC model = sublinear communication
For each model, upper and lower bounds are provided. The paper presents informal statements for the main results proven in the Appendix. The results appear to be new. There is a recent COLT paper [5] with similar results in the streaming setting. This paper is well-written and deals with theoretical results. The motivation for these results is very far from any practical application. NA <doc-sep>This paper focuses on hierarchical clustering over graphs with the objective function of Dasgupta. Sublinear algorithms w.r.t. the number of edges in the input graph are developed in three models of computation, including the dynamic streaming model, the query model, and the massively parallel computation (MPC) model. Interesting matching lower bounds are provided for the resource upper bounds above. Specifically, a 1-pass semi-streaming algorithm naturally follows from the existing algorithm [1] and their observation. In the query model, combining existing methods [14,37] and their observation leads to sublinear-time algorithms for graphs of different sparsity. For the MPC model, a 2-round $\tilde{O}(m)$-memory algorithm and a 1-round $\tilde{\Theta}(m^{4/3})$-memory algorithm are developed based on [1]. All of the above algorithms rely on the crucial observation that, from a graph cut perspective, the objective function can be well approximated in a cut sparsifier of the input graph. Moreover, two hard instances are designed to derive time and communication lower bounds in the query and MPC models, respectively. Finally, extensions to other related hierarchical clustering objectives, such as the dissimilarity objective [15], are discussed.
Strengths:
1. The idea of using cut sparsifiers to improve the computational resources of graph algorithms has been widely used. But using it in hierarchical clustering and providing theoretical analysis of the required resources and approximation factor is not well studied, with only a few related works, e.g., [5]. The problem of hierarchical clustering is reduced to the problem of constructing cut sparsifiers with limited resources, based on the observation that cut sparsifiers can approximately preserve the cost of a dendrogram. Comprehensive studies on 3 models of computation are performed, and upper bounds are complemented with almost matching lower bounds.
2. The developed lower bounds in the query and MPC models are quite interesting and need to overcome multiple challenges.
The authors are able to describe the high-level overview in the main text and then refer to the full details in the Appendix.
3. The organization of this work is good. I like the discussions in Sections 4 and 5, providing context on related works and extensions to other objective functions of hierarchical clustering. I browsed some of the content in the Appendix, and the quality of writing remains good compared to the main text.
Weaknesses:
1. The computational upper bounds developed in Section 2 are not very challenging and are based on existing methods in the respective models. The upper and lower bounds in the dynamic streaming model are straightforward.
2. As a reader of a hierarchical clustering paper at NeurIPS, I would expect to see some experimental results on the developed algorithms unless the theoretical contributions are very novel and significant. It would be much better if some of the algorithms had been implemented and evaluated on real-world datasets. No
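To make concrete the structural observation that the reviews above keep returning to — that Dasgupta's objective can be evaluated from graph cut values alone, which is why a cut sparsifier preserves it up to a (1+o(1)) factor — here is a small illustrative Python sketch. The tree encoding, function names, and toy graph are our own and are not taken from the paper.

```python
def cut_weight(adj, S):
    """Total weight of edges with exactly one endpoint in S (a graph cut)."""
    S = set(S)
    return sum(w for u, nbrs in adj.items() if u in S
                 for v, w in nbrs.items() if v not in S)

def dasgupta_cost(adj, tree):
    """Dasgupta cost of a binary hierarchy given as nested tuples of leaves.
    Every term is expressed through cut values only, which is the reason a
    cut sparsifier approximately preserves the cost of any dendrogram."""
    def leaves(t):
        return [t] if not isinstance(t, tuple) else leaves(t[0]) + leaves(t[1])
    def rec(t):
        if not isinstance(t, tuple):
            return 0.0
        A, B = leaves(t[0]), leaves(t[1])
        # weight of edges between the two children, from cuts alone
        w_AB = (cut_weight(adj, A) + cut_weight(adj, B) - cut_weight(adj, A + B)) / 2.0
        return len(A + B) * w_AB + rec(t[0]) + rec(t[1])
    return rec(tree)

# toy usage: a 4-node weighted graph and the hierarchy ((0,1),(2,3)) -> cost 12
adj = {0: {1: 2.0, 2: 0.5}, 1: {0: 2.0, 3: 0.5},
       2: {0: 0.5, 3: 2.0}, 3: {1: 0.5, 2: 2.0}}
print(dasgupta_cost(adj, ((0, 1), (2, 3))))
```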
The paper presents new algorithms for hierarchical clustering in different regimes. In particular, they show new algorithms for the (dynamic) edge streaming model, the neighbor query model, and the MPC model. The paper contains nice theoretical results, and in the rebuttal phase the authors supported them with interesting experimental results. Overall, we suggest accepting the paper as a poster.
The paper considers the task of learning GANs that decompose the image formation process into foreground, background and mask generation and composition. Compared to previous methods, the proposed ComGAN aims to avoid trivial solutions (where masks do not correspond to foreground objects) mainly through the network architecture instead of regularizations, which often require extensive hyperparameter searches for suitable regularization strengths.
- Strengths
- The proposed approach simplifies the design of compositional GANs compared to previous methods and demonstrates improved performance in terms of synthesis quality as well as unsupervised segmentation performance.
- Compared to similar compositional generative models like FineGAN [25], the requirement for weak background supervision is further reduced.
- Finding a compositional generator architecture that is stable to train has many applications beyond GANs. Thus, the work is potentially interesting for a larger audience.
- Although there are still a few hyperparameters (mask consistency loss weight, binary regularization weight, dimensionality of latents, relative sizes of subnetworks), experiments demonstrate some robustness to these parameters.
- Weaknesses
- The core idea and differences to previous approaches are not clearly stated.
- The requirements for Proposition 1 regarding what it means for an architecture to be "similar to ComGAN" are not stated clearly. I assume the key point is that M consists of only a sigmoid layer. However, the formulation in l. 149 seems to be the only place where this is stated, and even there it remains vague and could be interpreted as containing the same layers as F and B plus an additional sigmoid layer. The latter interpretation is also what Fig. 3 suggests (albeit with one fewer residual block). If this (a shared decoder with a minimal mask decoder) is the key idea of the paper, it should be communicated more clearly and probably also at a higher level already in the introduction. Without clear restrictions on the architecture, one could also think that FineGAN satisfies the requirements with G set to the identity.
- The lemmas, propositions and proofs seem a bit vague, as they do not clearly state assumptions or define all involved quantities. In l. 160-161 it is not clear what is meant by $\bar{x}_m = M^{-1}(G(z))$ - why would the input of M equal its output? Intuitively, I also don't see how "it is clear that any change of foreground or background affects the mask, [...]". Since F and B do contain additional layers, foreground and background could be affected by changes in those layers even though G(z), and hence the mask, would remain unaffected, no? The other direction, that any change of the mask affects both foreground and background, seems to be true.
- The motivation for the DS-ComGAN architecture is unclear: why is the $\bar{x}_z$ output needed in addition to the output composited from foreground, background and mask?
Limitations and potential negative societal impact have been addressed adequately. <doc-sep>In a mathematical analysis, the authors point out two factors that lead a scene decomposition model to fall into trivial solutions. These are related to the vanishing gradient phenomenon affecting the mask generator. To avoid these, they propose a novel network architecture, where features for generating decomposed scene elements are composed of the ones used for generating the entire scene at once.
With this architecture, they achieve state-of-the-art scores on both the mask prediction and the image quality evaluation metrics.
Strengths
- The authors tackle an important problem in scene component generation models.
- They propose a novel architecture for robust mask generation based on a theoretical analysis of the problem.
- The authors did a thorough ablation study to show that each proposed element is effective.
- The comparison of both quantitative and qualitative results with previous works implies that the proposed method is effective in boosting the scene decomposition performance.
Weaknesses
- The connection between the theoretical analysis and the proposed architecture design is not well established. More details are needed to understand why the proposed architecture helps the model avoid the vanishing gradient problem.
The authors did not address the limitations and potential negative societal impact of their work.
Suggestions
- I want to know the authors' opinion on how difficult it would be to apply this method to coarse-grained datasets like ImageNet. <doc-sep>This work analyses the reason for trivial solutions during mask learning in image composition GANs, and introduces a new model architecture, ComGAN, to solve the trivial-solution issue. Furthermore, an unsupervised object segmentation module is also involved to construct the DS-ComGAN model. DS-ComGAN can perform both disentangled image generation and object segmentation, and outperforms semi-supervised and weakly supervised baselines.
Strengths
- It is claimed that this work is the first to solve the trivial solution in disentangled image generation by changing the network architecture. The change is simple to apply and obtains significant improvements. I believe this technique can also be useful for other models and tasks.
- Both disentangled image generation and object segmentation tasks can be performed in a single framework. More importantly, the learning can be achieved in an unsupervised way by a carefully designed adversarial learning strategy.
- Sufficient experiments and analyses have been done to demonstrate the effectiveness of the proposed method.
Weaknesses
My main concerns are about the description.
- The description in sub-section 3.1 is not that readable. I suggest improving the description with a more intuitive illustration to point out the key contribution: how to avoid vanishing gradients from the network architecture perspective. Besides, some symbols seem inconsistent. Do the variable $f$ in equation 9 and the variable $F$ in Figure 3 represent the same thing? What does the variable mean?
- The description of the mask distribution alignment could also be improved. If I understand correctly, the proposed segmentation network $S$ does not need any paired/unpaired segmentation data. For $D_m$, $\bar{x}_m$ is regarded as real, and $\hat{x}_m$ and $x_m$ are regarded as fake.
- The learning process is not clear. Does DS-ComGAN need to be trained in two stages?
Besides, the overall objective is missing, and the loss term $\beta L_{\text{binary}}$ is introduced in the experiments section instead of the method section. The limitations and potential negative societal impact have been well described.
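For readers trying to follow the architectural discussion above (a shared trunk feeding foreground/background decoders and a deliberately minimal sigmoid mask head, composed as x = m·f + (1−m)·b), here is a rough PyTorch sketch of that kind of generator. The layer sizes, resolutions, and names are our assumptions and do not reproduce the paper's actual networks.

```python
import torch
import torch.nn as nn

class CompositionalGenerator(nn.Module):
    """Sketch of a ComGAN-style generator: a shared trunk G produces features
    used by the foreground/background decoders, while the mask head M is kept
    minimal (one conv + sigmoid) so the mask depends directly on the shared
    features G(z)."""
    def __init__(self, z_dim=64, ch=64):
        super().__init__()
        self.trunk = nn.Sequential(                        # G(z): shared features
            nn.ConvTranspose2d(z_dim, ch, 4), nn.ReLU(),
            nn.ConvTranspose2d(ch, ch, 4, 2, 1), nn.ReLU())
        self.fg = nn.Sequential(nn.Conv2d(ch, 3, 3, 1, 1), nn.Tanh())      # F
        self.bg = nn.Sequential(nn.Conv2d(ch, 3, 3, 1, 1), nn.Tanh())      # B
        self.mask = nn.Sequential(nn.Conv2d(ch, 1, 3, 1, 1), nn.Sigmoid()) # minimal M

    def forward(self, z):
        h = self.trunk(z.view(z.size(0), -1, 1, 1))
        f, b, m = self.fg(h), self.bg(h), self.mask(h)
        x = m * f + (1.0 - m) * b          # composed image
        return x, f, b, m
```

The output resolution here is tiny and purely illustrative; the point of the sketch is the sharing pattern the reviewers debate, not the exact decoder design.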
The paper proposes a compositional GAN model with a novel network architecture that solves the vanishing gradient problem underlying trivial solutions. The proposed model achieves strong results on image disentanglement and unsupervised segmentation tasks. The rebuttals by the authors have successfully addressed most of the concerns of the reviewers. All the reviewers are positive about this paper. Reviewer Tkuw's main concerns regarding the evaluation and the clarity of the method were addressed. The reviewer raised the rating. Reviewer 19KW felt positive about section 3.1 in the revised version and the additional empirical results regarding the gradient values observed during a training stage as given in Figure 5. The reviewer also updated the initial rating. Reviewer rWMN's concerns have also been addressed. The reviewer appreciates the additional detailed theoretical analysis on the problem.
This work addresses fine-grained entity typing by leveraging semantic relations of the mention with other words in the same sentence. Specifically, the authors showed how to use hypernym relations and verb-argument relations. For the first, they used Wikidata to train a string-match-based candidate extraction model and a BERT-based verification model. For the second, they used an existing SRL system to extract relations between the verb and the mention. The two systems each produce a prediction that is then combined with the base model through a gating network. The proposed method significantly improves over baselines. They also performed ablation studies to show the benefit of the hypernym and verb-argument relations.
Strength:
1. The proposed approach is well motivated and described clearly.
2. The advantage of the proposed modules (HR and VR) is validated through ablation studies.
Weakness:
1. The proposed method for leveraging different semantic relations is an ad hoc ensemble of separate systems. Each system also has additional dependencies (extra data, e.g., Wikidata, or externally trained models, e.g., the AllenNLP SRL model), which introduces more complexity in training.
2. It would help to show some examples to demonstrate the advantages of HR and VR. For example, on what kinds of sentences do they help, and on what kinds do they hurt?
Questions: Since the model combines three systems, I was wondering if the accuracy would drop, compared to no HR or no VR, on sentences where there is no hypernym or no verb-argument structure detected. In other words, would adding HR or VR hurt performance on sentences where they only output a zero vector? <doc-sep>The paper shows that semantic relations associated with mentions can be used to improve fine-grained entity typing. The whole model contains three parts: 1) Base FET Model 2) Hypernym Relation Model 3) Verb-argument Relation Model. Experimental results show that the integrated semantic relation information improves the final performance. The comparisons are extensive. The submission is well suited to the AKBC conference.<doc-sep>The paper describes an approach that models linguistic features extracted from the entity context and applies them to the fine-grained entity type (FET) prediction task. Experiments show that incorporating models for hypernym relation detection and semantic role labelling improves the performance.
* I would like to see more motivation for the FET task in the introduction. It is not clear why explicit type modelling is required for the downstream tasks.
* There are many papers that report increases in performance on NLP tasks, such as question answering, from incorporating these and other linguistic features, which should be mentioned in the related work, e.g.
[1] Fabian Hommel, Philipp Cimiano, Matthias Orlikowski, Matthias Hartung: Extending Neural Question Answering with Linguistic Input Features. SemDeep@IJCAI 2019: 31-39
[2] Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Dan Roth: Question Answering as Global Reasoning Over Semantic Abstractions. AAAI 2018: 1905-1914
* Semantic role labelling should be illustrated with examples and clearly motivated for the FET task.
* It would be interesting to see dataset statistics with respect to the extracted features, e.g. how many hypernym mentions were detected, and how many arguments for each of the roles in each of the datasets were extracted?
* Error analysis is missing. How many errors are propagated from the previous stages?
* "the hypernyms extracted by our approach are of high precision" What is the precision of hypernym extraction? * Gating network architecture is not clearly specified. Is it the "MLP with two fully connected layers"? Formula 3 suggests a linear combination of vectors but the description above does not correspond to this formula. * Abstract should contain more details on the datasets and results: "We conduct experiments on two commonly used datasets. The results show that our approach successfully improves fine-grained typing performance. "
The paper makes use of semantic relations (hypernym and verb-argument) to obtain state-of-the-art performance in entity typing, especially compared to strong baselines such as BERT. The paper presents an interesting message: linguistic features can still be important in the age of end-to-end methods. It is also clear that entity typing is crucial for constructing knowledge bases, making the paper quite appropriate for the proceedings of AKBC.
The paper addresses the problem of predictive modelling with missing input features. The authors formulate the problem as a latent variable model, and in addition to the standard variational lower bound (ELBO) propose to use a variational upper bound based on CUBO (Dieng et al., 2017), modified by an exponential divergence to solve the MC estimation of CUBO. They further propose a surrogate parametrization to reduce the variance in the gradients. The experimental evaluation over standard regression UCI datasets with randomly dropped features shows marginal improvements over existing baselines.
(+) pros / (-) cons
-------------------
(+) Predictive modelling over inputs with missing features is an important problem arising often in many application domains. This paper contributes to this somewhat underexplored field.
(-) The rather low documented performance benefits over simpler baselines do not justify the use of the complex model (combining 5? networks) proposed here as opposed to simpler VAE or CVAE variants.
(+) The method and the various bounds introduced are mathematically intriguing, well motivated and potentially useful in follow-up research, however, ...
(-) the paper is difficult to follow and in places the reader is left guessing what the authors meant. This should be improved. Concretely:
1. last para of section 2 - the optimization of negative L "is relatively difficult". Why? What makes it difficult?
2. last para of section 2 - "... have been no equivalent of VI ... " What about CUBO and its variants, which you pick up from in your work? Do these have some specific flaws for which they do not qualify here?
3. before equation 3 - "exponential divergence". You mean the Bregman exponential divergence? A citation to help the reader?
4. $p(u, z | \theta)$ in equations (11) and (12) seem to use the same parameters $\theta$, though for (11) $u = (y, \tilde{x})$ and for (12) it is $u = \tilde{x}$. Is this in practice the same network with two outputs?
5. But then in equation (15) these use different $z_{\theta}$ and $z_{\psi}$ samples. How is this designed and trained in practice?
6. page 5 - clarify the notation for, and explain, the gain function; what is the intuition / purpose for it?
7. Def 1 - effective parameters are those with gradient zero, ".. i.e. the set of parameters inducing tight variational approximation." How does a zero gradient achieve this in a complex non-convex problem, i.e. can't this be a local non-tight extremum?
8. page 7 DVAE/ DVAE* - you say these are MNAR and MCAR model variants as in Collier et al., 2020. Can you clarify how these translate into your rather more complex model formulation and what specifically changes in the loss (especially the EUBO part)?
Further questions for clarification/discussion
-----------------------------------------------
1) Why do you condition $y$ and $x$ on $m$ in equation (1)? These are the complete $x$ data, so they should not depend on the masking, so that $p(y, x | m) = p(y, x)$. Or is this not true? Or is it the $y$ that depends on $m$? Or is it rather the $m$ which depends on $x$? (As in some values being more likely to be masked?)
2) You introduce two latent variable models in equations (7) and (8). My understanding is that the latent $z$ is shared, as (8) is just a marginalization of (7) over $y$. You then formulate two approximate posteriors $q(z| y, \tilde{x})$ and $q(z| \tilde{x})$, the first learned through ELBO maximization and the second through EUBO minimization. There is currently no link between the two (approximate) posteriors.
Would it make sense to somehow link them? (Sorry, I don't know how; it may not be possible, or not easily.)
3) You use Bayes' rule to decompose the predictive conditional log likelihood into two terms in equation (4), of which one you bound from below (ELBO) and the other from above (EUBO). What is the effect on the predictive conditional $p(y | x)$? Is it somehow sandwiched, or not really, due to splitting and modelling the two non-conditional log likelihoods separately?
Minor text problems / typos
---------------------------
1. The first proposition on page 6 is numbered 2 (not 1) - confusing.
The paper contributes to a practically very important yet relatively little explored area of research - that of predictive modelling with missing data. The proposed method is rather complex, composed of multiple steps adding onto each other to solve a problem arising in the previous steps. These are all well motivated; however, overall the current presentation of the method is difficult to follow and should be improved to help the reader (see main review). Moreover, the documented performance benefits seem to be rather small to justify the use of such a complex method over simpler baselines. These two (lack of clarity, low performance given the complexity of the method) are for me the reasons not to consider the paper for this conference. <doc-sep>The paper considers the problem of prediction with missing (incomplete) features. The authors propose a class of generative models that includes missingness of features, and develop a discriminative learning algorithm that maximises the conditional (posterior) log-likelihood of the training data approximately. Experiments show that the method is competitive compared with existing approaches, and in particular with approaches based on VAEs. The ab-initio generative model class proposed by the authors for handling predictions with missing features is convincing. It has the advantage that the involved distributions are simple (factorising), however at the price of introducing latent variables. Learning it discriminatively requires maximising a difference of concave functions. The first term is lower-bounded by the ELBO as in VAEs. The second term requires a tractable upper bound. The authors develop a novel upper bound (starting from the alpha-Rényi divergence) that admits a stochastic gradient estimator. They further introduce a data-dependent surrogate reparametrisation in order to achieve an estimator with low variance. The technical part of the paper is concisely written and correct. The authors prove that the transformation used for the reparametrisation preserves the effective parameter subset, i.e. the subset of parameter combinations for which the overall bound is tight. This is indeed a desirable property, but is in my view not sufficient. The reason is that this effective subset can be very small and cover a subset of simple models only. Moreover, there is no guarantee that the respective gap will become small during learning. The experimental section first analyses the learning properties of the method in an ablation study. The authors then show the competitiveness of their method by comparing it with existing approaches on a subset of tasks taken from the UCI Machine Learning Repository. The description of the experiments is clear and reproducible. The experiments are however not fully convincing w.r.t. the scalability of the approach. All networks used for the model and bound construction have only one fully connected hidden layer.
This seems to be sufficient for the considered tasks from the UCI repository. However, this would be not sufficient e.g. for image classification tasks where the involved networks are usually deep CNNs. Further comments: - You mention earlier works (Ghahramani & Jordan, 1994; Smola et al., 2005), noting that their applicability is restricted to exponential families. Please explain whether these approaches are / are not applicable for the model class analysed in your work. As I understand it, the models p(y,x,z|m) considered by you are exponential families, but of course after marginalising over z, the resulting mixture model p(y,x|m) is not any more. It remains however unclear to me, whether a DCA (difference of convex functions algorithm) for learning p(y|x,z,m) can be somehow generalised for learning p(y|x,m). - I would suggest to drop the data instance superscript earlier in the text, e.g. starting from subsection 2.2. the latest. This would in my view improve readability and reduce clutter. The conceptual part, i.e. the model and the proposed learning approach are in my view concise and sufficiently novel. This outweighs the missing scalability analysis in the experimental part. I would however expect the authors to clearly address the raised conceptual questions. <doc-sep>The authors propose a new method, DIG, for discriminative tasks with missing input features. It uses latent variable models to marginalize out the label and the missing part of the features given the latent variable, in order to compute the objective, the conditional log likelihood of the label given the corrupted features. As this objective is intractable, the paper builds a conditional evidence lower bound (CELBO) that can be unbiasedly approximated using Monte Carlo samples. CELBO consists of the regular ELBO as the lower bound for the log joint probability of the label and the observed features, and an evidence upper bound (EUBO) that bounds the log marginal probability of the observed features. The derivation of EUBO involves the alpha-renyi divergence and the exponential divergence. The stochastic CELBO contains a density ratio that can lead to large variance in the stochastic gradients during optimization, so the authors propose a surrogate parameterization to bound the gradient norm. Experiments on real datasets justify the effectiveness of the variational approximations to stabilize the optimization. When compared with VAE, CVAE, and MICE, the DIG algorithm shows better or comparable predictive performance and robustness against feature corruption. **Strengths** The paper is overall clearly written. The issue it tries to solve, discriminative tasks with missing input features, has great impact for a wide range of practical machine learning problems in real life. Technically, the paper has quite some novelty including the creation of a rigorous lower bound to the true objective using recent advances in the variational inference area, and designed an effective surrogate parameterization to stabilize the optimization. **Weaknesses and Questions** 1. Sec. 3.1: More detailed explanation of the exponential divergence would be beneficial. Is there a reference for it? What role does $f(\\boldsymbol{u}; \\xi)$ play? If it can be any real-valued function, why was it chosen to be a Gaussian pdf, as shown in the appendix? 2. Sec. 3.2: Which standard automatic differentiation library was used? 
The submission mentioned both the reparameterization trick and the REINFORCE trick - which one was actually used in the experiments?
3. Sec. 3.3: I don't quite understand how this part works. All I can see is that in Eq. (17) the problematic ratio term is multiplied by $G$, which is always smaller than 1 and non-increasing based on Figure 3. But why was $G$ defined in that form? What does $\vee$ mean? Why does $G$ represent the ratio before and after the transform (the equation between Propositions 2 and 3)?
4. Sec. 4.1: Missing value processes (MCAR and MNAR). What do they mean? Are they different ways to decide what values are missing in the features, which thus lead to different versions of a dataset? If so, shouldn't we also add CVAE*, Simple* and MICE*? Could you give more explanation for the last sentence of Section 4 (saying DVAE* is more robust than DVAE)?
5. Size of the test datasets. Based on Table 3, the datasets are all quite small, ranging from 353 to 10k data points. And we further split these points into training and test, which makes the training sets even smaller. In the appendix it's said the minibatch size is 521 -- what if the entire training set is smaller than 512? How long did the algorithm take to run on YearPred? Can the algorithm be easily extended to larger datasets?
6. How does DIG work compared with other more recent imputation baselines such as MIWAE (Mattei & Frellsen, 2019) and GAIN (Yoon et al., 2018)?
Given the strengths of the paper listed above, I would recommend acceptance of this paper if the authors can provide clear feedback on the questions I summarized when reading the paper. <doc-sep>This paper proposes a new method for learning with missing data. Compared with previous approaches, the authors choose to perform discriminative learning with generative modeling so as to borrow the benefits from these two types of methods. To optimize the underlying intractable loss function, the authors start from the traditional variational lower bound ELBO and an upper bound, CUBO, from a previous work (the $\chi$-divergence upper bound [1]), and derive a lower bound for the original loss function. To solve the issue with the estimation bias as well as the potentially huge variance, the authors change the divergence function in CUBO and add the surrogate parameterization, so that the Monte Carlo estimation of the loss can be unbiased and (potentially) have smaller variance. Experimental results show the proposed method runs stably and performs comparably to or better than baseline methods.
References:
[1] Adji Bousso Dieng, Dustin Tran, Rajesh Ranganath, John Paisley, and David Blei. Variational inference via χ upper bound minimization. In Advances in Neural Information Processing Systems, pp. 2732–2741. 2017.
Strengths:
1. The idea of learning with missing data using discriminative learning together with generative modeling is interesting. As mentioned in the paper, performing this kind of learning results in a loss function that is a difference of two integrals with respect to the latent variables, which makes it harder to derive a lower bound compared to the traditional variational inference cases. To handle the term being subtracted, the authors found an upper bound that can be estimated in an unbiased way with Monte Carlo methods.
2. The exponential function in the first version of CELBO will potentially have a larger variance when estimated using Monte Carlo.
To solve this issue, the authors propose adding a regularization to the loss function while keeping the optimal solution unchanged in the zero-gap case. This also helps a lot in making training more stable, as shown in the experiments.
Weakness:
1. The topic of this paper is missing data. However, this paper does not put enough effort into studying various missing-data patterns. In the experiments, the authors only test the case where the data are missing completely at random (MCAR), which may not be the most common case in reality. The MNAR case might be a more interesting situation to study. The proposed method mostly focuses on the variational upper bound, while overlooking the modeling of the missing patterns. I suggest the authors add some experiments with MNAR data. Also, it would be better if the authors could incorporate some modeling of the missing pattern into the loss; for example, let the mask $m$ depend on $(x, y, z)$. This would make the proposed model more useful in practice.
2. From Proposition 2, we know that the surrogate parameterization keeps the optimal solution of CELBO unchanged in the zero-gap case. However, it is nearly impossible to reach the zero-gap case in reality, since it is unlikely that the selected variational distributions (i.e., the $q(\cdot)$'s) perfectly estimate the model posterior distributions. What about the "sub-optimal" cases? Is the CELBO-SP optimal solution close to the CELBO optimal solution in a small-gap (but not zero-gap) case? I understand this might not be easy, but it would be better if the authors could add some theoretical analysis on this.
3. The performance metrics in Table 1 do not show that the proposed method outperforms the baselines by large margins, meaning that the proposed method is not much better than previous approaches empirically. However, I think there are many ways that the authors could try to improve the performance. For example, the authors could try a different divergence function, a better way to add the regularization, etc.
The authors propose an interesting discriminative learning approach with generative modeling to solve the missing-data modeling problem, by extending the traditional variational lower bound (ELBO) with a novel and stable upper bound that can be estimated without bias with Monte Carlo estimation. It would be better if the authors studied the missing-data patterns further (both empirically and theoretically) as well as the preservation of the optimal solution (under sub-optimal cases). Also, the empirical performance still has room for improvement.
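To make the bound structure discussed in these reviews concrete — a conditional lower bound obtained as an ELBO on log p(y, x̃) minus an upper bound on log p(x̃), with a density-ratio term whose variance motivates the surrogate parameterization — here is a rough PyTorch sketch. It uses the CUBO-style upper bound the paper reportedly starts from rather than its exponential-divergence variant, and all callables and signatures are our assumptions, not the paper's API.

```python
import math
import torch

def celbo_estimate(log_p_yxz, log_p_xz, q_joint, q_marg, y, x_obs, K=8):
    """Monte Carlo sketch of log p(y | x_obs) >= ELBO(y, x_obs) - EUBO(x_obs).
    `log_p_yxz(y, x, z)` and `log_p_xz(x, z)` are assumed model log-densities;
    `q_joint` / `q_marg` are torch.distributions objects with rsample/log_prob."""
    # ELBO part: lower-bounds log p(y, x_obs)
    z1 = q_joint.rsample((K,))
    elbo = (log_p_yxz(y, x_obs, z1) - q_joint.log_prob(z1)).mean(0)

    # CUBO-style part: upper-bounds log p(x_obs); exp(2 * log_ratio) is the
    # density-ratio term whose variance motivates the surrogate parameterization
    z2 = q_marg.rsample((K,))
    log_ratio = log_p_xz(x_obs, z2) - q_marg.log_prob(z2)
    eubo = 0.5 * (torch.logsumexp(2.0 * log_ratio, dim=0) - math.log(K))

    return elbo - eubo
```

Note that this simple estimator of the upper-bound term is biased, which is exactly the gap the paper's exponential-divergence modification is meant to close.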
While generative models can be used to impute data, this work proposes a novel discriminative learning approach to optimize the data imputation phase by deriving a discriminative version of the traditional variational lower bound (ELBO). The resulting bound can be estimated without bias via Monte Carlo estimation, leading to a practical approach with encouraging experimental performance. The reviewers recognised the novelty and suggest that this approach, given its wide applicability, could be considered for an oral presentation.
The paper studies an online Deep Equilibrium Model (DEQ) method for Regularization by Denoising (RED). The proposed ODER incorporates randomized processing of measurements. The ODER algorithm aims to bypass the high computational/memory complexity of DEQ caused by the high dimensionality of the measurement space. The introduced online backward pass is demonstrated to lead to a more scalable and flexible DEQ framework for inverse problems. Based on standard assumptions in the analysis of fixed-point iterations and SGD, the authors also give some analysis of ODER's convergence. The authors have also conducted experiments to analyze the behaviour of ODER and demonstrate its effectiveness. As an accelerated DEQ-RED, the proposed ODER is a useful extension to DEQ and RED.
Strengths:
1. The studied problem is interesting and worth pursuing, and the authors have done significant work.
2. The analysis and experiments are extensive and can support the claims.
3. The paper is well written and easy to follow.
Weakness:
1. 'Online processing of measurements' and its analysis tricks are not new.
2. Some experimental settings and studies on hyper-parameters are not well discussed (see the points in [Questions]).
My initial rating of the paper is weak accept. I am open to raising my score if the authors can address my concerns during the rebuttal.
Suggestions: there are other relevant works that are worth mentioning. As far as I know, in addition to the denoiser prior, the DL paradigms below also incorporate knowledge of the data acquisition to solve inverse imaging problems, and have been studied for the purpose of learning a generative prior [1], Neumann inversion [2], or an equivariance prior [3]. From a broader view, it is necessary to add some review of or comparison with these related paradigms.
[1] Bora, A., Price, E., & Dimakis, A. G. (2018). AmbientGAN: Generative models from lossy measurements. In International conference on learning representations.
[2] Gilton, D., Ongie, G., & Willett, R. (2019). Neumann networks for linear inverse problems in imaging. IEEE Transactions on Computational Imaging, 6, 328-343.
[3] Chen, D., Tachella, J., & Davies, M. E. (2021). Equivariant imaging: Learning beyond the range space. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 4379-4388). <doc-sep>Edit - score raised by 1 point based on author revisions. This work proposes ODER, an online learning method for deep equilibrium learning for Regularization by Denoising. DEQ is a recently proposed framework for memory-efficient learning of an infinite-depth unrolled network as implicitly defined by a fixed point of an operator. RED is a specific type of iterative algorithm that can incorporate learned priors, and it has the fixed-point operator structure. The authors therefore apply stochastic gradient descent across the measurement direction for each input in the training set, and derive the corresponding update equations for gradient-based learning. They show that their approach is able to get similar-quality reconstructions with reduced memory and training time, compared to full-fledged DEQ-RED.
Strengths: The paper is well-organized and the presentation is clear. The work nicely connects online learning with DEQ, with specific application to RED. There is a "free-lunch" result: both memory and time to train are reduced, without any sacrifice in image quality. For this reason this work is exciting. The work is also supported by theoretical results and is demonstrated for several different applications.
Weakness: As the authors state, online learning for RED is not a new concept, and both are special cases of running SGD across the measurement direction (e.g. coils in MRI), which is not novel on its own; for example:
[1] Ong, F, Zhu, X, Cheng, JY, et al. Extreme MRI: Large-scale volumetric dynamic imaging from continuous non-gated acquisitions. Magn Reson Med. 2020; 84: 1763–1780. https://doi.org/10.1002/mrm.28235
The CT experiment had only one subject in the test set, which seems prone to overfitting. I don't think that Section 6 sufficiently describes limitations of the work. Could the authors discuss limitations of the approach, for example the required theoretical assumptions, implementation considerations, etc.? <doc-sep>This paper introduces an online deep equilibrium learning approach for large-scale inverse imaging problems. The method is built upon the recently proposed deep equilibrium architecture that unrolls an optimization algorithm (e.g., steepest gradient descent for regularization by denoising) into a neural network with a potentially infinite depth (number of iterations). This paper goes beyond that by presenting an online variant to enable large-scale inverse imaging applications where the full evaluation of the data-consistency term is quite expensive. It thereby demonstrates strong performance on three data-intensive inverse imaging tasks -- its reconstruction quality is comparable to the full-batch solution while being 2-3x faster, suggesting the benefits of the proposed method. The major strength of this work lies in the strong empirical evidence on three practical large-scale inverse imaging problems. Plus, the paper is nicely written and easy to follow. The theoretical convergence of the algorithm is also rigorously analyzed. Nevertheless, the technical novelty of this work is rather incremental. The methodology here is mostly credited to [A], which first connected the deep equilibrium model and PnP/RED methods. The nontrivial technical parts thus can only be attributed to the online version, as well as the theoretical analysis of convergence, but none of them could be viewed as significant contributions, unfortunately. In fact, the deltas here are well known in the PnP literature; e.g., online versions of the PnP and RED algorithms were proposed in [B] and [C] respectively, and both of them have already established theoretical convergence. Consequently, given the pros and cons on balance, I feel this is a very borderline paper, and I vote for borderline accept tentatively *** **I raise my score because of the originality of the main theory highlighted in the rebuttal.**
[A] Deep Equilibrium Architectures for Inverse Problems in Imaging, TCI 2021
[B] An Online Plug-and-Play Algorithm for Regularized Image Reconstruction, TCI 2019
[C] Block Coordinate Regularization by Denoising, NeurIPS 2019
N/A <doc-sep>In this paper, to address the issue that the training of Deep Equilibrium Models (DEQ) can still be a significant computational and memory challenge in applications that require processing a large number of sensor measurements, the authors propose Online Deep Equilibrium Learning for Regularization by Denoising (ODER) for inverse problems, which adopts stochastic processing of measurements within an implicit neural network.
Experiments on three applications demonstrate the potential improvements in training/testing complexity. This paper solves data-intensive imaging inverse problems and has some strengths:
1) The main advantages of the presented technique are the reduced execution time w.r.t. the main baseline RED (DEQ) and the lower memory consumption for the measurements.
2) The paper is overall well structured and written.
3) The provided references are comprehensive and adequate.
4) The work seems correct, since the proposed approach builds on solid prior work, and the authors theoretically analyse ODER regarding its convergence and its ability to approximate the traditional DEQ approach.
However, there are some issues that I would like to highlight:
1) Novelty/Originality/Contribution: Although integrating online processing into the DEQ framework is overall a novel attempt at solving inverse imaging problems using PnP/RED operators, it is largely a straightforward borrowing and stitching together of ideas from the existing literature. I worry that the contribution of the paper is limited, since the only improvement to DEQ and Ref. [35] is the addition of online processing. This seems like a small and straightforward contribution to pre-existing works and hardly motivates the publication of a new paper.
2) Significance: This paper specifically addresses issues in imaging applications that require processing a large number of sensor measurements, which may limit its impact.
3) Experiment: In Figure 3 and Tables 2 and 3, the SSIM and SNR of the proposed approach are not better than RED (DEQ) in experiments on CT and MRI images, so the only real contribution is the reduced execution time.
Yes
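The online processing of measurements that the reviews above debate can be illustrated with a short Python sketch of a stochastic RED-style fixed-point iteration, where each step uses only a random subset of measurement blocks for the data-consistency gradient. The operator form, step sizes, and names are our assumptions and are not the paper's actual algorithm.

```python
import numpy as np

def online_red_fixed_point(y_blocks, A_blocks, At_blocks, denoiser, x0,
                           gamma=1e-3, tau=1.0, minibatch=4, iters=100, seed=0):
    """Rough sketch of an online forward pass: each iteration of the RED
    operator uses only a random subset of measurement blocks (e.g. MRI coils
    or CT view subsets) for the data-consistency gradient, rescaled so the
    stochastic gradient stays unbiased."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    B = len(y_blocks)
    for _ in range(iters):
        idx = rng.choice(B, size=minibatch, replace=False)
        grad_data = (B / minibatch) * sum(
            At_blocks[i](A_blocks[i](x) - y_blocks[i]) for i in idx)
        grad_red = tau * (x - denoiser(x))          # RED regularisation term
        x = x - gamma * (grad_data + grad_red)      # one step of the stochastic operator
    return x
```

The memory and time savings the reviews mention come from never touching all B measurement blocks in a single step; the corresponding online backward pass is the part the paper analyses theoretically.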
The paper proposes a learning method (specifically a deep equilibrium learning approach) for 'regularization by denoising', a plug-and-play method for solving inverse problems. After the rebuttal, all reviewers support acceptance of the paper. The reviewers find the paper to be well written, the problem to be interesting, and the claims to be well supported (reviewer Hjnn), both empirically (reviewer uDGc) and through theory. Reviewer A7f5 finds the work particularly exciting since both memory and training time are reduced, without sacrificing image quality. Based on my own reading and the unanimous support of the reviewers, I recommend acceptance of the paper. A nice contribution!
This paper proposes a new knowledge distillation framework for directed graphical models based on the reparameterization trick. The new distillation framework overcomes the intractable marginals in marginalized distillation, and the error accumulation in local distillation. Empirically, the new distillation framework surpasses the baselines on deep generative model compression, VAE continual learning, and discriminative DGM compression. I think the proposed distillation framework is novel, and the entire paper is well-organized. However, I have the following concerns/questions: 1. The title claims a unified KD framework for deep DGMs. However, the proposed distillation loss only applies to DGMs where the latent variable z has a reparameterization form. I believe the authors should highlight this limitation of their framework. 2. In equation (3), the expected KL divergence is computed over $p_{\phi}(y_{<j}|x)$. I am wondering if the following three ways would improve the performance of local distillation? Though they are no longer equivalent to equation (2), my intuition is that, for the original objective (equation (3)), the conditional KL divergence is computed with respect to the teacher's distribution $p_{\phi}(y_{<j}|x)$; however, at inference time, the conditional distribution of the student model is computed based on its own distribution $p_{\theta}(y_{<j}|x)$. This makes the training objective inconsistent with the test objective. - $\mathcal{L}_{kd} = \mathbb{E}_{p_{data}(x)}\left[\mathbb{E}_{p_{\theta}(y_{<j}|x)} [KL(p_{\phi}(y_j|y_{<j}, x) || p_{\theta}(y_j|y_{<j}, x))] \right]$ - $\mathcal{L}_{kd} = \mathbb{E}_{p_{data}(x)} \left[ \mathbb{E}_{p_{\theta}(y_{<j}|x)} [KL(p_{\theta}(y_j|y_{<j}, x) || p_{\phi}(y_j|y_{<j}, x))] \right]$ - $\mathcal{L}_{kd} = \mathbb{E}_{p_{data}(x)} \left[\sum_j (1-\lambda)\cdot \mathbb{E}_{p_{\phi}(y_{<j}|x)} [KL(p_{\phi}(y_j|y_{<j}, x) || p_{\theta}(y_j|y_{<j}, x))] + \lambda\cdot\mathbb{E}_{p_{\theta}(y_{<j}|x)} [KL(p_{\phi}(y_j|y_{<j}, x) || p_{\theta}(y_j|y_{<j}, x))] \right]$ (for some $\lambda \in [0, 1]$) 3. Can you explain more about why you remove $\epsilon_i$ in equation (6)? I don't understand why such a change will better penalize the dissimilarity of the latent variables $z_i$. 4. Can you provide some qualitative comparisons between the samples from the distilled DGMs on the CelebA dataset? Overall, I like the solution proposed by this paper to address the issues in marginalized distillation and local distillation. However, I believe the authors did not fully investigate the failure reasons for local distillation. The argument in the current paper is somewhat vague and not well-supported. I believe a more rigorous analysis and carefully designed experiments are needed to illustrate the argument (e.g., as I mentioned above). I will increase my rating if the authors can provide more convincing results. <doc-sep>The authors propose a method for knowledge distillation specifically for Deep Directed Graphical Models. The authors compare their method with marginalization methods, which integrate the latent variables out, and the factorized (local) method, which distills knowledge between teacher and student on each factor. They validate their approach on continual learning, model compression, and discriminative learning. # Writing: * The paper is relatively readable. The main idea came across clearly, but the paper would benefit from thorough editing, especially in the experiment section.
# Method: * The paper's main claim is that the local method can result in the accumulation of errors. However, the second term in Eq 7 somewhat does the same. In fact, one can view the proposed method as a combination of the marginalization and local methods together. * I am not convinced by the idea of the semi-auxiliary graph that yields the loss function in Eq 3,4. The example in Fig 1 is too simple. For example, when we factor out the z's, the resulting graph on the observed variables is no longer a DAG. Consider $y_1 \leftarrow z \rightarrow y_2$: if $z$ is factored out, then $y_1$ and $y_2$ become connected by an undirected edge. How does Eq 3 handle such a case? * The advantage of the method is mostly for DGMs with continuous latent variables. For discrete latent variables, the local distillation is used. I'm not sure how well this scales to models with a large number of discrete latent variables. * One limitation is that the teacher and student should have the same architecture. Can it be extended to cases when the architectures are not necessarily the same? # Experiment: * The results in the experiment section are not convincing. For example, the difference between the performances of different methods is within the same standard error, especially for the local method and the proposed method. * The choice of some of the tasks is questionable for evaluating DGM knowledge distillation. For example, why is continual learning a good task to show that the knowledge distillation method is working? The argument made in the paper is that the delta of forgetting the previous task is small, which is acceptable for continual learning; however, it is not clear why continual learning is a good task to showcase knowledge distillation. * In Figure 5: it is not clear why the authors claim that the proposed method produces better results. For example, why is (c) better than (e)? This experiment is very qualitative and subjective. * None of the experiments (except HM) really motivate knowledge distillation for DGMs. VAE is a very simple DGM, and there is no real structure in the graphical model. The authors could have simulated data from a hierarchical graphical model and used a complex teacher to learn that model, then applied their method to show their approach can recover the true model from the teacher. For example, see [1] for an example of such a simulation -- a simple mixture model. * I strongly recommend the authors look into the metrics introduced in [2]. Several metrics and experiments presented in that paper can be adopted or adapted for DGM knowledge distillation. # References [1] https://arxiv.org/pdf/1603.06277.pdf [2] https://arxiv.org/pdf/2106.05945.pdf * The paper is relatively clear. * The experiments are not well-chosen and the results are not convincing. * The method section does not explore the idea of knowledge distillation for DGMs deeply. There are questions about the generalizability and scalability of the proposed method.
(A) The paper converts each hidden random variable to a deterministic variable via the reparameterization trick. This is a well-known technique. The VAE, for example, uses this technique during training. Although the paper argues that "we do not primarily use reparameterization trick for model training. Rather, we leverage it to convert the latent variables z in DGMs to deterministic variables so that we can effectively distill knowledge from a compact form of DGM", isn't this very straightforward? I don't see any big difference between using the reparameterization trick during training and during KD. The authors should provide a discussion on this. (B) I don't see a difference between equations (4) and (2) when applied to a VAE because, during VAE training, the sampling over the auxiliary random variables $\epsilon$ is implicitly included even though we just apply equation (2). (C) Equations (5) and (6) look very intuitive and straightforward. I am more interested in knowing what theoretical guarantee we can have when using these losses. (D) For experimental evaluation, could you compare your model with more state-of-the-art KD baselines? (e.g., Figures 4 and 5 and Table 1). I am mainly concerned about the novelty and clarity of this paper. At the current stage, I don't recommend the paper for acceptance. <doc-sep>The authors propose a unified Knowledge Distillation technique for general deep directed graphical models. They use the reparameterization trick on the intermediate latent variables of the original DGM network and the student network. This converts the networks to a compact semi-auxiliary form. Then they use a surrogate distillation loss (combined with a latent loss) to reduce the error accumulation over the chain of random variables. They discuss the similarity of their technique with others and demonstrate its performance on 3 applications. Pros: 1. The authors do a good job of giving basic preliminaries and of ruling out the naïve marginalized distillation and local distillation approaches 2. After the compact DGM reductions for both the teacher and student networks, each target variable has direct dependence on the input x and prior y_i’s. This is a neat approach. 3. Proposition 3.1 looks correct to me, when KL divergence is chosen. Concerns & questions: 1. Just verifying: The reduction to a compact semi-auxiliary form is a novel contribution of this work, correct? (Fig. 1c, 1d, 3e) 2. How do we get the correct choice of deterministic transformation g(.) for the reparameterization trick of the original teacher network? $p_\phi(\cdot)$ is a neural network, so I am curious how to get g(.) without loss of accuracy. (Algo. 1, lines 11-12). Please give a detailed example. 3. The chain error accumulation is reduced because the number of layers is reduced in the semi-auxiliary form, right (pg 5, para 2)? Or is there any other rationale to it? 4. I wonder how the performance is when only using $L_{sd}$ in eq. (7), i.e., the $\lambda=0$ setting? 5. I am a bit confused about the VAE compression expts. Each layer is considered a latent variable `z', I presume. In this case, what is the student network chosen (I might have missed it)? Also, I am curious to know how well the proposed $L_{sd}$ loss works in the $\lambda=0$ setting. Kindly clarify this expt. The work is good and the paper is well written. I feel the contribution of the work is not that novel. My confidence in evaluation will increase once the authors address the concerns raised above.
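As a concrete reference for question 2 about the deterministic transformation g(.), the case I have in mind is the standard Gaussian reparameterization used in VAEs; the sketch below is a generic illustration (function and variable names are mine, not the paper's):

```python
import torch

def reparameterize_gaussian(mu, log_var, eps=None):
    """Standard Gaussian reparameterization: z = g(eps; mu, sigma) = mu + sigma * eps,
    which turns the random latent z into a deterministic function of (mu, log_var)
    given auxiliary noise eps ~ N(0, I)."""
    std = torch.exp(0.5 * log_var)
    if eps is None:
        eps = torch.randn_like(std)
    return mu + std * eps
```

For conditionals without such a closed-form g(.), e.g. discrete latents, this is exactly where I would expect the approach to need extra machinery, hence the question.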
The paper proposes a framework for distilling deep directed graphical models where the teacher and student models have the same number of latent variables $z$. The key idea is to reparameterize both models in terms of standardized random variables $\epsilon$ with fixed distributions and train the student to match the conditional distributions of the observed variables/targets given the values of the standardized RVs $\epsilon$. The approach aims to avoid the error compounding that affects the local distillation approach, where the student is trained to match conditional distributions of the teacher model (without the above reparameterization). To deal with discrete latent variables and vanishing gradients, the authors augment the target-matching loss with a latent distillation loss that matches the local distribution for each $z_i$ given the standardized variables $\epsilon$ it depends on. Positives -The paper tackles an important problem. -The idea of using reparameterization for distillation in this way makes a lot of sense for continuous latent variables and could be impactful. -The experiments provide some evidence in support of the idea. Negatives -There are considerable issues with the clarity of writing: for example, it is really not clear how (and why) the method is supposed to work for discrete latent variables. The explanation provided by the authors in their response to the reviewers was helpful but still not clear enough. -The fact that the teacher and student models need to have the same number of latent variables (and perhaps even the same structure) is a big limitation of the method given the claim of its generality, and thus needs to be clearly acknowledged and discussed. For example, the method cannot be used to train a student model with fewer latent variables than the teacher, which seems like a very common use case. -The experimental evaluation is extensive but insufficient, in large part due to the evaluation metrics. Given that VAEs are trained by maximizing the ELBO (and distilled by minimizing a sum of KLs), it makes sense to also evaluate them based on the ELBO rather than solely on the FID, as is done in the paper. The VRNN experiment would be much more informative if it included a quantitative evaluation (e.g. based on the ELBO). In summary, the paper has considerable potential but needs to be substantially improved before being published.
Summary: The authors propose a novel combination of VAEs and Flow models, where the decoder is modelled through a conditional flow taking as input a “local” representation of the size of the input image and a “global” representation output by the encoder. The authors evaluate the proposed method on density estimation, quality of generations and linear probing on a variety of datasets and show improvement over state of the art. Great: * Conceptually simple method that seems to work quite well in practice, for this class of models. * The linear probing experiment is quite convincing in justifying the use of “global” and “local” characterizations of the learned representations. So are the interpolations. Could be improved: * It’s not clear to what extent each of the proposed refinements to Glow (reorganization, different splits, fine-grained multi-scale architecture) improves Glow’s performance. The authors propose a novel combination of known methods, evaluate it extensively and show considerable improvements over current state of the art. A clear accept. <doc-sep>##### Summary This paper aims to improve a Normalizing Flow generative model, in particular Glow, by conditioning the flow on global information of the image in the form of a latent vector learned with the VAE framework. This latent vector is injected at the scale and bias terms of the affine coupling layers of the flow (inspired by what Style-GAN does at the batchnorm layers). Unfortunately, critical aspects of the method remain unclear or unspecified. To the best of my understanding, the paper lacks a clear explanation of the complete pipeline used for training the method and the final objective function. For evaluation, sampling and likelihood computation, procedures are not completely specified either. ##### Pros - The general ideas of the paper are well motivated. Combining the advantages of explicit likelihood and latent variable generative models is in my opinion an extremely interesting research direction. - The authors propose architectural improvements to Glow and craft a conditional version that can effectively incorporate additional information to the flow. - The authors model achieves improved or competitive density estimation, sampling, and downstream performance across various datasets, compared to the state-of-the-art. - Some degree of local and global properties disentanglement is demonstrated in the qualitative results, showing the proposed direction is a promising one in that regard. ##### Cons The main drawback is in my view the presentation of the method. The method part of the paper (Section 2) mixes theoretical justification with architecture details and fails to clearly explain the full pipeline and training objective. This made it very difficult if not impossible to analyze it and draw conclusions. The second half of the paper is dedicated to experimental results, yet the sampling procedure and likelihood computation are not clearly explained either. I think the submission would have been much stronger if the authors clearly explained the whole method and dedicated more of the paper to a careful analysis and justification of the design decisions and of some of the claims (e.g. avoiding posterior collapse, global/local disentanglement). Going into detail, by reading the abstract I get the impression that the VAE framework is used, and the normalizing flow is used to model the generative distribution $p(x|z)$. 
Yet the abstract also claims to only use a plain log-likelihood objective as in explicit likelihood models, instead of the VAE ELBO. Alternatively, I thought the latent code was learned with a separate VAE, but the introduction states that the generative flow is "[embedded] in the VAE framework to model the decoder". Assuming that the VAE framework is used with a (conditional) normalizing flow for the decoder, Section 2 introduces more confusion when the authors state "we feed $z$ as a conditional input to a flow-based decoder, which transforms $x$ into the representation $\nu$ with the same dimension." This is confusing since typically the decoder input should be a low-dimensional representation, but here it seems to be the image as well, so the concept of decoding seems ill-placed. Moreover, if this is the case, what would be the point of the reconstruction term in the ELBO if invertibility guarantees perfect reconstruction? Shouldn't the flow output $\nu$ be a stochastic latent code as in VAEs? Is a prior density regularization imposed on $\nu$ as well? Finally, another option would be that everything is trained with the negative log-likelihood cost of the normalizing flow. This would be consistent with the claim in the abstract that only a plain log-likelihood is utilized. But in that case, what is the justification for using a stochastic encoder if it is not regularized? How can it be guaranteed that $z$ will not be ignored? What is the role of $z$ during sampling? Does the likelihood computation involve $z$ or just $\nu$? I apologize for writing out my internal thought process, but I also wanted to convey that even if the method were clarified to be one of the options I described, or another one, many of the design decisions would still require extra justification and analysis. Other comments: - Although Figure 1 shows capturing of some global color properties, looking at the interpolations in Figure 5, there seems to be little variability w.r.t. $z$, so maybe the claims of learning "decoupled" and "disentangled" representations require more justification. - About the initialization of the weights of the last linear layer with zeroes (Section 2.1): wouldn't this create a null output and thus a null backward gradient in the first iteration? Even if a non-zero bias were used, wouldn't an unbreakable symmetry condition be produced? ************ After Rebuttal: I thank the authors for their multiple clarifications, and apologize for my initial misunderstanding. I understand now that the flow model is used to compute $p(x|z)$ as a function of $x$ and $z$. Maybe the "decoder" terminology is a bit confusing here, but this is quite a nice idea overall. It would have been nice to see multiple samples from $p(x|z)$ for a fixed $z$, to evaluate the expressiveness of the model. I'm raising my score to acceptance. PS: Some typos remain in the revised version, e.g.: "varnishes", "tne" <doc-sep>The paper introduces a mixture of flows with the specific intent to disentangle global and local representations, to improve the visual quality of samples. Architectures are borrowed from the style-gan literature. The contributions are mainly architectural and empirical. The paper demonstrates improved visual quality over other normalizing flow methods. Pros: - The paper introduces a straightforward latent variable model for flows which is designed in such a way that training on NLL gives good sample quality, which is measured in FID. Even further, the model also performs quite well on the NLL objective itself.
Since the focus of the normalizing flow literature has been mainly on NLL, I think this paper is a nice complement to the existing literature. Weaknesses: - The paper never puts the equations from the background section together in the final objective. It would be helpful to have an equation representing the final objective with some of the relevant variables (i.e. log p(x) >= ... with variables x, z, and v); see the sketch appended at the end of this review for the kind of equation I mean. - The paper does not introduce much novel theory or methodology. This is not really a problem, but the paper can be better connected to existing work and clarified in this respect. The model the authors propose is very reminiscent of "infinite mixtures of flows" as outlined by (Papamakarios et al. "Normalizing Flows for Probabilistic Modeling and Inference" page 32). An example would be "Continuously Indexed Flows" by Cornish et al. Note their method was introduced with a different intent and in a different manner, so I think it would only make the paper better to cite these methods. - Conditioning on a context variable in flow layers is not new (see Kingma et al., "Improved variational inference with inverse autoregressive flow." and Lu et al. "Structured Output Learning with Conditional Generative Flows."). This is not really a problem, but again this should be clarified. - How much added computation is required by the encoder model plus FCnet in eq. 7 compared to the Glow-refined model on which the proposed method is based? - Perhaps the naming "compressing encoder" is not particularly useful. It implies a direct connection to actual image compression, which is as far as I understand not the case. Other than that, this seems like a fairly standard VAE encoder other than the size difference between x and z.
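To be explicit about the kind of equation requested in the first weakness above, and under my reading that the conditional flow parameterizes the decoder $p(x|z)$ (this is my assumption, not a statement of the authors' exact objective), the combined objective would look roughly like

$$\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] - \mathrm{KL}\big(q_\phi(z|x)\,\|\,p(z)\big), \qquad \log p_\theta(x|z) = \log p_{\nu}\big(f_\theta(x;z)\big) + \log\Big|\det \tfrac{\partial f_\theta(x;z)}{\partial x}\Big|,$$

where $f_\theta(\cdot\,;z)$ is the $z$-conditioned flow and $p_\nu$ is the base density on $\nu = f_\theta(x;z)$. Writing something like this out once in the paper would remove most of the ambiguity raised by the other reviewers.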
The paper proposes a hybrid VAE-normalizing-flow for extracting local and global representations of images. While the reviewers found the model itself to be "conceptually simple" and "straightforward", all were convinced by the empirical evaluation that, indeed, interesting representation learning is going on, resulting in a unanimous vote to accept.
This paper proposes a few tricks to improve the stability of BERT fine-tuning, which include a standard Adam optimizer (with bias correction), re-initialization of the top BERT layers, and longer training. It provides an extensive study on the GLUE benchmark showing how important these tricks are for small tasks (such as RTE/MRPC) which have fewer than 1K training samples. The paper is well written and provides an insightful analysis. Although it provides several useful tips for practitioners, it lacks novelty: for example, the Adam bias correction is from the original Adam paper (as also pointed out by [2]), and the benefit of training longer is also observed by [1]. Gradient clipping may also help stabilize the training, and it would be great to have a discussion of it as well. Lastly, do these approaches help on large tasks, such as MNLI/QQP? It would be great to have experiments in a few settings, covering both small and large tasks. [1] Nakkiran et al, Deep double descent: where bigger models and more data hurt. [2] Mosbach et al, On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines <doc-sep>The paper focuses on instability issues in BERT finetuning on small datasets. They list three factors which lead to instability, and provide simple fixes for each: 1. Lack of the bias correction term in BertAdam -- Fix was to use standard Adam 2. Using all pretrained layers for finetuning -- Reinitializing the last few layers before finetuning. 3. Training for a predefined number of epochs -- Train for a large number of epochs. The fixes proposed reduce the variance in the results, and in most cases also improve performance. They also show that several proposed solutions to fix training instability lose their impact when the aforementioned fixes are incorporated. Overall, I like the paper; the observation about reinitializing top layers of BERT was interesting and counterintuitive to me, and I think this will be the most important contribution of the paper. Although not directly related to BERT, this paper (https://arxiv.org/pdf/1804.00247.pdf) also suggests training for more epochs. This paper should be cited here. The tasks considered in the original BERT paper had large datasets, so I think the 2-3 epoch suggestion was tuned to those. The result about BertAdam being unstable in low-data settings was a nice contribution. <doc-sep>### Summary This paper investigates fine-tuning BERT for few-sample datasets. Notably, the authors find that the bias correction is omitted in BERT-Adam. They find that the original debiased Adam is better than BERT-Adam. Besides, they also find that re-initializing top layers can speed up learning and achieve better performance. These two findings are interesting. The other finding, that fine-tuning BERT for longer helps, is somewhat incremental. ### Strengths * The two findings mentioned above are notable. * The authors conduct extensive experiments to support their claims. ### Weaknesses and Questions * Table 1 shows the results of re-init but does not show how many top layers are re-initialized for each task. * I suggest the authors investigate debiased Adam and re-init on datasets with enough samples, like MNLI or QNLI. If they can achieve a slight improvement or at least not degrade the performance, we can just conveniently use the same fine-tuning method for most datasets. * The meaning of Int. Task is not explained.
<doc-sep>Large language model (LM) architectures, such as BERT, XLNet, etc., are not generally trained from scratch, but rather used as pretrained models. Among all of them, BERT is one of the most widely used, and its use on downstream tasks mainly consists of a stage of fine-tuning, where the newly added layers are trained and the rest of the parameters of the network are left unfrozen, and hence are adjusted slightly to better fit the new task. However, this step of fine-tuning BERT is known to be quite unstable, and depends on a large set of factors, especially the initialization. Since the final performance on these downstream tasks can vary notably, different approaches have been proposed to circumvent this, but still the most common solution consists simply of choosing the best performing model, from a few random initialisations, using the validation set. ##### Summary In the current paper, the authors aim at tackling the aforementioned problem in a more grounded way. They investigate more deeply the possible causes of these instabilities, and propose methods to counteract these pitfalls. Hence, they propose three simple, yet effective, approaches to stabilise the training and ensure better performance: a modified optimiser, the use of randomly initialised top layers, and more training steps. They provide a large collection of results, compare all these solutions to previous works, and discuss differences and similarities. Thanks to the analyses carried out, the current paper results in an exhaustive study on how to “safely” fine-tune BERT, and the different factors that are to be taken into account when making use of these models. ##### Strong and weak points I would like to start with the weakest point of the paper: it actually does not present anything clearly novel, innovative, or groundbreaking. All the solutions proposed are inspired by previous approaches, or are just slight modifications of existing methods. But this does not mean the paper is not valuable, as I do believe it is. The instability while fine-tuning large LMs on downstream tasks is a well-known problem, but it has not yet been tackled exhaustively, and I do believe there do not exist clear guidelines and/or modifications that enable easily circumventing a critical weakness of these models. But I consider that this paper succeeds at precisely this important task, thanks to the extended and exhaustive study it presents, and how it proposes three simple modifications that seem to solve this pitfall in most scenarios. Besides, the paper is quite well written, and presents in a clear manner the problems with the models, some intuition about the cause of those issues, and then the solutions to overcome them. All the solutions are sufficiently justified, and are intuitive and simple. The latter, instead of being a weak point, is for this particular problem more of an advantage, as it will allow effortless adoption. Their improved performance is ensured thanks to the large set of benchmarks, on various datasets, the authors have compiled in the current manuscript. This is indeed another strong point, as all the solutions proposed are also tested under different conditions, with more or fewer training steps, and different numbers of top layers randomly initialised. ##### Decision, and key reasons I believe the paper is ready to be accepted.
Overall, it is an interesting and useful paper that will help many NLP researchers, and end-users of BERT, fine-tune better models, obtain improved performance, and therefore start from a better baseline for their endeavours. And all this with just some simple and intuitive modifications and guidelines. All the proposed methods and suggestions are not drawn from a small handful of tests, but rather from a large collection of simulations, on different and varied datasets, with disparate starting conditions, and run over a fair number of random initialisations. Therefore, I believe the authors have taken their time, and simulation time, to ensure that the presented results are robust and consistent, which is also worth remarking. ##### Questions and additional evidence Although I believe the paper is nicely written, and compiles all the required results and tests, I would appreciate it if the authors could comment further on the following points: * I do believe there is a reason for not performing bias-correction in BERTAdam, and therefore introducing it back might be affecting BERT training and fine-tuning in some specific, I guess negative, way. Could the authors comment on this? Or could they share their understanding of why the correction was removed for BERTAdam? * In Figure 4, you suggest that with 5 to 10 random trials, the bias correction will achieve good results. However, observing the plots for all the datasets, we realise that that number of random trials may actually benefit the non-corrected version more, as in most of the datasets the performance is either higher or at least comparable. And although the variance is larger, we might still ensure at least a similar result. Could you comment on this? Wouldn't the corrected version be a better option when no random restarts are envisaged? * For the re-init, when training for just 3 epochs, it surprises me that we could train the last 6 layers with just this reduced amount of data and training steps. Even more surprisingly, according to Figures 14-16, the weights for these last 6 layers are the first to stabilise, even though they started from scratch and they are supposed to be critical for the downstream tasks. Could you comment on this? I guess my understanding is wrong, and I would therefore appreciate some further insights. * Also, on the Re-init scheme, you mention that the number of layers to re-initialize depends on the task. Could you in any case offer here a general rule of thumb? ##### Extra feedback Finally, I would like to conclude by listing some small typos and errors I could spot in the manuscript: * Page 7, after Results, the reference to the Table is wrong. * Page 8, Table 2: I believe the result for the RTE - Int. Task is mistyped. I guess it should be something around 71.8. * Page 14, section E, Effect of Re-init…: the reference to the figure is wrong. * The caption for all figures 14 to 17 is wrong, as it should read fine-tuning. These are the ones I could find, but it is not an exhaustive list. In any case, I would like to highlight the quality of the present manuscript, in terms of clarity and writing.
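As a concrete reference for the bias-correction question above, the difference between the original Adam update and a BERTAdam-style variant that drops the correction is a two-line change; the sketch below is a generic scalar illustration (my own code, not the authors' implementation):

```python
import math

def adam_update(p, g, m, v, t, lr=2e-5, b1=0.9, b2=0.999, eps=1e-6, bias_correction=True):
    """One Adam step on scalar parameter p with gradient g (step index t >= 1).
    With bias_correction=False this mimics the BERTAdam-style variant that omits
    the 1/(1 - beta^t) terms, so the early moment estimates are biased toward zero."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    if bias_correction:
        m_hat = m / (1 - b1 ** t)
        v_hat = v / (1 - b2 ** t)
    else:
        m_hat, v_hat = m, v
    p = p - lr * m_hat / (math.sqrt(v_hat) + eps)
    return p, m, v
```

If my algebra is right, the correction multiplies the effective step by $\sqrt{1-\beta_2^t}/(1-\beta_1^t)$, which is well below 1 for small $t$, so it acts as an implicit warmup that damps the earliest, noisiest updates; dropping it removes that damping, which may be part of the story behind the instability.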
This paper addresses some of the well-documented instabilities that can arise from fine-tuning BERT on a dataset with few samples. Through a thorough investigation, they highlight various bizarre behaviors that have a negative impact on stability: First, that BERT inexplicably uses an unusual variant of Adam that, in fact, harms behavior; and second, that people tend to undertrain BERT on some downstream tasks. Separately, they find that reinitializing some of the final layers in BERT can be helpful. Since fine-tuning BERT has become such a common way to attack NLP problems, these practical recommendations will be quite welcome to the community. These findings address issues raised by recent work, so the paper is timely and relevant. The paper has a thorough empirical analysis and is clear to read. There is a concurrent ICLR submission with similar findings, and this paper stands on its own. Reviewers all agreed that this paper should be published.
Authors assessed how their adaptive temporal fusion network performs on public datasets such as Something V1&2, Kinetics, etc. The contribution of this paper is in proposing an approach to automatically determine which channels to keep, reuse, or skip per layer and per target instance, which can result in efficient action recognition. STRENGTHS: The proposed method is model-agnostic, making it easy to use as a plugin operation for other network architectures. Reusing history features when necessary makes the network capable of strong temporal modeling. CONCERNS: The paper has examined the temporal fusion module on BN-Inception and ResNet models, while an evaluation on more recent models is missing. While the policy network is defined as two FC layers and a ReLU, it is not clear why the authors chose this architecture and how they tuned it. In section 3, Using 2D-CNN for Action Recognition, a citation to one of the recent works on modeling temporal causality is missing: Asghari-Esfeden, Sadjad, Mario Sznaier, and Octavia Camps. "Dynamic Motion Representation for Human Action Recognition." In The IEEE Winter Conference on Applications of Computer Vision, pp. 557-566. 2020.<doc-sep>################################# Summary: The paper presented an adaptive inference model for efficient action recognition in videos. The core of the model is the dynamic gating of feature channels that controls the fusion between two frame features, whereby the gating is conditioned on the input video and helps to reduce the computational cost at runtime. The proposed model was evaluated on several video action datasets and compared against a number of existing deep models. The results demonstrated a good efficiency-accuracy trade-off for the proposed model. ################################# Pros: * The paper has a novel idea (adaptive temporal feature fusion) and addresses an important problem in vision (efficient action recognition). * Solid experiments on multiple datasets. The analysis of the learned policy is quite interesting. * Well-written paper ################################# Cons: * Limited technical novelty The idea of building adaptive inference models with a policy network for video classification has been previously explored by Wu et al., Meng et al. and others (e.g., skip part of the model, select a subset of frames, choose the input resolution to the model). The main technical component of the model is also very similar to the channel gating network (Hua et al.). The key innovation seems to be the perspective of modeling temporal feature fusion for adaptive inference. This is probably best considered as in parallel to previous approaches for adaptive video recognition. The technical components thus look less exciting. * Lack of comparison to other adaptive inference models / temporal fusion schemes There isn’t a real comparison between the proposed method and recent works on adaptive inference for video recognition (e.g., Wu et al., Meng et al.). The benefit of modeling temporal feature fusion --- a main contribution of the paper --- thus remains unclear with respect to other design choices (e.g., input resolution or frame selection). I’d suggest some experiments that compare to those works. Another important experiment is to contrast the proposed method with other temporal feature fusion schemes (e.g., LSTM, TSM). For example, TSM --- a hand-crafted feature fusion module --- seems to have fewer parameters, slightly higher FLOPs and comparable accuracy (Table 3).
If that is the case, the contribution of the proposed adaptive fusion scheme is much weakened. ################################# Minor comments: It is not totally clear to me how the FLOPs of the proposed model are computed. As the proposed model will have a different FLOP count conditioned on the input video, were the reported FLOPs averaged across the dataset? I was not able to find a description in the paper. It will be great if the authors can report some run-time performance (e.g., wall time). To achieve the theoretical FLOPs, the proposed model will rely on filter re-arrangement on the fly and sparse convolution kernels. Both can be less efficient on certain devices, e.g., GPUs. ################################# Justification for score: All in all a good paper. My main concern is the missing link / comparison to previous works on adaptive video recognition. If this concern can be addressed, I am happy to raise my rating. <doc-sep>#### General This paper proposes an adaptive temporal fusion network called AdaFuse for action recognition, which adaptively removes temporal redundancy and reuses past features for accuracy and efficiency. I listed the Pros and Cons I found in the paper below as well as some questions to clarify some of the details. #### Pros 1. The idea of learning a decision policy to dynamically determine whether channel-wise features at time $t$ are calculated normally, reused from $t-1$, or skipped, is interesting and reasonable. 1. The experimental results show that the proposed method achieves good accuracy with a reasonable computational budget. 1. The ablation study in Table 4 reveals that the performance is greatly affected by the policy and that it is important to fuse the features from different frames to capture the temporal dependencies. #### Cons 1. The proposed method is not compared with some of the recent methods such as [1-3] ([4] is optional because the publication date is very close to the ICLR 2021 submission deadline). Especially for the Jester and Mini-Kinetics datasets, the proposed method is compared with only TSN, which is an old and weak baseline as it does not incorporate temporal information. 1. In Table 3, it seems that the proposed method achieves good accuracy, but I am afraid that it is just because of the strong base network, TSM. Merely adding AdaFuse to TSM indeed saves some computation but degrades the performance as described in the paper. The proposed remedy indeed slightly improves the accuracy but it requires many more parameters compared to the vanilla TSM. Overall, I find it beneficial to use the proposed method on top of simple base networks such as TSN, but the benefit of using the proposed method on top of strong base networks such as TSM may be marginal. Combined with point 1 above, I am not well convinced of the effectiveness of the proposed method. 1. Some of the important details are not clear. I would appreciate it if the authors could answer the questions I listed below. #### Questions 1. Is it necessary to use Gumbel softmax? I think there are two kinds of tricks involved in Gumbel softmax. One is a trick for sampling from a categorical distribution, and the other is a trick for making the operation differentiable. In my understanding, which may be wrong, the required characteristic for the present method is the latter one, and the sampling from the categorical distribution is not necessarily required. In this case, I think simply using $q$ instead of $\log{r} + G$ in equation (7) is enough. 1.
Related to the point above, please clarify the type of output (hard or soft) of the policy net. The sentence after equation (2) says the output is integer values (0, 1, or 2), while the sentence before equation (7) says it is a real-valued vector. 1. Suppose $p_t^i = 1$ (reuse) and $p_{t-1}^i = 1$ (reuse again). In this case, is $y_t^i$ copied from $y_{t-2}^i$? Or is the feature map of the $i$-th channel at time $t-1$ calculated on the fly for "reusing" at time $t$? In other words, if the policy for a channel is "reuse" for $n$ consecutive times, does the method take the feature from $n$ frames before? #### Other comments 1. Figure 1 may be incorrect or misleading. I think $p_t$, the output of the policy net, should go to the 2D Conv. block. Otherwise the block never knows which channel to compute at time $t$ and which channel to reuse or skip. [1] Sudhakaran+, Gate-Shift Networks for Video Action Recognition, CVPR 2020 [2] Martinez+, Action recognition with spatial-temporal discriminative filter banks, ICCV 2019 [3] Jiang+, STM: SpatioTemporal and Motion Encoding for Action Recognition, ICCV 2019 [4] Kwon+, MotionSqueeze: Neural Motion Feature Learning for Video Understanding, ECCV 2020<doc-sep>In this work, the authors introduce an AdaFuse network for efficient action recognition in videos. Specifically, they design a policy net to decide which channels should be kept, reused or skipped, according to the input features of two adjacent frames. Strength 1 The paper is written well, and the organization is OK 2 The idea of adaptive temporal fusion is somewhat novel and interesting Weakness 1 How to save computation. I understand the general idea of saving computation if some channels are reused or skipped. However, in the training phase, the policy net would produce the real-valued vector by Eq. (7), instead of the one-hot vector. In other words, the 'keep' entry for each channel is always used during training. Then, I guess computation saving is not claimed for training. It is for testing, right? How is testing done? Does the policy net produce the real-valued vector, which is then converted to a one-hot vector to save computation? 2 Missing SOTA. Compared with this paper, many recent approaches can achieve competitive computational cost with better accuracy. This significantly reduces the potential value of this paper. *Jiang et al., STM: SpatioTemporal and Motion Encoding for Action Recognition, ICCV 2019 *Li et al., TEA: Temporal Excitation and Aggregation for Action Recognition, CVPR 2020 *Sudhakaran et al., Gate-Shift Networks for Video Action Recognition, CVPR 2020 *Liu et al., TEINet: Towards an efficient architecture for video recognition, AAAI 2020 3 Please correct the abstract. The experiments are performed on mini-Kinetics, rather than Kinetics. I suggest that it would be better to evaluate the proposed method on full Kinetics to further show its effectiveness.
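For reference on the soft-vs-hard policy question raised here (and in the earlier review's Question 1), a generic straight-through Gumbel-softmax gate over the three choices {keep, reuse, skip} might look like the sketch below; this is my own illustration, not the paper's Eq. (7):

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_gate(logits, tau=1.0, hard=True):
    # logits: (..., 3) unnormalized scores for {keep, reuse, skip} per channel
    g = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)  # Gumbel(0,1) noise
    y_soft = F.softmax((logits + g) / tau, dim=-1)        # differentiable relaxation
    if not hard:
        return y_soft                                     # soft mixture: every branch is computed
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    # straight-through: one-hot in the forward pass, soft gradients in the backward pass
    return y_hard - y_soft.detach() + y_soft
```

With hard=True the forward pass uses a one-hot decision (so skipped or reused channels can actually be skipped at test time), while gradients still flow through the soft relaxation during training; this is the usual way the train-time/test-time discrepancy discussed above is handled, but whether the paper does exactly this should be clarified by the authors.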
This paper presents a model for video action recognition. The reviewers appreciated the development of a novel dynamic fusion method that examines channels from feature maps for use in temporal modeling. After reading the authors' responses, the reviewers converged on an accept rating. The solid empirical results and analysis, the fact that it is a plug-in method that could be used in other models, and the clear exposition were deemed to be positives. As such, this paper is accepted to ICLR 2021.
This paper modifies the GAN objective by defining the TRUE and FAKE labels in terms of both the training sample and a newly introduced random variable s. The intuition is that by progressively changing the definition of s, and its effect on the label, we can prevent the discriminator network from immediately learning to separate the two classes. The paper doesn't give any strong theoretical support for this intuition. And I found it a bit surprising that the discriminator doesn't immediately learn the one extra bit of information introduced by every new level of augmentation. However, the results do seem to show that this augmentation has a beneficial effect on two different architectures in different data scenarios, although the increase is not uniform over all settings. The approach presented in this paper is motivated primarily as a method of increasing the stability of training, but this is not directly investigated. Figure 3 and Table 2 both suggest that the augmentation does nothing to reduce variance between runs. There is also no direct comparison to other methods of weakening the discriminator, although these are mentioned in the related work. I think the paper would be much improved by a thorough investigation of the method's effect on training stability, to go along with the current set of evaluations.<doc-sep>This paper proposes a new trick to improve the stability of GANs. In particular, the authors try to tackle the vanishing gradient problem in GANs, when the discriminator becomes too strong and is able to perfectly separate the distributions early in training, resulting in almost zero gradient for the generator. The authors propose to increase the difficulty of the task during training to prevent the discriminator from becoming too strong. The paper is quite well written and clear. However, there are several unsupported claims (see below). A lot of work has been proposed to regularize the discriminator, and it's not clear how different this approach is from adding noise to the input or adding dropout to the discriminator. Pros: - The experimental section is quite thorough and the results seem overall good. - The paper is quite clear. Cons: - There is a major mistake in the derivation of the proposed method. In eq. (6) & (7), (c) is not an equivalence: minimizing the KL divergence is not the same as minimizing the Jensen-Shannon divergence. The only thing we have is that KL(p||q) = 0 <=> JSD(p||q) = 0 <=> p = q. The same kind of mistake is made for (d). Note that the KL-divergence can also be approximated with a GAN, see [1]. Since the equivalence between (6) and (7) doesn't hold, equation (11) doesn't hold either. - The authors say that the discriminator can detect the class of a sample by using a checksum; a checksum is quite easy for a neural network to learn, so I don't really see how the proposed method actually increases the difficulty of the task for the discriminator. Especially if the last layer of the discriminator learns to perform a checksum, and the discriminator architecture has residual connections, then it should be straightforward for the discriminator to solve the new task given it can already solve the previous task. So I'm not sure the method would still work if we used a ResNet architecture for the discriminator. - I believe the approach is really similar to adding noise to the input. I think the method should be compared to this kind of baseline.
Indeed, the method seems almost equivalent to resetting some of the weights of the first layer of the discriminator when the discriminator becomes too strong, so I think it should also be compared to other regularization such as dropout noise on the discriminator. - The authors claim that their method doesn't "just memorize the true data distribution". It's not clear to me why this should be the case, and this is neither shown theoretically nor empirically. I encourage the authors to think about some way to support this claim. - The authors state that "adding high-dimensional noise introduces significant variance in the parameter estimation, which slows down training"; can the authors give some references to support that statement? - According to the authors: "Regularizing the discriminator with the gradient penalty depends on the model distribution, which changes during training and thus results in increased runtime". While I agree that computing the gradient penalty slightly increases the runtime because we need to compute some second-order derivatives, I don't see how this increase in runtime is due to changes in the model distribution. The authors should clarify what they mean. Others: - It would be very interesting to study when the level number increases and what happens when it increases. Also, what is the final number of levels at the end of training? Conclusion: The idea has some major flaws that need to be fixed. I believe the idea has a similar effect to adding dropout on the first layer of the discriminator. I don't think the paper should be accepted unless those major concerns are resolved. References: [1] Nowozin, S., Cseke, B., & Tomioka, R. (2016). f-gan: Training generative neural samplers using variational divergence minimization. NIPS
Namely, they introduce a model in which observed samples pass through a "lens" before being revealed to the discriminator, thus balancing the generator and discriminator by gradually revealing more detailed features. - Can you provide more convincing arguments that the strength of the discriminator is a major factor we should be fixing? In some approaches such as Wasserstein GAN, we should train the discriminator to optimality in each round. Why is the proposed approach more practical than approaches such as [2]? [1] http://proceedings.mlr.press/v80/sajjadi18a.html [2] https://arxiv.org/abs/1706.08500
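For concreteness, the "adding noise to the input" baseline that the reviews above ask to compare against is essentially a one-line change to what the discriminator sees; the sketch below is a generic illustration (the function name and the annealing of `sigma` are my own assumptions, not part of the paper):

```python
import torch

def noisy_discriminator_logits(D, x_real, x_fake, sigma):
    """Instance-noise baseline: corrupt both real and generated samples with the same
    Gaussian noise level before the discriminator sees them; sigma is typically
    annealed towards 0 as training progresses."""
    real_logits = D(x_real + sigma * torch.randn_like(x_real))
    fake_logits = D(x_fake + sigma * torch.randn_like(x_fake))
    return real_logits, fake_logits
```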
The submission hypothesizes that in typical GAN training the discriminator is too strong, too fast, and thus suggests a modification by which the task difficulty of the discriminator is gradually increased. This is done by introducing (effectively) a new random variable -- which has an effect on the label -- and which prevents the discriminator from solving its task too quickly. There was a healthy amount of back-and-forth between the authors and the reviewers which allowed for a number of important clarifications to be made (esp. with regards to proofs, comparison with baselines, etc). My judgment of this paper is that it provides a neat way to overcome a particular difficulty of training GANs, but that there is a lot of confusion about the similarities (or lack thereof) with various potentially simpler alternatives such as input dropout, adding noise to the input, etc. I was sometimes confused by the author response as well (they at once suggest that the proposed method reduces overfitting of the discriminator but also state that "We believe our method does not even try to “regularize” the discriminator"). Because of all this, the significance of this work is unclear and thus I do not recommend acceptance.
This paper takes the negative-free contrastive learning methods as an example to explore the disentanglement property of self-supervised methods experimentally. To address the limitations of existing disentanglement metrics in high-dimensional representation models, the authors propose a new decoupling metric, MED, based on mutual information. Experiments on real-world datasets and high-dimensional representation spaces demonstrate this metric's superiority and applicability. Strengths: 1. Existing work on Disentangled Representation Learning is limited to generative models. This paper empirically studies the disentanglement property of negative-free contrastive learning, which is exploratory work. 2. This paper proposes a new metric, MED/top-K MED, which extends disentanglement metrics to high-dimensional spaces. 3. The paper is well organized, so the reader can get the gist of the article. The authors clearly describe the proposed method. In addition, the experimental results show the effectiveness of the proposed method. Weaknesses: 1. The authors designed a version of MED/top-k MED to evaluate disentanglement. Section 5.4 analyzes the effect of dimension on Top-2 MED. However, the effect of dimension on MED is unclear. In the upper part of Table 1, the authors provide results in a 1000-dimensional representation space. However, the paper does not directly show results in lower-dimensional spaces (no additional PCA required, just set the projection dimension), such as 100 or 200. 2. The authors found from experiments that contrastive learning without negatives learns a well-disentangled subspace of the latent representation. It is not yet known how this subspace and the subspace learned by disentangled representation learning perform on downstream tasks, such as classification tasks. The authors have not addressed the limitations and potential negative societal impact of their work <doc-sep>This paper provides experiments on disentanglement for negative-free contrastive learning. The results indicate that current metrics have limitations in this setup. As a result, the authors present a novel metric for evaluating disentanglement in the proposed scenario. Strengths. - Investigating and expanding current metrics for quantifying disentanglement can address an important issue in deep learning. - The assessment of current issues with disentanglement metrics is done properly - They assess disentanglement on real datasets rather than synthetic ones - Validation of the new metric with qualitative results on generative factors Weaknesses. Missing comparison with other recently proposed metrics. - https://openreview.net/pdf?id=HJgK0h4Ywr - https://openreview.net/pdf?id=EbIDjBynYJ8 - https://arxiv.org/abs/2106.03375 The work validates the proposed metric empirically; however, it would also be interesting to investigate the theoretical justification of this method. <doc-sep>The paper tries to empirically study the disentanglement of negative-sample-free contrastive learning methods, e.g. BYOL, SimSiam. The authors find that the existing disentanglement metrics do not fit the disentanglement of high-dimensional feature representation spaces. Thus, the authors propose a new running-time-efficient metric named “Mutual information based Entropy Disentanglement”, or MED for short. The authors evaluate the new metric on some popular synthetic datasets and a real-world dataset, CelebA, and argue that negative-sample-free contrastive learning methods can learn a well-disentangled subset of the representation. Strength: 1.
The paper points out previous methods’ drawbacks and proposes a time-efficient metric that is designed for high-dimensional spaces. 2. The authors report various ablation studies to show the properties and effectiveness of the new metric, i.e. uniqueness of factor-representation correspondence, and influence from manipulating factors. Weakness: 1. The main weakness is that the authors do not theoretically prove the new metric is sound. The major message of this paper is that the previous disentanglement metrics are not good enough and the proposed new metric can fix these problems. However, the paper only showed the drawbacks of previous metrics, and for the new metric, the authors only empirically show that it may work, e.g. Figure 4. Why should we believe the new metric is better than other methods? Why can the value of MED be regarded as a measure of the disentanglement property? From my perspective, the best way to prove a proposed metric works is theoretical analysis, e.g. [1], and the paper misses that part. I recommend the authors consider a high-dimensional linear case and prove that MED can beat some other metrics or prove some property of MED. 2. Also, in Section 3.2, the high-level intuition of MED is not clear. The metric has three parts, Equations (1), (2), and (3), and it is quite complicated. I did not get a clear insight into, e.g., how $R_{ij}$ is used in $\rho_i$ and $S_i$. Why should we combine them in the way given in Equation (3)? The last paragraph of Section 3.2 should be longer and provide more explanation. 3. The authors argue that contrastive learning can learn a well-disentangled subset of representation without negative samples. The conclusion here is weak. There are many further important questions that the paper does not give answers to. Why can the model learn a well-disentangled subset of representation without negative samples? What is the disentanglement difference between methods with and without negative samples? Is there any high-level intuition or further suggestion about this conclusion? More discussion is needed here. [1] Kornblith, Simon, et al. "Similarity of neural network representations revisited." International Conference on Machine Learning. PMLR, 2019. Limitations are the theoretical analysis mentioned above. <doc-sep>This paper empirically studied the disentanglement property of self-supervised methods, such as MoCo, BarlowTwins, and BYOL. Besides, the authors validated the disagreement of current disentanglement metrics for models with high-dimensional latent spaces. The authors proposed a new metric based on mutual information to measure high-dimensional representations. Massive experiments conducted on synthetic datasets and a real-world dataset showed that negative-free contrastive methods can learn a disentangled subset of representation. Contribution: 1. This work studied the disentanglement properties of negative-free contrastive models for the first time. 2. This work proposed a new metric for high-dimensional models and a selection strategy to pick a disentangled subset of representation. Strength: 1. The experiments were comprehensive and massive, covering popular conventional disentanglement methods and negative-free contrastive models. 2. This paper aims to address the disentanglement measurement of high-dimensional representations and brings negative-free contrastive learning into disentanglement learning. Weakness: 1. line 34: “latent representation, This” → “latent representation. This” 2.
Comparing metrics for a model in a subfigure may better show the disagreement or agreement of metrics in Figure 4. 3. We know Orientation is hard to disentangle, but the authors still need to show ALL experimental results without selection. 4. What does manipulating a factor mean in Section 4.3? Getting a set of images by traversing one factor? Could you color the latent index with the factor you picked? I cannot see the existence of a well-disentangled subset for orientation. The authors did not show the superiority of the proposed metric. They need to find some cases where all metrics except the proposed one fail to measure the disentanglement.
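For context on the mutual-information-based score debated in the reviews above, here is a minimal, generic sketch in the MIG style. It is not the paper's MED definition (Equations (1)-(3) are not reproduced in the reviews); the discretization scheme, bin count, and function names are illustrative assumptions.

```python
# Generic MIG-style sketch of a mutual-information-based disentanglement score.
# This is NOT the paper's MED (Equations (1)-(3)); it only illustrates the recipe:
# estimate an MI matrix R[i, j] between latent dimension i and ground-truth factor j,
# then summarize how concentrated each factor's information is across dimensions.
import numpy as np
from sklearn.metrics import mutual_info_score

def discretize(x, n_bins=20):
    """Quantile-bin a 1-D continuous code so a discrete MI estimator can be used."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))[1:-1]
    return np.digitize(x, edges)

def mi_matrix(latents, factors, n_bins=20):
    """R[i, j] = I(z_i ; v_j) from samples; latents (N, D) continuous, factors (N, K) discrete."""
    D, K = latents.shape[1], factors.shape[1]
    R = np.zeros((D, K))
    for i in range(D):
        z_i = discretize(latents[:, i], n_bins)
        for j in range(K):
            R[i, j] = mutual_info_score(z_i, factors[:, j])
    return R

def gap_score(latents, factors):
    """Per-factor gap between the two most informative dimensions, normalized by H(v_j)."""
    R = mi_matrix(latents, factors)
    top = np.sort(R, axis=0)[::-1]                                  # descending MI per factor
    H = np.array([mutual_info_score(v, v) for v in factors.T])      # I(v; v) = H(v)
    return (top[0] - top[1]) / np.maximum(H, 1e-12)
```

For a high-dimensional self-supervised representation, such a score would typically be evaluated only on a selected subset of dimensions, which is exactly the top-K selection issue the reviewers probe above.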
There was a consensus among reviewers that this paper should be accepted. The key convincing arguments that this paper studies a novel setting: how to measure the disentanglement in high-dimensional spaces. For this, the authors perform extensive experiments and come up with a novel metric. The reviewers further felt that concerns raised in the initial reviews were subsequently addressed in the author rebuttal.
Overall, I vote for rejecting. The core idea of the new algorithm looks interesting, but this paper does not provide convincing theoretical or empirical evidence. Pros: 1. This paper introduces a new variant of Double Q-learning to reduce the overestimation risk and reduce variance. 2. In part of the experiments, the new algorithm demonstrates better performance than previous ones. Cons: 1. This paper does not provide enough insightful intuition and theoretical guarantees on the design of the new algorithm. There should be more explanation and evidence to help readers understand why, for instance, the definition of the Q-value in equation (8) makes sense. 2. This paper is not well-written. There are missing reference links, typos, and missing definitions of some notations. For example, in the second paragraph of the introduction, "Lillicrap et al." is just text and not linked to a paper in the references, and there are many other cases like this. Typos can be seen in many places as well, such as in the first paragraph of section 3.3, it should be 'two-fold' instead of 'two folder'; in the last paragraph of section 5, 'categories' should be 'categorised'. Moreover, in the convergence analysis, the meanings of many notations are not explained at all (e.g. $\\alpha_t$). The author only says "we provide sketch of proof which borrows heavily from the proof of convergence of Double Q-learning and TD3", but without the definitions of the notations, the completeness of the paper is greatly undermined. <doc-sep>The paper proposes a method that modifies double Q-learning by eliminating a linearly correlated part of one Q. I am not familiar with the proof of Double Q-learning and TD3, and thus find the proof of this paper hard to read as it omits the majority of the proof by claiming it is similar to the proof of the aforementioned two algorithms. To name some of the parts that confused me while reading: what is the definition of F^Q_t and c_t in (14)? Why does a small delta_2 exist in (16)? Why does Delta_t converge to zero as claimed in the line after (16)? Why is the randomness of s_{t+1} not mentioned in the subscripts of E's in (5) and (9)? etc. Therefore, I suggest the author(s) write a thorough proof and put it in the appendix to make the convergence analysis readable. I am also curious how the de-correlation term helps to improve the convergence in the analysis, as it is the main contribution of this paper. Besides, double Q-learning and TD are mostly used with function approximation. I wonder if the analysis can extend to some simple case of parameterized Q functions, e.g. linear approximations. The experiment part looks good to me as it compares D2Q with several SOTA algorithms and gets satisfying results.<doc-sep>Summary The paper suggests an improvement over double Q-learning by applying the control variates technique to the target Q, in the form of $(q1 - \\beta (q2 - E(q2)))$ (eqn (8)). To minimize the variance, it suggests minimizing the correlation between $q1$ and $q2$. In addition, it applies the TD3 trick. The resulting algorithm, D2Q, outperforms DDPG and competes with TD3. Recommendation I hope I haven't misunderstood this paper, but I've found neither the theory nor the experiment convincing. Therefore I recommend a rejection. Strengths The proposed algorithm is simple and straightforward to use. Weaknesses 1. Theory (a) Minimizing the variance of eqn (8) requires maximizing the correlation between q1 and q2. If they are independent, what's the point of including q2?
Check out https://en.wikipedia.org/wiki/Control_variates (a toy numerical illustration of this variance-reduction point is included after these reviews). (b) $E(q2)$ is the "average over all possible runs". It's unclear how it's calculated. Maybe run a few identical RL experiments with different random seeds, just to get $E(q2)$? Feels wasteful to me. (c) Why would minimizing the squared cosine between last-layer feature vectors lead to minimum correlation? If the feature for q2 is obtained from that of q1 through a deterministic 90° rotation, wouldn't that result in a zero cosine but really strong correlation? (d) Why is it ok to ignore $var(q1)$ while computing $\\beta$? No theory is given here. 2. Experiments In Figs. 1-3, D2Q sometimes outperforms and sometimes underperforms TD3. Because these two algorithms are so similar, I can't tell whether the comparison is statistically significant. Other feedback Please address the questions raised above. Perform additional experiments to make the paper more convincing. <doc-sep>The proposed "decorrelated double Q-learning" algorithm combines a few techniques to improve the performance of model-free RL, including control variates for reducing variance, decorrelated regularization for reducing bias, and a technique from TD3 for stabilizing learning. Overall, the ideas of this work are interesting and bring some insights for tackling the overestimation issue of Q-learning. Empirically, the proposed method shows some improvements over the existing ones. However, a few major concerns are as follows. - The theoretical analysis of convergence seems hand-waving and confuses me. For example, does the analysis only apply to the tabular case? (The authors don't seem to state this explicitly.) How does Eq. (17) follow from Eq. (9) (are we missing the gradient of the decorrelated regularization term)? - All experimental results are about reward vs. iteration curves, which are not convincing or insightful enough. For example, is there empirical evidence showing that the proposed algorithm does learn two decorrelated critics? - The structure of Section 3 may need some adjustment. In particular, in the current version, the formal definition of the correlation term (Sec 3.2) and the description of the full algorithm itself (Sec 3.3) appear after the convergence analysis of the algorithm (Sec 3.1), which looks weird. Based on the above comments, I think substantial improvements are needed for publication of this work.
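To make weakness 1(a) above concrete, the snippet below is generic control-variates arithmetic rather than anything from the paper under review: with the optimal coefficient, the variance of $q1 - \\beta (q2 - E(q2))$ drops by a factor of $1 - \\rho^2$, so an uncorrelated $q2$ buys nothing.

```python
# Generic control-variates sanity check: Var(q1 - beta*(q2 - E[q2])) is minimized at
# beta* = Cov(q1, q2) / Var(q2), giving Var(q1) * (1 - rho^2).  If q1 and q2 are
# uncorrelated (rho = 0), the correction term gives no variance reduction at all.
# This is NOT the paper's D2Q update; it only illustrates the principle.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
for rho in (0.0, 0.5, 0.9):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    q1, q2 = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n).T
    beta = np.cov(q1, q2)[0, 1] / np.var(q2)        # empirical optimal coefficient
    corrected = q1 - beta * (q2 - q2.mean())        # control-variate estimator of E[q1]
    print(f"rho={rho:.1f}  Var(q1)={q1.var():.3f}  "
          f"Var(corrected)={corrected.var():.3f}  theory={(1 - rho**2):.3f}")
```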
This paper investigates some variants of the double Q-learning algorithm and develops theoretical guarantees. In particular, it focuses on how to reduce the correlation between the two trajectories employed in the double Q-learning strategy, in the hope of rigorously addressing the overestimation bias issue that arises due to the max operator in Q-learning. However, the reviewers point out that the proofs are hard to parse (and often hand-waving with important details omitted). The experimental results are also not convincing enough.     
This paper highlights important variables impacting the effective robustness (ER) of a pre-trained, fine-tuned model. The authors identify that increasing model size, dataset size, and example difficulty improves the ER of a pre-trained, fine-tuned model. The experiments suggest that the zero-shot component of CLIP plays a significant role in the high value of ER CLIP achieves. The investigation of ER on dominance probability shows that models with high ER have high dominance probability. The authors also present a negative result showing that several reasonable approaches to maintaining high ER while fine-tuning fail. Strengths: The paper is very clearly written and has a thorough experimental section validating the authors' claims. The authors have a thorough selection of experiments that validate their claims. Weaknesses: One weakness of this paper is that the authors do not properly define fine-tuning. While its meaning is implicit, fine-tuning is a key concept in this paper, so having a clear definition of the term seems necessary. This is especially true when considering multiple fine-tuning steps such as when fine-tuning a fine-tuned model BiT-M-21k on CIFAR-10. The authors use a pre-trained or randomly initialized model on a large dataset, fine-tune on a smaller dataset and measure OOD accuracy on an analogous dataset to the fine-tuned dataset. It would help if the authors give some examples of when such a training procedure would be useful. Usually fine-tuning is carried out on the distribution that the model is going to be evaluated on. An analysis of the relation between the fine-tuning dataset and the OOD test set would be useful. Right now the relationship is alluded to based on natural distribution shifts, but it's not clear how this might generalize to other types of distribution shifts. Overall this paper is very thorough. The authors set out to investigate the role fine-tuning has on OOD robustness and they successfully identify several key variables to consider. There are many experiments in the main paper as well as in the appendix that validate their claim. This work will be very valuable to the community as it provides some insight into what variables lead to OOD robustness for pre-trained, fine-tuned models. <doc-sep>In the manuscript entitled, "The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning", the authors present an empirical investigation of model exhibiting a property known as 'effective robustness'. In particular, their focus is on how 'effective robustness' changes during fine tuning and on the characteristics of these models. Apologies to the authors for what may sound like a rather glib judgement of this submission (and for which I acknowledge through the confidence scores below that my opinion is not absolute as I do not work directly in the space of image classification), but the results and conclusions of this paper seem remarkably obvious. Namely, that models that have been pre-trained on a large collection of different datasets tend to lose their strengths at predicting out of distribution as they are progressively fine-tuned towards predicting a specific type of data. And that when these models are performing in the 'effective robustness' mode the types of in sample problems they find easy (alt. hard) are different to those that models trained on the dataset at hand find easy (alt. hard). 
The perils of over-fitting to a particular training set are well known, and strategies to avoid this and improve generalisation are a major component of ongoing work in machine learning (see e.g. Roger Grosse's comp sci lecture notes: https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/readings/L09%20Generalization.pdf ). To change my mind on this point would require additional discussion by the authors to connect this work to general principles of machine learning and establish the novelty of the insights reached from these numerical experiments. That said, my many years in research have taught me that sometimes results that seem to me to be 'remarkably obvious' are actually not so for the general audience, and that 'simple' examples demonstrating such principles can actually have a large impact and generate huge citation indices. I mean this genuinely; not trying to be cynical here. So for that reason I would respect the decision of other reviewers and the AEs if this paper was in fact recommended for the conference series. Certainly, having a reference to point to for e.g. the fact that self-driving cars probably shouldn't spend too much time refining their algorithms to over-fit to a commuter's everyday journey to work (this being inevitably at the expense of performance when he/she wants to take a drive in the countryside), could actually be very useful. Conclusions seem 'obvious' to this reviewer, but willing to consider other opinions. <doc-sep> ### Summary: The paper conducts an empirical study into an interesting problem of robustness of deep models on out of distribution data. The paper finds that the pre-trained models exhibit better effective robustness during training which will disappear upon convergence of the same models. ### Pros: The paper is well-written, easy to understand and follow along. Most significant of all is that this paper has an extensive study of various initializations with pretrained models for vision problems. The breadth of explorations is notable, covering pre-trained models' ER during training, dataset size, and example difficulty. ### Cons: Are there proper bounds for the ER values? What would it really mean to have a higher or lower value? Can you briefly explain? ER ~ 0 for CIFAR-10 (Figure 3a) at the end of training, exactly the point at which each of the models attains its best accuracy on the IN set. The only difference at that point is that the accuracy of the various models differs, which is already known and well studied in the literature. Similarly for ImageNet, ER is visible at low accuracy, and as the accuracy gets better the trends become similar to CIFAR. The question is: why should anyone care if the ER is high in the middle of training, at low accuracy? This is not well-justified in the current version of the paper. Also, the reasons for the peaks in ER during training are not justified; why are they intriguing? Is it because the pre-trained models change significantly to fit the downstream tasks, or something else? The random initializations don't fluctuate that much; why not investigate these observations in detail? In Figure 4b, why further fine-tune only the BiT-M-1k model? What happens if you further fine-tune all the models?
This experiment is not a fair comparison, since not all models see the same amount of data. Again, in Figure 4c and the corresponding appendix, why would anyone use a low-accuracy classifier when one knows it will perform badly on the hard-to-classify examples? In that case ER is not even a thing to worry about in the first place; accuracy becomes the first concern. Fine, at least on the examples that the classifier can classify there is better robustness, but this is not entirely convincing. This paper relies heavily on Taori et al. (2020), which seems to have a number of unresolved concerns. Most important of all is that the paper is a bit short on novelty; however, the empirical study in itself is interesting. Show the same findings hold for at least one more domain, for example, NLP. Overall, the paper has breadth in the number of experiments and the directions that it explores without enough depth and justification for a majority of the findings. Also, the paper lacks novelty or detailed analysis of the proposed concepts. I would give it a score of 4. <doc-sep>In this paper, the authors conduct a thorough empirical investigation of effective robustness during fine-tuning and have several observations: 1. models pre-trained on larger datasets in the middle of fine-tuning, as well as zero-shot pre-trained models, exhibit high amounts of effective robustness, but the effective robustness vanishes at convergence; 2. the effective robustness increases with the larger size, more diversity, and higher example difficulty of the dataset; 3. models that have effective robustness make different predictions than standard models and are able to correctly classify examples that no standard models get right. Besides, they discuss several potential solutions to mitigate the problem of vanishing effective robustness during fine-tuning, but find that none of them are able to maintain high effective robustness at high in-distribution accuracy. I think this paper has the following strengths: 1. I think identifying models that have effective robustness and understanding their properties is an important and interesting problem. This paper has some empirical observations under this direction. 2. Enough details are included for the experiments. 3. Overall, the paper is well-written and the related work is properly discussed. However, I think this paper has the following weaknesses: 1. My major concern is that the contribution is not very significant. The authors have some empirical observations, but those observations are not very useful and don't help us understand the problem better. For the models in the middle of fine-tuning, although they exhibit a high amount of effective robustness, the accuracy of those models on the in-distribution dataset is not high and thus such kinds of models may not be useful. Also, when the fine-tuning converges, the models have high accuracy on the in-distribution dataset but don't have effective robustness. Thus, the models obtained via fine-tuning don't have clear advantages over previous models. Besides, although the authors discuss several strategies for scaling effective robustness to the high-accuracy regime to improve the out-of-distribution accuracy, none of those methods work. So such a discussion may not be useful. 2. They only have some empirical observations, but don't have an analysis of them.
For example, they only show that the effective robustness generally increases throughout fine-tuning, peaks, and then gradually disappears towards the end of fine-tuning empirically, but don't analyze or explain why such a phenomenon exists. It is unclear whether such a phenomenon is general or it just exists on some datasets. It seems Figure 3 in the paper shows that such a phenomenon doesn't exist on ImageNet-R and ObjectNet when using ImageNet as in-distribution. 3. Some claims are not well supported by results. For example, the authors claim that the pre-trained models in the middle of fine-tuning, as well as zero-shot pre-trained models, represent an entire class of models that exhibit high amounts of effective robustness. I think this claim may not be true. There might be other training methods that could lead to better effective robustness and also high accuracy. The authors only explore the methods of fine-tuning and zero-shot evaluation. Thus, it is hard to claim that they represent an entire class of models that exhibit high amounts of effective robustness. The claim that the models with effective robustness make different predictions than standard models and are able to correctly classify examples that no standard models get right, is also not well supported by the results. They only select 4.8% of images that none of the testbed models get correct and show that the model that has effective robustness with the best in-distribution performance gets 10% of these examples correct. I think such results cannot support the claim that the models with effective robustness are able to correctly classify examples that no standard models get right since only 10% of these examples are predicted correctly by the model that has effective robustness with the best in-distribution performance. Also, it seems these results could not demonstrate that the models with effective robustness have prediction diversity. 4. Some observations may already be known to the community. For example, the observation that the effective robustness increases with the larger size and more diversity of the dataset seems obvious. I think this paper doesn't make enough contributions and the claims are not well supported by results. Also, they don't provide analysis for the observations and the observations may not be helpful in understanding the problem. Thus, I think this paper is not ready for publication. ***[Post Rebuttal]*** After reading the rebuttal, I think my major concerns still remain: the contributions are not very significant and the findings may not be useful. I still think that the models studied in this paper are not enough to represent all models that exhibit ER. The authors need to explore other kinds of models that have ER (and also have high accuracy). Thus, I keep my original rating and think the paper is not ready for publication.
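For readers who want the operational definition of ER that these reviews keep referring to, here is a minimal sketch in the sense of Taori et al. (2020): fit a baseline OOD-vs-ID accuracy trend over a testbed of standard models and measure how far a given model sits above it. The logit-transformed linear fit is the usual choice but is treated here as an assumption, and all numbers in the example are made up.

```python
# Effective robustness (ER): actual OOD accuracy minus the OOD accuracy predicted by a
# baseline trend fit over standard models (here a linear fit on logit-transformed axes,
# an assumed but common choice).  Accuracies are fractions in (0, 1).
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def fit_baseline(id_acc, ood_acc):
    """Linear fit of logit(OOD accuracy) against logit(ID accuracy) over the testbed."""
    slope, intercept = np.polyfit(logit(np.asarray(id_acc)), logit(np.asarray(ood_acc)), deg=1)
    return slope, intercept

def effective_robustness(model_id_acc, model_ood_acc, slope, intercept):
    """ER = actual OOD accuracy minus the OOD accuracy the baseline trend predicts."""
    predicted = 1.0 / (1.0 + np.exp(-(slope * logit(model_id_acc) + intercept)))
    return model_ood_acc - predicted

# Example with a made-up testbed of standard models and one fine-tuning checkpoint.
testbed_id  = [0.60, 0.70, 0.76, 0.80, 0.85]
testbed_ood = [0.45, 0.55, 0.61, 0.66, 0.72]
slope, intercept = fit_baseline(testbed_id, testbed_ood)
print(effective_robustness(0.65, 0.58, slope, intercept))   # > 0 means above the trend
```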
Thank you for your submission to NeurIPS. The reviewers are quite split on this paper, but some remain substantially negative even after discussion. I'm a bit more optimistic about the paper: the observed increase then decrease in ER during fine-tuning _does_ strike me as a fundamentally interesting phenomenon, and I believe that papers that present such phenomena can be valuable contributions even without more fundamental "explanations" of the observations. My recommendation, therefore, ultimately rests largely on the fact that I think (as is honestly evidenced by the reviews to a large degree), the presentation and contextualization of these results can be substantially improved in a future revision of the paper. Specifically, the fact that several reviewers found the results obvious and/or not sufficiently substantiated suggests that the basic premises here are still failing to land. I would strongly suggest revisions that clarified these points in a resubmission.
+ The practical approximation of integrals is one of the most important mathematical problems. Only a handful of viable methods exist for that problem and the authors contribute a true improvement over the state-of-the-art. + The paper is well written and a pleasure to read. The formal presentation is very good and it is easy to follow the proofs and the author's reasoning. + An implementation is available. - The limitations are not discussed. Is the applicability limited, e.g., by assumptions on k(0,0) or the positive definiteness of "p"? What about the method's complexity? Is it more computationally demanding than standard BQ? - The experiments are rather small-scale. I would love to see results on non-toy data. What are the computational limits of this method? - Non-BQ stochastic quadrature methods are not discussed / mentioned. Honestly, I am a fan of this work. The idea is brilliant and it even works in practice. The proposed method works with any shift-invariant kernel, which simplifies the usage in practice when compared to standard BQ. After explaining nicely the intuition behind the approach, the authors present an upper bound on the quadrature error followed by a careful empirical evaluation of their approach. On small-scale benchmark data, traditional Monte Carlo techniques as well as standard BQ deliver far worse results than the proposed method, especially when data is scarce. One question: in Eq. (11), it is unclear why we see Sigma in the normalizing constant when exp(-|x-mu|^2) has variance = 1. <doc-sep>* Very well written * The authors use both the uniform and the Gaussian measures in the experiments section. The Gaussian case can be compared directly with BQ that uses RBF kernels. The uniform measure works well; there is no competing method. - It would be good to have some comparison of computational time. Mention if there are any additional steps that increase the computational cost of GBQ wrt BQ. Also, comment on the cost in terms of the dimensionality of the input space. Please clarify and ideally add more thorough experiments wrt dimensionality for a high enough dimension. - Edit: the authors have provided a computational complexity analysis in the rebuttal. I encourage the authors to include it in the paper since it gives a full picture of this. - I suggest changing the colour scheme in Figure 1; red and orange can be easily confused - In the text, the authors say that the Matern kernel should work well with the disjoint polynomial, but then in the figure you can see it has the highest error in the small sample size regime; please clarify <doc-sep>- Novelty: The proposed generalized BQ discovers a new class of tractable BQ problems when both kernels, as well as density functions, have their representations as RFFs, and it allows the BQ to be applied with more expressive kernels. - Writing: This paper is overall well-written. It has provided enough background to understand the proposed methods and it is easy to follow. Discussion on the runtime complexity is missing in this work, while the complexity of the proposed algorithm might be the main issue. From equation (13), it seems that the computation of the mean estimation is quadratic in the number of Fourier random features. Moreover, in the empirical evaluation, the number of features is 100 and 300 in the 1D and 2D experiments respectively, meaning that it requires a non-trivial number of features to get decent results.
I think the authors should also include the comparison of runtime in their experiments for a fair comparison and to better illustrate the efficiency of the proposed approach. Also, only examples in 1D and 2D are presented, while the previous BQ paper such as [1] has their algorithm run on cases with dimensions being 5 or 15 where the scalability of the algorithms would be better illustrated than the low dimensional cases. [1] Kandasamy, Kirthevasan et al. “Bayesian active learning for posterior estimation.” AAAI 15 - Can the authors confirm what is the runtime complexity of generalized BQ and whether scalability is an issue? - In Sec 5.2, it mentions that the size of training data n ranges from 10 to 1000, while it is unclear what values does n exactly take. It is unclear for which n it gives the results in Table 1 & 2 and also it seems that not all the results for n are presented. - Suggestion on Definition 1: I think the authors should put only the definition of generalized BQ for any density, and separate the cases when the measures are Gaussian or Uniform as Propositions/Theorems instead of including everything in the definition. - Typo: Before Eq (11), the limit should be 'R -> \\infty' instead of 'r -> \\infty'. <doc-sep>I think the key idea of the paper is really cool: random Fourier feature expansions are a general tool to approximate shift-invariant kernels, and the kernel means needed for Bayesian quadrature can be easily estimated for RFF kernel approximations when the relevant measure (like the kernel) is positive definite and it is easy to sample from the Fourier transformed version of that measure. I also like that there is some error analysis, though I have not checked all details. Sample efficient high-dimensional integration is intractable without a lot of smoothness or some other special assumption. But the selling point in the 1D and 2D experiments seems to be that the method can deal with non-smooth integrands. I am skeptical that this scales to higher dimensions. In low-dimensional spaces, conventional numerical quadrature methods (Gauss quadrature and adaptive quadrature rules) are quite efficient. In spaces of moderately high dimension, tensor-product quadrature becomes infeasible, though sparse grid methods are sometimes still attractive; but techniques like Bayesian quadrature and quasi-Monte Carlo methods still have a strong edge over the competition. In truly high-dimensional spaces, both function approximation and quadrature suffer from the curse of dimensionality *unless* the regularity of the function of interest (or sometimes some other measure of function complexity) goes up concurrently with the dimension. This is part of the great appeal of Bayesian quadrature with a squared exponential radial basis function: implicit in the choice of kernel is the idea that the integrand is smooth enough for this to be a sensible way of doing things. The proposed technique extends the set of kernels that can be used for modeling the integrand, but does not seem as useful for extending the set of measures that can be treated. Additional flexibility in the kernel, allowing choice of something less regular, may help with modeling non-smooth integrands in low-dimensional spaces, but it does not seem as likely to help get around the need for a lot of samples in similar situations in high-dimensional spaces. 
I could potentially be convinced that I'm wrong about this, but it would require more than 1-D and 2-D test problems (a 5-10 dimensional test problem would be fine to make the point). It would be helpful to introduce the underlying measure earlier, I think (e.g. writing integral f(x) p(x) dx at the outset, rather than just writing integral f(x) and leaving the measure implicit). In equation (10), please use either the equal sign or \\mapsto, but not both. I was very confused around (11). In the statement p(x) is approximately q(x), what is p(x)? If it is a Gaussian, how is q(x) an approximation? Also, the normalization constant involves the determinant of a covariance matrix, but the covariance matrix does not appear elsewhere in the expression; is \\Sigma = I in this example? Lemma 1 seems to be about estimating the integrand, not estimating the integral (though the text indicates the latter). It was a little unclear to me what measures were being used for the standard Bayesian quadrature and QMC methods in the experiment. I can guess that BQ was with respect to a Gaussian measure and QMC with respect to uniform on a box, but neither of those is clear (the Box transform is as good for quasirandom uniform variates as it is for standard PRNGs).
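To make the mechanism discussed in these reviews concrete, here is a minimal sketch of Bayesian quadrature with a random-Fourier-feature kernel. It is not the authors' implementation: the kernel mean under the integration measure is estimated here by plain Monte Carlo sampling from p rather than the paper's closed-form route, and the RBF kernel, Gaussian measure, and all hyperparameters are illustrative assumptions.

```python
# Sketch of BQ with a random-Fourier-feature kernel: k(x, x') ~= phi(x)^T phi(x'), so the
# kernel mean z_i = \int k(x, x_i) p(x) dx ~= E_p[phi(x)]^T phi(x_i).  Here E_p[phi] is
# estimated by plain Monte Carlo from p; lengthscale, feature count etc. are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_feat, lengthscale = 2, 300, 0.5

W = rng.normal(scale=1.0 / lengthscale, size=(n_feat, d))   # spectral samples for an RBF kernel
b = rng.uniform(0.0, 2.0 * np.pi, size=n_feat)

def phi(X):
    """Random Fourier features so that phi(x) @ phi(x') approximates k(x, x')."""
    return np.sqrt(2.0 / n_feat) * np.cos(X @ W.T + b)

def bq_estimate(X, fX, p_sampler, n_mc=20_000, jitter=1e-6):
    """Posterior-mean BQ estimate of the integral of f against p, given evaluations f(X)."""
    K = phi(X) @ phi(X).T                                    # approximate Gram matrix
    z = phi(p_sampler(n_mc)).mean(axis=0) @ phi(X).T         # approximate kernel mean at X
    return z @ np.linalg.solve(K + jitter * np.eye(len(X)), fX)

# Toy check: integrate f(x) = sum(x^2) against a standard Gaussian in 2-D (true value = 2).
f = lambda X: (X ** 2).sum(axis=1)
X = rng.normal(size=(64, d))
print(bq_estimate(X, f(X), lambda m: rng.normal(size=(m, d))))
```

The point of the construction is that once the kernel is expressed through a feature map, the kernel mean needed by BQ reduces to an expectation of that feature map under the integration measure, which is cheap to estimate by sampling or, for suitable measures, available in closed form.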
Meta Review: While there are mixed reviews for the paper, concerns largely relate to computational complexity which seem well addressed by the authors in their responses. The broader coverage of the literature in the paper is not great. For example, it's surprising that very well known work about spectral kernels, such as spectral mixture kernels (http://proceedings.mlr.press/v28/wilson13.pdf), is discussed nowhere in the text. This should definitely be corrected in a final version, as well as a careful accounting of reviewer concerns.
Summary. The paper studies the kernel ridge regression and provides close form characterization on the loss curve in the finite sample regime of $m \\propto d^{r}$, where $m$ is the sample size, d is the input dimension and $r \\geq 1$ is arbitrary integer. The paper also proves the results hold broadly to the neural tangent kernel (NTK) of one-layer convolutional net. Model. The paper studies the classical kernel ridge regression under the assumption that input data draws from d-dimensional sphere and certain isotropic/independence assumption of higher order Fourier coefficients. While previous work of [Mei and Montanari 2021] considers mostly of linear regime, i.e., $m \\propto d$, the paper extends the result to $m \\propto d^{r}$ to arbitrary integer $r$. Result. The paper provides a close form characterization of the population loss. It is hard to describe in non-mathematical way, the close form characterization consists of Bias and Variance terms, where both terms are certain integral of Marchenko-Pastur distribution (limiting spectral distribution for random matrix) and exhibits the double descent phenomenon (bias always decay, but variance first goes up and then down) Method The method extends from the previous work of [Mei and Montanari 2021]. They use Legendre polynomial decomposition to the kernel and by standard concentration, argue one only needs to look at the $r$-th order decomposition. Then perform standard bias-variance decomposition and use random matrix theory to argue about the spectral. Some technical complications come out along the proof. Strength. The paper provides close form characterization of loss curve for kernel regression and offers some insight on the double descent phenomenon. The technique follows largely from [Mei and Montanari 2021] but also turns out to be very interesting and non-trivial to me. Weakness. There are no significant weaknesses but the writing could be significantly improved. (See below) 1. Line 72. It is better to use $x^{\\top}$ instead of $x^{T}$ for transpose. This applies to other matrix/vector transpose. 2. Line 46. “Extending these result “ - - > “Extending these results “ 3. Line 40. “Large text corpora can contain trillions of tokens, wheares” I don't quite get it, the input dimension of language model might not be that large, but the entire models could still be quite large. 4. Eq. (5) what is $\\delta_{0}(t)$. 5. Eq. (11) typo: need to add transpose for the variance and covariance. 6. Figure 3. Might be good to make bias and variance more clear. 7. Line 190. Would be good to make a separate paragraph for low frequencies and critical frequencies 8. The experiments are all simulations that should be emphasized somewhere. 9. The reference format seems wrong. 10. typo: Line 258 $y_k(\\mathbf x)$ should be the summation over $j\\in[d]$ . <doc-sep>The authors aim to theoretically analyze the relationship between number of samples ($m$) and test error (learning curve) in the case of Kernel regression for dot product kernels when the $m \\sim d^r$ where $d$ is the dimensionality of data and $r$ is a natural number. Their analysis shows that when the input distribution and labeling function follow some regulatory conditions, under some asymptotics, a closed form formula precisely characterizes the learning curve for the mentioned set of kernels, and their experiments support this claim. Moreover, they asymptotically derive a closed-form distribution for the gram matrix of some dot product kernels, which seems valuable for future works. 
Although the work indeed does have some limitations (mentioned in greater detail in the limitations section), both on the theoretical and experimental aspects, I find the contribution to be impactful in the direction of understanding/characterizing generalization error in modern machine learning. **Strengths:** * The theoretical results of the work are sound and fills a gap in the study of learning curves for the case of non-linear relationship between the number of samples and dimensionality of the data. * The predicted distribution for the spectral distribution of the mentioned dot product kernels seems very accurate and can potentially benefit future work. * The conducted experiments, although with some limitations, strongly support the theoretical results. **Weaknesses:** * Some parts of the text are hard to follow. For instance, the notation section brings up the whole decomposition of the function of the dot product kernel without motivating it. This kind of introduction to some topic without motivating it beforehands happens a lot in the paper, and for me was generally confusing. Moreover, sometimes some equations or results are being referred to before being obtained or described. For instance, in line 171 there's a reference to Eq 29, which is undefined at this point for the reader. Likewise, $\\chi_B$ and $\\chi_V$ are introduced without much motivation. * Various assumptions are not justified nor analyzed, and their importance could be better explained. As an example, the assumption of having according decomposable labeling function could be better explained in a sense that how close is it to the practical datasets that one might expect, at least for some toy examples. * I believe the writing could enjoy more description about motivations, intuitions, the why of different assumptions and the relativity of them (how often do they hold in practical settings? why do we need them and what goes wrong if we don't have them?) instead of some extra formulas in the main text, such that the main text would give an overall intuition about the problem and the approach to solve it along with the derived theorems, and the proof sketchs could be delayed to the appendix instead. However, this is just my opinion. * Although the experiments strongly support the theoretical results, the range of values used for $m, r, d$ seem a bit limited to me. In particular, there are some asymptotics in the theoretical results (high frequency and low frequency decompositions of the kernel value that converge or diverge asymptotically as $d \\to \\infty$) that I expected to result in very noisy experimental results, specially for the range of tested values, but surprisingly this doesn't happen in the provided experiments. I would be curious to know why this is the case. Moreover, experiments involving noise and ridge regularization could also benefit the paper. * I believe more insights could be provided based on the solid theoretical results that are achieved. Having a closed formula for determining the precise learning curve in the mentioned context unlocks a lot of analyses. For instance, how does $r$ in $m \\sim d^r$ affect the number of descents in the curve or the slope of the curve? How does the labeling function or the data affect the slope and/or the number of descents? The dependence on the spectral gap is mentioned, but when does one expect large or small spectral gap? * Some more detailed concerns and questions: * What is the definition of a "fat" or "tall" matrix? 
* Why would one expect the coefficients of the legendre polynomials to be non-zero up to a large index in the decomposition of $h$, and why does it have to be independent of $d$? * Could you please shed more light on the decomposition of the labeling function and the specific assumptions like the fact that $\\hat f_k$ should be isotropic? How likely is it that practical datasets follow such decompositions? I have mentioned unaddressed limitations as part of the weaknesses. <doc-sep>The authors explicitly characterize the learning curve for kernel ridge regression under poly scaling regimes $m \\sim d^r$, both theoretically and empirically. Despite the strict restrictions on the distribution and the setting, this interesting multiple-descent behavior in the sample-wise learning curve could help the understanding of neural networks. Strength: Theoretical results seem to be solid (did not check step by step though). Experiments serve as good illustration of the key ideas. Weakness: 1. (minor) readability could be improved by adding more details (could be placed at the beginning of the appendix in a separate (sub-)section): (1) explicit definitions of asymptotic notations, e.g., $\\sim$, $O(\\cdot)$. (2) background on decomposition via spherical harmonics Moreover, are there (commonly seen) examples for the kernel function that eq. (1) is satisfied? 2. (major)The contribution/impact of the current finding should be more carefully articulated, instead of several lines in the intro (lines 54-56). Also a better summary of the existing works on both linear/non-linear scaling regimes would be better. What is the connection (in terms of the empirical/theoretical behavior, technique in obtaining the theoretical results, etc.)? 3. (major)More experiments would be better: I would not say d=60 is very large (though I am not expert in this field). Since the goal is to demonstrate the (asymptotic) theoretical findings using numerical evidence, I would expect more extensive results (in terms of settings, such as dimensionality, distribution parameters, etc.). For example, with larger dimension, would you observe better empirical scaling? To be precise, the authors state peak appears at $m \\approx d^r/r!$, then I would expect experiments indicating as $d \\rightarrow \\infty$, the different between $m_{peak}$ and $d^r/r!$ shrinks to zero. Yes. As the authors pointed out in section 7, this works only focus on kernel ridge regression with uniformly distributed data, the generalization of the results is a very important topic. The impact of this work, beyond the interesting phenomena, should be more carefully explored. I do not see what is the potential negative societal impact. <doc-sep>This paper characterizes the prediction error for kernel ridge regression with dot-product kernels. Different from the previous work that focus on the linear scaling regime (sample size $m$ $\\propto$ data dimension $d$), this work focuses on the higher-order scaling regimes ($m\\propto d^r$). To establish this theoretical result, the author first study the limiting distribution (MP distribution) of the spectral density of the gram matrix under this scaling regime [Theorem 1]. With the help of this theorem, the author can express the testing error in terms of the MP distribution [Theorem 2]. Experimental results match their theoretical result. Extensions to convolutional kernels are also included. 
**Originality**: Main theoretical result [Theorem 2] itself gives us an understanding of the learning curve of kernel regression for higher-order scaling regimes, extending previous analysis in literature. The intermediate theoretical result [Theorem 1] and supporting proof are also interesting. From my perspective, the key step is to view kernel function as harmonic series and then study the empirical spectral distribution of the non-trivial term from this series. Previous literature [Tao, 2012] only gives similar result for the case $r=1$, but the auhor shows that it holds for all degrees. **Quality/Clarity**: Presentations of notations, theorem statements, and proofs are crystally clear. Theoretical results are highly non-trivial. Experimental results match their theory surprisingly well. **Significance**: Their theory on learning curves applies for dot-product kernel and NNGP/NT kernel with one-layer convolution, which covers many scenarios in machine learning literature. **Weakness**: 1. The author commented that it is promising to extend the theory for deep convolutional kernels, but due to the complicated structure of this type of kernel, it is left for future work. I agree on this point but I am looking forward to seeing some updates on deep convolutional kernels in the future. 2. As the author have commented in Section 7, the strong assumption on distributions of input data and the focus on kernel regression setting makes this work less popular. However, I believe similar technique can be used to handle more general settings, which can be finished in the future. See weakness part in previous section. <doc-sep>Derive the precise generalization error of kernel ridge regression in the polynomial regime $n=\\Theta(d^r)$ for dot-product kernels on unit sphere. The analysis generalizes [Mei et al. 2021] to the case where $r$ takes integer value. The key observation is that the degree-$r$ decomposition of the kernel matrix has a Marchenko-Pastur spectrum, which, together with a random label function assumption, gives a simple description of the generalization error similar to the case of linear regression. Theoretical predictions align with empirical findings. The generalization error of kernel methods in high dimensions is an important research problem. This submission considers the challenging polynomial scaling setting and completes the picture in [Mei et al. 2021] by covering the $r\\in\\mathbb{N}$ case. The asymptotic formulae provide a precise description of the multiple descent risk curve (without manipulating the spectrum of the input matrix). I feel that this is an interesting submission that is relevant to the NeurIPS community. On the other hand, the current results are also a bit limited for the following reasons: 1. The analysis is based on the decomposition of kernel matrix into convenient orthogonal bases, which seems to work only for restricted input data (e.g. sphere or hypercube). It is not clear if the similar results can be shown for general settings. 2. To reduce the bias term to the Stieltjes transform of the decomposed kernel, the high-degree components of the label function is assumed to be random and isotropic. The authors should highlight this limitation in the abstract / introduction. [Minor] the reference list for precise learning curves / Gaussian equivalence principle is incomplete. Please update the citations and include more relevant papers (e.g. see related work section of [Loureiro et al. 2021]). Loureiro et al. 2021. 
Learning curves of generic features maps for realistic datasets with a teacher-student model. N/A
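As a purely illustrative companion to the experimental questions raised in these reviews, the following sketch runs kernel ridge regression with a simple dot-product kernel on data drawn uniformly from the sphere and sweeps the sample size m through the region around d^r/r! (here r = 2), where the theory under review predicts a peak in the learning curve. The kernel h, the target function, the dimension, and the ridge value are assumptions chosen for illustration, not the authors' setup.

```python
# Empirical learning-curve sweep for kernel ridge regression with a dot-product kernel
# on the sphere, around the predicted peak location m ~ d^r / r! (r = 2, so d^2/2 = 200).
# Kernel, target, dimension and ridge are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, ridge = 20, 1e-8                          # tiny ridge => near-interpolating KRR

def sphere(n):
    X = rng.normal(size=(n, d))
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def kernel(A, B):
    return np.exp(A @ B.T)                   # h(t) = e^t, a simple dot-product kernel

def target(X):
    return d * X[:, 0] * X[:, 1] + X[:, 2]   # degree-2 component plus a linear part

X_test = sphere(2000)
y_test = target(X_test)

for m in (50, 100, 150, 190, 210, 250, 400, 800):
    X_train = sphere(m)
    y_train = target(X_train) + 0.1 * rng.normal(size=m)
    alpha = np.linalg.solve(kernel(X_train, X_train) + ridge * np.eye(m), y_train)
    mse = np.mean((kernel(X_test, X_train) @ alpha - y_test) ** 2)
    print(f"m = {m:4d}   test MSE = {mse:.3f}")
```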
The paper studies kernel ridge regression and characterizes its performance theoretically, which are interesting and highly relevant to machine learning venues. The paper is technically sound and the authors have done a good job in the rebuttal period. The paper is worth publishing in NeurIPS.
The paper presents a new method for contrastive ('why class x and not class y', as opposed to 'why class x') explanations of classification networks. The main idea is to use the attributions of other classes, weighted by the softmax of their logits. The method's applicability as an extension of existing explainability methods is shown, as well as its computational efficiency: while using information from all of the classes, it only requires a single backward pass. **Originality** - while the extension of existing methods is very simple, it has not been used before in the context of explainability (to the best of my knowledge). Related work is properly cited and the main contributions are well separated from previous work. **Quality** - all the claims are well supported theoretically and the authors provide convincing experimental evaluation. The work is self-contained but proposes directions for future work. **Clarity** - the paper is well organized, easy to follow and written in good English. Main ideas are explained clearly. Information about details of finetuning required to reproduce the reported numbers is missing (line 269, 'fine-tuned correspondingly'). **Significance** - the method is a simple extension of existing methods, limited to a specific task yet novel and well supported both theoretically and experimentally. The method relies on softmax weights, so is not directly applicable to, e.g., multilabel classification. Limitations are properly addressed in the conclusions section. <doc-sep>In this paper, the authors argued for the need for "class-contrastive" explanations, which not only explain "why an input is classified into a particular class," but also "why the input is NOT classified as others." To obtain class-contrastive explanations, the authors proposed the "weighted contrast" explanation scheme, which is similar to the previous "mean contrast" explanation scheme except for the weights assigned to the non-target classes -- namely, in the "mean contrast" explanation scheme all non-target classes have the same weight, whereas in the proposed "weighted contrast" explanation scheme, the weight for a particular non-target class depends on the softmax normalized score for that non-target class against the scores for all non-target classes. The authors showed that their proposed weighted contrastive explanation scheme is equivalent to a standard explanation applied to the softmax-normalized output of the model, thereby simplifying the implementation of their explanation scheme. Finally, they applied their weighted contrastive method to various back-propagation explanation techniques, including gradient, input x gradient, integrated gradient, GradCAM, and linear approximation (LA), and showed quantitatively that their weighted contrastive explanation scheme can find input regions that are better associated with target-class probabilities, and also qualitatively that their weighted contrastive explanation scheme (when applied to GradCAM and LA) provides more informative explanations. Strengths: + The authors have provided very solid theoretical reasoning for where the current explanation techniques fall short and why we need "class-contrastive" explanations. + The authors have provided an easy way to obtain "class-contrastive" explanations -- namely, applying standard techniques not to the logits (unnormalized class scores) but to the predicted probabilities (after softmax normalization).
+ The quantitative measurement of changes in predicted probabilities by perturbing/blurring/masking pixels or input regions identified as important does show that the proposed "class-contrastive" explanation scheme is superior in finding input regions that are relevant for a target class. Weaknesses: - I find it difficult to understand equations (3) and (4) -- in particular, why is the weighted sum of standard GradCAM/LA explanations for all classes (where the weight for a class is the derivative of the target class probability with respect to the unnormalized score of that class) "approximately equal" to the weighted contrastive explanation? Please show your proof. - I also find it difficult to see how integrated gradient with zero baselines is equivalent to the input x gradient method. Please show your proof. A minor issue regarding related work: - ProtoPNet (Chen et al., 2019) only provides similar examples but not contrastive examples. In addition, ProtoPNet does not require additional annotations. In fact, ProtoPNet (and a number of other works) does not belong to the "posthoc" explanation family. The authors have adequately addressed the limitation that their contrastive explanation method is only applicable to techniques where attributions ("heatmaps") are involved. <doc-sep>The authors propose a method (Weighted Contrast) for explaining DNN classifier predictions. Rather than focusing on features that change the predicted probability of a given target class, the authors’ method focuses on features that are important for one class and _not_ others. Strengths: 1. As shown in previous work and by the authors, so-called contrastive explanations can better capture features that discriminate between classes for a given model as compared to non-constrastive explanations. 2. The method proposed by the authors is an extension of gradient-based explanation methods, making it faster than some previously proposed contrastive explanation methods. Weaknesses (listed in order of importance to my score): 1. Lack of baselines: Many recent works have focused on developing contrastive explanation methods. Although the authors do discuss some of these in their literature review, they do not compare their proposed method against any of these previous works in their experiments; instead, the previously proposed methods are dismissed as being "against Occam's razor". While I acknowledge the authors' point that their proposed method may be computationally cheaper than previously proposed methods, it's necessary to understand whether this speed comes at the expense of explanation quality. For example, with non-contrastive explanations, simpler gradient-based approaches have been shown to produce worse quality explanations than the axiomatically justified (but computationally more intensive) integrated gradients or SHAP methods. Without any comparisons against previously proposed methods, it is difficult to assess the significance of the authors' results. 2. Novelty/additional discussion on the choice of backpropagating with respect to $p$ or $y$: The authors' show that a "weighted contrastive explanation" of logits $y$ is equivalent to a standard explanation of softmax/sigmoid outputs $p$. However, to my knowledge backpropagating with respect to $p$ is already standard practice for some previously proposed methods (see e.g. the official Tensorflow integrated gradients tutorial at https://www.tensorflow.org/tutorials/interpretability/integrated_gradients). 
Given this, could the authors clarify what exactly the contribution of this work is? Is it just the theoretical perspective suggesting that one should _always_ backpropagate with respect to softmax/sigmoid outputs as opposed to logits or something more? Moreover, for other methods (e.g. input x gradients/LRP) it is specifically recommended _not_ to backpropagate with respect to softmax/sigmoid outputs (see [1] Sections 2.4/2.5). Could the authors comment on this? 3. Presentation of experimental results: In addition to point (1), I found it difficult to assess the authors' experimental results due to issues with figure design. Specifically, for the bar charts in Figure 3, the two darker-colored bars are superimposed on the lighter-colored one for each group of bars, making the chart difficult to read. Moreover, I found Figure 4 hard to evaluate without the labels of the most and second-most likely classes. 4. Writing clarity: I found it difficult to understand the authors' point in parts of the manuscript due to issues with writing clarity. Specifically, Section 3 felt quite rushed, and I had to read the section multiple times to connect the propositions with the preceding text. [1]: "Not Just A Black Box: Learning Important Features Through Propagating Activation Differences" https://arxiv.org/abs/1605.01713 The authors briefly discussed the limitations of their work in the conclusion section. I would strongly recommend the authors expand the limitations section and specifically discuss how backpropagating with respect to softmax/sigmoid outputs may lead to issues with some previously proposed methods (see "Weaknesses" point 3). The authors do not discuss potential negative societal impact for this work, which I think is fine (I don't see any obvious potential for negative impacts).
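The technical crux of the discussion above (that the "weighted contrastive" explanation of the logits coincides with a standard gradient explanation of the softmax output) comes down to which scalar is backpropagated. A minimal PyTorch illustration follows; the ResNet-18 model, the random input, and the target index are placeholders, and the final input-times-gradient attributions are just one of the backpropagation-based methods this choice plugs into.

```python
# The only change between a "standard" saliency map and the contrastive one discussed
# above is whether the backpropagated scalar is the target logit y_t or the softmax
# probability p_t.  Model and input are placeholders (any image classifier works).
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()   # placeholder classifier
x = torch.randn(1, 3, 224, 224, requires_grad=True)        # placeholder input
target = 3

logits = model(x)
probs = torch.softmax(logits, dim=1)

# (1) standard explanation: backpropagate the raw logit of the target class
grad_logit, = torch.autograd.grad(logits[0, target], x, retain_graph=True)

# (2) contrastive explanation: backpropagate the softmax output instead, which mixes in
#     the other classes' gradients weighted by their (renormalized) probabilities
grad_prob, = torch.autograd.grad(probs[0, target], x)

# Input-x-gradient style attributions from each choice
attr_standard    = (x * grad_logit).detach()
attr_contrastive = (x * grad_prob).detach()
```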
Reviewers expressed overwhelmingly positive opinions about the simple, easily implementable, and at the same time innovative procedure proposed in the paper for obtaining gradient-based class-contrastive explanations. Appreciation also transpired for the significance of this work in clarifying some technical points in the gradient-based XAI literature and the potential for future work that the paper opens up. One of the main criticisms raised in the reviews, the lack of comparisons against other contrastive explanation methods, has been addressed satisfactorily with additional experiments and discussions in the rebuttals. The most important remaining criticism was a doubt on the merits of one of the key technical points in the paper regarding whether gradient-based attributions should be computed according to the softmax outputs or logits. Reviewers pointed out that computing attributions with respect to softmax outputs instead of logits is already common practice in the field. Reviewers expressed strongly the opinion that it would be appropriate to characterizing and clarify this situation, as it could be potentially misleading and indeed counterproductive even for the paper to denote attribution methods with respect to logits as "standard", while it's instead the case that some implementation of gradient-based attribution methods already attribute with respect to the softmax output (albeit inconsistently). In conclusion, the reviewing panel voted for accepting the paper, under the condition that the camera-ready version of the paper explicitly clarify the distinction between the two approaches and discuss the implication of choosing one of the other, without however referring to attribution with respect to logits as standard, but merely pointing out that until now the distinction has been vague and implementations inconsistent. From this technical standpoint, Reviewers ask that the contribution of the paper should then be explicitly characterized as clarifying the distinction between logits and softmax attributions, rather than as the proposal of a new procedure in opposition to an already established standard. This is already perceived as a strong contribution to the community, as phrasing it specifically as indicated would help elucidate the state of affairs in the literature and make the community aware of this outstanding blind-spot.
This paper tackles the problem of machine unlearning, that is, removing training data upon request from a model, as if the model were trained only with the retained data. The proposed approach, PCMU, trains a model with randomly smoothed, quantized gradients. Analogous to certified robustness, the paper presents a certified radius for the proposed method against perturbation of the gradient. Unlike previous machine unlearning approaches, PCMU directly trains a model that is robust to data removal, so it does not require an additional unlearning stage. Experimental results show that PCMU can achieve an error rate closer to that of retraining than competing methods. Strength - some theoretical results are provided - the proposed method is faster than previous methods, and it can match the accuracy of the retraining approach better Weaknesses - I'm not an expert on machine unlearning. But from my perspective, rather than a machine unlearning method, the proposed approach looks more like a differential privacy work, which trains a model without utilizing overly specific features from individual users, so the trained model automatically satisfies removal requirements. I wonder whether it counts as machine unlearning. - The writing is somewhat vague: there is no description or pseudocode of how the training and unlearning are performed, making the paper difficult to follow. There are no apparent further limitations or potential negative societal impacts other than those I raised for weaknesses and questions. <doc-sep>This paper presents a novel certified machine unlearning framework that targets the issue of the expensive computational cost of training and unlearning. The authors analogize certified robustness on classification against adversarial attacks to certified machine unlearning on gradient quantization against data removals. The randomized gradient smoothing and quantization techniques are developed to guarantee that the learnt model shares the same gradients (and parameters) and has the same performance as the naive unlearning model retrained on only the remembered data, with only the cost of simultaneous training and unlearning. The theoretical analyses validate the effectiveness of certified machine unlearning in terms of the certified radius and the certified budget of data removals. Overall, the studied problem is interesting and practically important. The experimental results look promising. Strengths: 1. Existing machine unlearning methods separate the unlearning process into two sequential operations of training and unlearning, which leads to non-trivial computational cost when training complex models over large datasets. In addition, these methods often sequentially address multiple unlearning requests one by one. To improve the unlearning efficiency, this work trains and unlearns the model simultaneously. 2. The authors propose a randomized gradient smoothing and quantization technique to directly train an unlearning model in advance with fast convergence and certified unlearning guarantees. The framework is able to resolve the requests of data removal in a timely and cost-efficient manner. 3. The proposed method provides a general machine unlearning framework. The proposed framework is important for privacy-critical applications that usually require near-zero tolerance of data leakage, such as financial and health data analyses. 4.
This work theoretically analyzes the certified radius regarding the data change before and after data removals, as well as the certified budget of data removals in machine unlearning. Extensive experiments on different benchmark datasets have been conducted to validate the efficacy of the developed prompt certified machine unlearning algorithms. Weaknesses: 1. The paper provides descriptions of the benchmark image classification datasets and learning models in the paper and appendix. It would be nice to also include a description of the data removals, i.e., how the datasets are separated into forgotten data and remembered data for each benchmark dataset. 2. It would be interesting to see results for different smoothing strategies in certified machine unlearning problems, such as Laplacian and uniform smoothing. The potential negative societal impacts of the results, such as security, privacy, and fairness issues, are unclear. <doc-sep>The authors propose a novel certified machine unlearning algorithm to improve unlearning efficiency for complex models on large-scale data. First, the authors present an analytic framework connecting randomized smoothing for certified robustness on classification to randomized smoothing for certified machine unlearning on gradient quantization. Second, the paper develops a prompt certified machine unlearning model for producing effective certificates of data removals based on randomized data smoothing and gradient quantization. Finally, the authors propose a practical framework of randomized gradient smoothing and quantization for producing high-confidence certificates in an efficient manner. The proposed PCMU method brings three significant benefits: it conducts training and unlearning simultaneously to improve unlearning efficiency, it requires only one-time training to respond to multiple machine unlearning requests at a time, and it does not need to know the forgotten data before unlearning. Strengths: + The authors study an important research problem, i.e., prompt machine unlearning, which is important for improving unlearning efficiency for complex models on large-scale data and for providing a timely response to a series of machine unlearning requests. There are few prior works on this problem. + The motivation for proposing randomized gradient smoothing and quantization techniques is clearly explained. The method and the claims are correct and sound. The authors provide enough methodology description and theoretical analysis to explain their proposed PCMU model. + The paper conducts a theoretical analysis to derive the certified radius regarding the data change, the certified budget of data removals, and the correlation between the two types of certified radii in the two frameworks. This work integrates the certifying and training of machine unlearning into a unified framework to further enhance unlearning performance. A convergence analysis is conducted to demonstrate the effectiveness and efficiency of the prompt certified machine unlearning algorithm. + The paper provides a comprehensive evaluation on three real datasets to demonstrate the superior performance of the proposed techniques against a number of SOTA baselines. The experimental results look promising. Weaknesses: - There are several typos and the paper would benefit from a careful proofread.
For example, "which are used to derive the certified budget B about R'" in P2 -> "B'", "the certified budget of data removal" in P5 -> "removals", and "Notice that the accuracy and error on test data by our PCMU keeps unchanged" in P8 -> "keep". - It would be nice to move the related work section from the appendix into the main paper so that readers can better understand and appreciate the technical contributions of this paper compared with existing studies. - In the experiments, Tables 1-2 and the relevant text lie on different pages, which decreases the paper's readability. I suggest the authors update the paper layout for a clearer presentation. None
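To make the mechanism discussed in the reviews above more concrete, here is a rough, illustrative sketch of the "randomly smoothed and quantized gradient" idea as described (my own reading, not code from the paper; all function names and hyperparameters are placeholders):

```python
import numpy as np

def smoothed_quantized_gradient(grad, sigma=0.1, n_samples=8, n_levels=16, clip=1.0, rng=None):
    """Illustration only: average several Gaussian-perturbed copies of a gradient
    (randomized smoothing) and quantize the result onto a small set of levels,
    so that small changes to the gradient (e.g., from removing a few training
    points) often map to the same quantized update. Hyperparameters are placeholders."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = np.stack([grad + sigma * rng.standard_normal(grad.shape)
                      for _ in range(n_samples)])
    g = np.clip(noisy.mean(axis=0), -clip, clip)
    step = 2.0 * clip / (n_levels - 1)  # uniform quantization grid over [-clip, clip]
    return np.round(g / step) * step
```

The intuition, as the reviews describe it, is that training with such gradients in advance makes the model insensitive to a bounded amount of data removal, so no separate unlearning stage is needed.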
This paper proposes an algorithm for simultaneous learning and unlearning without the knowledge of the datapoints that will be forgotten. This reduces the computational cost associated with unlearning in a unified fashion. Both experimental and theoretical results are interesting and the paper would be a great addition to NeurIPS.
This paper proposes a model-free HRL method that is combined with unsupervised learning methods, including anomaly discovery and clustering, for subgoal discovery. In all, this paper studies a very important problem in RL and is easy to follow. The technique is sound. Although the novelty is not that significant (combining existing techniques), it shows good results on Montezuma's Revenge, which is considered a very challenging problem for primitive-action-based RL. Although the results are impressive, I still have some doubts about the generalizability of the method. It might help its significance if more diversified domains were tested. The paper could be strengthened by providing some ablation tests, for example: how does performance change under different K for k-means? Also, some important details seem to be missing, for example the data used for k-means: it is mentioned that the input to the controller is four consecutive frames of size 84x84, so the input dimension is more than 10k; I guess some further dimensionality reduction technique has to be applied in order to run k-means effectively. Regarding the comparisons, the proposed method is only compared with one primitive-action-based method. It might be better to include results from other HRL methods, such as Kulkarni et al. Is the curve based on the mean of different runs? It might be useful to include error bars to show the statistical significance. <doc-sep>Summary: The authors propose an HRL system which learns subgoals based on unsupervised analysis of recent trajectories. The subgoals are found via anomaly/outlier detection (in this case states with a very high reward) and the clustering together of states that are very similar. The system is evaluated on the 4-rooms task and on the Atari game Montezuma's Revenge. The paper cites relevant work and provides a nice explanation of subgoal-based HRL. The paper is for the most part well written and easy to follow. The experiments unfortunately do not make a very convincing case for the general applicability of the method. While the system does not employ a model of the environment, k-means clustering based on distances seems to be particularly well suited for the two environments investigated in the paper. It is known that the 4-rooms experiment is much easier to solve with subgoals that correspond to the rooms themselves. I can only conclude from this experiment that k-means can find those subgoals given the right number (4) of clusters and injecting the knowledge that distances in grid-worlds correlate well with transition probabilities. Similarly, the use of distance-based clustering seems well suited for games with different rooms like Montezuma's Revenge, but that might not generalize to many other games. The anomaly detection subgoal discovery is interesting as a method to speed up learning, but it still requires these (potentially sparse) high-reward states to be found first. For tasks with sparse rewards it does make sense to set high-reward states as potential subgoals instead of waiting for value to propagate. That said, the reward for the lower-level policy is only less sparse in the sense that wasting time gets punished with a negative reward. Subgoal discovery based on rewards should probably also take the ability of the current policy to obtain those rewards into account, like some other methods for subgoal discovery do (see for example Florensa et al., 2018). The authors mention that the subgoals were manually chosen by Kulkarni et al.
(2016) instead of learned in an unsupervised way, but I don't think that the visual object detection method employed there is that much more problem-specific. Like Kulkarni et al. (2016), the authors compare their method with DQN (Mnih et al., 2015), but it was already known that that baseline cannot solve the task at all, and a lot more results on Montezuma's Revenge have been published since then. A more insightful baseline would have been to compare with at least some other HRL methods that are able to learn the task to some extent, like perhaps Feudal Networks (Vezhnevets et al., 2017). Looking at the graph in the Feudal Networks paper for comparison, the results in this paper seem to be on par with the LSTM baseline there, but it is hard to compare this on the basis of the number of episodes. Did the reward go up further after running the experiment longer? Since the results are not that spectacular and a comparison with prior work is lacking, the main contributions of the paper are more conceptual. I think that it is interesting to think more carefully about how sparse reward states and state similarities can be used more efficiently, but the ideas in the paper are not original or theoretically founded enough to have a lot of impact without stronger empirical results to accompany them. Extra reference: Carlos Florensa, David Held, Xinyang Geng, Pieter Abbeel. (2017). Automatic goal generation for reinforcement learning agents. arXiv preprint arXiv:1705.06366. <doc-sep>This paper proposes an unsupervised method for subgoal discovery and shows how to combine it with a model-free hierarchical reinforcement learning approach. The main idea behind the subgoal discovery approach is to first build up a buffer of "interesting" states using ideas from anomaly detection. The states in the buffer are then clustered and the centroids are taken to be the subgoal states. Clarity: I found the paper somewhat difficult to follow. The main issue is that the details of the algorithm are scattered throughout the paper, with Algorithm 1 describing the method only at a very high level. For example, how does the algorithm determine that an agent has reached a goal? It's not clear from the algorithm box. Some important details are also left out. The section on Montezuma's Revenge mentions that the goal set was initialized using a "custom edge detection algorithm". What was the algorithm? Also, what exactly is being clustered (observations or network activations) and using what similarity measure? I can't find it anywhere in the paper. Omissions like this make the method completely unreproducible. Novelty: The idea of using clustering to discover goals in reinforcement learning is quite old, and the paper does a poor job of citing the most relevant prior work. For example, there is no mention of "Dynamic Abstraction in Reinforcement Learning via Clustering" by Mannor et al. or of "Learning Options in Reinforcement Learning" by Stolle and Precup (which uses bottleneck states as goals). The particular instantiation of clustering interesting states used in this paper does seem to be new, but it is important to do a better job of citing relevant prior work, and the overall novelty is still somewhat limited. Significance: I was not convinced that there are significant ideas or lessons to be taken away from this paper.
The main motivation was to improve scalability of RL and HRL to large state spaces, but the experiments are on the four-rooms domain and the first room of Montezuma's Revenge, which is not particularly large scale. Existing HRL approaches, e.g. Feudal Networks from Vezhnevets et al., have been shown to work on a much wider range of domains. Further, it's not clear how this method could address scalability issues. Repeated clustering could become expensive, and it's not clear how the number of clusters affects the approach as the complexity of the task increases. I would have liked to see some experiments showing how the performance changes for different numbers of clusters, because setting the number of clusters to 4 in the four-rooms task is a clear use of prior knowledge about the task. Overall quality: The proposed approach is based on a number of heuristics and is potentially brittle. Given that there are no ablation experiments looking at how the different choices (number of clusters/goals, how outliers are selected, etc.) affect performance, I'm not sure what to take away from this paper. There are just too many seemingly arbitrary choices and moving parts that are not evaluated separately. Minor comments: - Can you back up the first sentence of the abstract? AlphaGo/AlphaZero do well on the game of Go, which has ~10^170 valid states. - First sentence of the introduction: how can the RL problem have a scaling problem? Some RL methods might, but I don't understand what it means for a problem to have scaling issues. - Please check your usage of \\cite and \\citep. Some citations are in the wrong format. - The Q-learning loss in section 2 is wrong. The parameters of the target (r+\\gamma max Q) are held fixed in Q-learning.
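To illustrate that last point, here is a minimal sketch of a DQN-style loss in which no gradient flows through the bootstrap target (names and signatures are my own, not the paper's):

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, s, a, r, s_next, done, gamma=0.99):
    # Q(s, a) for the actions actually taken
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    # The parameters of the target r + gamma * max_a' Q(s', a') are held fixed:
    # it is computed with a separate (frozen) network under no_grad, so the
    # optimizer never differentiates through it.
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s_next).max(dim=1).values
    return F.mse_loss(q_sa, target)
```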
Pros: - good results on Montezuma Cons: - moderate novelty - questionable generalization - lack of ablations and analysis - lack of stronger baselines - no rebuttal The reviewers agree that the paper should be rejected in its current form, and the authors have not bothered revising it to take into account the detailed reviews.
This paper is a theoretical investigation of how to learn a manifold with a score-based generative model (SGM). An SGM uses an approximate score of the distribution in the middle of the reverse diffusion. This score estimation is only an approximation by a neural network, and this paper provides a study of the error bound for this approximation. Also, this paper analyzes the impact of a limited sample population on learning the score function. Another storyline is the limitation of a finite diffusion time T: the diffusion would need to run for infinite time to make the diffused distribution a standard Normal distribution. This is infeasible in an actual implementation, so there is a discrepancy between the final-timestep distribution and the prior distribution. This discrepancy is analyzed, and a bound on it is suggested. Strength: 1. This paper provides a theoretical discussion of many mathematical assumptions made by SGMs. 2. This paper clearly shows the necessary and sufficient assumptions for the SGM to identify the low-dimensional manifold. Weakness: 1. This paper is very difficult to understand because of its eccentric structure. I partly understand the authors' effort because this paper is dedicated to theoretical analyses. However, this paper really needs restructuring to draw more attention from its potential audience. 1) Assumptions are referenced before they are introduced. 2) The motivation for Theorems 1 and 2 needs to be provided more explicitly by formally posing a research question. 3) A clearer statement of the error bounds is needed. Assumption 2 shows the characteristics of e(x,t), but I expected an explicit error bound for e. 2. The score approximation error will grow as we bring the diffused distribution closer to the standard Normal distribution, i.e. t->T. This is briefly mentioned in lines 206-211, and I think the authors should provide further discussion and a plain explanation of the interaction between score approximation quality and the closeness of the prior and the diffused distributions. I think this paper needs further toy-example experiments and plain explanations to reach an audience who may use diffusion models without enough background in the authors' discussion. <doc-sep>This is a theoretical paper that solidifies the well-definedness of the sample measure and proves the equivalence of the sample measure and the learnt measure under Assumption 2. $\\textbf{Strengths}$ - Until now, there was no guarantee that the reverse diffusion is well-defined at $t=0$ if $p_{data}$ is embedded in a low-dimensional manifold. This paper is the first to prove that we can safely solve the reverse diffusion up to $t=0$ when we use the previously suggested diffusion strategies (VE, VP, CLD). - On top of that, this paper shows the well-definedness of the generative measure if Assumption 2 holds. Theorem 2 additionally proves that the sample measure and the trained measure have identical support. $\\textbf{Weaknesses}$ - It would have been a much more novel work if the authors had proved that Assumption 2 is a necessary and sufficient condition in Theorem 2. Theorem 2 only proves one direction, and it is difficult to conclude that uniform integrability is the key diagnostic for overfitting of the trained network. - It would be much better to move the proofs of the theorems to the appendix and add more explanations and experiments. In particular, this paper significantly lacks empirical validation of its claims.
Although I believe the claims are novel, the take-home lesson for the community of diffusion models is missing. What would be the "concrete truth" observed from a high-dimensional experiment? $\\textbf{Notes}$ - This is not a weakness, but I should note that Theorem 1-(ii) is nothing but Theorem 1 of [**Song21Maximum**]. In Theorem 1 of [**Song21Maximum**], they used the data processing inequality for the KL divergence, but the data processing inequality is well known for its $f$-divergence extension, and Theorem 1-(ii) is merely a straightforward generalization of the work of [**Song21Maximum**]. With all this, nevertheless, proving Theorem 1-(i) is a significant improvement, so I value Theorem 1. - Again, this is not a weak point, but I think it would be preferable to describe the meaning of an equivalent measure for the general audience. A measure $\\mu$ is equivalent to $\\nu$ if $\\text{supp}(\\mu)=\\text{supp}(\\nu)$, and this can be framed in everyday language as a "support matching force". The general audience will value this paper much more if Theorem 2 is well understood. - Not a weak point, but it seems that Assumption 2 is akin to the assumption of Theorem 1 of [**Bortoli21Diffusion**]. Theorem 1 of [**Bortoli21Diffusion**] explicitly bounds the total variation distance between the data distribution and the generative distribution under a uniformly bounded score estimation. Given such previous research, it would be very interesting to compare Theorem 1 of [**Bortoli21Diffusion**] with the authors' Theorem 2. - To get a generalizable model, the authors claim that Assumption 2 should be violated. [**Kim22Soft**] introduced Unbounded NCSN and Unbounded DDPM as $\\mathbf{s_{\\theta}}(x_{t},\\eta(t))$ with $\\lim_{t\\rightarrow 0}\\eta(t)=\\infty$, instead of NCSN and DDPM, which parameterize the score estimation as $\\mathbf{s_\\theta}(x_t,t)$, to enable the score network to successfully estimate the unbounded score function. But it seems that NCSN and DDPM are more appropriate for violating Assumption 2, from their network design. It could be an interesting topic to investigate network architectures with respect to generalization power. [**Song21Maximum**] Song, Yang, et al. "Maximum likelihood training of score-based diffusion models." Advances in Neural Information Processing Systems 34 (2021): 1415-1428. [**Bortoli21Diffusion**] De Bortoli, Valentin, et al. "Diffusion Schrödinger bridge with applications to score-based generative modeling." Advances in Neural Information Processing Systems 34 (2021): 17695-17709. [**Kim22Soft**] Kim, Dongjun, et al. "Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation." International Conference on Machine Learning (2022). - I am a practitioner in the community of diffusion models. From my perspective, a paper in this venue should contain intuitive illustrations and related explanations. In its current form, the submitted manuscript is flawless in its rigor, and I believe this is good work, but I doubt whether this version will be valued in this field, simply because the contents are difficult for the audience to understand. Please remember that the expected audience of this paper is mostly practitioners, and most of them seek short lessons. For instance, what is the meaning of a uniformly integrable martingale? NeurIPS is not Annals of Mathematics, and sufficient interpretation of mathematical concepts is required.
With kind and satisfactory illustrations and insights, practitioners could find the true value of this paper. <doc-sep>The paper considers the approximation errors in the sampling process of score-based models. Specifically, the errors are twofold: the estimation error of the neural network and the approximation of the prior. The authors study the terminal distribution of the reverse SDE under such errors. They first show that the terminal measure is absolutely continuous w.r.t. the data measure if we have an imperfect prior. Next, they show that the terminal distribution has the same support as the data distribution, under mild assumptions on the estimation error. Toy experiments demonstrate the validity of the theoretical results. ## Strengths - The problem of approximation errors in score models is interesting and the theoretical understanding is lacking. The paper takes a step in this direction and investigates the terminal distribution of reverse sampling in the presence of various sources of error. - The paper verifies the correctness of the main results on toy experiments, and examines popular SDEs to see whether they fulfill the assumptions. **Questions** Before going into the concrete questions, I would like to double-check some terms and the overall proof ideas with the authors. Please correct me if I'm wrong. (1) In line 71, "its distribution is equivalent to the distribution of ..": Does the equivalence here only mean that the two distributions have the same support? (2) The general proof idea of Theorem 2: To my understanding, the paper uses the Girsanov theorem with $Z_T$ as the ratio $\\frac{\\mu_{sample}}{\\mu_{data}}=Z_T$. The uniformly integrable martingale assumption is there to ensure the boundedness of $Z_T$. Together, the result follows. ## Weaknesses - The main result (Theorem 2) in the paper is straightforward. It's a direct implication of the Girsanov theorem. - The results seem tangential to actual score-based generative models. For image score-based generative models, the terminal $T$ during sampling is set to $1e-3\\sim1e-5$ due to the discontinuity at $T=0$. Critically, the support of $p_{T}(x), T>0$ is $\\mathbb{R}^n$, and the results trivially hold in this case. In addition, I ran some quick experiments on CIFAR-10 by adding 1 to the predicted score (the paper also does similar things), and the final results are noisy samples with very large pixel values. Due to the limited impact of the theoretical results and the gap to practical models, I think the paper has a lot of room for improvement. The authors have adequately addressed the potential negative societal impact of their work. <doc-sep>This is a purely theoretical study of score-based generative models. The authors discover the conditions under which the sample distribution is equivalent to the data distribution. 1. The theorems provided here could help researchers better understand score-based generative models, which is significant to the whole community. 2. The assumptions proposed here can guide the future design of score-based models. 3. As a purely theoretical work, the authors provide detailed proofs, which is helpful. But the lack of experimental verification could make it hard to digest. Still, I would not penalize the authors for this, since numerical verification in high dimensions is a challenge in itself. No societal impact for a theoretical work.
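For readers who want to reproduce the kind of quick check mentioned above (perturbing the predicted score during sampling), here is a toy sketch of a reverse-SDE Euler-Maruyama sampler with an optional constant shift added to the score; the unit diffusion coefficient and all names are placeholders, not the paper's setup:

```python
import numpy as np

def reverse_sde_sample(score_fn, x_T, t_grid, shift=0.0, rng=None):
    """Toy Euler-Maruyama integration of a VE-type reverse SDE,
    dx = -g(t)^2 * score(x, t) dt + g(t) dW (reverse time),
    from t_grid[0] (largest t) down to t_grid[-1]. `shift` mimics adding a
    constant to the predicted score; g(t) = 1 is a placeholder schedule."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.array(x_T, dtype=float)
    for t_hi, t_lo in zip(t_grid[:-1], t_grid[1:]):
        dt = t_hi - t_lo                      # positive step size, integrating backwards
        g2 = 1.0                              # placeholder for g(t)^2
        s = score_fn(x, t_hi) + shift
        x = x + g2 * s * dt + np.sqrt(g2 * dt) * rng.standard_normal(x.shape)
    return x
```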
This paper presents a theoretical analysis of score-based generative models (SGMs) [diffusion models]. Specifically, the paper theoretically studies the effect of the approximations used by SGMs [1. approximating p_T by µ_{prior} and 2. approximating ∇log(p_t) by a neural network], which currently lacks a solid understanding. The paper presents conditions that assure SGMs can sample from the underlying data manifold and also analyzes conditions under which an SGM memorizes the training data (the latter relates to understanding the generalization properties of SGMs). Besides technical discussions and clarifications during the rebuttal period, the authors overhauled the introduction section and also added some experiments with the CIFAR-10 dataset to support their theory, both of which were requested by the reviewers to enhance the paper. Reviewers were satisfied with the responses and the improvements in the revision. In concordance with them, I believe the paper provides a solid theoretical contribution to our understanding of SGMs and recommend acceptance.
The paper studies the robustness of kNN and rNN to poisoning attacks. The authors aim to achieve certified bounds on the robustness of the rNN and kNN algorithms against poisoning attacks, and they claim certified upper bounds on the effect of poisoning attacks on overall accuracy. The techniques used are very simple. Their first theorem, which is about the robustness of individual test examples, states that if the number of poison points is smaller than half the gap between the vote count of the label with the highest vote and the vote count of the label with the second-highest vote, then poisoning is not effective. They then try to extend this result to the overall accuracy of the scheme. Their Theorem 3 tries to achieve such a bound, but I think this theorem is not correct (I mention the issue with this theorem in the comments below). They then have some experiments that use this theorem to achieve certified bounds for the case of MNIST and CIFAR10. Their experimental results show that they can achieve better certified bounds compared to some previous certified defense papers. But their results are extremely dependent on the correctness of Theorem 3, and it is necessary for the authors to rewrite the proof and the theorem itself. Even if the authors fix their theorem, I still don't find the theoretical contribution of this paper significant. But their experiments and certified bounds could be interesting enough for the paper to be accepted. Comments to Authors: - I think there is a problem in the proof of Theorem 3. It is mentioned that the a_i's are different. But the b_i's are not necessarily different and could all be the same label. Then the attacker can potentially flip all of them to b_i. - On page 4, it is mentioned that for breaking ties in kNN we can use the ranking of training examples. How would that work exactly? - The claim of being the first certified poisoning defense for overall accuracy is not correct. The work of Steinhardt et al. (2017) also studies provable defenses for overall accuracy. It is crucial for this paper to compare their result with Steinhardt et al. (2017). page 1: worse-case -> worst-case page 2: page 3: The formal definition of S(D_tr,e) is inconsistent with what is described in the text. Page 4: The notation is not very clear. s_l is the number of votes for class l and for instance x. It is better to use something like s_l(x). Similarly, a_i and b_i would be better written as a_i(x) and b_i(x). <doc-sep>This paper studies how to train a certifiably robust model against data poisoning attacks using nearest neighbors. The paper studies the voting mechanism in nearest neighbor models and presents a relationship between the number of poisoning instances and the difference between the majority vote and the second-majority vote. Such a relationship results in a guaranteed lower bound on the trained model's accuracy, which is referred to as Certified Accuracy (CA). The theoretical results are neat. The experiments are conducted on MNIST and CIFAR, and results show better CA than the previous approaches DPA and Bagging. My main concern is the limitation of the applicable machine learning models, which seem restricted to only kNN and rNN models --- they may not yield the best performance on most interesting tasks. For example, from Fig 2 and 3, we can see that even when the poisoning size e=0, the accuracy (which should be identical to CA) is far below the SOTA on the corresponding MNIST and CIFAR tasks. Also, the lower bound is established with respect to s_a - s_b - e <= k-e.
Therefore, to be able to handle a larger poisoning size e, one has to employ a larger k (in the kNN case) or a larger r (in the rNN case). Such hyperparameter choices typically hinder the accuracy on the clean dataset. It is not clear how such a restriction can be mitigated in a practical setup. Due to the above concern, I'm borderline on this work. <doc-sep>**Summary:** First, the paper identifies k-Nearest Neighbors (kNN) and radius Nearest Neighbors (rNN) as naturally effective baseline certified defenses against data poisoning attacks. It is easy to see that kNN and rNN are resistant to poisoning attacks, since to flip the prediction of a test example, one would need to insert/delete enough examples to change the majority vote. Second, the paper proposes a joint certificate that further improves certified accuracy for rNN. Specifically, it uses the fact that, for any given poison removal budget, each removal can only decrease the vote for a single label. Even though the idea is simple, the experimental results are quite impressive, significantly outperforming the previous, more sophisticated certified defense methods. **Strength:** - The approach is easy to implement and understand - Despite its simplicity, the approach performs significantly better than previous methods. This should be the new standard baseline for all certified poisoning defense papers. **Weakness:** - The technical novelty is not very strong, since it is obvious that kNN is naturally robust to poisoning. The proposed joint certification helps compensate for the technical deficiencies. However, I expect there are more ways to improve this lower bound than what is presented here. For example, another natural way to improve the joint certification is to consider how the added poison cannot influence two test examples concurrently when they are far enough apart. - Even though joint certification can improve certified accuracy, in practice individual certification may be more important than joint certification, since users of the system probably want certifications as individual queries come in. - Since kNN/rNN are not used as frequently in practical settings, the proposed solution may not be as useful for systems that require the use of neural networks. However, I am also aware that none of the existing defenses work well enough for any practical setting. **Recommendation** Even though the technical novelty is limited, I recommend acceptance due to the simplicity of the approach and its impressive performance compared to previous methods. I think this paper will become a new standard baseline for future certified poisoning defense papers. **Update** After reading reviewer2's comment, I realize that there is literature proving much stronger results that I was unaware of. I still think these results should be used as a standard baseline for certified poisoning defense, but due to the lack of novelty, I have to downgrade my score. <doc-sep>The paper studies the robustness of k-NN and r-NN against data poisoning attacks. The main message of the paper is that k-NN and r-NN are automatically resilient against such attacks. Furthermore, by grouping test examples based on their predicted labels (data points with different predictions are grouped together), a better certification guarantee can be derived. Experimental results demonstrate that k-NN and r-NN are indeed self-robust against data poisoning attacks. The theoretical angle proposed in this paper is interesting. However, I have a high-level question regarding the paper.
Is the goal of the paper simply to provide a theoretical robustness guarantee for k-NN? I do not see any new defense mechanisms developed in this paper. The grouping idea is only intended to prove the theoretical results. If so, I feel the contribution of the paper is not significant enough. Besides that, how does the result in this paper compare to the following one? I hope the authors can clarify this concern. Analyzing the robustness of nearest neighbors to adversarial examples. Secondly, I think definition (1) is not explained well. Do modification, addition, and deletion each count equally as a single operation? The authors seem to suggest that modification is equivalent to one-time addition plus one-time deletion. This makes me wonder whether modification counts as two operations. Furthermore, does the training data allow repetitions of any data point? For example, what if the attacker simply adds the same point that exists in the clean training data multiple times? Does that count as poisoning? In definition (1), the dataset is considered a set, so repetitions will be absorbed into a single item, which leads to D*=D. However, repetitions definitely matter for k-NN and could potentially be exploited by the attacker. Therefore, I hope the authors can provide a clearer definition of the poisoning size. In the beginning of section 3.1, a data point is certifiably correct only when the predicted label stays unchanged before and after the attack and it matches the true label. I am wondering whether this requirement rules out the test examples that are originally misclassified. There are indeed cases where, due to the attack, some test examples become correctly classified while originally they were not. In this case, I am not sure how to interpret the robustness, because the attacker is conversely helping the k-NN. This makes me wonder whether certification is the correct way to define robustness. Throughout the paper, there is no discussion regarding the ground-truth underlying data distribution, which I think is important for the definition of robustness. Ideally, a robust classifier should maintain high accuracy over the underlying data distribution even under attack. There is little reason to care too much about the certification of a particular data point, since that point appears with probability 0. I wonder if the authors can discuss this point. The theoretical results in this paper are very interesting, but they lack a nice and intuitive explanation before the proof. For example, why can grouping give us a better certification rate? What is the intuition behind that? The explanation is not given enough space in this paper. I only see a few sentences before Theorem 3, and that does not help my understanding much. In Figure 1, I sort of see why multiple data points cannot be jointly certified. In my understanding, it is because points 0 and 2 are somewhat far from each other, so the attacker cannot place enough poisoning points within the overlapping neighborhood of the two points to change their labels simultaneously. However, if they are close enough, then the attacker can modify some neighbors of point 2 while adding new points to the overlapping area, which leads to a successful attack. Therefore, it is not easy to distill the distinction between individual certification and joint certification from this example. This example also does not help in understanding the later theorems. I am wondering if the authors could design a more informative example?
In the experiments, I think it would be helpful to also draw the theoretical lower bound for the individual and joint certifications, so that readers can see whether the analysis is tight or not. Apart from that, what data poisoning attack algorithm is used to evaluate the robustness? Overall, I believe the paper is not written clearly, and there is a lot of room for improvement. <doc-sep>Existence of much stronger results: I don't get why majority voting is claimed to be the "state-of-the-art" technique. If I'm understanding it correctly, the majority vote technique can only handle a number of corrupted points up to O(K), K being the number of voters. Furthermore, since the voters more or less split the dataset, in order to maintain the accuracy of each voter, the number of voters can't be too large and is usually O(1). Therefore, this majority vote approach can only handle O(1) corrupted points. More specifically, for the case of kNN, the certified accuracy (Theorem 2) becomes vacuous as soon as the number of corrupted points $e$ becomes greater than $k$. On the other hand, there are techniques developed in the robust statistics community that can be robust against an $\\epsilon$-fraction of corrupted points; that is, if there are a total of $N$ training points, they allow $\\epsilon N$ points to be corrupted. For example, Sever [1], a recently developed robust supervised learning algorithm, guarantees $O(\\sqrt{\\epsilon})$ generalization error under an $\\epsilon$-fraction of arbitrary corruption. Such a guarantee is much stronger than the ones majority voting approaches are able to achieve. Thus, I'm having trouble appreciating the contribution of this paper given the existence of much stronger results. Relevance to the field: While prior approaches like DPA also suffer from the same weak/trivial guarantee, they are at least meta-algorithms that allow one to plug in any base learner depending on the application. The method developed in this paper, however, only works on kNN. And let's be honest, hardly any modern ML applications use kNN. So I don't see much empirical value or any significant theoretical contribution. [1] Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Jacob Steinhardt, Alistair Stewart. Sever: A Robust Meta-Algorithm for Stochastic Optimization.
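To make the per-example certification condition that the reviews describe concrete, here is a minimal sketch of the individual check for kNN majority voting (my own reading of the condition, not code from the paper; tie-breaking by training-example ranking is omitted):

```python
def knn_individually_certified(votes, e):
    """`votes` maps each label to its neighbor count among the k nearest neighbors,
    and `e` is the poisoning budget (number of inserted/deleted training points).
    The prediction cannot be flipped if the top count exceeds the runner-up
    by more than 2*e, i.e. e < (s_a - s_b) / 2."""
    counts = sorted(votes.values(), reverse=True)
    s_a = counts[0]
    s_b = counts[1] if len(counts) > 1 else 0
    return e < (s_a - s_b) / 2
```

A joint certificate over many test examples would, as the reviews describe, additionally exploit the grouping of examples by predicted label; that part (Theorem 3, whose proof one reviewer doubts) is not sketched here.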
Some reviewers expressed concerns about the soundness of the theory in the paper. Specifically, Theorem 3 does not seem to be correct. There are other concerns, such as the significance of the theoretical contributions, the limited empirical value, and the existence of much stronger results. Unfortunately, the authors did not provide responses to the concerns raised by the reviewers.
The authors propose a new ML force field based on GNNs that uses higher-order equivariant messages. Previous methods like DimeNet++ and GemNet use higher-order messages (triplets or quadruplets), but they enumerate the various triplets/quadruplets, which makes them computationally expensive and limits the order of interactions. In the present work, the authors use a tensor product formulation to make the framework efficient. Moreover, the authors empirically show that only 2 MP layers are needed to get good performance with their model, which makes it quite fast. SO(3) equivariance is achieved using spherical harmonics representations, similar to previous methods. The authors also perform a scaling-law analysis as well as extrapolation to OOD data, and show that their model performs well. Strengths * The authors present a new MPNN using higher-order messages. The network is equivariant, which is a desirable property for data efficiency. The model performs well on a variety of small molecular datasets. * The model is very fast thanks to the tensor formulation of higher-order messages and the fact that it only requires 2 layers. Efficiency for ML force fields is very useful in practice for running long MD simulations or performing structure relaxations. * The scaling-law analysis shows desirable scaling behavior. * The model works well on OOD data. Weaknesses * In section 4, the paper claims that the higher-order features B_i,eta,K,L,M can be interpreted as a complete basis. Is there a reference / proof for this? The paper does not mention any. * Experiments don't compare against newer methods like GemNet that also use higher-order message passing. Seems adequate. <doc-sep>This work proposes a new equivariant MPNN model named MACE for force field prediction. In particular, by combining equivariant message passing with efficient many-body messages, MACE achieves both state-of-the-art performance on several benchmarks and considerable computational efficiency. Strengths: * The proposed method achieves state-of-the-art performance on several benchmarks. * The proposed method is faster to train and run inference with than previous models, which is meaningful for modeling some macromolecular systems. * The analysis of the impact of many-body messages on learning curves is interesting and may inspire future work on balancing message correlation orders and network depth when designing architectures. Weaknesses: * The connection between the tensor product and the standard many-body expansion is unclear to me. The authors should provide clearer background and technical details while presenting this core module. * The novelty of the proposed method is not well clarified. As mentioned in the related works (lines 99-109), both the theories and implementation strategies of the Multi-ACE framework have been put forward by previous works. There is no clear clarification of the non-trivial modifications in the proposed method, making it difficult to justify the technical contribution of the paper. * The description of the evaluation benchmarks is unclear. For example, the meaning of rMD17 is not well explained. * The claim about receptive fields is misleading. In line 220 of section 5.2, the authors claim that "By decoupling the increase in correlation order of the messages from the number of message passing iterations, MACE only requires two layers resulting in a much smaller receptive field." There is no doubt that a small receptive field will make the model more parallelisable.
However, a big receptive field also captures more global (long-range) interactions than a small receptive field, which usually leads to better empirical performance (see Figure 1 in this paper). It seems that the authors think the correlation order of the body messages is more important than the size of the receptive field, so the negative impact of small receptive fields is ignored. But there is a lack of convincing discussion about this point. Correct me if I misunderstand something. Limitations are well discussed. <doc-sep>The paper describes a message passing NN approach that efficiently handles many-body interactions (where classical methods typically handle 2-body interactions). The method is based on a standard steerable/equivariant edge embedding and aggregation, leading to equivariant atomic environment embeddings. These per-node embeddings are then pulled through a higher-order CG tensor product (like an equivariant polynomial) by which the pair-wise features start interacting with each other and effectively a higher body order is achieved. Notably, this interaction takes place on the node level (not over all possible many-body combinations), which makes the method highly efficient. The paper then systematically shows which components of equivariant architectures are important (spherical harmonic degree or max body order, or both?). To a large degree it is the max body order, and the spherical harmonic degree helps but to a lesser extent. The paper further shows that with high body order, the number of layers can be greatly reduced (to 2) whilst reaching state of the art. **Strengths** * The paper is very well written. * The literature review is coherent and the storyline is clear * Despite the tough material, the paper does an excellent job of presenting the method in a concise, precise and comprehensible way * Experiments are systematically set up to allow specific testing of individual components and to draw appropriate conclusions. **Weakness** * I found it hard to assess the impact of some of the ablation studies, in particular those pertaining to MD17. I wonder if anything can be said about how well these results generalize to large datasets and molecules of larger size? I am wondering how representative the aspirin molecule is of the large number of challenging problems in computational chemistry? * The work could still benefit from a more in-depth discussion on handling higher-order interactions (e.g. via simplicial neural networks or via DimeNet-like methods). It is clear that these approaches are different; is there anything to say about which difference is decisive? (e.g. eq. 10 is based on products) **Small notes** I do not see the point of the MLP in equation 13. All are linear layers, except for the last one; why? The last one is apparently a one-layer MLP; is the only difference then an activation function? (otherwise a one-layer MLP == a linear layer) Experiments are very well organized and conclusions are appropriate. It is however slightly unclear how well this method generalizes to other types of problems in computational chemistry (involving more or larger molecules).
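As a rough illustration of the factorization the reviews describe (higher body order obtained from products of per-node two-body summaries rather than explicit triplet/quadruplet enumeration), here is a deliberately simplified, non-equivariant sketch; the shapes, names, and use of plain elementwise powers are my assumptions, not MACE's actual equivariant tensor products:

```python
import numpy as np

def many_body_features(pair_feats, max_order=3):
    """`pair_feats` has shape (n_atoms, n_neighbors, d): one feature vector per
    neighbor (a two-body term). Summing over neighbors gives a per-atom summary A;
    taking elementwise powers of A then raises the effective body order without
    ever enumerating triplets or quadruplets, so the cost stays linear in the
    number of neighbors."""
    A = pair_feats.sum(axis=1)                        # (n_atoms, d), two-body atomic basis
    B = [A ** nu for nu in range(1, max_order + 1)]   # body order grows with the power nu
    return np.concatenate(B, axis=-1)                 # (n_atoms, d * max_order)
```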
This paper proposes a novel equivariant message passing network for modeling atomistic systems based on the popular Atomic Cluster Expansion formalism. The method relies on a clever factorization of higher-order terms into products of two-body terms. This allows MACE to be fast while also taking into account many-body effects. Experimental results seem strong, with intriguing scaling properties. Neural network potentials for atomistic systems are a rapidly growing subfield and this seems like a great contribution. All reviewers supported acceptance, noting the strong experimental performance, the fast training and inference speed, and the demonstrated scaling with dataset size.
By restricting (which can also be viewed as expanding) to holomorphic neural networks, the authors are able to use a contour integral in the complex plane at finite $\\beta$ to evaluate a differential used in equilibrium propagation (EP) that is otherwise only valid as an approximation of the backprop gradient in the limit $\\beta \\to 0$. The authors interpret this use of the contour integral as a time integral of an oscillating teaching signal. The authors show that this holomorphic EP surpasses EP in performance, especially in the presence of noise. The authors incorporate a cool idea to convert a differential to a contour integral in the complex plane and then interpret this integral as a time integral of an oscillating teaching signal. They also show that allowing for finite beta enables robustness to noise, unlike EP, which is only valid in the limit $\\beta \\to 0$. However, the authors do not provide any pointers as to how this may be implemented in biology, except mentioning that there may be connections to theta neurons and phasor networks. Still, this is an interesting idea that possibly deserves to be disseminated to stimulate further progress in the search for bio-plausible backprop. Disclaimer: I have not verified any proofs / derivations. Yes <doc-sep>Equilibrium propagation is a promising framework for learning neural networks in a biologically plausible way. However, it is sensitive to noise, which makes it hard to scale to large machine learning tasks, and it requires two phases, limiting its interpretability as a biologically plausible learning framework. The authors solve those two problems by introducing a new variant of equilibrium propagation that relies on complex analysis and holomorphic functions, and derive an online unbiased version of the corresponding learning rule. They show that this new algorithm is more robust to noise than traditional equilibrium propagation and scales to large vision classification tasks such as ImageNet. + Significant theoretical advances on equilibrium propagation + Careful empirical study of the impact of the different parameters ($N$, $\\beta$) on the quality of the gradient approximation + Strong empirical results (robustness to noise, scaling to ImageNet) while being arguably more plausible than previous equilibrium propagation versions + The paper is overall very well written - (Weakness) A detailed description of the algorithm is lacking, along with an interpretation in terms of neural dynamics I found the discussion of the limitations of the paper adequate. <doc-sep>The authors present `holomorphic equilibrium propagation' (holomorphic EP), an extension of equilibrium propagation for networks with holomorphic activation functions, which estimates the gradient of the loss using an approximation of the complex Cauchy integral. This avoids the need for an infinitesimal teaching signal, making the method more robust to noise. Strengths I think the authors have presented a beautiful extension of equilibrium propagation for energy-based models with holomorphic activation functions. I found their presentation to be clear and, as far as I can tell, their work is original (though I am not an expert in this area). The problem of solving credit assignment using local learning rules is an important challenge in neuroscience, and I believe the authors have put forth an interesting and elegant solution.
Weaknesses Despite the elegance of the authors' solution, it is not clear if the results here are actually useful for understanding neural computation, which is in part the motivation for studying local learning rules. However, I think it is reasonable to leave such considerations for future work. As stated above, it is not clear to what extent the method of holomorphic equilibrium propagation is useful for understanding neural computation. The authors address this to some degree in the discussion, but not in detail. I think this is fine for this paper, but it is an important potential limitation. <doc-sep>The authors begin by showing that classic equilibrium propagation can be extended to the case where the network layers are holomorphic dynamical systems. The essential result which follows is that the gradient with respect to parameters can be transformed into a contour integral on a circle around zero, which is equivalent to the first Fourier coefficient of the derivative of the nudged energy function with respect to network parameters. The central claim is that this equivalence eliminates the requirement that the "nudge size" go to zero. The authors define a straightforward numerical approximation to the gradients (equivalent to a discrete Fourier transform), and show that the bias of this approximation goes to 0 as the sample rate of the Fourier transform goes to infinity. The authors' toy experiment shows that holomorphic EP is capable of learning and that the cosine similarity of the approximated gradients is significantly higher than with classic EP. The authors' theoretical investigations also reveal that holomorphic EP can be adapted to an online learning algorithm, assuming that the time scale of the layer dynamics is much less than the time scale of the oscillations computing the Fourier transform, which is in turn much less than the time scale of the weight updates. The authors validate this result on the MNIST task and show that the online approximation of the gradients is still significantly more accurate than the classic EP algorithm. They also compare classic, holomorphic, and online holomorphic EP on the MNIST task and claim that holomorphic EP has greater robustness to noise. Finally, the authors validate their algorithm by adapting a CNN architecture to be holomorphic and showing that holomorphic EP training yields virtually the same results as BP training on a standard CNN. The paper presents a very novel idea which greatly increases the power of a local learning rule using the elegance of complex analysis. The proof of Theorem 1 in the main text could be presented more clearly. It seemed as though the authors were implying that the logic of the complex chain rule needed to be applied to $\\theta$ as well as $\\beta$. The proof in the appendix is much clearer; one thing the authors could do to improve the presentation in the main text would be to write $s_{\\theta,\\beta}$ instead of $s_{\\beta}$. The derivation of the weight gradients presented was only with respect to a mean-squared error loss. However, for the computer vision tasks I would assume a cross-entropy loss would be required. Was an equivalent version of Eq. 8 derived for cross entropy anywhere? Eq. 8 seemed out of place since the main tasks are classification. If I understood correctly, Eq. 8 is the basis for the derivation of the online learning algorithm; is the use of MSE in the online learning essential or inconsequential?
It would be strange if the online training of MNIST used MSE. Was the JAX autodiff engine used to compute the gradient of the nudged energy function with respect to parameters in the case where MSE was not used? The toy task was not adequately explained in the main text: what exactly are the inputs and targets? The way it sounded was that the network is given noise as input and is supposed to output a single value. This is very trivial, even for a toy task. Were there multiple Gaussians? Was the task to classify which samples came from which distribution? How many classes were there? Another minor clarity issue is the labelling of Figures 2d and 3c versus 4a and S6. Why did the authors choose to present 1 - cosine similarity instead of just the similarity itself? This made the plots look very surprising at first. Overall, this is a very interesting paper which I would very strongly accept on the condition that its clarity issues are fixed. The authors adequately addressed the potential societal impact and some of the limitations of their work. However, only in the final appendix did I find a note that holomorphic EP was *much* slower at learning than BP. Especially since the authors mention edge and IoT devices in their discussion, I think a treatment of computational cost should also be moved to the discussion section. Just how much longer should we expect holomorphic EP to take to train and evaluate? Was it only for ImageNet, and why would that be the case? It was not made clear whether 64 bits were required for each real and imaginary part or for the entire complex number; this is an important factor which would certainly affect performance.
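For concreteness, here is a small sketch of the contour-based gradient estimator as the reviews describe it (evaluate the derivative of the nudged energy at N points on a circle of radius beta around zero and extract the first Fourier coefficient); the function name, the assumption that `dE_dtheta` returns the nudged derivative at a complex nudge, and the normalization are my own and may differ from the paper's exact formulation:

```python
import numpy as np

def contour_gradient_estimate(dE_dtheta, beta, N=8):
    """Approximate d/d(beta) [dE/dtheta](beta) at beta = 0 via Cauchy's integral
    formula, discretized with N equally spaced phases on a circle of radius beta.
    This is the finite-nudge replacement for the usual beta -> 0 limit in EP."""
    phis = 2.0 * np.pi * np.arange(N) / N
    acc = 0.0 + 0.0j
    for phi in phis:
        acc += dE_dtheta(beta * np.exp(1j * phi)) * np.exp(-1j * phi)
    return np.real(acc) / (N * beta)   # first Fourier coefficient / contour radius
```

Interpreted in time, cycling through the phases corresponds to the oscillating teaching signal discussed above, and, as the reviews note, the bias vanishes as N grows.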
Equilibrium propagation is a biologically plausible form of backpropagation-based learning where the true gradient of an energy-based model is computed for infinitesimal perturbations. In this innovative work, the authors extend EP using complex analysis that links contour integrals for finite perturbations with oscillatory dynamics in time. This not only allows better gradient estimates, but also applications to related theories of learning in neuroscience as well as neuromorphic engineering. It represents a significant advance that opens new doors.
The authors present the split Poisson-Gamma (SPG) distribution, an extension of the Poisson-Gamma distribution, to model a discrete non-stationary stochastic process. SPG has an analytical posterior, allowing accurate prediction after the model parameters have been inferred a single time. The authors apply the SPG to model tumor mutation rates and show that model parameters can be accurately inferred from high-dimensional epigenetic data. This is achieved through a combination of CNNs, GPs and MLE. The results are promising in detecting tumor drivers such as genes, regulatory structures and base pairs. The paper is well written, with the motivation described and prior literature discussed. Figure 1 is well laid out to drive home the point. Comments: * Section 2: are all the distributions univariate? If not, a table giving dimensions would be helpful. What is the dimension of the covariates (eta_R)? It would also help if a plate diagram of the generative model were given. * Why was a CNN chosen for dimensionality reduction? * Were other, non-neural architectures tried for dimensionality reduction? * What is the significance of '735' epigenetic tracks? * How valid is the assumption that events are distributed independently in the mutation space? I think this is too restrictive, but at the same time it simplifies the problem. A discussion of this simplification will be essential for the medical/biology audience. * The title of the paper has 'causes'. The title on the ICLR webpage has 'cusses'. Please correct if possible. <doc-sep>Summary: The paper extends Poisson-Gamma models for non-stationary sequences in a manner that allows partitioning the counts according to a binomial model to account for multiple resolutions. This generalisation is motivated well with a biological application of practical relevance, and the proposed method is particularly strong in enabling the linear computational scaling required for analysis of large genome data. Reasons for score: I am leaning towards rejection in the current form. The contribution itself is worth publishing and the method is likely to be valuable for the application, but the presentation would need to be improved (especially regarding the GP+CNN part; see the detailed feedback) to better communicate the technical contributions to the ICLR audience. Conditional on improved presentation, I would be leaning towards acceptance. Detailed feedback: The proposed split Poisson-Gamma (SPG) process seems reasonable, but in technical terms it is a fairly straightforward hierarchical structure and can be constructed using standard properties of Poisson-Gamma. It is well motivated by the resulting efficient inference and this specific application, and may find uses in other applications as well, but it does not provide a very clear theoretical contribution that would open immediate follow-up research directions for more general modelling questions. My main problem with the paper concerns the structure of the presentation. While the authors motivate the model well and provide very clear illustrations for the application, the method sections are disconnected and I had trouble following the technical contributions. In particular, the connection between Section 2 (SPG) and the technical algorithm required for using it (Section 3.3) is unclear. To me it seems the GP+CNN part is an integral part of the overall solution and a contribution in itself (and, in fact, an important one -- the SPG alone is not quite sufficient as a theoretical contribution).
It provides a concrete algorithm using SPG and is general, but the description is given only after the discussion of specific data and reads more like a minor technical detail with no proper theoretical justification. For me, the paper would be more natural if Section 3.3 (and possibly some other parts of Section 3) were described after Section 2 as a description of how SPG is used in practice, and if they used shared notation and terminology. This would make the practical approach easier to follow and the contributions more clear. The empirical experiments and illustrations for the application are well carried out, and serve as a good demonstration of the method. However, a reader uninterested or uneducated in this specific application will have some trouble figuring out how well the method works; this could be improved by complementing the results with clear artificial data of a slightly simpler nature. Modifications after discussion: Increased score by one as the presentation in the revised version has clearly improved, along the lines requested in the original review.<doc-sep>Short summary: This paper introduces a split-Poisson-Gamma model to capture discrete-time, integer-valued stochastic processes at multiple scales. Although it seems simple and incremental compared with Poisson-Gamma models, this novel method may have some impact in modeling mutation rates and identifying genomic elements that drive tumor emergence. Thus, I vote for weak acceptance. -quality: the technical contribution appears to be simple and incremental but could be useful in real applications of detecting cancer-related mutations. -clarity: most parts of the paper are clearly written. Nonetheless, I find the paper could be improved by clearly describing the input data in Sec. 2. -originality: to my knowledge, it is the first attempt to develop a split-Poisson-Gamma model for discrete-time, integer-valued stochastic processes at multiple scales. -significance: it appears to be an incremental contribution in the family of Poisson-Gamma models. I think the split-Poisson-Gamma model is a sensible and useful method for detecting cancer-related mutation rates. Pros: a useful and very sensible model for detecting cancer-related mutation rates. Cons: A clear description of the input data is missing at the beginning of Sec. 2. I think it would be better if the authors positioned the split-Poisson-Gamma model among the closely related Poisson-Gamma models, to clearly distinguish the main differences and explain why the novel model outperforms the others. In addition to estimating mu_R and sigma_R^2, how do the authors place hyper-priors over these parameters and perform MAP estimation? I agree that the detection of cancer-causing mutations is a significant application of the multi-resolution modeling of discrete stochastic processes. Nonetheless, I would suggest that the authors discuss the broader applications of the split-Poisson-Gamma (SPG) model for a venue like ICLR. comments: -"NB" notation is used in Eqs. (6, 7) but is not yet defined as the negative binomial. -a "Pr" is missing for lambda_R in Eq. (9). -can you present the complete procedure for parameter inference in the supplement? -In C.1 of pp14, it should be NB(M_i;alpha_R, 1/(p_i \\theta_R+1)).
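To make the construction discussed in these reviews concrete, here is a minimal Monte-Carlo sketch of a Poisson-Gamma region count split to a single site by a binomial, checked against the negative-binomial moments implied by the corrected formula NB(M_i; alpha_R, 1/(p_i theta_R + 1)) quoted above. The parameter values and names are illustrative assumptions, not the authors' inference code.

```python
import numpy as np

# Monte-Carlo check of the split Poisson-Gamma construction discussed above:
# a region rate lambda ~ Gamma(alpha, theta) (shape/scale), a region count
# M | lambda ~ Poisson(lambda), and a site count split off binomially,
# m_i | M ~ Binomial(M, p_i).  Marginally m_i should be negative binomial with
# shape alpha and success probability 1/(p_i*theta + 1), i.e. the corrected
# formula for Appendix C.1 quoted in the review.  All values are illustrative.
rng = np.random.default_rng(1)
alpha, theta, p_i, n = 2.0, 3.0, 0.1, 500_000

lam = rng.gamma(shape=alpha, scale=theta, size=n)  # region-level rates
M = rng.poisson(lam)                               # region-level counts
m_i = rng.binomial(M, p_i)                         # counts split to one site

# NB(alpha, 1/(p_i*theta+1)) has mean alpha*p_i*theta and
# variance alpha*p_i*theta*(1 + p_i*theta).
print(m_i.mean(), alpha * p_i * theta)                      # both ~0.6
print(m_i.var(), alpha * p_i * theta * (1 + p_i * theta))   # both ~0.78
```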
Reviewers agree that the paper excels in providing a principled pipeline that combines CNNs and GPs with a Poisson-Gamma distribution to provide a generic approach for multi-resolution modelling of tumour mutation rates. Taken as a whole, this combination of techniques addresses a key challenge in computational biology and scales to large datasets.
Summary: Cluster-former is the latest proposal for enabling transformers to deal with long input sequences. Such sequences are particularly problematic for problems like question answering, QA, (or summarization), where the context can be arbitrarily long, and effectively open-ended when the setup includes a context retrieval component (e.g., as in OpenQA). Cluster-Former combines local information by encoding sequence chunks separately with a sliding window, then injects clustering layers, that use k-means to compute centroids to cluster hidden states and capture global information. The approach yields state-of-the-art, and top-of-leaderboard, results on Natural Questions (long answers). This is great solid work, showing how clustering can be designed, implemented and used successfully, to capture long distance dependencies in sparsified self-attention models. This is a concrete and useful contribution in itself for the large community working on this type of architecture and related problems. At the same time the approach involves quite a bit of complexity which makes one wonder if the baselines could be more competitive given a comparable amount of fine tuning. At the same time, competitive solutions of different nature (generative) are being proposed that pose a concrete challenge to this type of architecture, which are not evaluated, but should be at least discussed. Pros - Solid proof of concept and reference to successfully implementing clustering in sparse attention. - Strong empirical results, particularly the Natural Questions’ leaderboard for long answers. - Impressive amount of technical work, also with respect to reproducing results with other systems. - Notwithstanding the amount of work in this area, literature review and comparison seems adequate but I might have missed something. - Some qualitative analysis: which could be extended and articulated, in particular it would be interesting to understand where the long distance information helps; e.g., vs the sliding window approach and particularly vs LSH. Cons - One of the arguments for the paper is that it is not clear if related methods, like Reformer, can generalize to long sequences. However, in the evaluated implementation (Table 2) LSH is not that much worse than k-means. In fact, even just the sliding window alone seems surprisingly competitive on all QA tasks. While being much simpler. I find the authors’ effort to compare with all these related methods truly commendable. It seems natural to wonder how much more fine-tuning has gone into Cluster-Former compared to the simpler baselines, given its additional complexity. It would be important to discuss this aspect in more detail. - Given the recent work of generative readers: https://arxiv.org/abs/2005.11401, and particularly Izacard & Grave, (FID, https://arxiv.org/pdf/2007.01282.pdf) it seems unclear that direct encoding is the only, or the best, option for dealing with long sequences, at least for QA. In particular, FID seems attractive due to its simplicity and capacity (about twice as much as Cluster-Former it seems). The authors should discuss this work. It would be ideal, at some point, to compare directly by evaluating on OpenQA-NQ or by other means. Detailed feedback - Pleas define x, from x\\times d, right below Eq(1). Num tokens in context? - Scaler value/scalar value? - It would be great to explain Eq(2) step by step for clarity. - What is the effect of the overlapping content size m-l? And in general of parameters l and m. 
In particular, could this positively affect the performance of the simpler sliding-window model? - Why use cluster layers at only 2 fixed depths? How does this parameter affect results? - The max length is constrained to 5k (10k at test) due to memory constraints; can this be improved, and how? - How long did it take to train the leaderboard (NQ) entry system? - It is unclear what Table 2 evaluates on, e.g., for NQ, is this on the dev set? Or a fraction of it? <doc-sep>The paper proposes ClusterFormer to address the problem of quadratic compute requirements of the attention mechanism in a Transformer model. To this end, the paper proposes to combine local attention, which promotes local consistency, with KMeans clustering, which gathers global information for every token. The paper establishes strong results on the long-form question answering task of Natural Questions in an extractive setup, topping the leaderboard ahead of ETC-large. While the idea in the paper is natural and the results on NQ are strong, unfortunately the idea is not new and has already been introduced in the work "Efficient Content-based Sparse Attention with Routing Transformers" [1, 2], which the authors fail to cite or credit. Therefore, I recommend rejection. References: [1] https://openreview.net/forum?id=B1gjs6EtDr [2] https://arxiv.org/abs/2003.05997 <doc-sep>The paper describes a method to handle long documents for question answering. Most existing approaches use a sliding window approach, without communication between different sliding windows. Instead, they propose an approach that clusters individual vectors, and allows communication (attention) among the locations in the same cluster. I am not sure about the intuition behind this -- why would communication between similar vectors be more efficient than communication between dissimilar or randomly chosen vectors? Would the performance improve if you used a better clustering algorithm? The authors do not provide much intuition on this either. I have a concern about the comparison with locality-sensitive hashing. The number of buckets used in locality-sensitive hashing was 64. And it's clear that having more clusters helps. And the comparison between #C=64 Cluster-Former and Locality Sensitive Hashing is marginal -- less than one point on all measures. I am not sure the results are strong enough to support that clustering is better than random assignments. For a valid comparison, they should report the results with locality-sensitive hashing and 512 buckets. The paper evaluates on three QA datasets, as well as on perplexity experiments for language modeling, and shows promising performance. Some clarifying questions: 1) could you specify in a bit more detail how you "classify the mean values of the first hidden state of all the chunked sequences to identify whether the question has short / long answers or not"? 2) I'm a bit confused about the experimental setup. For NQ, what are the numbers in Table 2? Are they on the dev set, and are the numbers in Table 3 on the test set? Please make this clear. 3) would this work on a really lengthy QA dataset such as NarrativeQA? 4) From Table 2, it seems the more clusters, the better the performance. Why do you stop at 512? Does this have something to do with computational efficiency? <doc-sep>**Summary:** This paper introduces the ClusterFormer, a transformer architecture that scales gracefully to long sequences by restricting pairwise attention according to the cluster assignments of the hidden states of input tokens.
The paper presents strong empirical results on question answering benchmark datasets, outperforming state-of-the-art approaches as well as strong baselines introduced by the authors. Summary of review: Strong empirical results on question answering datasets; interesting data-driven efficient transformer model; further clarification on the relationship to related work needed; experimental results would be stronger with more analysis of the proposed method. **Strengths:** The all-pairs self-attention component of transformers limits their scalability to long sequences. This paper presents a model that reduces the complexity by grouping related tokens into clusters, such that self-attention is applied only within each cluster. In particular, a long sequence is first encoded using a sliding-window-style approach, then these sliding-window representations are clustered and the resulting cluster memberships determine the sparsity for the remaining layers of the transformer. The approach appears to work quite well on question answering, achieving state-of-the-art results on three datasets. The paper is well written and the presentation is very clear. **Weaknesses:** **Relationship to related work:** The proposed approach appears to share many similarities with the Routing Transformer (Roy et al., 2020). While both approaches are from this year, I think it would be important to present the similarities and differences of the two approaches (i.e., sliding windows, the way k-means centers are updated, etc.) clearly in this paper. Other related, though more distinct, ideas are used in the inducing-point-based variant of Set Transformers (Lee et al., 2019). **Empirical Analysis of Scaling to Long Sequences:** I think the presentation of the paper would be improved if the authors demonstrated just how much computation is saved by using these sparse, cluster-based attention layers. It would also improve the presentation to compare the efficiency of the proposed approach to other methods at varying input lengths. Similarly, it would be interesting to show the performance of the proposed approach compared to baselines for varying maximum sequence lengths. It would further be interesting to investigate the cluster centers discovered by the method, what they represent, and how they change over time. This would be particularly important to analyze how the model picks up information across long sequences (i.e., showing that clusters are not made up of tokens from the same sliding window). **Details of k-means**: Apologies if I've missed this, but is anything done to ensure that the cluster sizes produced by k-means are relatively balanced? The skew of these sizes will directly impact the scalability of the method. Further, while it is implied by the method/text, it would be nice to describe how the gradient is calculated for this hard cluster assignment. Aurko Roy, Mohammad Saffar, Ashish Vaswani, David Grangier. Efficient Content-Based Sparse Attention with Routing Transformers. First posted March 2020. https://arxiv.org/abs/2003.05997 Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, Yee Whye Teh. Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks. ICML 2019. http://proceedings.mlr.press/v97/lee19d/lee19d.pdf **Questions for the authors:** • Please see the questions in the details of k-means section.
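For reference, here is a minimal sketch of the cluster-restricted self-attention pattern these reviews describe: k-means is run over hidden states and softmax attention is computed only among tokens that share a cluster, giving a cost of the sum of squared cluster sizes rather than the full quadratic cost. It is a simplified stand-in, not the authors' implementation; learned projections, multiple heads, the sliding-window encoder, and online centroid updates are all omitted.

```python
import numpy as np

def kmeans_assign(X, k, iters=10, seed=0):
    """Plain k-means returning a cluster index per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = X[assign == c].mean(axis=0)
    return assign

def clustered_attention(H, k=4):
    """Softmax attention restricted to tokens sharing a k-means cluster."""
    T, d = H.shape
    Q, K, V = H, H, H                     # learned projections omitted for brevity
    assign = kmeans_assign(H, k)
    out = np.zeros_like(H)
    for c in range(k):
        idx = np.where(assign == c)[0]
        if len(idx) == 0:
            continue
        scores = Q[idx] @ K[idx].T / np.sqrt(d)
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)
        out[idx] = attn @ V[idx]          # cost is sum_c |cluster_c|^2, not T^2
    return out

H = np.random.default_rng(2).normal(size=(64, 16))
print(clustered_attention(H).shape)       # (64, 16)
```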
The paper attempts to make transformers more scalable for longer sequences. In this regard, the authors propose a clustering-based attention mechanism, where tokens attend only to other tokens in the same cluster. This design reduces memory requirements and allows more information mixing than simple local windows. Using the proposed approach, new state-of-the-art performance is obtained on Natural Questions long answers, although the margin is small. However, reviewers raised numerous concerns. First, the novelty of the paper compared to prior work such as Reformer or Routing Transformer, which also conceptually perform clustering, is not resolved. Second, the claim that k-means yields a more balanced/stable clustering than LSH is not well established. Finally, it is not clear why clustering, i.e., attention between similar vectors, is better than attention between dissimilar or randomly chosen vectors, or whether it is even as expressive. Thus, unfortunately, I cannot recommend acceptance of the paper in its current form to ICLR.
The paper proposes a method to dissect a policy trained for a simple task to extract "motor primitives" (or sets of neurons) that provide different behaviors or attributes. The authors then use these attributes to have humans control the agent for more complex tasks or to create different behaviors by combination. The paper showcases the algorithm in 2 domains: driving and quadruped locomotion (in simulation). The authors first train for driving (or walking) forward, without any explicit goal conditioning. Then, the authors dissect the policy for different behaviors by replaying the trained policy and doing frequency matching. Next, humans use the provided interface for obstacle avoidance. In addition, the authors show that they can obtain very different behaviors by combining these motor primitives in different domains such as the 2D walker. As an example, they obtain a front-flipping behavior by activating jumping and roll rate, which are discovered using policy dissection. The method proposed by the authors is very interesting for interpretability and the discovery of skills in Reinforcement Learning. As the authors state, instead of training for complex behaviors, they can train for simple ones and discover different skills from replay. I think the front flipping is a very good example. It is very hard to design reward functions for all these different skills and train a network that can perform all of them, but the authors can find these skills and combine them without reward engineering, which is very interesting, especially from a locomotion perspective where training is usually goal-conditioned rather than skill-rewarded. Another interesting side effect of the approach is from a robotics perspective. Although the authors work in simulation, not on real robots, the proposed method allows the discovery of skills with a small amount of replay data. This can be a very interesting approach to the well-known sim-to-real problem, where trained policies do not behave the same way in the real world. The proposed method can be used to discover skills directly on the hardware; even though the hardware behavior does not match the simulated behavior, this would lead to motor skills that work on the real robot. In my opinion, the approach has some weaknesses as well. 1- The authors use the approach to have humans control the robot to solve a slightly more complex task. On the other hand, the obstacle avoidance problem can already be solved with goal-conditioned policies, so the results do not exploit the method to its full capacity. Instead, the authors could try to tackle a problem that can't be solved with simple training. One example I can think of is a parkour setup that requires jumping and walking low. 2- The authors could automate the higher-level controls instead of collaborating with humans. The proposed scheme is perfect for a hierarchical reinforcement learning approach. Once skills are discovered, the authors could retrain with the higher-level controls provided to the humans, together with additional sensory information that did not exist in the initial training. This could very well train in a very short time and handle the complex scenarios. 3- The authors could automate the discovery of the skills. The literature contains many examples of such approaches based on maximizing entropy during training. Instead of using human-designed motor primitives, learning could provide more diverse behaviors as well. 4- The discovery is based on replay of the very simple behavior.
If I understand correctly, the matching is based on behaviors seen in very small timeframes (i.e., it turns slightly left and then slightly right to go straight). But if the initially trained policy were great, the behavior could very well just go straight, which would lead to a failure to match (again, if I understand the matching process correctly). Adding disturbances (at the neuron level) to the replay would provide much more than the initial data set where the agent just goes straight. Of course, this might require some more work and careful design of the disturbance process. Despite these weaknesses, I think the method has very good potential, is very interesting to read, and the provided experiments are sufficient to show some of this potential as well. The limitations of the method are explained well, but do not cover the weaknesses (or potential directions) that I explained above. I don't think that the paper has any major societal impact. <doc-sep>In this paper, the authors propose a novel method to modify the behavior of pretrained neural network policies to obtain the desired control effects without changing the weights of the networks. Instead, the authors analyze the activation patterns of all neuron units in the network, and create associations between the neuron activations and desired kinematic properties (such as angular velocity, translational velocity, etc.) in the frequency domain through Fourier transforms. For each kinematic attribute, the proposed method identifies one evoking neuron, so during online policy rollout the users can directly stimulate a certain desired behavior of the robot by activating the corresponding neuron, even though the policy was never trained for this new behavior/task. The authors demonstrate in a set of experiments that, with their method and a human in the loop, they can improve performance in many safety environments. The strengths of the paper: It introduces a new concept for zero-shot generalization of existing policies without any re-training. It provides new insight into the interpretability of neural network policies, i.e., using the frequency domain to associate neuron activations with behavior. It demonstrates that, even though a policy may not be trained on certain tasks, some behaviors might have already been seen during the training phases and are somehow "memorized" through the network structure. The weaknesses of the paper: The motivation of the paper is to make neural controllers safer by incorporating humans in the loop. Thus, I assume that the control accuracy and robustness of the NN policy are very important. However, by "hacking" into an NN policy that is not trained with human inputs (e.g., steering and throttle for cars), I am not convinced that the behavior, such as velocity tracking accuracy, etc., can be as good as that of goal- or human-input-conditioned policies. The authors assume that for each kinematic attribute there is one neuron that is responsible for it. This assumption is not quite sound to me, as the activation of a neuron will have chain effects on other neurons down the propagation path. And those other neurons might be responsible for other kinematic behaviors according to the same assumption. Also, I am not sure how this method will scale to truly deep neural networks with many hidden layers and neurons, as there are many more activation paths in the network. N/A <doc-sep> This paper proposes a framework where a human can intervene to steer the behaviour of an RL agent.
This is done by allowing the human to select from a "stimulation-evoked map". * I think the paper has a big problem with clarity; the setting is very unclear. * Experiments don't compare against a baseline. * The approach requires full attention from the human, and the system isn't able to learn from the human interventions. Apologies for not discussing this more, but this is not my area and I found the paper really unclear, so I struggle to write a proper review. The agent isn't able to learn from interventions. <doc-sep>This paper introduces a method termed 'policy dissection', which finds a mapping between neurons in a trained continuous control policy and high-level kinematic motions of interest (e.g., turn left, turn right, stop, go forward). This mapping is then used to allow for human intervention or modification of the trained policy. The core contributions are a method for identifying this mapping (using a frequency-based analysis technique), and showing that this can be exploited to allow policy intervention (by amplifying units in a direction that correlates with an increase in the desired kinematic motion) and high-level policy modification by humans. Strengths: - This is a creative and interesting idea, which I have not come across before. I enjoyed reading this paper. - I think this work is significant from the perspective of trying to understand the implicit representations or primitives captured by neural control policies, and because it introduces a method for humans to controllably modify a learned policy to achieve a goal. - A thorough set of experiments is provided in a number of experimental settings. Weaknesses: - The paper could do with some smoothing and improvements to the writing. There are numerous typos and grammatical errors, but these did not get in the way of the idea being communicated. - Although the proposed approach does allow for some level of human intervention and modification, this appears to be very imprecise, and it is unlikely that the proposed approach allows more than very coarse control. Limitations are discussed (neurons that do not align with any primitives), but I would value more discussion on the coarseness of the method - the video does make human take-over appear quite crude. I would also value a discussion of competing approaches, e.g., policies that are explicitly trained to allow for kinematic intervention, and motivation for the proposed approach over these.
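As a concrete reference for the frequency-based association step described in these reviews, the toy sketch below records unit activations and kinematic attributes from a rollout, compares them in the frequency domain, and maps each attribute to the best-matching unit. The matching criterion used here (correlation of log-magnitude spectra) and all variable names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def log_spectrum(x):
    """Log-magnitude spectrum of each column, DC component removed."""
    return np.log1p(np.abs(np.fft.rfft(x - x.mean(axis=0), axis=0)))

def dissect(activations, attributes):
    """activations: (T, units), attributes: (T, attrs) -> best-matching unit per attribute."""
    A = log_spectrum(activations)
    K = log_spectrum(attributes)
    A = (A - A.mean(0)) / (A.std(0) + 1e-8)
    K = (K - K.mean(0)) / (K.std(0) + 1e-8)
    corr = K.T @ A / len(A)               # (attrs, units) spectral correlation
    return corr.argmax(axis=1)

rng = np.random.default_rng(3)
t = np.arange(512)
units = rng.normal(0.0, 0.3, size=(512, 8))       # recorded unit activations
units[:, 5] += np.sin(2 * np.pi * t / 40)         # one unit oscillates at the gait period
yaw_rate = np.sin(2 * np.pi * t / 40 + 0.5)       # kinematic attribute with the same period
print(dissect(units, yaw_rate[:, None]))          # -> [5]
```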
This paper proposes to dissect a trained policy, finding the correspondence between neuron activations and motion primitives. This enables a human to control the agent to complete complex and diverse tasks even though the policy is trained on only one simple task. All reviewers agree that the proposed method is a creative solution to an important problem, and that it may also have important applications in robotics. In addition, this is an important step towards understanding/interpreting neural network policies, which may inspire follow-up works in the future. There are a few concerns and suggestions in the original reviews. Most of them are sufficiently addressed in the rebuttal and discussions. The new experimental results look impressive, and help to resolve a major concern about the coarseness of the human control that this paper enables. While the paper's writing is still rough, the creative solution and the potentially important applications compensate for the shortcomings. Thus, I recommend accepting this paper. Please revise the paper by incorporating the reviewers' comments.
In this paper, the authors consider Byzantine-robustness of a distributed learning system in the setting of non-iid data distribution. The bucketing technique is applied to reduce the heterogeneity across the workers. To further reduce the variance within each worker, the authors adopts the momentum technique. These two techniques can be combined with various robust aggregation rules. The authors prove the convergence of the combined methods, and show that they reach the lower bound. The authors also claim that for the over-parametrized case, the negative impact of data heterogeneity can be eliminated. 0. The investigated problem, Byzantine-robust distributed learning over non-iid data, is important. The algorithm development and the analysis both contain new results 1. In the previous version of this paper, the authors propose resampling to reduce the heterogeneity across the workers. Why do the authors switch to bucketing in this version? 2. In Page 2, the authors claim that “none of these (non-iid Byzantine-robust) methods are applicable to the standard federated learning.” Please justify this claim. 3. Related to the above comment, the authors do not compare the proposed methods with any other non-iid Byzantine-robust methods. 4. Also about the numerical experiments, the MNIST dataset is too simple. Testing on one or two additional large datasets could make the results more convincing (in particular, for the over-parametrized case). 5. In Section 3.2, the mentioned mimic attack has appeared in other papers that investigate non-iid Byzantine-robustness. Please cite. 6. In Section 4.2, why place centered clipping here? It should be located in Section 3. 7. The role of momentum is not well investigated. The authors should indicate its contribution to the performance improvement. 8. For the numerical experiments of the over-parametrized case, the authors use centered clipping as the basic aggregator. But centered clipping has certain robustness in the non-iid setting. Using other base aggregators, such as geometric median, could be better here. 9. Theorem IV requires that B^2 is sufficiently small. Please check whether this condition can be satisfied in the numerical experiments. Overall, this paper has publication merits, but some issues need to be addressed. <doc-sep>This paper focuses on Byzantine-robust federated learning, a learning paradigm where a centralized server coordinates learning across a data set partitioned across multiple worker nodes, some of which are adversarial. Typically this is done via robust aggregation schemes which ensure that adversarial nodes do not hinder learning. In this setting, the author's main focus is on studying the setting where honest worker nodes own heterogeneous data sets. As the authors mention, the existing literature on Byzantine-robust federated learning focuses on the setting in which honest worker nodes draw iid samples from an underlying data distribution, and although there is also existing literature on (non-Byzantine) federated learning over heterogenous data distributions amongst nodes, this work brings these two strands of research together. The main results are the following: -Providing (simple) settings and empirical results wherein common aggregation methods fail in the presence of heterogeneous data, even when there are no adversarial nodes. -Providing a specific attack vector for the heterogeneous Byzantine setting, whereby adversarial nodes choose an honest node to replicate ("Mimic" attack). 
This exposition comes with an efficient algorithm for computing the most "hurtful" choice of node to mimic, and the authors empirically demonstrate the potential of this attack against common aggregation methods -Formally defining the design objective of "Agnostic Robust Aggregators", which quantifies degradation of aggregator performance as a function of byzantine tolerance and dataset heterogeneity. -Providing a simple randomized bucketing scheme. Messages from nodes are randomly averaged in buckets, where the number of buckets is a free parameter. Increasing the number of buckets reduces the variance of bucket representatives, but potentially increases the number of byzantine agents (post-bucketing). If there is sufficient margin in existing aggregation protocols, performing judicious bucketing before applying the aggregation rule permits making Agnostic Robust Aggregators out of existing aggregation schemes (with specifically quantifiable guarantees). -The authors also theoretically study optimization methods that make use of aggregation methods (via stochastic gradient descent). In this setting they provide upper bounds on convergence rates of optimization using robust aggregators (in terms of the parameters that govern the robustness of aggregation method as per Definition A), and show that these upper bounds match existing upper bounds in the case where there are no byzantine agents (\\delta = 0) or when data is homogeneous (\\rho = 0). In the regime where these terms are non-zero, convergence is not guaranteed, but via information theoretic methods, the authors also provide a matching lower bound -If heterogeneity bounds are more refined (whereby gradient variation is bounded by the order global gradient of the entire dataset), then the authors demonstrate convergence of robust aggregation methods on SGD. This setting applies when systems are over-parameterized. -Finally, the authors provide multiple experimental results that validate the theoretical findings of the work. Strengths: -I think that this is an elegant way of combining two key areas of work in the FL community: non-iid data sets + Byzantine nodes -Empirical results seem robust -The bucketing model is simple, believable, and it is nice that it composes well with existing methods. -The theoretical analysis of the model is extensive: it matches existing bounds when \\delta or \\rho=0, and the lower bound matches the non-trivial term that occurs when both \\delta,\\rho \\neq 0. Weaknesses: -Are there more complicated examples of non-iid models under non-adversarial attacks that perform poorly? The Rademacher distribution is a good starting point to justify the analysis, but seeing the performance of existing aggregators on more complicated data sets when \\delta=0 would be interesting. - Still not convinced about the upper bound on gradient norm in the over-parameterized setting (where the term is bounded by B \\times \\grad f(x)). Is this realistic? Perhaps I am not seeing how this falls out of over-parameterization. A note on this might be useful in the write-up of the paper. -The lower bound analysis (Theorem 3) could have some more intuition as well, in terms of what the argument is like, and what the functions that are used consist of. 
In particular, it would be interesting to note whether the author thinks the adversarial functions for the bound are efficiently computable, and if not, whether hard functions can occur in practice over the byzantine nodes (could this happen without some amount of coordination amongst the byzantine nodes for example?) -What about other types of attack objectives, i.e. backdoor attacks (this seems highly relevant in the non-iid setting, especially if the adversary has knowledge of which clients have which data) I recommend this paper for acceptance. This seems like a fitting extension of existing work, and the authors have provided a corresponding framework for quantifying performance loss in the presence of byzantine agents and heterogeneous data for federated learning. Their results match existing work in regimes when there are no byzantine agents, and when there is a lack of heterogenous data. Furthermore, the non-trivial loss of performance in the regime where both byzantine agents and heterogenous data is present is accounted for in a matching lower bound. These results will be of interest to the growing body of work in federated learning at ICLR. <doc-sep>This paper proposes a bucketing scheme for robust distributed learning with heterogeneous data. Gradients are more homogeneous after bucketing, which will increase the robustness of existing algorithms. The author provides theoretical analysis as well as empirical results. ## Strengths The bucketing scheme proposed in this paper is interesting. It is easy to adopt and does not bring much extra computational cost, but can improve the performance of existing Byzantine-resilient algorithms, as shown in the empirical results. Also, I appreciate that the authors provide convergence results for general aggregation rules, together with precise analysis for three common aggregation rules in Theorem I. ## Weaknesses When decreasing the heterogeneity, the bucketing scheme also reduces the number of candidate vectors (gradients). However, in the worst case, the number of corrupted vectors will remain the same. Thus, the number of Byzantine workers that can be tolerated will decrease to $1/s$ when adopting bucketing. Although the empirical results show that $s=2$ is sufficient to overcome heterogeneity, the number of Byzantine workers that can be tolerated already drops by half in this case. I am not aiming to criticize the bucketing scheme but to point out the limitation. I think that this problem has truly restricted the application prospects of bucketing in general cases. Although there are weaknesses in this work, considering the challenges from heterogeneity, I think this work is slightly above the threshold. -------- (Post-rebuttal) My major concerns have been properly addressed by the authors and the quality of this paper has been improved after revision. Thus, I have raised my rating. <doc-sep>The paper provides a systematic and deep theoretical study of the problem of Byzantine-robustness in the heterogeneous setup, i.e., when workers have non-identical datasets and, as s result, their local loss functions are non-identical. The authors prove that even under bounded heterogeneity it is impossible to provably achieve any predefined accuracy of the solution by any method. Next, the authors propose and analyze an algorithmic tool called *bucketing* and show that it makes some known aggregators such as Krum, coordinate-wise median, and geometric median to be *agnostic* robust aggregators. 
Under bounded heterogeneity assumption, the authors derive that Robust Client Momentum method from (Karimireddy et al., 2021) converges for non-convex problems to a neighborhood of a stationary point. Next, it is shown that the size of the neighborhood can be made arbitrarily small for over-parameterized problems. Numerical experiments corroborate theoretical findings and, in particular, show the benefits of the proposed bucketing procedure. Overall, the paper is well-motivated, clearly written, and contains solid contributions. There are also some minor inaccuracies in the proofs, small typos, and other minor weaknesses -- I list them below. I encourage the authors to address all of them. ## Strengths 1. **Lower bounds for Byzantine-robust optimization under bounded heterogeneity.** In Theorem III, the authors prove that even in the case of bounded heterogeneity (in the classical sense for papers on FL) one cannot achieve any predefined accuracy even for the strongly convex problems. In particular, they prove that functional suboptimality cannot be made smaller than $\\Omega(\\frac{\\delta \\zeta^2}{\\mu})$ ($\\mu$ - str. convexity parameter, $\\delta$ - fraction of Byzantines, $\\zeta^2$ - dissimilarity measure of the gradients of local loss functions). Typically, $\\mu$ is quite small and $\\zeta^2$ can be large for some FL problems when the clients naturally have highly heterogeneous data. In such situations, the lower bound is really pessimistic. Although this fact (Theorem III) is expected for the experts in optimization, it is very important for the field of Byzantine-robust optimization: it creates a clear picture of the limits of robustness. 2. **New upper bounds for Byzantine-robust optimization under bounded heterogeneity.** The authors prove new complexity bounds for Byzantine-robust optimization under bounded heterogeneity in the non-convex case. The derived results match the established lower bound. Moreover, it is shown that one can achieve any predefined accuracy (Theorem IV) when the heterogeneity level at the point $x$ is proportional to $\\|\\nabla f(x)\\|^2$ (can be seen as a strong growth condition from (Vaswani et al., 2019)). 3. **Bucketing as a tool to make Krum, Geometric Median, and Coordinate-wise Median robust.** (Karimireddy et al., 2021) show that even in the homogeneous case Krum, Geometric Median, and Coordinate-wise Median are provably non-robust to Byzantine attacks. This work fixes this drawback of the mentioned aggregation rules (Theorem I) via a simple tool called bucketing. This result (together with Theorem II) is very important for Byzantine-tolerant optimization in the homogeneous case, since Krum, Geometric Median, and Coordinate-wise Median were not analyzed previously without restrictive assumptions. 4. **Clarity.** The paper is well-motivated, clearly written, and has a good structure. ## Weaknesses 1. **Inaccuracies in the proofs.** I have checked all the proofs and noticed several inaccuracies and unexplained derivations. Although it is possible to fix all the issues, in the current shape, it is hard to follow some parts of the proofs. I list all my questions and comments in section **Questions and comments about the proofs** of my review. 2. **Comparison with related work.** Unfortunately, it is not trivial to compare the results from this paper with other related works given the information provided in the related work section. The current version of the related work section summarizes the known works without going into the details. 
However, it is important for the paper to provide an explicit comparison with the related works (with the discussion of the derived rates and assumptions). I strongly encourage the authors to provide such a comparison at least in the appendix. Moreover, the authors should pay a lot of attention to the comparison with Yang & Li (2021) since they also use bucketing. ## General questions and comments 1. **Abstract, sentence "Our work is the first...":** This sentence is not correct since Li et al. (2019) also derive convergence results under realistic assumptions for strongly convex problems. Perhaps, the authors wanted to emphasize that their work provides the first guarantees under the not too strong assumption in the non-convex case. 2. **Page 2, "However, none of these methods are applicable to the standard federated learning."** This claim requires additional clarifications. For example, it is not clear why the method from Li et al. (2019) is not applicable. 3. **Definition A:** I suggest the authors additionally emphasize that this definition is useful for both homogeneous and heterogeneous cases. 4. **Remark 3:** This is 1 iteration of CClip, which is not necessarily the output of the aggregator. 5. **Second paragraph after Remark 4, "... which matches the optimal iid Byzantine robust rates of (Karimireddy et al., 2021)".** It is not clear why the mentioned rate is optimal (and in what setting). Karimireddy et al. (2021) do not prove the lower bound. Moreover, there is a recent work, where the better rate is achieved (in terms of the number of iterations): Gorbunov et al. "Secure Distributed Training at Scale." arXiv preprint arXiv:2106.11257 (2021). 6. **Page 7, the sentence above Theorem 7, "... typically holds in most realistic settings (Vaswani et al., 2019".** Vaswani et al. (2019) do not provide evidence that this assumption holds in most realistic settings. In fact, they show it (Strong Growth Condition) for an example of squared hinge-loss in the case of linearly separable data. They also prove that Strong Growth Condition follows from the interpolation condition when the summands in the loss function are smooth and the loss satisfies PL-condition. However, in this case, the known bound upper bound for $B$ is proportional to $\\frac{L}{\\mu}$ where $L$ is the maximal smoothness constant of $f_i$ and $\\mu$ is the PL-parameter of $f$. Therefore, the current theoretical estimates for $B$ are quite large even for simple special cases. 7. **Theorem IV, condition $B^2 < \\frac{1}{3c\\delta}$.** In view of my previous comment, this requirement may imply that $\\delta$ is tiny. I think the discussion of this requirement should be added to the paper. 8. **Missing reference.** This work also addresses the heterogeneous case for Byzantine-robust optimization: *Zhaoxian Wu, Qing Ling, Tianyi Chen, and Georgios B Giannakis. Federated variance-reduced stochastic gradient descent with robustness to byzantine attacks. IEEE Transactions on Signal Processing, 68:4583–4596, 2020.* Moreover, Wu et al. (2020) also prove convergence guarantees under similar assumptions. Therefore, a detailed comparison of the derived results should be added. 9. **Conclusion, the last sentence, "... our results represent a major breakthrough..."** Although the work makes a strong contribution to the field, in my opinion, it cannot be called a breakthrough taking into account that many of the building blocks were known and analyzed to some extent (bucketing, client momentum). 
I think only a couple of papers can be called a breakthrough objectively (e.g., Nesterov's acceleration and discoveries of the same caliber). So, I suggest the authors rewrite the sentence: let the readers decide for themselves whether this paper is a breakthrough or not. ## Questions and comments about the proofs 1. **Proof of Lemma 1, formula for $\\mathbb{E}_{\\pi}[y_i | i \\in \\widetilde{\\mathcal{G}}]$:** this is true, but the detailed derivation should be added. 2. **Lemma 7.** First of all, the lower and upper bounds should be multiplied by $n$. Moreover, I have the following question about the proof of the lower bound: does the described distribution of $y_i$ correspond to any distribution of the initial vectors $x_i$ before the bucketing? This is crucial for the correctness of the lower bound. 3. **Page 20, robustness of Krum.** The first formula is not proven. Moreover, it is not clear what is $S^{\\ast}$. Next, in the formula above "Taking expectation now on both sides yields" the minimization should be taken over the sets $S$ such that $|S| = \\frac{3m}{4}$. In the next formula, it seems that the numerator should contain $4n\\tilde{\\rho}^2$. After that, the sentence "Then, the number of Byzantine workers can be bounded as $|\\tilde{\\mathcal{B}}| \\leq m(1/4 - \\delta)$" should be rewritten as "Then, the number of Byzantine buckets can be bounded as $|\\tilde{\\mathcal{B}}| \\leq m(1/4 - \\nu)$". Finally, in the upper bound for $\\mathbb{E}\\|y_{k^\\ast} - \\bar{x}\\|^2$ the denominator of the first fraction should have $\\nu m$ instead of $\\nu n$. 4. **Page 21, robustness of Geometric median.** In the third sentence, the word "worker" should be replaced by "set"/"bucket". There is also a typo in the formula after the words "Squaring both sides, expanding, ...": the sum in the first row should not have $\\mathbb{E}$ inside. 5. **Appendix D, the proof of Theorem III.** The first formula should contain $\\delta$ instead of $\\hat\\delta$. Next, the formula after the words "Note that the gradient heterogeneity ..." should be supported by the full derivation (or explained). It is true but requires few extra steps. 6. **Proof of Lemma 8** contains several inaccuracies and unexplained derivations. First of all, what is $\\mathbb{E}[\\cdot | i]$? If it is an expectation conditioned on $i$, then $\\mathbb{E}[g_i(x^{t-1}) | i] \\neq \\nabla f_i(x^{t-1})$ since $x^{t-1}$ depends on the stochasticity not related to the choice of $i$. But the proof uses $\\mathbb{E}[g_i(x^{t-1}) | i] = \\nabla f_i(x^{t-1})$. This should be fixed (here and below). Next, it is not clear what is $\\mathbb{E}_i[\\cdot]$. The first formula on page 24 is incorrect due to the same reason as the first formula in the proof. This issue should be fixed as well. The second formula on page 24 is also inaccurate: the RHS should have $\\zeta^2(1 - (1-\\alpha)^t)$. Next, there is a typo in the sentence "This is because the randomness in the sampling...". Finally, the last sentence in the proof the authors claim that it is enough to apply Definition A, but it is not correct: this definition requires $\\mathbb{E}\\|m_i^t - m_j^t\\|^2 \\leq \\rho_t^2$ while the authors prove a weaker result that $\\mathbb{E}_i\\|m_i^t - \\bar{m}^t\\|^2 \\leq \\rho_t^2$. Overall, the proof of Lemma 8 requires a major revision (and, probably, Definition A should also be changed to fit the proof). 7. 
**Lemma 10.** Numerical constants are incorrect: instead of $\\frac{2\\alpha}{5}$ the formula should have $\\frac{5\\alpha}{16}$ and instead of $\\frac{\\alpha}{10}$ one should have $\\frac{3\\alpha}{32}$. 8. **The proof of Theorem V** contains several places that should be better explained (and, most likely, corrected, because of the inaccuracies). First of all, it seems that the authors forgot to upper bound $\\mathbb{E}\\|m_t - \\bar{m}_t\\|^2$: it is contained in the RHS of the second formula in the proof, but it is omitted in the derivations on page 26. Next, the last formula on page 25 is inaccurate: one should have $\\nabla f(x^{t-1})$ in the RHS. Moreover, the next step in the proof is unclear: it seems that a lot of derivations are omitted. For me, it is not clear how the term $\\mathbb{E}\\|m_t - \\bar{m}_t\\|^2$ is handled. Therefore, this derivation should be significantly rewritten and checked. ## Minor comments 1. **$|\\mathcal{B}| = f$.** The symbol $f$ is already used to denote the objective function. I suggest using a different notation for the number of Byzantines. 2. **Lemma 1.** $\\widetilde{\\mathcal{G}}$ is defined only in the proof. The authors should add the definition in the statement of the lemma. 3. **Figure 9** contains low-resolution images. The authors should replace them with the ones with higher resolution. 4. **Page 18, the last formula:** full stop is missing in the end of the of the formula (please, check other formulas that end sentences as well). 5. **Page 19, the second sentence, "... each can belong to only 1 bucket each":** one "each" should be removed. 6. **Lemma 9.** In this lemma, the authors use $m_t$ instead of $m^t$. The notation should be unified. 7. **Lemma 10.** The authors use $x_{t-2}$ and $x^{t-1}$. The position of the indices should be unified. To sum up, I am sure that the paper should be accepted to the conference after minor improvements. If the authors apply all necessary corrections and address my comments properly, I will increase my score: the paper deserves acceptance as a spotlight or even oral talk.
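As a concrete reference for the bucketing step discussed throughout these reviews, the sketch below randomly groups worker gradients into buckets of size s, averages within each bucket, and passes the bucket means to an existing robust aggregator (coordinate-wise median here). It is an illustration of the idea under assumed parameter names and values, not the authors' code.

```python
import numpy as np

def bucketing_cwmed(grads, s=2, seed=0):
    """Randomly bucket gradients in groups of s, average within buckets,
    then take the coordinate-wise median of the bucket means."""
    rng = np.random.default_rng(seed)
    grads = grads[rng.permutation(len(grads))]
    n_buckets = len(grads) // s
    buckets = grads[: n_buckets * s].reshape(n_buckets, s, -1).mean(axis=1)
    return np.median(buckets, axis=0)

rng = np.random.default_rng(4)
honest = rng.normal(loc=1.0, scale=2.0, size=(18, 10))  # heterogeneous honest gradients
byzantine = np.full((2, 10), -50.0)                     # 2 Byzantine workers send outliers
grads = np.vstack([honest, byzantine])

print(grads.mean(axis=0))            # naive mean is dragged to about -4
print(bucketing_cwmed(grads, s=2))   # stays close to the honest mean (~1)
```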
This manuscript proposes and analyses a bucketing method for Byzantine-robustness in non-iid federated learning. The manuscript shows how existing Byzantine-robust methods suffer vulnerabilities when the devices are non-iid, and describes a simple coordinated attack that defeats many existing defenses. In response, the primary algorithmic contribution is a bucketing approach that aggregates subgroups of devices before robust aggregation. This approach is also easily composed with existing Byzantine-robust methods. The manuscript includes an analysis of the performance of the proposed approach, including an information-theoretic lower bound for certain settings. During the review, the main concerns related to the clarity of the technical contributions and to unclear technical statements. The authors responded to these concerns and satisfied the reviewers. After discussion, reviewers are generally strongly positive about the strength of the manuscript's contributions. The authors are reminded to make the final changes agreed upon in the public discussion, e.g., the discussion of the reduction to SGD when $\\delta=0$.
The paper presents a method that jointly optimizes shape, appearance, and foreground segmentation from a monocular video based on neural fields. The authors then fit the physical parameters of the scene. The experiments demonstrate that the method works better than Dynamic NeRF and Non-Rigid NeRF in non-rigid neural rendering. Further, the authors demonstrate interesting video editing experiments based on the learned physics. Overall, the quality of dynamic reconstruction and neural rendering on the author-provided data shows superior performance compared to the prior SOTA. That said, I have a few concerns about the physical component as well as the full pipeline. I am currently on the borderline and open to going either way based on whether the rebuttal addresses my major concerns on "physics" well. **Pros**: - The paper tackles a very interesting yet challenging problem; - The solution is technically sound. - Experiments show superior results to strong competing algorithms. - Code is provided and the video shows impressive editing results. **Cons**: - The method section (particularly the physical component) lacks many details, making it hard to evaluate the technical contribution there. - The experiment section lacks a thorough study of the quality of the physical component. - The two-stage framework is inconsistent with what is claimed in the intro. See above <doc-sep>This paper presents a method for learning the geometry and physics parameters of a dynamic scene. The input source is a monocular RGB video. The paper decouples the learning of dynamic scenes into a static reference neural field and a deformation field. The main structure of the learning part is similar to [57], including the offset field and the divergence field with rigidity masks. On top of these, the paper performs mesh reconstruction for the video sequence and sends those meshes to a differentiable physics simulator to optimize the agreement between the extracted and reconstructed meshes. The method also allows scene editing. **Strength** 1. The results presented are pretty impressive. Both quantitative and qualitative results were shown with comparisons to previous approaches. **Weakness** 1. The paper is hard to follow and not well organized, especially regarding the motivation of the differentiable physics simulator. In fact, most of the exposition should be devoted to the simulator, but the current version goes over this part hastily, which left much confusion for me. For example, 1.1 The physical parameters to learn are unclear. The L_{physics} is only an L2 distance between two meshes with a bending operation. 1.2 The mesh extraction method (Line 177) is unclear. How were the meshes extracted and how were they used as a supervision signal for the physics simulator? 2. It seems that the simulator can be viewed as a post-processing step after D-NeRF optimization (Lines 242, 243). The integration of all the components was not well demonstrated. This paper does not have limitations listed. <doc-sep>The paper proposes a method to recover geometry, motion, and physics parameters from a single video. It recovers geometry and motion with differentiable volume rendering similar to Dynamic NeRF. To recover physics parameters, it uses Diff-PD. It shows results on synthetic and real data with relatively small motion. It also demonstrates applications of user editing. **Originality and Significance** - (+) This work is the first to combine prior works of dynamic NeRF and differentiable physics in a joint optimization framework.
- (-) However, it is not well motivated why we need this joint optimization. Why not run differentiable rendering and simulation in a step-by-step manner? Where does joint optimization help in geometry and motion recovery, as well as in physics parameter recovery? To give readers a better idea, experiments and a thorough analysis are needed. - (-) The challenges in combining differentiable rendering and differentiable physics are not clearly presented. As a result, the technical contribution appears weak. Lines 183-196 sparsely discuss an alternative solution but lack in-depth analysis. How much does the proposed solution outperform the alternative solution in compute and accuracy? **Clarity** - (-) The paper is difficult to follow in general. Some sentences are not logically connected. See questions. **Experiments** - (+) The quantitative results have nice details, benefiting from the NeuS representation. The material parameter estimation experiment is interesting. - (-) The baseline comparison only shows that the proposed method is better but does not provide insight or analysis. What is the difference between the proposed method and the baselines? What do the results suggest? - (-) There is no ablation study. For example, I'd like to understand how important it is to jointly solve for physics parameters together with differentiable rendering. How important are the loss terms? Yes
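To illustrate the kind of second-stage physical-parameter fitting these reviews question (an L_physics given by an L2 distance between simulated and extracted geometry, minimized over a material parameter), here is a toy sketch in which a single stiffness parameter of a damped spring is fit to a target trajectory. The simulator, the finite-difference gradient, and all parameter values are stand-in assumptions; the paper reportedly uses Diff-PD and mesh-level losses instead.

```python
import numpy as np

def simulate(stiffness, steps=200, dt=0.01, x0=1.0, damping=0.3):
    """Semi-implicit Euler rollout of a damped 1-D spring; stands in for the soft body."""
    x, v, traj = x0, 0.0, []
    for _ in range(steps):
        v += dt * (-stiffness * x - damping * v)
        x += dt * v
        traj.append(x)
    return np.array(traj)

target = simulate(stiffness=4.0)        # plays the role of the extracted mesh trajectory

def l_physics(p):
    return np.mean((simulate(p) - target) ** 2)   # L2 between simulated and "extracted"

k, lr, eps = 1.0, 0.5, 1e-4             # initial guess for the material parameter
for _ in range(300):
    grad = (l_physics(k + eps) - l_physics(k - eps)) / (2 * eps)  # finite-difference gradient
    k -= lr * grad

print(round(k, 2))                      # approaches the ground-truth stiffness of 4.0
```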
After the rebuttal, the new version of the paper reads much better and all reviewers were positive, despite some remaining criticisms. Hence, the paper should be accepted.