In this paper, the authors investigate how to utilize large-scale human video to train dexterous robot manipulation skills. To leverage the information from Internet videos, the authors propose a handful of techniques to pre-process the video data and extract action information. The network is then trained on the extracted hand data and deployed on the real robot, with some human demonstrations collected by teleoperation for fine-tuning. Experiments show that the proposed pipeline can solve multiple manipulation tasks.

**Strengths**
- The direction explored in this paper is important. Utilizing Internet video data for robot learning is well motivated, and especially considering the similarity between human and multi-finger robot hands, this direction looks very promising.
- The authors perform experiments on multiple real-world tasks involving pick-and-place, pushing, and rotating objects.

**Weaknesses**
- Although the objective of this paper is very impressive, the experiments cannot support the introduction and there are multiple overclaims.
- Section 4 is titled "VideoDex: Learning Dexterity from YouTube." However, I cannot find any evidence that the authors utilize YouTube data for learning dexterous manipulation. As mentioned in the section on retargeting the wrist pose, ORB-SLAM and the camera's acceleration data are used to compute the camera pose trajectory. This information is not readily available in YouTube data. The experiments and methods are misaligned with this claim.
- In the introduction (line 42), the authors state that "our key insight is to combine these visual and action priors from passive data with the physical constraints of how robots should move in the world." However, the method does not consider the surroundings of the human hand, and the detection results themselves are not accurate. How is physical information incorporated into the training data?
- Missing literature discussion on previous learning-from-video works:
  [1] *DexMV: Imitation Learning for Dexterous Manipulation from Human Videos, 2021*: This paper also focuses on how to learn dexterous manipulation from human videos. The reviewer understands that this prior work uses simulated tasks while the authors focus on real-robot settings, but a similar pipeline is used: estimating the human hand, retargeting, and learning from the retargeted hand pose.
  [2] *The Surprising Effectiveness of Representation Learning for Visual Imitation, 2021*: This paper also focuses on how to leverage video data for better learning. It also uses a GoPro camera to collect a video of each trajectory, which is the same setup as the Ego4D dataset used in this paper, and it shows that learning from this video data can improve the final manipulation performance by a large margin. These prior works use very similar methods to achieve robot learning, and the novelty claims of this paper can also be found in this literature.
- Missing details for retargeting the wrist pose: the detection module FrankMocap is a 2D hand detector, so it is not clear how the authors obtain 3D keypoints of the hand model in the camera frame. Since this section is important to the whole technical approach, it would be better to provide a visualization of the final retargeted robot hand. The wrist pose and robot arm should also be visualized in Figure 3 if they are used in training; if the wrist pose and arm joint pose are not used, how is the action prior pretrained?
- Missing details about transforms: in the equation, it is not clear why the authors use both T and M to denote poses. What is the difference? If M is also an $SE(3)$ transformation, how is the position part of $M_{World}^{C_1}$ computed? Besides, the reviewer cannot find any information on how $T_{Robot}^{World}$ is determined heuristically, in either the main paper or the supplementary. (A minimal sketch of this kind of frame composition is included after this paper's reviews.)

<doc-sep>

The authors demonstrate a system in which they combine a few different components to get interesting supervised-learned open-loop behavior of real robot hands doing several different tasks. In particular, the most notable part of the approach is using videos of human hands as an "action prior" which informs their supervised mapping.

# Strengths
- Good core idea. The overall idea of using action priors from human videos, via hand tracking, to make robots work better, is a good idea. There are a lot of closely related works, but I think they are well referenced in this paper.
- Good execution on several key parts. The execution details of handling moving cameras with camera pose tracking, together with per-frame hand tracking, seem to be well done. I also like just using R3M features out of the box; this is smart, and it is interesting to see external validation.
- Results of real robots with hands doing a variety of things.

# Weaknesses
There are various unscientific elements of this paper in its current form. While the work is interesting, I can't recommend a strong accept for a paper in this form. Hopefully the list below will help the authors improve both this work and their future work. If the authors can address all of the following weaknesses in their rebuttal, which I think is all doable within the scope of a rebuttal, I'd be happy to move from weak accept to strong accept.

1. It seems like the authors are not very upfront about the fact that this method does not produce closed-loop policies. Only on the last page or two is it mentioned that the whole method is open loop. This is fine for studying the task of (i) inputting an image of a scene and (ii) outputting an open-loop trajectory, but it is of course very limiting. The tasks are carefully chosen such that they don't require any closed-loop feedback. This aspect of their approach is not what most researchers in the field would expect, so a common experience would be to look over the first handful of pages of this paper and only at the last page or so realize that this is an open-loop method. Please just make this clear up front.
2. Several false statements in the introduction:
- "To build such robotic agents that can operate anywhere, we need access to a lot of successful robot interaction data in many environments." —> not necessarily true. This is a reasonable hypothesis, but one that isn't tested in this paper, and it can't be stated as a fact.
- "However, deploying inexperienced real world robots to collect experience must require constant supervision which is in feasible." —> also not necessarily true, but also a very reasonable hypothesis. Just say "may require" instead.
- "Most of the inefficiency in robot learning is due to the exponentially large action space." —> an opinion, and can't be stated as fact.
3. "NDPs can produce safe and smooth trajectories" … yes, but this is a meaningless statement. They *can* also produce trajectories that are completely unsafe. There is nothing about NDPs/DMPs that provides safety other than a bit of smoothness that may arguably help.
But there is nothing here that helps with the presence of obstacles in the environment, or humans, etc. This statement probably only serves to confuse/mislead inexperienced readers; please remove/fix it.
4. The paper mentions a "physical" prior as a key component, but it seems this just means that it uses Dynamic Movement Primitives. I'm not sure this is the best way to communicate this. Line 191 also says physically-aware NDPs… they don't know anything about contact physics… maybe just say second-order system or dynamical system or something, maybe physically-inspired, but not physically-aware. And whenever it says, for example in line 269, "baselines without a physical prior", it should instead be clear that this just means they don't use DMPs.
5. Line 213: "is VideoDex able to perform general purpose manipulation?" Since the method is open loop, the answer is no. That's fine, and the results are still impressive, but this should be clarified… it is not something that needs to be empirically evaluated, it's just a result of the formulation.
6. It's very confusing that citation 44 is used open loop… this isn't an intention of that method. Also, is the RNN version closed loop over time? It's not clear. And if it's not? … I'm not sure how the RNN would be any different if it's not used sequentially over time.
7. Please state exactly how many demonstrations were used for the different experiments.
8. In the conclusion… "this is because training RL in the real world is difficult due to hardware limitations." Yes, but this isn't a reason to make the behavior cloning method used open loop instead of closed loop.

## Minor
Don't worry about these too much, but I mention them as opportunities to improve the paper further.
- Ego4D is not cited on page 2 (mentioned but not cited).
- HR() is not defined in an equation. Also, I would recommend not using two letters for a math symbol… it looks like a matrix H multiplied by a matrix R.
- Why use ORB-SLAM3 rather than COLMAP for the poses? COLMAP is already being run for the calibration.

<doc-sep>

VideoDex pretrains a policy network with videos, with gyroscope and accelerometer data, of humans performing a task, then fine-tunes with demonstration trajectories collected by teleoperating the robot. In order to train with the human data, they use the approach from [49] for mapping human pose to robot pose and use ORB-SLAM3 [55] to account for the camera motion. They feed the image data, labeled with the outputted pose, into a ResNet18 [15] backbone initialized with R3M's [6] features and use a Neural Dynamic Policy (NDP) [13] network to generate actions. The paper demonstrates that using human data allows improved performance on 6/7 tasks.

Pros
The paper presents a conceptually simple method of learning from videos of humans. The method is demonstrated on 7 different tasks, outperforming the baselines without human data on 6 of them.

Cons
The writing of the paper is somewhat scattered. The analysis of why the proposed approach using an NDP rather than an MLP works better with human data could be stronger. The paper needs to be much clearer that it relies on gyroscope and accelerometer data from the human videos, which is a barrier to truly using internet-scale data.
This paper studies how to learn dexterous manipulation from human videos. In the initial review, the reviewers appreciated the direction and the real-world experiments but also raised concerns about the need for a special sensor for tracking. During the rebuttal, the authors effectively addressed this concern by providing additional experimental results, and the reviewers were satisfied with the response. The AC would like to recommend acceptance of this paper.
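To make the frame-composition question raised in the first review's "Missing details about transforms" point concrete, here is a minimal sketch of how a wrist pose detected in a moving camera frame could be mapped into a fixed robot frame by chaining homogeneous transforms. All quantities and names below (the SLAM-estimated camera pose, the heuristically chosen world-to-robot transform) are illustrative assumptions, not the paper's actual notation or values.

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical inputs (stand-ins for quantities the paper would estimate):
# camera pose in the world frame at time step 1, e.g. from a SLAM system
T_world_cam1 = make_T(np.eye(3), np.array([0.1, 0.0, 0.5]))
# wrist pose detected in the camera frame at time step 1, e.g. from a hand tracker
T_cam1_wrist = make_T(np.eye(3), np.array([0.0, -0.05, 0.4]))
# fixed world-to-robot-base transform, chosen heuristically (the detail the review asks about)
T_robot_world = make_T(np.eye(3), np.array([-0.3, 0.2, 0.0]))

# Chain the transforms: wrist pose expressed in the robot base frame
T_robot_wrist = T_robot_world @ T_world_cam1 @ T_cam1_wrist
print(T_robot_wrist[:3, 3])  # position of the wrist in the robot frame
```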
**Summary of contributions:** This paper proposes a new framework for designing losses for GANs. The authors show that their framework is quite general and encompasses a number of existing approaches (e.g., the original GAN formulation, the hinge loss, etc.); they also propose a categorization into three different classes and derive new loss functions. They then compare experimentally the different existing losses and the newly proposed losses that fall under their framework.

**Main comment**: The framework proposed in the paper is interesting since it's quite general and the authors are able to derive a large number of existing as well as new losses from it. However, I think the framework has several limitations:
1. The formulation is based on the likelihood ratio, which is only defined if the supports of $g$ and $f$ match; this is known to not be the case in the context of GANs.
2. The benefit of the framework is not clear: while it provides a way to derive new losses, it's not clear what the advantages of the new losses are. Theoretically, the authors argue that this is a hard question to answer, and I agree. The authors try to answer this question through experiments, but I find the experiments not very convincing. In particular, the authors argue that subclass A objectives are more stable based on the CelebA experiment; however, it's not clear to me that the instability is due to a specific choice of objective function; it might just be that the hyper-parameters were slightly off for the other objectives. I believe it would be interesting to understand the results on CelebA better, in particular to show that some objectives are indeed more stable: the authors could vary several hyper-parameters and compare how often each objective is better than the others, which would make the results and conclusions much more convincing.

*Minor comment*: The paper is overall clear, but the clarity of some sections could be improved. I think Theorem 1 would be clearer if stated a bit differently, simply saying that $D=\omega(r)$ maximizes $\phi(D)+r \psi(D)$ and that $r=1$ minimizes $\phi(\omega(r))+r \psi(\omega(r))$. Section 3 is a bit dense, and the subclasses also seem a bit arbitrary. I believe Section 5 could be improved by stating the different observations more clearly; right now it reads more like a description of the figures than a clear statement of the questions that the experiments try to answer and how they answer them.

<doc-sep>

This paper generalizes the min-max problem of GANs to form a richer family of generative adversarial networks. Interestingly, most of the well-known variants of GANs can be found in the spectrum of formulations covered by the family proposed in this work. In terms of modeling, it is evident that the family proposed in the paper is richer than that of f-GAN. The family in this paper is shown to have a connection to WGAN, except that the Lipschitz condition is omitted. However, in light of existing works including f-GAN and other relevant work, the obtained theoretical results are not surprising to me. In addition, apart from providing a richer family, this work does not significantly influence the practical aspects of GANs. I have the following questions:
1. If we solve the min-max problem in (2) subject to the constraint that $\phi$ and $\psi$ satisfy Eq. (9), is it equivalent to minimizing some divergence between the two distributions with pdfs f and g?
2. D(x) is not a typical discriminator whose values lie in $[0,1]$ and provide the probability of distinguishing true and fake data, is it?
D is more similar to a critic whose output values are real-valued, isn't it?

<doc-sep>

Summary
========
In this paper, the authors set out to find which scalar functions can make up the "max" part of the "min-max" GAN objective. They then find such a class of functions, and show that only a ratio between two equal probabilities will be admitted as a solution.

Pros:
====
The paper nicely introduces a different way of seeing GANs, not as a difference between the generated and real data, but as an integral of the ratio between the generated and real distributions times the discriminator. Only if the ratio is 1 everywhere is the discriminator unable to maximize the max part of the GAN objective. Further, I liked the idea that the discriminator shouldn't just decide which class data belongs to, but also estimate the probability ratio. Specifically, in the formulation here, the max part is maximized when $D(X) =\omega(r(X))$, so it is maximized iff $\omega^{-1}(D(x))$ doesn't just classify, but gives the probability ratio between the two classes. If this idea is expanded upon, I think the authors could make a novel contribution.

Cons:
=====
Unfortunately, the authors have neglected to carefully explain how their contribution relates to previous work. It's telling that the paper cites only two papers from 2018, one from 2019 and none from 2020. All other citations are from previous years, even though 2018-2020 has been a time of much GAN research. A key way in which the authors' work hasn't been sufficiently compared to previous work is their main claim "We propose a simple methodology for constructing such [min-max] problems assuring, at the same time, consistency of the corresponding solution." In [Liu], they show a class of functions where consistency is also guaranteed, and the class shown by the authors here is a subset of the class in [Liu]. The details are at the bottom of my review.

Further, many of the techniques in this paper seem very similar to [Song], where they also investigate the f*-GAN divergence. Specifically, the claims they make in Theorem 1 seem very similar to Prop. 2 in [Song]. Also, the change-of-measure trick in the introduction can be found in [Song]. A detailed comparison of this work to that work would also be helpful, since when reading this paper one simply doesn't know what is previous work which has already been done by others and what is the authors' novel contribution. Once the authors address this, and one is confident the contribution is indeed novel, then the submission would be worth considering.

Details of why this is a subset of what's already been shown in [Liu]: There, they examine the difference between the target density $d$ (in this paper $d$ is $f$, but Liu uses $f$ for something else) and the generated density $g$ via $\sup_{f\in\mathcal F}\mathbb E_{x\sim d,y\sim g}[f(x,y)]$, so we find the function $f$ in a class $\mathcal F$ which maximally separates the classes from $d$ and $g$. Now this work proposes to do the same thing, but with $f(x,y)=\phi(D(x)) - \psi(D(y))$ where $\phi(z) = -\int_{\omega^{-1}(0)}^z \omega^{-1}(t)p(t) dt + C_1$ and $\psi(z)=\int_{\omega^{-1}(0)}^z p(t) dt + C_2$. In [Liu] they then split $f(x,y)$ up into two functions $m$ and $r$, such that $f(x,y)=m(x,y) - r(x,y)$ where $m(x,y)$ has the form $m(x,y)=v(x)-v(y)$.
This can be done in your case too, resulting in (here we drop the constants $C_1$ and $C_2$ for simplicity) $v(x) = \int_{\omega^{-1}(0)}^{D(x)} p(t) dt$, $v(y) = \int_{\omega^{-1}(0)}^{D(y)} p(t) dt$ and $r(x,y) = \int_{\omega^{-1}(0)}^{D(x)} (\omega^{-1}(t) + 1) p(t) dt$. Since $D(x)$ must be in $\mathcal J_\omega$, this integral has an infimum, and Theorem 4 from [Liu] can be applied to achieve the same results as in this paper. (This decomposition is restated in display form after this paper's reviews.)

[Song] Song, Jiaming, and Stefano Ermon. "Bridging the Gap Between $f$-GANs and Wasserstein GANs." arXiv preprint arXiv:1910.09779 (2019).
[Liu] Liu, Shuang, Olivier Bousquet, and Kamalika Chaudhuri. "Approximation and convergence properties of generative adversarial learning." Advances in Neural Information Processing Systems. 2017.

<doc-sep>

Overall, this paper contributes to understanding the core of generative models with adversarial optimization problems. It shows diverse possibilities for formulating the generative-model optimization problem that researchers can further investigate for better performance. Also, this paper shows that generative models with previously unexplored losses achieve the best results on various datasets, which demonstrates the potential for future improvements of generative models. Overall, this paper is valuable to the machine learning community (especially for generative models and adversarial training). Below are some concerns about this paper, but these concerns do not outweigh its advantages.

1. Quantitative experiments
- Although the authors provide two tables (Tables 2 and 3), there is not much analysis of the results.
- I understand that it is not easy to determine "when" we should use "which" function. However, it would be great if the authors could identify some trends in the results to demonstrate which types of functions work well with which types of datasets.
- I think it would be great to use some synthetic data with known distribution characteristics as the target distribution to analyze this point.

2. Other types of datasets
- Generative models are widely utilized in computer vision.
- However, there are various other types of datasets that can benefit from generative models, such as tabular data and time-series data.
- It would be good if the authors could provide some simple experiments to demonstrate the method's generalizability.

3. Minor points
- It is not clear how to go from equation (3) to equation (4). I think this is a critical part of the paper; thus, it would be good to explain this part a little more.
- The authors explain the differences between f-GAN and this paper. However, the explanation is not very clear. It would be good to clarify this point to highlight the novelty of the paper.

--------------------------After reading other reviews and rebuttals---------------------
After reading all the reviews from other reviewers and the corresponding rebuttals, I think this is a good paper and good enough to be accepted to ICLR.
1. I think it has a clear difference from f-GAN. It can provide new loss functions for generative models, which can further extend the success of generative models in the future.
2. The experiments are not super interesting, but they at least provide some intuition supporting the authors' claims.
3. General theoretical results for generative models (such as when we should use which loss) are very difficult to obtain. Maybe this paper can provide some intuitions towards solving that larger problem.
But it seems too much to ask this of the authors of this paper. Even without that, I think this paper is still worth presenting to ICLR readers and participants. Therefore, I am standing by my original score (7).
This paper proposed a new family of losses for GANs and showed that this family is quite general and encompasses a number of existing losses as well as some new loss functions. The paper experimentally compared the existing losses and the newly proposed losses. However, the benefit of this family is not theoretically clear, and the work also does not provide especially helpful insights for the practical application of GANs.
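For readability, the decomposition argued in the [Liu] comparison above (second review) can be restated in display form. The notation ($\omega$, $p$, $D$, $C_1$, $C_2$) and the sign conventions are taken directly from that review, not from the paper itself, so they may differ from the paper's own notation:

$$
f(x,y) = \phi(D(x)) - \psi(D(y)), \qquad
\phi(z) = -\int_{\omega^{-1}(0)}^{z} \omega^{-1}(t)\, p(t)\, dt + C_1, \qquad
\psi(z) = \int_{\omega^{-1}(0)}^{z} p(t)\, dt + C_2,
$$

and, dropping the constants, the split $f(x,y) = m(x,y) - r(x,y)$ with $m(x,y) = v(x) - v(y)$ uses

$$
v(x) = \int_{\omega^{-1}(0)}^{D(x)} p(t)\, dt, \qquad
r(x,y) = \int_{\omega^{-1}(0)}^{D(x)} \bigl(\omega^{-1}(t) + 1\bigr)\, p(t)\, dt,
$$

which recovers $\phi(D(x)) - \psi(D(y))$ up to the constants, matching the review's claim that the proposed class is a special case of the [Liu] family.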
This paper addresses the problem of MoE routing under different network topologies by adding another abstraction layer for the topology and designing an auxiliary objective to optimize. Experiments show very good speed improvements compared to strong baselines.

Strengths:
1. The paper offers an important contribution to the AI community at the system level, which is probably not easy for many people working in this field to approach. In fact, in my humble opinion, not many AI people have the opportunity to access detailed hardware information as cloud users, for example on Azure or AWS.
2. The experiments show very good improvement over strong baselines. The system analysis is clearly presented.

Weaknesses:
1. The paper addresses the system level. However, since it claims a significant boost of speed without sacrificing model accuracy, it needs to show the accuracy, e.g., at least the LM-related accuracy with NLP-related metrics.
2. Line 240, which claims "without loss of generality", is probably too strong. My suggestion is that if the solution is good, with the current hardware settings, the authors could run their current code on many other applications for which code is available to further solidify their claims.
3. Likewise, why show the data dispatch distribution of only rank 0 and not the other ranks? If space is limited, appendix space is always there.
4. In the era of GPUs and large data, motivating the work with only 128MB of data is probably insufficient. At least a few GBs, or even better, a combination of different types of data, would make for a stronger motivation.
5. No code is provided. Maybe this is not very relevant since the paper addresses the system level, which makes those impacts hard to judge.

<doc-sep>

The paper proposes a new algorithm to improve the training efficiency of Mixture of Experts models in a distributed training setting by exploiting network topology information. To achieve this, the authors propose a new auxiliary loss term incorporating communication bandwidth to encourage tokens to be routed to closer nodes rather than farther nodes. By applying this new algorithm, the authors claim that they can achieve faster throughput (1.01x - 4.77x) without losing accuracy on several different clusters. As a result, they show faster wall-clock convergence.

Communication overhead is one of the major issues for MoE model training, and this paper proposes a new method that deals with this problem naturally. Given the increased usage of MoE models, this is timely work. Soft guidance seems like a good way to encourage locality of token routing without hurting the original training dynamics. And, as the authors mention, there has not been this kind of topology-aware loss term before, as far as I know. However, there are a few missing details about model configurations and algorithms, asked about in the questions section, and the overall speed gain is minor. This paper focuses on the computation algorithm itself, so it might not have direct societal impact.

<doc-sep>

Sparsely gated Mixture-of-Experts (MoE) plays a vital role in large-scale model training but suffers from both load imbalance and global communication. In addition, the existing even-dispatch approach may cause network contention and worsen these challenges.
This work proposes a topology-aware large-scale MoE training method, called TA-MoE, that can adapt the communication volume to fit the underlying network topology without interfering with model convergence. The key ideas are to abstract the dispatch problem as a communication cost optimization problem and then to add an auxiliary loss with pattern-related coefficients. Experiments show that TA-MoE provides up to 1.61x and 4.77x speedups over DeepSpeed-MoE and FastMoE, respectively, without accuracy loss.

Strengths:
+ This work tackles a very significant and interesting challenge in MoE systems: network topology may worsen the communication and load-balance problems during dispatch in MoE.
+ The paper is well organized and easy to follow.
+ The proposed TA-MoE method is simple and effective: extensive experiments show that TA-MoE is able to offer a noticeable speedup over the state of the art under different hardware and model configurations.

Weaknesses:
- The experiments are mostly done with GPT models; it would be better to have models with different neural architectures in the evaluation benchmark. It is unclear how TA-MoE works on MoE models other than GPT.

The authors have adequately addressed the limitations and potential negative societal impact of their work.
Mixture-of-Experts (MoE) models have demonstrated a lot of success recently. To further improve upon the existing literature, this paper studies MoE routing for different network topologies. This is essentially a way to deal with the communication overhead of MoE training. The strategy is to add another layer on top for the topology, along with a corresponding objective to optimize. The authors also provide experiments demonstrating improved speed of convergence. The reviewers were in general positive and liked the idea of the paper. They did, however, raise issues about the lack of a clear demonstration that accuracy is not compromised, the lack of large data, and a few other more technical concerns. The reviewers' concerns seem to have been more or less addressed by the authors. My overall assessment of the paper is positive. I think the general premise of the paper is interesting and the paper has interesting ideas. I do agree, however, that the experiments need to be more thorough. I am recommending acceptance but request that the authors follow the reviewers' comments to improve their experimental results.
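The TA-MoE reviews above describe an auxiliary loss that incorporates communication bandwidth so that tokens prefer nearby experts. The following is a minimal, generic sketch of such a topology-aware penalty, not the paper's actual TA-MoE objective; the gate shapes and the cost vector are illustrative assumptions.

```python
import numpy as np

def topology_aware_aux_loss(gate_probs, comm_cost):
    """
    gate_probs: (num_tokens, num_experts) softmax routing probabilities on one rank.
    comm_cost:  (num_experts,) relative cost of sending a token from this rank to
                the rank hosting each expert (e.g. derived from measured bandwidth).
    Returns a scalar: the expected communication cost of the current routing,
    which could be added to the task loss with a small coefficient.
    """
    expert_load = gate_probs.mean(axis=0)          # fraction of tokens routed to each expert
    return float(np.dot(expert_load, comm_cost))   # cheap (local) links contribute less

# Toy example: 4 experts, two local (cheap) and two remote (expensive).
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 4))
gate_probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
comm_cost = np.array([0.1, 0.1, 1.0, 1.0])
print(topology_aware_aux_loss(gate_probs, comm_cost))
```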
This paper discusses applications of variants of RNNs and gated CNNs to acoustic modeling in embedded speech recognition systems, and the main focus of the paper is computational (memory) efficiency when the system is deployed. The paper describes well the problem with the current LSTM, especially focusing on the recurrent connection matrix operations, which are a bottleneck in this scenario, and introduces variants of RNNs (e.g., QRNN). These variants alone may not yield enough performance compared with the LSTM, but 1-D convolution and/or a deep structure helps to avoid the degradation.

One of the biggest issues with this paper is that it uses CTC as the acoustic model, while many real speech recognition applications and the major open-source toolkit (Kaldi) still use hybrid HMM/DNN (TDNN, LSTM, CNN, etc.) systems. Therefore, the paper's claim based on CTC is not aligned with current application trends. (This may change in the near future, but hybrid systems are still dominant.) For example, the WSJ WER performance listed in Table 3 is easily obtained by a simple feed-forward DNN in a hybrid system. The latest lattice-free MMI with TDNN can achieve better performance (~2.X% WER), and its decoding is quite fast compared with LSTM. The authors should consider this current situation of state-of-the-art speech recognition. Also, the techniques described in the paper are all based on existing techniques, and the paper lacks technical novelty.

Other comments:
- In the Abstract and the first part of the Introduction: as I mentioned above, CTC-based character-prediction modeling is not a major acoustic model.
- The paper needs some discussion of TDNN, which is a major (fast and accurate) acoustic model in Kaldi.
- p.4, first line, "and represents element-wise multiplication": the element-wise multiplication operation first appears in Eq. (1), and it should be explained there.
- Section 3.2: I don't fully understand the claims of this experiment based on TIMIT, as it is phoneme recognition and not directly related to the real application, which I think is the main target of this paper. My suggestion is to place these TIMIT-based experiments as a preliminary experiment to investigate the variants of RNNs or gated CNNs before the WSJ experiments. (I am not saying that Section 3.2 is useless. This analysis is actually valuable, and the suggested change in the position of the TIMIT experiment can avoid some confusion about the main target of this paper.)

<doc-sep>

This paper presents a study on efficient acoustic modeling using neural network-based models. Four approaches are presented and evaluated: diag LSTM, QRNN, Gated ConvNet, and adding a 1D convolution layer. The evaluation is done on an ASR task using WSJ and on a phoneme classification task using the TIMIT corpus. The study shows that inference speed is improved with comparable or better performance than the standard LSTM model. The findings presented in this paper are interesting and quite useful when one wants to implement an LSTM-based acoustic model on mobile devices. The paper is well written and easy to read. The main issue of this paper is the lack of novelty: the three evaluated approaches (Diag LSTM, QRNN and Gated ConvNet) are not novel, and the only novelty is the addition of a 1D convolution, which is not enough for a conference like ICLR.

Minor comments on the experiments:
* The network quantization approach has been shown to lead to efficient neural networks; could the authors provide a comparison between their approach and the quantization approach?
* On the TIMIT experiment, the authors could add a decoder and use the PER metric instead of frame accuracy, so they could provide a comparison with the literature.
* WSJ and TIMIT are quite small corpora compared to what is available; maybe the authors should consider using larger corpora like LibriSpeech. It would be interesting to see the performance of the presented approaches there.

Overall, this paper feels more like a technical report: the findings could be useful, but its novelty is too limited for ICLR. Hence I argue for rejection, and suggest that the authors consider submitting the paper to a speech conference like ICASSP.

<doc-sep>

This paper investigates a number of techniques and neural network architectures for embedded acoustic modeling. The goal is to reduce memory access and make computation efficient while sustaining good ASR performance. Overall, the paper is well motivated and well written. However, I have the following concerns.
1. It is not clear from the paper whether both training and inference are conducted on embedded devices, or only inference. I assume it is the latter but can't find it explicitly mentioned in the paper.
2. The exploration carried out in the paper is more at the system level, and the novelty is not overwhelmingly significant.
3. My major concern is that the reported WERs on WSJ and the phoneme classification accuracy are quite off. WERs of 20%-30% on WSJ do not seem usable in real applications. Honestly, I don't even think this performance is better than well-trained GMM-HMM acoustic models using a Viterbi decoder. Furthermore, there are no clear winners across the investigated architectures in terms of performance. One question is: if one wants to deploy such an on-device system, which architecture should be chosen?
4. A more general comment on the work explored in the paper. First of all, the on-device memory issue puts a heavy constraint on the capacity of acoustic models, which will significantly hurt the modeling capability of DNN-based acoustic models. Deep learning acoustic models can outperform GMM-HMM because they can use large model capacity with very deep and complex architectures when a large amount of training data is available. Second, for CTC, when the training data is limited, its performance is far worse than the hybrid DNN-HMM model, let alone in a pure end-to-end fashion without an external LM and dictionary. If WFST-based decoders (a composition of WFSTs for the LM, dictionary and deblank/repetition) are used, then the memory issue will surface again.
In this work, the authors conduct experiments using variants of RNNs and gated CNNs on a speech recognition task, motivated by the goal of reducing the computational requirements when deploying these models on mobile devices. While this is an important concern for the practical deployment of ASR systems, the main concern expressed by the reviewers is that the work lacks novelty. Further, the authors chose to investigate CTC-based systems which predict characters. These models are not state-of-the-art for ASR, and as such it is hard to judge the impact of this work on a state-of-the-art embedded ASR system. Finally, it would be beneficial to replicate the results on a much larger corpus such as LibriSpeech or Switchboard. Based on the unanimous decision from the reviewers, the AC agrees that the work, in its present form, should be rejected.
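Several of the acoustic-modeling reviews above turn on the point that LSTM inference is dominated by the recurrent weight-matrix operations, which QRNN-style layers avoid by making the recurrence element-wise. Below is a minimal numpy sketch of QRNN fo-pooling as a generic illustration of that point, not the paper's exact architecture; the gate activations that a 1-D convolution would normally produce in parallel are replaced here by random values.

```python
import numpy as np

def qrnn_fo_pooling(z, f, o):
    """
    z, f, o: (T, H) candidate, forget-gate and output-gate activations, e.g. produced
             by a 1-D convolution over the input sequence (computed once, in parallel).
    The recurrence below is purely element-wise, so the per-step cost is O(H)
    rather than the O(H^2) matrix-vector product of an LSTM's recurrent connection.
    """
    T, H = z.shape
    c = np.zeros(H)
    h = np.zeros((T, H))
    for t in range(T):
        c = f[t] * c + (1.0 - f[t]) * z[t]   # element-wise memory update
        h[t] = o[t] * c                      # gated output
    return h

rng = np.random.default_rng(0)
T, H = 5, 8
z = np.tanh(rng.normal(size=(T, H)))
f = 1.0 / (1.0 + np.exp(-rng.normal(size=(T, H))))   # sigmoid forget gates
o = 1.0 / (1.0 + np.exp(-rng.normal(size=(T, H))))   # sigmoid output gates
print(qrnn_fo_pooling(z, f, o).shape)
```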
The authors introduce the problem of telegraphic summarization: given a sentence, we want to reduce its size while retaining its meaning, with no penalty for grammatical mistakes. The main application presented by the authors is summarizing fictional stories and plays. The setting proposed by the authors prescribes that the summarized sentence be obtained from the input sentence by dropping some words. So, for example, the simplest baseline for this problem would consist of simply dropping stop words.

The proposed approach is basically an auto-encoder, consisting of a 2-step encoder-decoder network: in the first step, the sentence is encoded into a vector which is in turn decoded into a (smooth) indicator vector to mask words in the sentence; in the second step, the masked sentence is encoded into a vector, which is in turn decoded into the output (summarized) sentence. The optimization is a tradeoff between recoverability of the input sentence and the norm of the indicator vector (how many words are dropped). In order for the network not to learn repetitive masking patterns (e.g., drop the first half of the sentence, or drop every other word), an additional loss is introduced that penalizes keeping easily inferable words or dropping hard-to-infer words.

Concerns:
- The problem doesn't seem to be well motivated. Also, the length of the obtained summarized sentences is ~70% that of the original sentences, which makes the summaries seem not very useful.
- The goal does not seem to justify the proposed complex architecture, especially considering that simply dropping stop words already works quite well (a minimal sketch of this baseline is included after this paper's reviews).
- In order for the presented architecture to beat the simple stop-words baseline, an additional loss (L4, the linkage loss) with "retention weights" that need to be tuned manually (as hyper-parameters) is required.
- There is not enough discussion of the related work by Malireddy et al., which is extremely similar to this paper. A good part of that work overlaps with this paper.
- A comparison with the literature on abstractive summarization is completely missing.

Minor comments:
- Figure 1: Indicator Encoder should be Indicator Decoder.
- Are negations part of your stop words? From your discussion, you should make sure that "not", "don't", "doesn't", ... do not belong to your stop-word set.
- How did you optimize the hyper-parameters r (desired compression), the regularization weights, and the retention weights?
- Were pre-trained word embeddings used as initialization?
- What is the average compression of the gold sentences?

<doc-sep>

The authors consider the problem of telegraphic sentence compression: they train a system in an unsupervised fashion to predict which words can be dropped from a sentence without drastic loss of information. To that end, they propose a new auto-encoding-type architecture which uses the extracted words as a latent code and, most importantly, a linkage loss which relates a word's perplexity given the summary of its left context to its likelihood of being retained. The model itself is sober and well motivated, and the linkage loss is, to the best of my knowledge, original. The authors show that their method outperforms some simple baselines in terms of ROUGE and compression on a small human-annotated test set. The paper is generally well written, although the initial presentation of the model could be made a little clearer (it is not obvious from the text that the Decoder takes the text as input -- Figure 2 helps, but comes a couple of pages later).
However, the authors fail to appropriately justify the choice of their hyper-parameters (e.g., "The optimum value of r for our experiments was found to be 0.65", "the best value of b was found to be 5", "The weights λ1, λ2, λ3, and λ4 have been set to 3, 2, 50 and 3 respectively for our experiments" -> how is "best" measured on the validation set, which does not have gold references?). The choice of the specific sparsity constraint (one could as well imagine using a simple L1 regularization for the binarization loss) and of $\chi_i$ (why not simply use the likelihood?) could also be better motivated. The model also relies on hand-crafted rules (Section 3.3) whose effect needs to be made more evident. What weights are used in practice? How were they chosen ("We observed that..." needs to be further developed)? The authors claim that "the quantitative scores are not affected significantly", but that is presumably only the ROUGE score; what about the annotators' preferences?

Most importantly, however, the task of telegraphic sentence compression, whose usefulness is not a priori obvious, is barely motivated. The authors refer to "Malireddy et al. (2018)" for a justification, but it is important to note that the latter provides a telegraphic summary of a whole document, with a compression factor of 0.37. The claim is that the concatenation of the telegraphic sentence compressions can act as a summary of a whole document, but given that the compression for individual sentences is closer to 0.69, this is yet to be demonstrated. And even if that were true, it is unclear whether the cognitive load of reading a sequence of telegraphic sentences would be that much lower than that of reading the original text.

This paper presents some interesting ideas and is well written, but the content is not quite sufficient for publication. In addition to the clarifications and justifications requested above, the authors are encouraged to apply their methods to full-length documents, which would make for a more substantial contribution.

<doc-sep>

The paper explores an unsupervised deep learning model for extractive telegraphic summaries, which extracts text fragments (e.g., fragments of a sentence) as summaries. The paper is in general well structured and easy to follow. However, I think the submission does not have enough content to be accepted to the conference.

First, in terms of methodology (as described in Section 3), the paper has little novelty. There has been intensive study of various deep learning models for summarization. The models described in the paper contain little novelty compared with previous work using autoencoders and LSTMs for both extractive and abstractive summarization.

Second, the paper claims contributions on applying deep learning models to telegraphic summarization, but the advantage is not well demonstrated. For example, the advantage of the resulting summaries is not compared with state-of-the-art sentence compression models through intrinsic evaluation or (probably better) extrinsic evaluation. (By the way, it is interesting that the paper argues for the advantage of using telegraphic summaries for fictional stories but actually gives an example that also looks very typical of news articles (the "earthquake Tokyo 12 dead" example).)
Third, there has been much work on speech summarization that produces summaries in the "telegraphic" style (this is natural, considering that speech transcripts are often non-grammatical, and "telegraphic"-style summaries that focus on choosing informative fragments actually result in usable summaries). The authors may consider discussing such work and comparing the proposed methods to it.
This paper presents methods for telegraphic summarization, a task that generates extremely short summaries. There are concerns about the utility of the task in general, and also about the novelty of the modeling framework. There is overall consensus between reviewers regarding the paper's assessment, and the feedback is lukewarm.
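For reference, the stop-word-dropping baseline that the first review above says already works quite well can be written in a few lines. The stop-word list below is purely illustrative, and negations are kept deliberately, reflecting that review's caution about not dropping "not"/"don't"/"doesn't".

```python
# A minimal sketch of the stop-word-dropping baseline for telegraphic compression.
# The stop-word list is illustrative only; negations are kept so the meaning of a
# sentence is not inverted (a point raised in the first review above).
STOP_WORDS = {"a", "an", "the", "is", "are", "was", "were", "of", "to", "in",
              "on", "at", "and", "that", "this", "it", "with", "for", "has", "had"}
NEGATIONS = {"not", "no", "never", "n't"}

def telegraphic_compress(sentence):
    tokens = sentence.split()
    kept = [t for t in tokens
            if t.lower().strip(".,!?") not in STOP_WORDS
            or t.lower().strip(".,!?") in NEGATIONS]
    return " ".join(kept)

print(telegraphic_compress("The king did not return to the castle in the morning."))
# -> "king did not return castle morning."
```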
This work tackles the task of forecasting dynamics in different domains simultaneously. Using an encoder which is trained to determine the task, the inferred latent vector is then used to adapt a forecasting network to the task at hand. Experiments on three datasets linked to fluid dynamics are then conducted to assess the proposed model.

Pros:
- This is an interesting problem which is quite timely given the development of the field of forecasting physical dynamics using neural networks.
- The proposed solution seems sound and principled. Moreover, it is well motivated and the writing is quite clear.
- The different additions made to the forecaster network are also quite interesting; I especially liked the AdaPad solution for dealing with boundary conditions. Conducting an ablation study also considerably strengthens the paper.

Cons:
- All experiments are conducted on somewhat similar datasets, which are based on fluid dynamics PDEs. It would be nice to see how the model deals with other families of dynamics, especially given that the contributions of this work seem geared towards practical considerations.
- The setting of the experiments should be more precise and additional details should be given: how are the different datasets constructed, what supervision is there exactly regarding the different tasks, how many domains are there in each dataset and what are the differences, how balanced are the different domains, etc.

This is good work on a timely subject. The contribution is not groundbreaking but should be significant enough to warrant acceptance.

<doc-sep>

This paper addresses the problem of learning a deep learning model for dynamics forecasting which generalizes to changes in dynamics. These changes can be induced by different parameters, boundary conditions or external forces. The proposed model takes a meta-learning approach and proposes to partition data into different heterogeneous domains. It consists of two components: an encoder which infers time-invariant features given observed domain data, and a forecaster which predicts the dynamics given these features. The paper evaluates the proposed approach on several datasets and provides some theoretical insights.

Strengths:
* This paper addresses a new and interesting generalization problem for dynamics forecasting.
* It proposes a model to address different changes in the dynamics.
* Evaluation is done on relevant datasets with several baselines and some ablation studies.

Weaknesses:
* The applicability of the proposed approach is restricted to problems where relevant weak supervision from task parameters is available. This seems like an important limitation in real-world applications. How valid is this scenario? The question of choosing relevant parameters for weak supervision is important for applying this model to other datasets, yet the definition of these parameters is unclear; how robust is the model when the chosen parameters are not useful? The performance of Wrong_enc (Table 2) suggests that the model will then fail.
* It is unclear why the model can adapt to changing boundary conditions with AdaPad, as it generates them from features $\hat{z}_c$ extracted from data inside the domain and weakly supervised by quantities unrelated to the boundary condition (e.g., mean vorticity or season).
* The theoretical analysis, inspired by existing work in multi-task learning / domain adaptation, has some limitations and does not add much value to the paper.
I have some concerns with the domain adaptation upper bound on the target error in Theorem 3.4 and Proposition 3.5. This upper bound is not minimized, so the target risk can be high, i.e., the model is not guaranteed to adapt well. Moreover, the validity of the theoretical analysis is unclear, as several assumptions may not be verified, e.g., the bounded loss in Theorem 3.1 and Proposition 3.3, and Lipschitz continuity in Proposition 3.5. Theorem 3.4 requires that the assumptions of Theorem 2 in Redko et al. 2017 are verified, yet these assumptions are not mentioned in the paper.
* Some ablation studies are missing: 1) the contribution of each term in equation (2), and 2) the dimensionality of $\hat{z}_c$, which is fixed arbitrarily.

Other questions:
* It would be good to better explain how the experiments include changing boundary conditions between domains. The testing scenarios only mention different initial conditions or external forces.
* Why do the baselines ResNet-c and UNet-c not adapt well despite having access to relevant weak supervision (p. 8)? This is the same information used by the proposed model to adapt.
* How redundant is the time-invariance term (3rd term in equation (2)) with the invariances enforced in the architecture of the encoder?

This paper tackles a new generalization problem for dynamics forecasting and proposes a model supported by experimental results. However, this model can only be applied to problems with relevant weak supervision, which may not always be available in practice. Moreover, the definition of relevant parameters is unclear and the robustness of the model to the choice of these parameters is not measured, which may restrict its application to other datasets. There are also unclarities about the ability of the model to adapt to changing boundary conditions with AdaPad, some ablation studies are missing, and I have concerns about the theoretical analysis, which brings limited value to the paper. For this reason, I am giving this paper a weak reject.

--- Post-Rebuttal comments ---
I thank the authors for their response. After studying it, the theoretical results still have some major issues and feel disconnected from the model. In particular, key assumptions are not enforced in the model (e.g., Lipschitz continuity) and the generalization error of the model in Thm. 3.3 is uncontrolled, as the upper bound is not minimized by the model (the Wasserstein distance between domains is fixed and is high in all generality). Its use for the model is thus not very convincing. On practical aspects, the capability of handling boundary conditions should be better justified and evaluated. For this reason, I keep my score unchanged and recommend rejecting this paper.

<doc-sep>

The paper suggests a remedy for a common problem in dynamics forecasting, which is the lack of generalization to other domains/tasks. The authors suggest tackling this via a two-component architecture, one for learning the task and one for forecasting. In empirical experiments the authors show the practical feasibility of their approach. As a caveat: I'm not an expert in the area, so my review consequently remains at a superficial level, for which I apologize. I overall liked the paper quite a bit: the question discussed is relevant, the empirical evaluation is very good, the theoretical results seem as relevant as they would get, and the related work discussed is crisply presented and relevant. One question I have is that the results in Table 1 are overwhelmingly good, with only UNet-c coming close.
Do we know for these tasks what the "theoretical" upper bound (e.g., given the right PDE system) would be? Is it even computationally possible to compute this upper bound? I'm wondering how much of a gap there still is to close. In a similar vein, what is the intuition behind DyAd + ResNet mostly being better than DyAd + UNet? Are there some complementary strengths between DyAd and ResNet that this combination can exploit better than DyAd + UNet? This is a good paper that I'd like to see accepted for its combination of theoretical results, empirical results and methodological novelty.

<doc-sep>

This paper is interested in learning general forecasting models for physical dynamical processes. The paper proposes a decomposition of such a model into an encoder that captures the innate properties of the system, and a forecaster that autoregressively makes predictions conditioned on the encoded properties. This is framed as a meta-learning approach, and is shown to substantially outperform single-task approaches and off-the-shelf meta-learning approaches across multiple datasets. The paper provides some theoretical analysis, and qualitative analysis of what is learned. Overall, the paper shows that learning shared models across domains is an important and fruitful way forward for modeling physical processes with machine learning.

Strengths:
- The problem statement is well-motivated. Learning generalizable deep learning models across diverse settings is an important open problem.
- Experiments use interesting and real-world problems.
- Results are strong and appear reliable.
- AdaPad is an interesting idea specialized to the case of complex physical systems, since it is designed to address boundary condition issues.
- Visualizations show the model is behaving essentially as expected.
- Although there are many design choices that go into the model, each such design choice is well-motivated.
- Aside from some aspects of the theory section, the exposition is generally quite clear and well-organized.
- Assumptions are made clear.
- The fact that the encoder can be trained first and independently of the forecaster should be very useful for further rapid developments.
- Great to see the ESE metric used as a complement to raw error.
- The table in the Appendix showing alternatives to AdaIn is very useful in increasing confidence in AdaIn for this application.

Weaknesses:
- The biggest concern is the theory section. The multi-task learning and domain adaptation results are general results that are not adequately connected back to the specific model and problem the paper is considering. Yes, it is widely accepted that multi-task learning and domain adaptation can work well, especially when tasks are related in some measurable way, and it can be a useful exercise to restate existing theory in the language of your framework, but what (if any) novel claims is the theory implying? Are there any predictions the theory makes about the particular approach which can be validated in experiments?
- The theoretical bound on error that decomposes the error of the encoder and forecaster is similarly lacking in its interpretation. Yes, it can be a useful exercise to show that the error can be decomposed along the lines of the model, but does this bound somehow suggest that the decomposition results in lower error than a monolithic model? Or is it showing that you can work independently on improving either part of the model and improve the overall error? Where is the potential for practical value in this theorem?
- For example, one place where there could be potential to validate the theory is to check in experiments that task pairs with lower Wasserstein distance actually support better domain adaptation. However, the Introduction of the paper acknowledges that "Even the slightest change in these features may lead to vastly different phenomena", but doesn't that suggest that Wasserstein distance may not be a useful metric here for measuring task similarity? Couldn't turbulence limit the usefulness of such a metric?
- Proposition 3.3 says the bound is "strictly looser" than the bound in Theorem 3.1. For clarity, it would be very helpful to combine the bounds into an inequality showing this strictly-looser property. It is not immediately apparent from the statement of the theorems, since the inequalities contain different terms.
- As is, the theory doesn't really hurt the paper, but, for the amount of space dedicated to it, it doesn't add much. The paper could be substantially improved by either (1) adding interpretation/predictions/validation of the theory that connects it back to the approach in the paper, or (2) removing some of the less useful parts of the theory from the main paper to free up space for more of the interesting analysis of what the model actually learns.
- Also, it is interesting but a bit counter-intuitive that the theory section relies on results in multi-task learning and domain adaptation, instead of theoretical results from the meta-learning literature. As is, since the paper relies on multi-task learning so much, it is missing references to related work in multi-task learning (i.e., related work outside of modeling physical dynamical systems).
- Similarly, it would be helpful to mention why there are no comparisons to multi-task learning or domain adaptation methods in the experiments. Why do they not apply here?
- The three terms in the loss function of the encoder are well-motivated, but it is not clear how important each term is. Ablations on these terms would be very informative for the reader to understand what's generally required to train an encoder.
- In Section 5 it says "VarSepNet employs separation of variables through different loss terms". What are these loss terms and how are they different from the ones in the paper?
- In the ablations with no encoder, how do AdaIn and AdaPad work? Don't they require some z? Where does this come from if not from the encoder?
- U-Net does seem like it could be at a qualitative disadvantage compared to DyAd in terms of number of parameters, especially since U-Net-c is one of the more competitive baselines. It would be useful to see results for a larger U-Net-c, or at least some evidence that the U-Net is not underfitting the training data.

Additional question of interest: Overall, this is a very important and potentially deep line of research. The most exciting promise of such work is the potential of revealing shared regularities across vastly disparate dynamical systems, that is, across complex physical processes. And it seems the approach in the paper could be particularly well-suited to such research. For example, the authors could train a single encoder+forecaster model across all the datasets in the paper, and analyze relationships in the learned encodings across datasets.
Training models across highly diverse domains has been tried in multi-task learning (e.g., "Pretrained Transformers as Universal Computation Engines" arXiv 2021, "The Traveling Observer Model" ICLR 2021, "Modular Universal Reparameterization" NeurIPS 2019, "One Model to Learn Them All" arXiv 2017). Is such a generalization part of the longer-term vision for this line of work?

Minor comments:
- In Section 2.4, some references would be useful in the sentence ending with "…the combined force equation."
- There are several inconsistencies in the use of parentheses in citations throughout the paper. Correcting these would improve readability.
- In the last sentence of the first paragraph of Section 4, the word "task" could be changed to something like "problem", since "task" has another meaning in the paper.
- Should the 7.26 for U-Net-c on Ocean Currents future be bolded?
- In the last paragraph of Section 5.1: "We tried to vary…" -> "We tried varying…" or "We varied…".
- Appendix A.2.1: the footnote for PhiFlow is on the wrong page.
- Appendix A.2.1: the last paragraph seems like it should be the first paragraph of A.2.2.
- In the proof of Proposition B.5, there is an extra or missing set of norm bars in the first inequality.

Overall, this is very interesting and useful work. The problem is well-motivated, and the approach and experiments are carefully designed and generally convincing. If the concerns about the theory are addressed, I would be happy to increase my score. Adding the additional info and experiments requested could increase it further, and make this a particularly strong paper.
The paper addresses the problem of domain generalization for learning spatio-temporal dynamics. It proposes a solution where an encoder captures some characteristics of a given environment, and a forecaster autoregressively predicts future dynamics conditioned on the characteristics learned by the encoder. In other words, the forecaster learns the general form of the dynamics, parameterized by an environment representation extracted by the encoder. The conditioning is implemented via an adaptive instance normalization mechanism. A form of padding is also introduced in order to take into account boundary conditions. The two components, encoder and forecaster, are trained sequentially. This approach is cast in a meta-learning framework. Theoretical results inspired by multi-task learning and domain adaptation are also demonstrated. The model is evaluated and compared to different baselines on three problems, and for two different settings: varying initial conditions with a given dynamics, and dynamics with varying parameters.

This is a borderline paper. It targets a timely and important problem of domain generalization for dynamic environments. The proposed solution is original and compares well experimentally to several baselines. It allows for better generalization performance for the two test settings considered. In the current version, the paper however suffers from different weaknesses. First, there is the imprecision of the arguments and the description of the experiments. Some of the arguments and claims are vague and sometimes abusive, not backed up by evidence. For example, a central claim is that the encoder learns time-invariant quantities characterizing the environment, whereas the learned representations in fact change with a time shift in the input for any environment. The same goes for the argument developed for the padding construction. It is claimed to model boundary conditions, but this is not supported by any theoretical or empirical evidence. As noted by the reviewers, the theoretical analysis is disconnected from the algorithmic and experimental developments and does not bring much additional value to the paper. What is more embarrassing is that some of the claims in this section are overstated and induce incorrect conclusions. From Theorem 3.1 and Proposition 3.3, the authors suggest that multi-task learning leads to better generalization than learning independently, while this is not formally guaranteed by the results (this is acknowledged by the authors in a later comment). Besides, the conditions of validity are not discussed, while they seem to only cover situations for which the train and the test distributions are the same. The same holds for the second theoretical result (Theorem 3.4). It is claimed that this result supports the authors’ idea of training the encoder and forecaster sequentially, while it does not. Besides, the bounds in this result cannot be controlled, as noted by the reviewers, and are not useful in practice.

Overall, the paper addresses an important topic and proposes new solutions. The results are promising and it is indeed an interesting contribution. However, inaccuracies and incorrect or exaggerated claims make it difficult to accept the current version of the article. The article would make a strong and innovative contribution if it were written as a purely experimental article with a detailed description of the experiments and comparisons.
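To make the conditioning mechanism described in the reviews above concrete, here is a minimal sketch of adaptive-instance-normalization-style conditioning of forecaster features on an environment code. All module names, shapes, and the (1 + scale) parameterization are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AdaINConditioning(nn.Module):
    """Minimal AdaIN-style conditioning: an environment code z rescales and
    shifts instance-normalized forecaster features (illustrative sketch only)."""
    def __init__(self, num_channels: int, z_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        # Map the environment code to per-channel scale and shift.
        self.to_scale_shift = nn.Linear(z_dim, 2 * num_channels)

    def forward(self, h: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # h: (batch, channels, H, W) forecaster features; z: (batch, z_dim) environment code
        scale, shift = self.to_scale_shift(z).chunk(2, dim=-1)
        scale = scale.unsqueeze(-1).unsqueeze(-1)
        shift = shift.unsqueeze(-1).unsqueeze(-1)
        return self.norm(h) * (1 + scale) + shift

# Usage: condition a convolutional forecaster block on z inferred by an encoder.
block = AdaINConditioning(num_channels=64, z_dim=16)
h = torch.randn(8, 64, 32, 32)   # intermediate forecaster features
z = torch.randn(8, 16)           # environment representation from the encoder
h_cond = block(h, z)
```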
The paper studies the Mixture of Experts (MoE) architecture, which has become popular in NLP recently as a way to increase the capacity of a network without increasing depth. The authors aim to develop a theoretical understanding of the MoE model / conditional computation. The authors begin with a formal model for conditionally activated sparse models which can capture common existing MoE models. The authors use LSH (locality-sensitive hashing) for the gating in MoE and use this to derive a few theoretical results regarding the ability to approximate real-valued Lipschitz functions in R^d. The authors perform some small-scale experiments to back up and verify their theoretical findings.

######## POST REBUTTAL ######
Thanks to the authors for running the experiments and for sharing the insights. Like I said earlier, it is important to study the theoretical underpinnings of MoEs. This paper starts with it, although as a researcher actively working on MoEs, I do not think that the paper exactly answers the key questions. The results proven are expected and not surprising, but on the other hand, as pointed out by the authors, non-trivial to prove. So I would say that it is a decent paper at the moment, and would suggest the authors keep going in this direction to develop a more thorough understanding so that they can uncover some more fundamental results.

Strengths:
1. Relevant problem
- MoEs are becoming very popular in NLP. Thus it is important to study their underlying theoretical and working mechanisms. The paper tackles this relevant problem.
2. LSH (locality-sensitive hashing)
- The authors propose to use LSH for gating. This can actually be quite promising in my opinion because it takes the local vicinity into consideration.
3. Well written
- It is a very well written paper and is easy to follow.
- I really like the limitations section. It is good to see for a change that there is somebody who knows and writes the limitations of their work.

Weaknesses:
1. Weak experimental evaluation
- I think that LSH is in fact good. The authors need to perform more experiments to show its effectiveness.
- I am not suggesting going to huge model sizes, but at least medium-scale models and datasets should be evaluated.

Very well discussed.

<doc-sep>From my understanding, the main contribution of the paper is as follows:
1. The authors capture the sparsity structure of these popular transformers. They model the transformers as DSM models.
2. They show that the DSM model can represent the LSH model.
3. They provide theory on the LSH model. These theories can be used to interpret the success of Switch and Scaling Transformers.
4. Motivated by the theory, they propose a new LSH-based model and run toy experiments to show its efficacy.

See my comments below. I have the following concerns:
1. In the current manuscript, the connection between contributions 1 ==> 2 ==> 3 ==> 4 is still a bit vague (see my comments in the **Summary** part). When reading the current version, it is easy to get confused about the main contribution of the paper: is it "explaining why a general sparse model like the Scaling Transformer works well"? Or is it "designing new LSH methods to save inference costs"? For me, the contribution of the "explaining..." part outweighs the "designing..." part. This is because the authors didn't provide any real-data experiments on LSH. If no real-data experiment is provided, LSH is a pure theoretical tool to prove the theory on DSM and the "designing..." part is minor.
However, the current script over-emphasizes the "designing..." part, causing great confusion for me.
2. The performance of DSM on CIFAR-10 does not quite match the theory. To support the theory, I would suggest the authors run experiments on transformer-based NLP tasks instead of CV tasks on CIFAR.
3. All the theories are built on the Lipschitz assumption on the target function. It would be better if the authors verified the Lipschitz condition. Is it a necessary condition, or is it due to the limitation of the theory? If it is the latter case, what is the main technical challenge in relaxing this assumption?
4. In line 255, why is a random d-degree polynomial a Lipschitz function?

<doc-sep>This paper provides a theoretical treatment of modern sparsely activated networks with the Data-dependent Sparse Model (DSM). The authors show that the DSM model can simulate modern sparsely activated networks and the locality sensitive hashing (LSH) model. It is proven in the paper that the LSH model can be as expressive as a dense network for approximating real-valued Lipschitz functions while requiring much fewer FLOPs. Furthermore, experiments are conducted to validate the theoretical findings on Lipschitz target functions as well as the CIFAR-10 dataset.

Strengths:
1. The paper is the first work to treat sparsely activated networks theoretically, thus novel to me.
2. The paper is well-organized.

Weaknesses:
1. The theoretical analysis is based on the assumption of an L-Lipschitz target function, and I am not sure how significant the work is. Furthermore, the neural network size used in the experiments is also very small.
2. I am not very sure about the relation between the theoretical findings and the experiments. Theorem 4.1 and Theorem 4.3 conclude that LSH-based sparsely activated networks can be as expressive as their dense counterparts when their size and number of samples match. As for size, the LSH model is measured using hash table size and the dense model is measured using width. As a result, in the experiments, I would expect to see that the LSH model is as good as the dense ones when # buckets == width of the dense network. However, in the figures, the width of dense networks is compared to the number of activated units.
3. For the comparison of DSM and dense networks, the authors mention that 'Sparsity helps in both DSM and LSH models, ... using the same number of activated units.' However, it seems that the comparison may be unfair. Specifically, with 64 activated units, DSM chooses the best 64 units out of a total of 1024 units while the dense one has only 64 units in total. It seems unsurprising to me that DSM is better than its dense counterpart.

Yes, the authors have addressed the limitations and potential negative societal impact of the work.

<doc-sep>This paper proposes the DSM model to sparsely approximate Lipschitz functions. The authors theoretically demonstrate their method in a wide range of scenarios, from one-layer shallow neural networks to Switch and Scaling Transformers. The original idea (but I am not sure, as I am not familiar with this domain) of interpreting DSM as KNN is very interesting. However, :(, the experiment setting is a bit weak and seems to have been finished in a rush.

#### I have increased the rating from 4 to 5 after rebuttal.

# Clarity:
## Strengths:
This paper offers a detailed introduction to the LSH model and other background knowledge.
## Weaknesses:
If the authors can further unify the usage of notation, the overall readability will be better.
For example, the authors use s for the sparsity parameter, but in Sec. 3.0 it switches to k, and later, k is used as the intrinsic dimension of the input distributions. The usage of the notation A^x is also a bit confusing. I also recommend the authors add some figures to illustrate their ideas. For example, Euclidean LSH and Sec. 3.0 could be well explained by figures.

# Originality:
I am not familiar with this domain, so I may not be able to judge this point. But still, I find the argument in Sec. 3.0 interesting. It points out a potential direction in that we may interpret neural networks as KNN operators. The current content can be enhanced in some directions. The authors may try to remove the constraints of unit B rows; typical network blocks like CNNs, attention networks, and residual connections do not have such unit structures. Also, extending it to deep neural networks would be more attractive.

# Quality:
## Strengths:
The theoretical analysis is careful and in-depth. But some settings and assumptions need either explanation to justify their necessity or adjustment to cater to practical demands.
## Weaknesses:
### Weird experiment setting:
1. Now that the main point is the efficiency of the proposed method, why not report inference time? FLOPs is a good metric but not enough.
2. Needs a more detailed ablation study. Specifically, a detailed study on how the sparsity parameter s influences the model accuracy, approximation MSE, and inference time.
3. Now that the paper puts much attention on discussing input distributions, the authors should also use input distributions on a low-dimensional manifold in R^n. Currently it is unclear how the input is sampled.
4. From a numerical perspective, polynomials may not be good choices, as they tend to be extremely ill-conditioned when the degree is high. The authors may consider B-splines or Bézier curves for some realistic industrial scenarios. Also, random neural networks, even shallow ones, may be good candidates.
### Theoretical settings need clarification and adjustments.
1. It is a bit weird to assume the input distribution is uniform; can this be relaxed to absolutely continuous with respect to the uniform distribution (Lebesgue measure)?
2. Now that the proof is based on Euclidean LSH, this should be clearly stated in the theorems.

# Significance:
The theoretical results are good, but they need stronger empirical evidence to support them.
1. I strongly encourage the authors to add figures to illustrate their concepts and ideas.
2. Experiments on SOTA neural networks would be much appreciated.
3. Needs a more detailed ablation study to justify the theoretical results.
The paper provides a theoretical analysis of sparsely activated neural networks. The authors introduce LSH (locality-sensitive hashing) as a new routing function for theoretical analysis and prove a few results on representation power and inference time. One reviewer pointed out that the theoretical results are expected and do not provide much interesting insight, which I agree with. Nevertheless, this is one of the early papers that study sparsely activated networks and may serve as a starting point. I recommend acceptance.
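As background for the LSH-based gating discussed in these reviews, the following is a generic sketch of Euclidean-LSH routing to per-bucket experts. The hash family is the standard h(x) = floor((a·x + b)/r); the expert parameterization and the folding into a finite table are assumptions for illustration, not the exact construction analyzed in the paper.

```python
import numpy as np

class LSHRouter:
    """Generic Euclidean-LSH routing sketch: hash an input to a bucket and
    dispatch it to the expert owning that bucket. Illustrative only."""
    def __init__(self, dim: int, num_buckets: int, bucket_width: float = 1.0, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=dim)          # random projection direction
        self.b = rng.uniform(0, bucket_width)  # random offset
        self.r = bucket_width
        self.num_buckets = num_buckets

    def bucket(self, x: np.ndarray) -> int:
        # Standard Euclidean LSH: h(x) = floor((a.x + b) / r), folded into a finite table.
        return int(np.floor((self.a @ x + self.b) / self.r)) % self.num_buckets

# One tiny "expert" per bucket; here each expert is just an affine map.
dim, num_buckets = 8, 16
router = LSHRouter(dim, num_buckets)
experts = [(np.random.randn(dim, 1), np.random.randn(1)) for _ in range(num_buckets)]

x = np.random.randn(dim)
W, c = experts[router.bucket(x)]   # only one expert is activated per input
y = x @ W + c
```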
The paper proposes a new approach to inject knowledge into pre-trained language representation models (PLMs). Instead of tuning the original PLM parameters, the paper plugs in new adapters for knowledge injection to avoid catastrophic forgetting.

Pros:
* Injecting knowledge into PLMs is an advanced topic. The authors focus on the catastrophic forgetting problem during knowledge injection.
* The evaluation is solid. The authors evaluate their model on three downstream tasks and show that the adapters improve performance.
* The paper is well written and can be easily understood.

Cons:
* The approach is simple but achieves good performance over a variety of tasks. I appreciate that the authors conduct the knowledge probing experiment, but its P@1 is quite low and worse than BERT. Some more explanation is expected.

<doc-sep>Summary: The paper proposes a novel approach to incorporating different types of world knowledge sources contained in texts, such as facts or linguistic syntax. To do this, they introduce additional transformer layers between the layers of a pre-trained language model such as RoBERTa and term this model "K-Adapters", where the K stands for K different streams of knowledge.

Pros:
- Incorporating different sources of information into a pre-trained model such as RoBERTa is an interesting idea.
- The proposed approach is simple and interesting, and it scales to many different types of information as the different adapters can be trained in parallel with the weights of the pre-trained LM being fixed.
- Performance gains on different classification tasks such as entity typing, question answering, and relation classification highlight the utility of the approach.

Cons:
- Sec 1: Introduction: In the introduction, there are multiple mentions of the phrase “rich knowledge”, but it is unclear what the authors mean by that in the context of pre-trained language models. Some recent works such as “https://arxiv.org/abs/2002.08910, https://arxiv.org/abs/1909.01066” suggest that pre-trained language models do indeed contain a lot of world factual knowledge. Hence, the statement in the paper that pre-trained LMs lack world knowledge contradicts these works.
- There is also a frequent mention of catastrophic forgetting of knowledge during the finetuning step. I tend to disagree that this is necessarily bad for a pre-trained model, because it has been shown that finetuned pre-trained LMs perform well in open-domain question answering, where some degree of world knowledge is needed.
- Furthermore, producing entangled representations may not necessarily be a negative thing, if multi-task learning approaches are able to show an increase in performance due to knowledge injection.
- In Table 1, a dependency parser doesn't really fall under the same class of knowledge sources as WordNet or Wikidata. A dependency parser may be able to provide some sort of syntactic structure for the underlying text. Moreover, such syntactic information is not always generalizable to different domains and thus has the limitation of not being accurate enough.
- The Introduction section is not well-motivated and does not present convincing arguments as to why external knowledge infusion is really required in some tasks. It just states that knowledge infusion using the K-Adapter model outperforms RoBERTa models on different tasks.
- In Section 3.1, not enough space has been allocated to explain the Adapter model in detail.
If the authors had used mathematical notation or equations for the explanation, then it would have been much clearer.
- In Section 4, it is mentioned that they select three downstream tasks for evaluating their models. However, the paper doesn't provide justification as to why these tasks were selected, how these tasks highlight the importance of the K-Adapter model, etc.
- In the results of Table 2 and Table 4, as the performance improvements are somewhat marginal, it is important to know whether these improvements are statistically significant or not. The paper doesn't report whether the results are from a single run or the mean of multiple runs.
- I have concerns about data leakage during the pre-training step. As the factual adapter makes use of a supervised relation classification dataset (T-REx), I feel that there might be some overlap between the entity typing and relation classification datasets used for evaluating the performance of the model. The authors should present an analysis of what degree of overlap, if any, is present between the pre-training and evaluation tasks.
- The paper lacks a detailed analysis section that could explain which test examples are correctly classified when using the K-Adapter model in tasks like relation classification and entity typing, compared to other baseline approaches such as RoBERTa and RoBERTa + multitask. Currently, the paper places too much emphasis on raw numbers and performance improvements in various tasks.
- The results of the probing experiments suggest that the BERT-large model vastly outperforms the K-Adapter model on the Google-RE and T-REx probing datasets. This raises an important question over the validity of the results on different downstream tasks. For a fair comparison with baselines, the authors should compare the performance of the K-Adapter model with BERT-large + multitask across different tasks.
- In almost all of the experiments, the authors use RoBERTa as the underlying pre-trained language model. To demonstrate generalization to different pre-trained LMs, the paper should also evaluate the K-Adapter model when BERT-large or T5-large is used as the underlying model in place of RoBERTa.

Grammar errors:
- page 1, 3rd line from bottom: remains -> retains
- section 3.3, information that concerned -> information that is concerned
- section 3.4, father index is commonly referred to as head index of a word.

<doc-sep>#### Summary
This submission proposes a general method (K-Adapter) for injecting knowledge (either factual or linguistic) into pre-trained language models. The key architectural property of the approach is that K-Adapters are isolated from one another, allowing the use of multiple adapters without interference. These K-Adapter modules take hidden layer _inputs_ from the main pre-trained model (e.g., BERT), and are pre-trained on their knowledge outputs before a fine-tuning phase where they feed into a joint downstream task-specific model along with the pre-trained model outputs.

#### Strong and weak points
The set of baselines seems strong, and the experimental results consistently show that using _either_ factual or linguistic knowledge K-Adapters improves performance, while using _both_ yields the best results. The LAMA probing experiment is a nice sanity or validation test that the knowledge injection is achieving the desired effect. Being able to "hard-code" knowledge into the model in this way could be useful in a variety of applications.
It is overselling it a bit to say the model captures "richer" commonsense knowledge, however. The basic architectural idea is well-motivated and simple, in a good way. The supplemental materials mostly provide additional reproducibility details on architectures, hardware used, learning rates, etc.

#### Recommendation (accept or reject) with one or two key reasons for this choice.
I recommend acceptance. The proposed approach yields strong quantitative performance against solid and relevant baselines, and the LAMA experiments give some support to the hypothesis that it is doing so by capturing knowledge as intended. The general design pattern could spur further innovations in modular network designs or knowledge capture strategies as well.

#### Questions to clarify / additional evidence required
"BERT-MK inegrates fact triples from the knowledge graph." - how? I can follow the cite, but this sentence provides little information.
"inject different types of knowledge independently" - is it correct to say, then, that, by design, there can be no _beneficial_ interactions or synergies among different types of knowledge? Alternatively, in the fine-tuning phase, could different adapters interact or affect each other via the downstream coupling in the task-specific layers? Is this observed in practice?
How should the reader think about the relative magnitude of the presented improvements? At one point I see "K-ADAPTER (F+L) makes significant improvement of ..." but I believe "significance" is only meant colloquially here.
Section 3.1: how was this structure chosen, and what was the motivation or intuition here? What limits, if any, do you foresee with the use of separate parallel "knowledge modules" like this? Could we use 10, 100, 1000 K-Adapters?

#### Additional feedback to improve
It would be helpful to cite Ling and Weld 2012 (or similar) for the definition of "loose" micro/macro F1, or briefly explain it inline in the evaluation setup. Likewise for the "catastrophic forgetting" phenomenon affecting other knowledge injection attempts - is there some previous work explicitly demonstrating this problem when using multiple knowledge sources? If not, it would have been interesting to have an experiment of this sort in this work.

<doc-sep>##########################################################################
Reasons for score: The authors propose a plug-in-based adapter approach that allows for task-specific parameter settings without updating the original pre-trained model, which prevents the potential for catastrophic forgetting while also removing the need for separate models for separate tasks. The work seems to build off Houlsby 19, as briefly cited, but its plug-in nature seems easier to adopt for multiple tasks. There is, however, no direct comparison with it or Cooper et al. 19 ( https://arxiv.org/pdf/1902.02671.pdf ), which makes it difficult to assess. The way in which the adapters were pretrained was a little unclear to me. The experiments are extensive and well done.
##########################################################################
Pros:
1) The number of experiments run (3 tasks on 6 datasets total) is extensive, and they show that the K-Adapter approach benefits from the factual adapter in particular, giving better performance over RoBERTa (with or without multi-task learning).
2) The proposed adapter seems concise and easily expanded to incorporate other knowledge sources (though there are a few details which could help clarify things; see #2 in the next section).
3) The probing task using LAMA to show how much factual knowledge has been memorized by the K-Adapter (RoBERTa + facAdapter) was well done, and its discussion was very interesting.
##########################################################################
Cons:
1) The proposed adapter solution is somewhat similar in nature to that of Houlsby 19 (and to a lesser extent Cooper 19 ( https://arxiv.org/pdf/1902.02671.pdf )), and it feels like an omission not to discuss Houlsby 19 and make experimental comparisons against it, discussing pros/cons more thoroughly, especially since the extensive experiments in this work show that the linguistic adapter usually only adds a tenth of a percentage point when using RoBERTa with a single factual adapter. In this single-adapter case it is not immediately evident how these models would differ and what the advantage is. Both Houlsby and Cooper are evaluated on the GLUE benchmark and provide code.
2) I was a little confused as to how the adapters were specifically pre-trained; this might be a question about Figure 1b, but sections 3.3 and 3.4 could also have been expanded to clarify it a bit. It is my understanding that when pre-training the facAdapter on the relation classification task, for instance, in Section 3.3, for a given example in T-REx-rc, two entities and context are passed into RoBERTa, whose weights remain fixed while those of the KIA units of the facAdapter are updated, and the final hidden representations of RoBERTa and the facAdapter are concatenated to form an input representation of the entities given their context, which is then used for the actual task. Is my understanding correct? If so, I'm confused as to how the subsequent pooling and concatenation operations are done. Clarifying this process for 3.3 and 3.4 would be beneficial for clarity purposes, and it is not discussed in the supplemental materials either.
3) Your RoBERTa-large baseline already beats most of what you are comparing against, which is fine as your adapters give gains (again, particularly the facAdapter), but it also would have been interesting to see what sort of gains would have been achieved using a different, less powerful base model such as RoBERTa-small or plain BERT, and additionally some sort of ablation testing or explanation of the choices made for the adapter networks themselves (i.e., N=2 Transformer layers, hidden layer size, etc.), though it is possible this could be left for future work. For clarity, in Figure 2 where you show N x Transformer Layer (and N=2), I'm assuming the first Transformer layer feeds directly into the second Transformer layer, which then feeds into the Up Projection layer, correct? If so, it might be better just to show two Transformer layers like that instead, and additionally name the projection layers Up and Down Projection Layer, respectively.
##########################################################################
Questions during rebuttal period:
Please address and clarify the cons above
#########################################################################
Small typos:
In Abstract: we propose K-ADAPTER, which remains the original parameters ....
"remains" -> "keeps" or "retains"
In Introduction: they fail to continual learning .....
"fail at continual learning"
It remains the original representation ....
"remains" -> "leaves" (pg2) while remaining the original parameters of RoBERTa frozen... "remaining" -> "keeping" Section 3: It remains the original representation .... "remains" -> "keeps" 3.1: Different from Houlsby et al. (2019) add adapter layers -> "In contrast to Houlsby et al. (2019) who add adapter layers" 3.3: all relations having lees than .... "lees" -> "less"
The paper augments pre-trained language models by introducing “adapters”, where each adapter is another language model pre-trained for a specific knowledge source (e.g., Wikidata) and an objective (e.g., relation classification). The representation from each adapter is concatenated to the representation from the generic LM. Specifically, they introduce two adapters, “factual” (mostly derived from Wikipedia) and “linguistic” (from a dependency parser), and the experiments show modest improvements over various benchmarks. This is a borderline paper, as both the method and the experiments are reasonable yet not very novel or strong. The clarity of the paper can be improved (as pointed out by R1 and R4): without any mathematical notation, model details have to be inferred from figures. The novelty is limited and the experimental rigor can be improved (i.e., for many settings, gains are fairly small and no variance is reported).
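For readers unfamiliar with the adapter-in-parallel pattern described in these reviews, here is a minimal sketch of a knowledge adapter that runs alongside a frozen backbone and whose output is concatenated with the backbone's output for a task head. The module shapes, the single tap point, and all names are illustrative assumptions rather than the paper's exact K-Adapter architecture.

```python
import torch
import torch.nn as nn

class ParallelKnowledgeAdapter(nn.Module):
    """Sketch of an adapter that runs alongside a frozen backbone: it reads an
    intermediate hidden state, transforms it, and its output is concatenated
    with the backbone's final output for the downstream head."""
    def __init__(self, hidden_dim: int, adapter_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, adapter_dim)
        self.block = nn.TransformerEncoderLayer(d_model=adapter_dim, nhead=4, batch_first=True)
        self.up = nn.Linear(adapter_dim, hidden_dim)

    def forward(self, backbone_hidden: torch.Tensor) -> torch.Tensor:
        return self.up(self.block(self.down(backbone_hidden)))

hidden_dim = 768
adapter = ParallelKnowledgeAdapter(hidden_dim, adapter_dim=128)

# Pretend these came from a frozen pre-trained LM (whose weights are not updated).
intermediate_hidden = torch.randn(2, 16, hidden_dim)  # (batch, seq_len, hidden)
final_hidden = torch.randn(2, 16, hidden_dim)

# Only the adapter (and a task head) would be trained; the backbone stays frozen.
combined = torch.cat([final_hidden, adapter(intermediate_hidden)], dim=-1)
task_head = nn.Linear(2 * hidden_dim, 42)  # 42 output classes chosen arbitrarily for the example
logits = task_head(combined)
```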
This paper presents a formal analysis of the impact of graph reordering (i.e., ordering the in-memory storage sequence of graph node embeddings) on the cache efficiency of near neighbour searches using near neighbour graphs. The connection between the graph ordering (i.e., the memory layout of the graph nodes) and the cache complexity is formulated, based on which the cache complexity of Gorder (Wei et al., 2016) is analysed and two other graph orderings (Corder and Porder) are proposed. Experimental results on large real datasets confirm the effectiveness of the analysis and the importance of graph reordering for near neighbour search efficiency.

Strengths:
1. The paper presents a solid analysis of the impact of graph reordering (i.e., ordering the in-memory storage sequence of graph node embeddings) on the cache efficiency of near neighbour searches using near neighbour graphs.
2. Experimental results on large real datasets are presented, with an interesting discussion of the results.
3. Source code of the paper is available.

Weaknesses:
1. The proposed method Corder is ineffective, as discussed in the experimental results. Perhaps this method can be dropped to make room for adding more details on the experimental settings to the main content of the paper.
2. The other proposed method, Porder, is only somewhat better than the existing method Gorder (Table 5).
3. A lot of content has been included as supplementary material, which makes the paper somewhat difficult to follow.

Typos: "degree-based groupingFaldu et al. (2019)"; "Studies show that Studies show that"

The paper includes an interesting discussion of the results and limitations.

<doc-sep>The paper studies the practical performance of different methods for rearranging the node layout in memory for graph-based approximate nearest neighbor search algorithms (HNSW specifically). It also proposes a simple modification of the existing methods based on query profiling. The paper claims up to 30-50% improvement in query latency on 100M datasets and is accompanied by code (though it is not clear whether it will be released).

Strengths:
- A simple change that is very likely to be agnostic to the type of graph algorithm used.
- Sizable gains from the algorithm on large datasets.
- Source code (hopefully it will be released with the publication - this is not clear from the text).
- Long discussions of the results.

Weaknesses:
- The paper does not have a clear description of the methods (no pseudocode or even a text description of step-by-step actions). I guess readers are expected to follow the cited papers, but IMO there should be at least a sketch of the best solution.
- The source of the 1000 queries for POrder is not discussed in the paper. Were they taken from the train set or the test set?
- There is no implementation of POrder in the code, which is confusing. The code also has bugs (e.g. a nonexistent “-openmp” flag), it does not compile without fixing the dependencies (this could have been avoided by providing a Dockerfile), and there are errors in its description.
- Judging from the code, the construction is done in a single thread. If the index construction time provided in the paper is for this regime (which is not clear from the paper, but seems to be the case), it should be redone in the multi-threaded regime.

I do not think there is any negative societal impact.

<doc-sep>This paper proposes to use graph reordering to improve the cache locality of graph-based nearest neighbor search algorithms.
An analysis is conducted to show why graph reordering works, and the experiments show that graph reordering significantly improves performance.

Strengths
1. Graph-based nearest neighbor search algorithms are very popular, and graph reordering improves their performance.
2. Although the conditions are restricted, the analysis explains why graph reordering improves performance.
3. The profiling-based ordering scheme makes sense.

Weaknesses
1. The authors should dig deeper to show why reordering improves performance. Currently, the explanations are vague (e.g., due to the software prefetcher and auxiliary functions). The authors may want to make them more specific by explaining how the software prefetcher works, or conduct some experiments to show where the reduction comes from.
2. I believe that graph reordering works for graph-based algorithms in general. But it would help to show by experiments that graph reordering improves the performance of algorithms other than HNSW (e.g., NSG or NGT).
3. The legends in the figures are too small to read.

Yes
This paper studies how to order the in-memory storage sequence of graph node embeddings. There was a positive consensus that the studied problem is interesting and the results are sufficiently discussed. There were some concerns about missing results, which were addressed during the rebuttal.
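As an illustration of what a profiling-based reordering could look like, the sketch below counts node visits over a set of sample queries and relabels frequently visited nodes with small, contiguous IDs. This is only one plausible heuristic under stated assumptions; it is not necessarily the Gorder, Corder, or Porder algorithm evaluated in the paper.

```python
import numpy as np

def profile_based_reorder(graph: dict, run_query, sample_queries):
    """Relabel graph nodes so that frequently visited nodes get small, contiguous IDs.
    `graph` maps node id (0..n-1) -> list of neighbor ids; `run_query(graph, q)` must
    return the list of node ids visited while answering one query. This is an
    illustrative heuristic, not the ordering algorithm proposed in the paper."""
    visits = np.zeros(len(graph), dtype=np.int64)
    for q in sample_queries:
        for node in run_query(graph, q):
            visits[node] += 1
    # Hot nodes first: they end up close together in memory, improving cache reuse.
    new_order = np.argsort(-visits)
    old_to_new = {int(old): new for new, old in enumerate(new_order)}
    reordered = {
        old_to_new[u]: [old_to_new[v] for v in nbrs] for u, nbrs in graph.items()
    }
    return reordered, old_to_new
```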
This paper introduces the PAC-Bayes Information Bottleneck (PIB). Starting from the generalization bound in Eq. 4, which shows that the generalization gap is upper bounded by a function of I(w;S), the authors propose PIB, which has an additional regularization term of \\beta I(w;S). Since the computation of I(w;S) is intractable, the authors then make several assumptions to simplify its computation, arriving at an estimate of I(w;S) by \\tilde{I}(w;S) (Eq. 15), and use SGLD to compute it in practice. Experiments show that (1) there is a two-phase transition in SGD training as indicated by \\tilde{I}(w;S); (2) \\tilde{I}(w;S) seems to correlate with the generalization gap under different variations of experiment hyperparameters: number of hidden layers, noise ratio, and number of random-label data; (3) it improves performance compared to l2 and dropout regularization.

Strengths: This paper addresses an interesting and important problem. The proposed PIB is novel. The experiments show that the proposed \\tilde{I}(w;S) correlates with the generalization gap and helps improve performance.

Weaknesses: In order to make the computation of I(w;S) tractable, the authors make several important assumptions. It would strengthen the paper a lot if it discussed, and performed experiments to show, whether these assumptions are valid in the experiments the authors run. Furthermore, in Section 5.5, can the authors show the generalization gap (together with the train and test accuracy) with the different regularizers? Ideally we should see that with PIB as the objective, the generalization gap is much smaller than with the other methods. With this, we can then be confident that the improvement is due to a reduced generalization gap instead of better training.

In summary, this paper is novel, but the experiments should be strengthened as detailed in the main review.

<doc-sep>This paper proposes a new version of the Information Bottleneck objective for training neural networks. This is in part motivated by previously derived PAC-Bayes bounds on the generalization error that are proportional to the square root of the mutual information of the weights and the training dataset, I(w;S). Thus this new information bottleneck objective attempts to minimize both the empirical risk and this mutual information. The paper derives a computationally tractable algorithm for estimating I(w;S); this algorithm is then used to show that this quantity is inversely correlated with generalization loss on a variety of neural network architectures.

Strengths:
- The paper proposes an exciting general principle of deep learning. As far as I know, the contributions here are novel and will be of high interest to the community.
- The authors build on previous work by showing how their IB objective addresses the shortcomings of previous work in this area.
- It is very well written. This is a highly technical paper, and the details are presented in a careful and thoughtful way.
- The experiments are well done and the results support the conclusions. Specifically, this objective is motivated by a PAC bound (the tightness of which is not clear) and various approximations are used to estimate I(w;S) (the accuracy of these is not immediately clear). The experiments address these issues by showing that the motivation and the approximations are reasonable.

Weaknesses:
- The limitations of this method are not discussed clearly.
For example, the paper provides an algorithm for sampling from the weight posterior p(w|S), but how does this compare computationally to standard training of a neural network, or to estimating the posterior in a Bayesian neural network?
- There are some minor grammatical and spelling typos throughout, e.g. "infection point".

An excellent paper with exciting ideas, clear presentation, and technical depth.

<doc-sep>The authors propose a new interpretation of the Information Bottleneck (IB), dubbed the PAC-Bayes Information Bottleneck (PIB). Where the IB is defined wrt the mutual information between feature representations $T$ and either inputs $X$ or targets $Y$, PIB is defined wrt the empirical risk over the dataset $S = \\\\{X_i, Y_i\\\\}_{i=1}^n$ and the mutual information between the model parameters and $S$, $I(\\mathbf{w}; S)$, or the information stored in the weights. The authors show that PIB is a bound on the generalization error. The authors derive a tractable estimator for $I(\\mathbf{w}; S)$. The authors present an approximate inference method for $p(\\mathbf{w} \\mid S)$ that utilizes the proposed PIB. The authors show that PIB reflects the hypothesized two-phase fitting and compression modes of neural networks across different activation functions, network depths, and network widths. They show that $I(\\mathbf{w}; S)$ yields a good estimator of the generalization error that is robust to label noise. They show that their inference method improves generalization across several benchmark datasets.

**Strengths**
To my knowledge this paper has significant technical and empirical novelty. The authors do a good job of summarizing previous work and differentiating their contributions. This is not my area of expertise, but the derivation of PIB, the estimator for $I(\\mathbf{w}; S)$, and the proposed optimal posterior all look novel and correct. The experiments are well done, thorough, and support the main claims of the paper.

**Weaknesses**
The main weaknesses of this paper are language and clarity. I would recommend a thorough Grammarly pass or perhaps external advice. The graphs should report means and standard error intervals over multiple random seeds.

**Some specifics**
- Some technical terms are used before definition (e.g., phase transition in the abstract)
- In the abstract, IB cannot explain; maybe IB theory can, as you use in the intro.
- IIW and $I(\\mathbf{w}; S)$ are redundant; I would recommend just using $I(\\mathbf{w}; S)$
- "Third, mutual information becomes trivial in deterministic cases." Please elaborate / cite.
- "(2) we derive a solution to the intractable...," can something intractable have a solution? Maybe "approximation" is better.
- "optimal posterior of PIB," does PIB have a posterior, or is the posterior over the weights?
- Figure 2. IIW only shows the compression phase; can the loss also be included in these plots?

To my knowledge this paper demonstrates significant technical and empirical novelty. I believe the main weaknesses can be addressed prior to publication. Therefore I recommend acceptance. However, I am not an expert on this topic, so my confidence is only a 2.

<doc-sep>The authors propose a formulation of the information bottleneck problem, replacing the mutual information between the input X and latent representation Z with the mutual information between the sample S and the weights W obtained from the sample. They derive closed-form solutions for this mutual information in the Gaussian setting and propose an SGLD scheme to optimize the objective.
Using this objective and optimization algorithm, the authors investigate several interesting scenarios, including different activation functions and noisy labels. The paper is generally well written and treats an interesting and timely topic. The idea of limiting the information about the sample that is contained in the weights is not new (the authors cite several works that bound the generalization error via this information), but this is the first time that I have seen a corresponding cost function implemented in practice. There are, however, a few issues that are not perfectly clear to me:
- The authors cite the literature stating that the generalization gap is limited by I(S;W) if the loss is sigma-sub-Gaussian. Does this hold for the negative log-likelihood in (6)? Also, in (6), is S a random variable or not? (4) requires that I(S;W) is computed as an expectation over p(S), while the log-likelihood in (6) is an expectation over P(w|S), i.e., not over p(S) but over a concrete S. How can this be understood?
- Connected to this, is it safe to call the resulting cost function an information bottleneck cost function? I assume that this is better called an IIW regularization rather than an IB cost. The IB cost is a very specific formulation that combines a mutual information cost with a mutual information utility, whereas here we have a general cost with an additional mutual information cost as a regularization term.
- The authors correctly claim that I(X;T) becomes trivial if the network is deterministic. More precisely, this mutual information becomes infinite in many of these cases (see "Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle" by Amjad and Geiger). I believe that this result carries over to I(S;W) being infinite for deterministic learning algorithms. This may not hold for all learning algorithms, but certainly for some. My own gut feeling suggests that I(S;W) is infinite for SGD with finitely many epochs (e.g., by the fact that there are only combinatorially many options to shuffle the batches), but that it is finite for SGLD, where noise is added to the weights. It is therefore not clear to me in which settings the analysis in Section 3 is a valid approximation. In other words, in which settings is the assumption that p(w|S) is Gaussian valid? Does it only hold for SGLD?
- Connected to the point above: In which cases is the assumption that p(w) is Gaussian a valid approximation?
- Can this Gaussian assumption about p(w) be used to bound I(S;W) from above? (E.g., for a Gaussian learning algorithm, can it be shown that the term I(S;W) is maximized if W becomes Gaussian as well? This would at least be intuitive from a channel coding perspective, where a Gaussian channel input is known to maximize the mutual information through a Gaussian channel, and is then known to produce a Gaussian channel output.)
- In Algorithm 1, please compare line 9 with your equation (15). In (15), you sum over squared inner products. In lines 9 and 11, you square the resulting sum of inner products. Is this difference intended, and if so, how can it be explained? Also, do we have $T_0 \\ge T_1$ in Algorithm 1?
- In Fig. 1, why is the mutual information I(W;S) evaluated for different layers? What is the exact meaning of splitting the IIW between layers in terms of the generalization bound? I was assuming that the generalization bounds all consider the entire set of weights, and that the proposed PIB should do so as well.
- Also in Fig. 1, the discussion of the inflection point is not fully clear.
- In Section 5.1, it is claimed that the variance of the information explodes. Can this be made more precise (e.g., by writing down the mathematical symbol for this variance)? Furthermore, this is not shown in the figures, if I remember correctly.
- In all figures, why is the mutual information I(S;W) so small? These numbers do not seem right. I would assume that it is necessary to "learn" more than $10^{-2}$ bits/nats to successfully solve a classification problem. In other words, while the general trend of IIW seems to be correct, I am not convinced of the correctness of the absolute numbers. Can you provide some intuition about these small numbers? Is this connected with the proportionality symbol in (14)? (But going from (8) to (9) it seems that additive constants are dropped, not multiplicative constants.)

For the sake of clarity, I would prefer that footnote 3 be in the main text. Also, in some instances the notation and terminology are not clear. E.g., is S sampled iid in (4)? Why is the "oracle prior" called an oracle? How exactly is the bootstrapping resampling weight \\zeta_k defined? Why is the temperature $\\beta$ called the annealing temperature just before (18)? At the end of Section 5.2 you write that the l2-norm keeps increasing -- the norm of what?

A very interesting paper, dealing with an interesting and timely topic. Unfortunately, the paper is not perfectly clear throughout all sections.
This paper revisits the information bottleneck principle, but in terms of the compression inherent in the weights of a neural network rather than in the representation. This gives the resulting IB principle a PAC-Bayes flavor. The key contribution is a generalization bound based on optimizing the objective dictated by this principle, which is then tractably approximated and experimentally verified. The reviews raise concerns about the assumptions made to achieve the tractable version, and a public discussion debates whether this is truly a PAC-Bayes bound. The authors address these adequately. Another concern is whether the improvements in the experiments can be ascribed to the new objective; the authors add new experiments in support of this. Additional concerns about the clarity of certain aspects of the paper were addressed, or were promised to be addressed, by the authors. Overall, the perspective of this paper, its technical contributions, and its experimental evaluations appear to be worthwhile to share with the community, as they advance the applicability of the information bottleneck principle.
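Since several of the reviews above refer to the SGLD component, here is a generic SGLD update for reference. This is the standard form (a gradient step plus Gaussian noise of variance 2*lr), with a placeholder loss and Gaussian prior; it is not claimed to be the paper's exact sampling algorithm.

```python
import torch

def sgld_step(params, loss_fn, lr=1e-4, prior_std=1.0):
    """One generic SGLD update: a gradient step on the (regularized) loss plus
    Gaussian noise of variance 2*lr, so the iterates approximately sample from a
    posterior over weights. Placeholder loss/prior; not the paper's exact variant."""
    loss = loss_fn(params)
    grads = torch.autograd.grad(loss, params)
    new_params = []
    for p, g in zip(params, grads):
        # Gradient of a Gaussian prior log-density acts as weight decay.
        g = g + p / (prior_std ** 2)
        noise = torch.randn_like(p) * (2.0 * lr) ** 0.5
        new_params.append((p - lr * g + noise).detach().requires_grad_(True))
    return new_params

# Tiny usage example: sample weights of a linear model y = x @ w.
x, y = torch.randn(128, 5), torch.randn(128, 1)
w = torch.zeros(5, 1, requires_grad=True)
for _ in range(100):
    [w] = sgld_step([w], lambda ps: ((x @ ps[0] - y) ** 2).mean())
```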
I liked this paper quite a lot. Although this paper does not belong to my area of expertise, I was able to understand the paper clearly because of its lucid exposition. Experimentally, the authors show a novel GNN design with an attention module that has comparable performance to the MLP and outperforms other GNN designs. I believe that this will be a valuable contribution to many practical problems. Unfortunately, this work does not have any theoretical results, and evaluating the experimental results is outside my range of expertise. Therefore I would like to defer this paper to my fellow reviewers.

<doc-sep>Main Idea
In this paper, the authors study the problem of GCNs for disassortative graphs. The authors propose the GNAN method to allow attention on distant nodes instead of limiting it to local neighbors. The authors generalize the idea of graph wavelets with an MLP to generate the attention scores and utilize them to generate multiple attention heads. The authors carry out experiments on several real-world networks (4 assortative and 3 disassortative) with comparison to several state-of-the-art GCN methods.

Strength:
The authors study a very interesting problem of GCN/graph embedding for disassortative graphs. The proposed method is well motivated, with solid theoretical motivation from graph wavelets. The proposed model is a very intuitive generalization of graph wavelet methods. The empirical evaluation is very thorough, on seven networks with comparison to about 10 baselines of different kinds.

Weakness:
Though the authors mention the use of attention sparsification for speed-up, t is set to zero. It would be interesting to see how scalable the proposed method is, as it needs to have global attention over possibly all nodes. An empirical comparison of running time would be very helpful.
The authors only carry out experiments on three disassortative graphs, which are all very small. It would be interesting to see more experiments on disassortative graphs. Alternatively, it would be interesting to have an experiment on synthetic graphs where \\beta can be controlled and varied smoothly to see how it affects the performance of different algorithms.
The authors pick only node classification as the evaluation task. It would be interesting to see how disassortativity impacts other tasks like graph reconstruction and link prediction.

<doc-sep>This work proposes a new GNN architecture to help GNNs break their limitation of only working over homophilic networks. The technical approach is to introduce global graph attention. I think the paper is written okay. The motivation is clear. The solution is reasonable. However, I have the following criticisms:
1. This work has limited novelty. Observing that GCN cannot work well over heterophilic networks is not a new observation. Using attention to capture the features of far-away nodes is natural but not novel. I do not think that it is reasonable to argue against other works that adopt the above idea, e.g. [1], by saying they are not expressive enough. Expressiveness sometimes may lead to model overfitting. Actually, ChebNet [2] can also capture far-away nodes and be expressive enough. Why does it not work well? I guess that it is due to some overfitting issue. Moreover, if I understand it correctly, the difference between this work and [3] is most likely limited to the global attention, which has very limited contribution.
2.
Although the work claims throughout to reduce complexity, computing the global attention still requires computation for every pair of nodes, which is of course not scalable even for medium-sized graphs.
3. The heterophilic networks used for evaluation are very small, with only several hundred nodes. Why not try larger ones, say Actor or Chameleon in [4]? I guess the computational issue comes from the global attention.

[1] Non-Local Graph Neural Networks.
[2] Convolutional neural networks on graphs with fast localized spectral filtering.
[3] Graph wavelet neural network.
[4] Geom-GCN: Geometric graph convolutional networks.

---post-discussion update----
I would like to thank the authors for preparing the rebuttal and attending our discussion. However, I still think the complexity is a concern of this work. I do not think that Eq. (3) can be implemented within the complexity that the authors claimed. Moreover, if the authors use another way to compute the attention scores, that way should be very clearly stated instead of being written in a different form. Given the high complexity, I cannot clearly see the advantage of this work in comparison to [1], as non-local attention has already been proposed in [1].

[1] Non-Local Graph Neural Networks.
This paper proposes a GNN that uses global attention based on the graph wavelet transform for more flexible and data-dependent GNN feature aggregation without the assumption of local homophily. Three reviewers gave conflicting opinions on this paper. The reviewer arguing for rejection questioned the novelty of the paper and the complexity of the global attention mentioned in the paper. Even after the authors' responses and subsequent private discussions, concerns about complexity and novelty were not completely resolved. Considering the authors' claim that the core contribution of this paper is to design fully learnable spectral filters without compromising computational efficiency, it is necessary to consider why it is meaningful to perform global attention based on the graph wavelet transform in the first place. In terms of complexity, although the wavelet coefficients can be efficiently calculated using the Chebyshev polynomials mentioned by the authors, in the attention sparsification part, n log n is required **for each node** for sorting, resulting in a complexity of n^2 or more. There may still be an advantage in complexity over using global attention in a message-passing architecture, but it will be necessary to clarify and verify that, given that the proposed method uses an approximation that limits global attention to within K hops. Also, this paper modifies the graph wavelet transform from graph theory, which requires a deeper discussion. For example, as the authors mention, the original wavelet coefficient psi_uv can be interpreted as the amount of energy that node v has received from node u in its local neighborhood. The psi_uv defined by the learnable filter in Equation 3 has a different meaning from the original wavelet coefficient. There is insufficient insight as to whether it is justifiable to use this value as an attention coefficient. Overall, the paper proposes potentially interesting ideas, but it seems to require further development for publication.
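To ground the discussion of wavelet coefficients as attention, the sketch below computes classical heat-kernel graph wavelet coefficients via an eigendecomposition of the normalized Laplacian and row-normalizes them into attention-like weights. This illustrates only the fixed-kernel case; the paper's learnable spectral filter and its Chebyshev-approximated computation are not reproduced here, and all function names are assumptions.

```python
import numpy as np

def heat_wavelet_coefficients(adj: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Classical graph heat-kernel wavelet coefficients psi = U diag(exp(-s*lam)) U^T,
    computed from the normalized Laplacian. Row u describes how much 'energy' node u
    spreads to every other node; such coefficients could serve as dense attention
    weights. Illustrative only (exact eigendecomposition: small graphs only)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    laplacian = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    lam, U = np.linalg.eigh(laplacian)
    return U @ np.diag(np.exp(-scale * lam)) @ U.T

# Toy usage: a 4-node path graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
psi = heat_wavelet_coefficients(A, scale=2.0)
attention = psi / psi.sum(axis=1, keepdims=True)  # row-normalize into attention-like weights
```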
The paper proposes a novel framework for semi-supervised learning that addresses two issues of previous methods: 1) over-reliance on labeled data and 2) error accumulation. It shows that jointly solving the main task together with another task (that discriminates whether the data label is real or not) leads to better performance.

Strengths
- The proposed framework seems to be novel.
- It works well in experiments, on a wide range of tasks (classification, label propagation, and data imputation).
- It seems to be potentially beneficial for many domains, since it does not have domain restrictions, while many previous SSL methods rely on certain image-domain techniques such as consistency regularization (and data augmentation).

Weaknesses
- Since the proposed method is only compared with the original pseudo-label method, comparing with the other extensions of pseudo-labelling that are mentioned in Section 5 would make the contributions clearer.
- In addition to the papers mentioned in Section 5, there are a few papers that try to address the error accumulation observed in pseudo-labelling-based semi-supervised learning methods. For example: "In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning" from ICLR 2021 and "Repetitive Reprediction Deep Decipher for Semi-Supervised Learning" from AAAI 2020.

Questions
- I am not sure if I understood the experiments correctly. As the missing rate goes higher, do we have more unlabeled samples (as explained in the last paragraph of page 6), or do we have more noisy-labelled samples (as explained in the 1st paragraph of Section 4.1)?
- Can we show the 3rd task (data imputation) in Figures 2 to 4?
- One of the benefits of the method seems to be that it can be incorporated into a wide range of SSL algorithms. I think the paper demonstrated that it can be used to enhance the pseudo-labelling method, but what kinds of other SSL algorithms can SCL incorporate?

Minor questions and comments
- SSL is a very hot topic and there have recently been many advances. Since the experiments do not compare with many of the recent works, it would be better to emphasize why they were not compared. (For example, Section 1 has a discussion on how recent SSL methods utilize consistency regularization, which relies on heavy data augmentation techniques that are only available in certain domains.)
- What value of the parameter alpha is used in the image classification experiment? (For the other two tasks, I think the appendix explains that alpha is 1.)
- If we are given a labeled dataset L and an unlabeled dataset U, it seems we can automatically construct the vector M (which is explained at the end of page 2). If this is correct, then why do we need M as an input in Algorithm 1 on page 6?
- What is P, introduced in the beginning of Section 2.2? It seems like it is a set from the $p \\in P$ notation, but since it is compared with M in the loss function, it also looks like a vector.
- typo "perforamnce" on page 6
- Should $m_i, m_j$ in the beginning of page 3 be $M_i, M_j$?
- Is $Y$ a label space ($y \\in Y$), or is it the full set of labels in the training dataset ($Y = Y_L \\cup Y_U$)?
- Ideally it would be better to perform several trials and report the mean/standard error in Table 1.

=========== after rebuttal
Thank you for answering my questions. The additional experiments are helpful for getting a better understanding of the proposed method.
It looks like the advantageous point of the proposed method is now the low computational cost, according to the new experiments including UPS, rather than better performance. Although this may still be beneficial for the research community, it seems to be slightly less significant and may also affect the storyline. I would also like to recommend putting the new experiments with UPS in the main paper instead of the appendix.

The proposed method seems to have some nice benefits, but I feel there are a few weaknesses that should be addressed. I also have a few questions, and it would be helpful if the authors could take a look at the previous section (main review).

<doc-sep>The paper introduces Self-interested Coalitional Learning (SCL), which is a novel approach to semi-supervised learning. SCL combines the traditional self-training approach to semi-supervised learning with an auxiliary task that infers label observability. The empirical results show that, in a variety of scenarios, SCL outperforms both self-training and the original model.

This is an interesting paper on a topic with important practical applications: semi-supervised learning. The contribution appears to be original, and it is likely to influence future work in the field. The authors are explicitly calling out and addressing the two main weaknesses of traditional self-learning approaches: error accumulation and over-reliance on the labeled data.

The paper would greatly benefit from an additional section that would provide an intuitive, illustrative example of how and why the proposed approach outperforms self-training. Ideally, it should compare and contrast the convergence of (1) self-training, (2) the auxiliary task, and (3) SCL.

The paper would also benefit from tightening the narrative around the ALPHA parameter, which, in the main paper, is only discussed in the theoretical framework. Appendix A provides no value of ALPHA for the first dataset, and it proposes (without any justification) a value of 1 for the other two domains. Appendix B is extremely brief and not very helpful. The authors make no recommendation on how to tune alpha, and the argument that even the worst alpha (in the 0.1 - 0.9 range) is better than the original model is fairly weak, given the wide variations of the accuracy due to changes in the value of alpha.

OTHER COMMENTS:
- for Table 1, please add three more rows: 0%, 90%, and 99%. The former is critical to understanding the upper-bound performance, while the latter two will bring SCL into a more realistic semi-supervised regime, where unlabeled data is one or two orders of magnitude more abundant than the labeled data
- please add to Figure 6 the horizontal lines with the accuracy of the original model for each of the three missing rates
- it is still unclear why you chose to use only 10% of the data for image classification (page 6); is scalability to large datasets a concern?
- please spell-check the paper - eg, "perforamnce" on page 4
- page 2: please replace "more sufficient"
- page 3: "jointly solving above two tasks" --> "jointly solving THE above two tasks"
- page 3: "there are some other works embody" --> "there are some other works THAT embody"
- page 4: "are impacted the influence" --> "are impacted BY the influence"
- page 7: please replace "well learn"

Overall, this paper uses a novel idea to improve the state of the art for semi-supervised training.

<doc-sep>This paper proposes a new semi-supervised learning method.
Motivated by the error accumulation problem of typical self-training paradigms, the authors propose to explicitly model the confidence of pseudo-labels as an auxiliary task. They come up with a self-interested coalitional learning (SCL) strategy to solve both tasks jointly. Under the new framework, the main task is transformed into a cost-sensitive learning problem. Experiments demonstrate that pseudo-labels are substantially more accurate with the new method and that the main tasks perform better at different label missing rates.
Pros:
- Overall the paper is well-structured and easy to follow.
- The new method achieves its original goals and improves SSL effectiveness by jointly solving the main and the auxiliary tasks.
- The authors introduce a new SCL strategy to solve the problems, which can be applied to a broader class of learning problems.
Cons:
- Lack of experiments:
  - The proposed method is only compared with the self-learning method (with the same base learner). While this demonstrates how the model is improved with SCL, it is also necessary to compare with state-of-the-art SSL methods.
  - It would also be valuable to include the supervised method with the fully-labeled dataset as a reference in all experiments.
  - For data imputation, a more common case is that the missing state is correlated with the input/output instead of being missing completely at random. This would also check the method's robustness against labeled/unlabeled distribution shift.
- Compared with the original self-learning method, the new method has an extra discriminator model, which is based on the same base learners as the main tasks. It would be meaningful and fairer to compare with supervised models of higher capacity.
- The paper doesn't cover how SCL can work together with consistency regularization, which is commonly used together with self-learning.
Besides, I have a few questions:
- Although Table 1 doesn't have a row for Missing rate = 0% (full dataset), it seems the SCL methods have better accuracy than the model trained with the full dataset for the first two tasks. Is this because SCL has double the model capacity due to the extra discriminator?
- Why is the test accuracy of pseudo-labels 100% for the SCL method in Figure 4? Are they calculated differently?
This is an interesting paper from a technical perspective, but it definitely needs more empirical studies to demonstrate practical value.
<doc-sep>This paper proposes a new semi-supervised learning framework by introducing an auxiliary task that distinguishes whether the pseudo-labels are truly labeled or not. Then, this information is used to add a reweighting loss to the main objective. Experiments on several simple benchmark datasets show that the proposed method outperforms some naive baselines. The idea of introducing an auxiliary task that discriminates whether an instance is labeled is quite interesting. In effect, such a strategy was first introduced in active learning [1]. In the VAAL method [1], a similar discriminator is introduced to identify whether an example is labeled or not, which is then used to indicate the uncertainty of an example for active selection. Therefore, the proposed method has a close connection to a recent work in SSL [2] that also employs an uncertainty measure to select high-quality pseudo-labels. I have the following concerns.
1. The derivation in Section 3.2 is confusing. For example, in Eq. (3), the second equality is incorrect and the term $\frac{dd}{dx}$ should be added.
Also, it would be better to change the notation of $d$ (the discriminator) to another symbol, since the derivative notation $dx$ also uses $d$. Besides, I actually did not understand why $\mathcal{L}_B$ depends on $f$, since $f$ and $\mathcal{L}_B$ are from two different branches that do not share network blocks (Figure 1).
2. The experimental section is not convincing, and this is my main concern. The datasets and the baselines are too simple. State-of-the-art SSL methods should be employed to support the claims. In particular, the uncertainty-based SSL method [2] should be compared. As I have discussed above, the proposed method may implicitly be equivalent to existing techniques in SSL. Is the proposed method complementary to existing methods? Or does it conflict with some techniques? These questions require an in-depth empirical analysis. Overall, this work is below the bar of an ICLR paper given its weak experiments.
[1] Sinha S, Ebrahimi S, Darrell T. Variational adversarial active learning. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019: 5972-5981.
[2] Rizve M N, Duarte K, Rawat Y S, et al. In defense of pseudo-labeling: An uncertainty-aware pseudo-label selection framework for semi-supervised learning. arXiv preprint arXiv:2101.06329, 2021.
Interesting idea, poor experiments and confusing derivation.
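For concreteness, here is how I understand the joint main/auxiliary objective at a high level. This is only a rough sketch: the names (main_model, disc, alpha), the discriminator's inputs, and the exact reweighting scheme are my assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def scl_step(main_model, disc, x, y, m, alpha, opt):
    """One joint training step.
    x: inputs; y: labels (pseudo-labels for unlabeled rows);
    m: 1 if the label is truly observed, 0 if it is a pseudo-label.
    opt is assumed to hold the parameters of both networks."""
    logits = main_model(x)                                   # main task (task A)
    per_example = F.cross_entropy(logits, y, reduction="none")

    # Auxiliary task (task B): predict label observability from the example
    # and the main model's prediction (the discriminator's inputs are a guess).
    obs_logit = disc(x, logits.detach()).squeeze(-1)
    loss_b = F.binary_cross_entropy_with_logits(obs_logit, m.float())

    # Cost-sensitive main loss: down-weight examples whose pseudo-labels the
    # discriminator deems unreliable (one plausible instantiation of the reweighting).
    weights = torch.sigmoid(obs_logit).detach()
    loss_a = (weights * per_example).mean()

    loss = loss_a + alpha * loss_b
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Whether the discriminator's confidence enters exactly this way (as multiplicative weights on the per-example loss) is one of the points the derivation in Section 3.2 should make clearer.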
This paper proposes a new method for the important problem of semi-supervised learning. The method relies on an auxiliary task, label observability prediction, to weight the examples according to the confidence in their pseudo-labels, so as to avoid the propagation of errors encountered in self-training. Limited experiments show that the proposed method can compete with other methods in terms of performance or training time. On the positive side, all evaluators agree on the potential value of the proposed approach, which is generic in nature. On the negative side, the experimental evaluation, although strengthened during the discussion, is not yet strong enough to be really convincing about the merits of the method. In particular, comparisons with the state of the art still need to be improved. In addition, the paper would benefit from some rewriting, in particular of the mathematics (e.g. the d notation for task B should be avoided, as suggested by one reviewer, and there is a misplaced partial derivative in equation 6). The authors could also simplify their derivation by using the envelope theorem. I therefore recommend rejection, with an encouragement to strengthen the experimental part and to improve the derivation of the proposed method.
The main goal of this paper is to introduce a simple methodology for optimizing transformer-based models for efficiency and effectiveness. The paper introduces two main ideas:
1) A top-down strategy for pruning components of a transformer model: Given a specific focus, say speed, the strategy is to consider pruning large coarse-grained components first, followed by smaller finer-grained components. The pruning decision is made based on a "significance analysis" -- a component is considered insignificant, and hence prunable, if removing it from the model does not result in a substantial increase in the model's loss (as decided by a pruning threshold).
2) Pruning and approximation techniques for different components: For example, feed-forward networks are pruned by removing weights in groups (determined via a hyperparameter). For approximating self-attention, a sign-matching technique is used to decide which top-K keys to use when computing Query x Key dot products.
The main strengths of this work are as follows:
1) The techniques do not require training networks from scratch and can be applied directly during fine-tuning.
2) The techniques are simple and should apply widely to most transformer-based models.
3) The empirical results support the claim that the technique can yield significant speed-ups and memory reductions while maintaining accuracy, and can even provide improvements in accuracy if that is the pruning goal. They show that the technique is orthogonal to other models explicitly designed for speed and memory footprint (Q8BERT, DistilBERT) and can provide further improvements in both efficiency and effectiveness.
4) This is a practical and useful approach that should be widely applicable, along with many useful insights about optimizing transformer-based systems. I appreciate that the experimental results are reported with averages across multiple runs!
I don't see any major weaknesses in the paper. Here are some areas that can be improved:
1) The description of the pruning strategies was hard to follow and needs to be tightened up. Possibly adding equations and some pseudo-code to the description would help.
2) I am curious to know what components get pruned across the different models that were optimized. I wonder if there are systematic differences between original and distilled models and between auto-regressive (GPT) and auto-encoding style models.
3) Also, some level of ablation analysis on the strategies used would be helpful. For example, if the elements were not ordered based on granularity, would the results be any different? Since this is an iterative strategy, the order should play an important role in the selection and utility of the subsequent pruning steps. The same goes for the set of pruning strategies. A related question would be what gives the biggest gains.
4) What is the impact on the fine-tuning time? The baseline only requires one fine-tuning pass. Does this method require multiple fine-tuning passes? Or can the loss thresholds be computed on a smaller subset of the target data? This may be a good direction for future work for tasks where the training data is relatively large, where one cannot afford to exhaustively search through all the pruning strategies.
<doc-sep>After reading the rebuttal, some of my concerns have been addressed by the additional experiments. But I also agree with other reviewers that the result is not very surprising. As R4 mentioned, the proposed method depends on a specific downstream task where the "small" "general" BERT can be further pruned.
For a fair comparison to previous work, baselines that are applied to a specific fine-tuning task need to be compared.
=====
This paper presents a new framework for creating small fine-tuned pre-trained language models. The framework has 3 components: 1. a set of transformer components to be pruned; 2. a significance analysis for identifying unimportant elements; 3. a set of techniques to prune or approximate the transformer elements.
Pros:
1. The framework is very adaptive, as it considers different basic elements of the transformer.
2. The framework is very efficient, removing large components (e.g., layers, attention blocks, ffd layers) first and small components (e.g., weight groups) later.
3. The framework gathers multiple different pruning/approximation techniques and tries to explore the limit of pruning pre-trained models, which is appreciated.
Cons/Questions:
1. Is the loss used in the significance analysis computed using the development set? If the validation loss is used, the experiment results in Table 1 are not reliable.
2. There are many BERT pruning papers. Providing comparisons to these papers is very important to evaluate the proposed method. Can the model prune more weights at the same performance level? Or can the model perform better at the same pruning ratio?
3. It would also be helpful to present how much computing resource is needed to prune the network, e.g., how many prune-finetune cycles are needed.
4. Lack of results for pruning BERT-base on GLUE, which is a very standard and common setting.
5. In Figure 3, why is Q8BERT + Speed Focus even larger/slower than Q8BERT? At the same speed, Q8BERT + Speed Focus is significantly worse than Q8BERT.
Minor: Page 5: less the minimum loss seen ==> less 'than' the minimum loss
<doc-sep>This paper presents a method for improving a fine-tuned Transformer in terms of a specific metric such as size, speed, or accuracy. The candidates for removal are considered hierarchically with some heuristics and are evaluated in terms of training and validation loss to determine whether they should actually be removed from the model. The authors apply their method to several state-of-the-art Transformer models and show that they can produce fast and compact models without losing much accuracy. Although the individual techniques employed to realize the whole pruning process are not particularly novel, the paper presents a well-thought-out approach to combining them and reports very promising experimental results. I think this is a nice contribution to the community, given that computation cost is increasingly important in dealing with BERT-like models.
It seems to me that the authors used transformers whose weights are shared between different layers, like Universal Transformers or ALBERT. Maybe I missed something, but I think the authors should clarify whether this is really the case in the manuscript. The entire pruning process is a bit vague and hard to replicate. Would it be possible to describe the whole process in pseudo-code? (Is Algorithm 1 the whole process?) I think the authors should also describe the computational cost (or maybe wallclock time) required to perform the proposed pruning process. It seems to me that the search space is rather large and requires a considerable amount of computation.
> p.5 … we prune the element only if the training/validation loss
I think you should be more specific here. How did you actually use both the training and validation loss?
Why do you need to look at the training loss when you are interested in the generalization error?
> p.5 … weight groups of (Wn) …
Why is this Wn? I thought this should be W.
Minor comments: p.5 less the -> less than the? p.6 doesn't -> does not p.6 ''attention -> ``attention p.7 second order -> second-order?
<doc-sep>Thanks to the authors for the detailed feedback! I still have concerns about the clarity of the presentation, and some contributions of the paper are not strong enough, so I'll keep my score.
===
Summary: This paper presents a framework to systematically perform pruning and layer approximation. The framework includes a queue of potential elements for compression. At each time step, the framework evaluates the head element of the queue, tries to prune the whole element or perform approximation (quantizing, pruning attention heads, and approximating with sign-matching attention), and keeps the transformation only if the loss in performance is acceptable. The paper performs experiments with various models on GLUE and shows speedups or compression compared to the original model.
Reasons for score: The techniques used in the paper are not novel, and the choices on how to apply multiple compression techniques need more justification. The experiment results are okay but not surprising. See below for more details.
Pros:
1. I like the insight that {approximate, fine-tune, approximate} cycles don't work for fine-tuning.
2. I like the insights used to determine which elements are examined first: start from the larger blocks and later layers. I hope this point can be emphasized more and compared with more brute-force and less efficient algorithms. For example, in each round, one can choose to prune the layer that causes the least loss of performance. You can compare your greedy algorithm with this algorithm to show that the gain of using the less efficient algorithm is not significant.
3. The sign-matching attention proposed in the paper is new. I would like to see more emphasis and ablation studies on the effectiveness of this special module.
Cons:
1. It is well known that compressing the model is easier during the fine-tuning phase [1, 2]. I don't think this should be a contribution to emphasize for the paper.
2. The whole compression framework has a single global error bound. Combining this with the greedy layer-by-layer approach taken by the framework, could the following case occur: a layer that is early in the queue causes a huge drop of accuracy and thus makes all the future layers impossible to remove because the global error bound has been reached? A better way would be to only remove the layer with the lowest loss reduction. It would be better to justify this point with an ablation study, or at least show in the paper that the final pruned model doesn't have this issue.
3. At the end of page 5: "When optimizing for speed, however, removing weight groups with low significance from arbitrary locations does not help, since it introduces unstructured sparsity in the weight matrix that can be difficult to exploit to achieve speedups." It's true that removing random entries in a matrix will not help with actual speedups, but you can remove an arbitrary set of rows of the matrix and then restructure the weight matrix (i.e. concatenate all the remaining rows to form a new matrix) to make it efficient on modern parallel hardware.
4. I don't really understand the point of using accuracy as the final goal.
If the framework is for compression, the goal should be about speedup or size. If accuracy really matters, it should be enforced as the threshold instead of as the final goal. Also, I don't see the difference in the framework between using speedup or size as the goal, since all the thresholds are defined by loss.
5. The results in the paper are okay, but compared to previous works in computer vision [3], it seems that the model size can be further compressed.
6. There are multiple places where the presentation can be improved:
a. It would be clearer to use pseudo-code instead of a diagram in Figure 2.
b. It would be clearer to present Table 1 as multiple tables.
c. It would be better to put the results comparing with previous works in a table (in the middle of page 8).
Minor comments:
- On page 5, 3rd paragraph from the bottom, "less the minimum loss" -> "less than minimum loss"
References:
[1] Jiao, Xiaoqi, et al. "TinyBERT: Distilling BERT for natural language understanding." arXiv preprint arXiv:1909.10351 (2019).
[2] Shen, Sheng, et al. "Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT." AAAI. 2020.
[3] Han, Song, Huizi Mao, and William J. Dally. "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding." arXiv preprint arXiv:1510.00149 (2015).
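Since two of the reviews above ask for pseudo-code, here is a rough sketch of the top-down prune-and-check loop as I read it from the paper's description. The element interface (remove_from / restore), the single loss threshold, and eval_loss are placeholders of mine, not the authors' exact algorithm.

```python
from collections import deque

def top_down_prune(model, element_groups, eval_loss, threshold):
    """element_groups: candidate components ordered coarse-to-fine, e.g.
    [layers, attention blocks, FFN sublayers, heads, weight groups]."""
    base_loss = eval_loss(model)
    queue = deque(e for group in element_groups for e in group)
    while queue:
        element = queue.popleft()
        backup = element.remove_from(model)       # tentatively prune or approximate this element
        new_loss = eval_loss(model)
        if new_loss <= base_loss + threshold:     # "significance analysis": element is not significant
            base_loss = min(base_loss, new_loss)  # keep the change and update the reference loss
        else:
            element.restore(model, backup)        # significant element: undo the change
    return model
```

A comparison against the brute-force variant (re-evaluating all remaining elements each round and removing the least harmful one) would also address the ordering and global-threshold questions raised above.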
This paper introduces a set of techniques that can be used to obtain smaller models on downstream tasks when fine-tuning large pre-trained models such as BERT. Some reviewers have noted the limited technical novelty of the paper, which can be seen more as a combination of existing methods. This alone should not be a reason for rejection, but unfortunately the results in the experimental section are also a bit weak (e.g. see [1-4]), the analyses are not very insightful, and it is hard to compare to existing work. For these reasons, I believe that the paper should be rejected.
[1] DynaBERT: Dynamic BERT with Adaptive Width and Depth
[2] Training with quantization noise for extreme model compression
[3] MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices
[4] SqueezeBERT: What can computer vision teach NLP about efficient neural networks?
The paper is a natural extension of [1], which shows the importance of spectral normalization to encourage diversity of the discriminator weights in a GAN. A simple and effective SVD-like parametrization of the weights, W = USV^T, is used along with an orthonormality penalty on U and V and a spectral penalty to control the decay of the spectrum. Unlike other parametrizations of orthogonal matrices, which are exact but computationally expensive, the proposed one tends to be very accurate in practice and much faster. A generalization bound is provided that shows the benefit of controlling the spectral norm. Experimental results show that the method is accurate in constraining the orthonormality of U and V and in controlling the spectrum. The experiments also show a marginal improvement of the proposed method over SN-GAN [1]. However, it is unclear why one would want to control the whole spectrum when Theorem 2 only involves the spectral norm. In [1], it is argued that this encourages diversity in the weights, which seems intuitive. However, according to that same paper, it seems empirically sufficient to use spectral normalization to achieve this purpose. It would perhaps be good to have an example where SN fails to control the spectrum in a way that significantly impacts the performance of the algorithm while the proposed method doesn't. Overall, the paper is clearly written and the proposed algorithm effectively controls the spectrum, as shown experimentally. However, given that the idea is rather simple, it is important to show its significance with examples that clearly emphasize the importance of controlling the whole spectrum versus the spectral norm only.
Revision: Figure 1 is convincing and hints at why SN-GAN achieves slow decay while in principle it only tries to control the spectral norm. I think this paper is a good contribution as it provides a simple and efficient algorithm to precisely control the spectrum. Moreover, a recent work ([2], Theorem 1) provides theoretical evidence for the importance of controlling the whole spectrum, which makes this contribution even more relevant.
[1] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral Normalization for Generative Adversarial Networks. Feb. 2018.
[2] M. Arbel, D. J. Sutherland, M. Bińkowski, and A. Gretton. On gradient regularizers for MMD GANs. NIPS 2018.
<doc-sep>The paper builds on the experimental observations made in Miyato et al. (2018), in which the authors highlight the utility of spectral normalization of weight matrices in the discriminator of a GAN to improve the stability of the training process. The paper proposes to reparameterize the weight matrices by something that looks like the singular value decomposition, i.e. W = U E V^T. Four different techniques to control the spectrum of W by imposing various constraints on E are discussed. For maintaining the orthonormality of U and V, penalties are added to the cost function. The paper also derives a bound on the generalization error and experimentally shows the "desirable slow decay" of singular values in weight matrices of the discriminator. Other experiments comparing the proposed approach with the SN-GAN are also given.
(1) The paper puts a lot of stress on the stability of the training process in the beginning, but clear experiments supporting the claim of improved "stability" are lacking.
(2) It would be helpful for readers if more clarity were added to the paper with respect to the desirability of "slow decay of singular values" and spectral normalization.
(3) The point regarding convolutional layers should be part of the main paper.
<doc-sep>This paper proposes to parameterize the weight matrices of neural nets using the SVD, with approximate orthogonality enforced on the singular vectors using Orthogonal Regularization (as opposed to e.g. the Cayley transform or optimizing on the Stiefel manifold), allowing for direct, efficient control over the spectra. The method is applied to GAN discriminators to stabilize training as a natural extension of Spectral Normalization. The method incurs a slight memory and compute cost and achieves a minor performance improvement over Spectral Normalization on two benchmark image generation tasks.
I'm a bit back and forth on this paper. On the one hand, I think the ideas this paper proposes are very interesting and could provide a strong basis off which future work can be built--the extension of spectral normalization to further study and manipulation of the spectra is natural and very promising. However, the results obtained are not particularly strong, and as they stand do not, in my opinion, justify the increased compute and memory cost of the proposed methods. The paper's presentation also wavers between being strong (there were some sections I read and immediately understood) and impenetrable (there were other sections which I had to read 5-10 times just to try to grasp what was going on). Ultimately, my vote is for acceptance. I think that we should not throw out a work with interesting and potentially useful ideas just because it does not set a new SOTA, especially when the current trend with GANs seems to suggest that top performance comes at a compute cost that only a few groups have access to. With another editing pass to improve language and presentation this would be a strong, relevant paper worthy of the attention of the ICLR community.
My notes:
- The key idea of parameterizing matrices as the SVD by construction, but using a regularizer to properly constrain U and V (instead of the expensive Cayley transform, or trying to pin the matrices to the Stiefel manifold), is very intriguing, and I think there is a lot of potential here.
- This paper suffers from a high degree of mathiness, substituting dense notation in places where verbal explanation would be more appropriate. There are several spots where explaining the intuition behind a given idea (particularly when proposing the various spectrum regularizers) would be far more effective than the huge amount of notation. In the authors' defense, the notation is generally used as effectively as it could be. My issue is that it often is just insufficient, and communication would be better served with more illustrative figures and/or language.
- I found the way the paper references Figure 1 confusing. The decays are substantially different for each layer--are these *all* supposed to be examples of slow decay? Layer 6 appears to have 90% of its singular values below 0.5, while layer 0 has more than 50%. If this is slow decay, what does an undesirable fast decay look like? Isn't the fast decay shown in Figure 2 almost exactly what we see for Layer 6 in Figure 1? What is the significance of the sharp drop that occurs after some set number of singular values?
The figure itself is easy to understand, but the way the authors repeatedly refer to it as an example of smooth singular value decays is confusing.
- What is D-optimal design? This is not something commonly known in the ML literature. The authors should explain what exactly the D-optimal regularizer does, and elucidate its backward dynamics (in an appendix if space does not permit it in the main body). Does it encourage all singular values to have similar values? Does it push them all towards 1? I found the brief explanation ("encourages a slow singular value decay") to be too brief--consider adding a plot of the D-optimal spectrum to Figure 1, so that the reader can easily see how it would compare to the observed spectra. Ideally, the authors would show an example of the target spectra for each of the proposed regularizers in Figure 1. This might also help elucidate what the authors consider a desirable singular value decay, and mollify some of the issues I take with the way the paper references Figure 1.
- The explanation of the Divergence Regularizer is similarly confusing and suffers from mathiness, a fact which I believe is further exacerbated by its somewhat odd motivation. Why, if the end result is a reference curve toward which the spectra will be regularized, do the authors propose (1) a random variable which is a transformation of a Gaussian, (2) to take the PDF of that random variable, (3) discretize the PDF, (4) take the KL between a uniform discrete distribution and the discretized PMF, and (5) ignore the normalization term? If the authors were actually working with random variables and proposing a divergence this might make sense, but the items under consideration are singular values, which are non-stochastic parameters of a model, so treating them this way seems very odd. Based on Figure 2 it looks like the resulting reference curves are fine, but the explanation of how to arrive there is quite convoluted--I would honestly have been more satisfied if the authors had simply designed a function (a polynomial logarithmic function perhaps) with a hyperparameter or two to control the curvature.
- "Our experimental results show that both combinations achieve an impressive results on CIFAR10 and STL-10 datasets" Please do not use subjective adjectives like "impressive." A 6.5% improvement is okay, but not very impressive, and when you use subjective language you run the risk of readers and reviewers subjectively disagreeing with you, as is the case with this reviewer. Please also fix the typo in this sentence; it should at least be "...achieve [impressive] results" or "achieve an [impressive] improvement on..."
Section 3:
- What is generalization supposed to mean in this context? It's unclear to me why this is at all relevant--is this supposed to indicate the bounds for which the Discriminator will correctly distinguish real vs generated images? Or is there some other definition of generalization which is relevant? Does it actually matter for what we care about (training implicit generative models)?
- What exactly is the use of this generalization bound? What does it tell us? What are the actual situations in which it holds? Is it possible that it will ever be relevant to training GANs or to developing new methods for training GANs?
Experiments:
- I appreciate that results are taken over 10 different random seeds.
- If the choice of gamma is unimportant, then why is it different for one experiment? I found footnote 4 confusing and contradictory.
- For Figure 3, I do not think that the margin is "significant"--it constitutes a relative 6.5% improvement, which I do not believe really justifies the increased complexity and compute cost of the method.
- I appreciate Table 1 and Figure 4 for elucidating (a) how orthogonal the U and V matrices end up and (b) the observed decay of the spectra.
Appendix:
- Please change Table 7 to be more readable, with captions underneath each figure rather than listed at the top and forcing readers to count the rows and match them to the caption. What is the difference between SN-GAN and Spectral Norm in this table? Or is that a typo, and it should be spectral-constraint?
- I would like to see a discussion of Table 7 / an interpretation of why the spectra look that way (and why they evolve that way over training) for each regularizer.
Minor:
- Typos and grammatical mistakes throughout.
- As per the CIFAR-10/100 website (https://www.cs.toronto.edu/~kriz/cifar.html) the Torralba citation is not the proper one for the CIFAR datasets, despite several recent papers having used it.
- Intro, last paragraph, "Generation bound" should be generalization bound?
- Page 4, paragraph 2, last sentence, problem is misspelled.
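For concreteness, here is a minimal PyTorch-style sketch of the W = U S V^T parametrization with orthogonality and spectral penalties that these reviews discuss. The module name, initialization, and the exact form of the penalties (especially spectral_penalty) are my guesses rather than the paper's definitions.

```python
import torch
import torch.nn as nn

class SVDLinear(nn.Module):
    """Linear layer parameterized as W = U diag(s) V^T, with soft orthogonality on U and V."""
    def __init__(self, d_in, d_out):
        super().__init__()
        k = min(d_in, d_out)
        # Initialize U and V with (approximately) orthonormal columns.
        self.U = nn.Parameter(torch.linalg.qr(torch.randn(d_out, k))[0])
        self.V = nn.Parameter(torch.linalg.qr(torch.randn(d_in, k))[0])
        self.s = nn.Parameter(torch.ones(k))            # the singular values being controlled

    def forward(self, x):
        W = self.U @ torch.diag(self.s) @ self.V.t()
        return x @ W.t()

    def ortho_penalty(self):
        I = torch.eye(self.U.shape[1], device=self.U.device)
        return ((self.U.t() @ self.U - I) ** 2).sum() + ((self.V.t() @ self.V - I) ** 2).sum()

    def spectral_penalty(self):
        # One possible choice: keep the spectral norm near 1 and discourage a fast decay.
        # The paper proposes several specific regularizers (e.g. D-optimal, divergence-based);
        # this single term is only a stand-in.
        return (self.s.abs().max() - 1.0) ** 2 + self.s.var()
```

Both penalties would be added to the discriminator loss with their own coefficients, which is where the choice and tuning of the spectrum regularizer (the point several reviews question) enters.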
All the reviewers agree that the paper has an interesting idea on regularizing the spectral norm of the weight matrices in GANs, and that a generalization bound has been shown. The empirical results show that the regularization indeed improves the performance of the GANs. Based on these, the AC suggested acceptance.
The paper is about a method for synthesizing binaural audio from a mono recording of a single speaker's speech. First, I think the title is too general. The paper does not attempt to convert all possible sounds; it tries to convert a single speaker's monaural speech signal to binaural audio where the speaker is moving. I think this inherent assumption is important since the method will probably not work for multiple overlapping audio sources. I suggest changing the title to "Neural synthesis of binaural speech of a single moving speaker."
The first part of the network, "neural time warping", is an interesting component that is capable of adjusting the delays conditioned on the location and orientations of the source and microphone, such that location-dependent binaural audio is formed by estimating time-varying delays of the original mono recording separately for the two channels. It is believable that such a module would be helpful for a single moving speaker. However, such a model would not help or work when there are more than two active audio sources. A separation module would be required for that scenario. Neural time warping is an autoregressive model which can work online. The second-stage convolutional network, which uses conditioned hyper-convolutions, is also an interesting architecture that takes the warped signals and applies time-convolutions with kernels obtained from the conditioning input, which holds the time-varying locations and orientations of the source and the microphone.
The section about the loss function is also interesting in that the time-domain l2 loss is shown to not work well for accurate phase estimation, so the authors propose to add a separate phase loss term to compensate for that. I think it would be better if Figure 2 were replaced with a plot of epsilon/|yhat| versus the amplitude error divided by |yhat| in (a) and versus the phase error in (b). It could be clearer than the current 2D figure, which is hard to interpret. The use of "sine activation" is not well justified. A "sine" activation is useful in the first layer of a "signal representation network", which is different from a signal prediction network. I do not see how and why that could be helpful here.
In terms of comparisons, the 2.5D method uses visual information as conditional information to generate complex masks to produce binaural audio. In this paper, visual information is replaced with the spatial conditioning information. It would help to get more information about the window size and hop size used in 2.5D since they may be an important factor relating to the amount of delay they can introduce. For the WaveNet comparison, it was not clear how the WaveNet model was trained to generate binaural data. Did it use any conditioning information? If so, how? Was it applied in an auto-regressive way with randomized sampling? The WaveNet audio example sounded noisy, which is not typical of WaveNet-generated audio. It looks like the DSP method can utilize a listener-specific HRTF, which may be difficult to incorporate into the proposed neural model. Is this an important issue? How does the model generalize to unseen speakers and rooms? The training and testing strategy uses the same room and the same speaker(s). Would we have any problem when the monaural audio is recorded in some other room with some other speaker? In Figure 8, maybe it is OK not to draw the original binaural signal for every method.
In general, I liked the neural warping and conditional convolutional components, which are interesting, and I liked the analysis of the loss function. The approach is an interesting way to obtain a binaural version of a monaural single-speaker recording in a room. The dataset produced for the paper would also be useful for research.
**Update after revision** The revision improved the paper. Thanks for taking care of my comments. The justification of sine activations and the generalization-to-unseen-speakers experiment are nice additions. The new title is a bit better, and I think it may be OK since the goal is to perform a moving-source simulation for single speech sources. Multiple speech sources can be simulated separately and added together, as mentioned. The authors may consider a possibly better name: "Neural binaural synthesis from mono speech", which emphasizes that the synthesized target is "binaural speech" from a single speech recording. Just a few more points.
1. I think it is essential in WaveNet to apply the model in an auto-regressive fashion over samples. Just using the network architecture and the loss function from WaveNet is not equivalent to "using a WaveNet model", since an essential part of the model is the autoregressive sampling, which makes sure the samples are dependent and coherent. Without auto-regressive sampling, the resulting sound is poor, as observed by the authors. So, I suggest emphasizing that "autoregressive sampling" is not performed in the paper to avoid misleading the readers.
2. More explanation of 2.5D is appropriate. One wonders if using a larger STFT window size would improve its results.
<doc-sep>Strengths:
1. The paper is well written and clearly presented. It includes clear math notation and figures. Readers can easily follow the thought process of the authors. For example, Figure 2 shows the relation of l2 loss and phase loss with respect to target energy, indicating the importance of penalizing phase loss in the end-to-end system.
2. Strong results. The proposed end-to-end model significantly outperforms the previous SOTA in terms of objective measures and subjective tests. The video demo is very convincing. The model improved spatialization and sound quality.
3. High novelty. This paper proposes to impose monotonicity and causality on the learned warping function, which incorporates the physics of sound propagation. I am excited to see another example of applying domain knowledge to an end-to-end model. The model includes two novel components: the neural warp network compensates for the errors of the geometric warp, and the temporal convolution works as a post-processing module to account for reverberation and other effects. The ablation study shows both components are critical.
To be improved:
1. The caption for Figure 4(a) seems to be incomplete.
2. It would be good to include a table comparing the proposed model with baselines in terms of model size and inference speed.
<doc-sep>This paper presents a neural network-based model to generate binaural audio given single-channel audio and the positions of source/listener and their angles. The authors developed a dataset of binaural audio, which will be released on acceptance. Technical details and the model architecture are available in the body of the paper, whereas additional details such as the baseline DSP-based approach, the proof, and the dataset are available in the appendix. The model was evaluated using the dataset developed in this work.
A demo video demonstrating the capability of the model is also provided as supplementary material.
There are a few parts that need to be addressed.
(1) It is unclear why DTW-based warping is required. IIRC the warpfield here can represent not only a shift but also other monotonic & causal transformations, such as repeating. If there is only a delay between left and right, just having a shift is enough, isn't it? It would be great if the authors could explain the motivation for using the warpfield more clearly.
(2) The use of hyperconvolution is an interesting idea. Equation 5 uses a conditional temporal convolution. However, audio generative models such as WaveNet use a different architecture: gated convolution. The gating mechanism can give additional non-linearity, so I'm wondering if you can evaluate the performance of hyperconvolution against gated convolution.
(3) Too large confidence intervals in Table 4. Although there were many evaluations, the confidence intervals were pretty large and there were overlaps among them (e.g., a small overlap between DSP and "ours" in cleanliness, large overlaps in spatialization and realism between DSP and ours). With this result it is difficult to claim that there was a significant improvement over the baseline system. Please check your results and design the experiment more carefully to figure out whether there is any significant difference between them. Conducting a side-by-side comparison is one possibility.
Comments:
- This paper claims that it works in real time, but no information about speed, such as the real-time factor and hardware specification, is provided.
- Sampling rate information is not explicitly provided in the experiment section.
- A 0.6 MOS difference is large, not "a bit".
- Modern WaveNet models often use mixture-of-logistics (refer to the Parallel WaveNet paper for details) as output rather than mu-law to achieve better quality.
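To make the "time-domain l2 plus explicit phase term" loss idea raised in the first review concrete, here is a rough sketch. The STFT settings, the energy weighting, and the 1 - cos form of the phase term are assumptions of mine and may differ from the paper's formulation.

```python
import torch

def binaural_loss(pred, target, n_fft=512, hop=128, lam=0.01):
    """pred, target: (batch, samples) waveforms for one ear; apply per channel."""
    l2 = ((pred - target) ** 2).mean()                       # plain time-domain l2 term

    window = torch.hann_window(n_fft, device=pred.device)
    P = torch.stft(pred, n_fft, hop, window=window, return_complex=True)
    T = torch.stft(target, n_fft, hop, window=window, return_complex=True)

    # Penalize angular error directly, weighted toward bins with non-negligible energy,
    # since phase is meaningless where the target amplitude is near zero.
    phase_err = torch.angle(P) - torch.angle(T)
    weight = (T.abs() / (T.abs() + 1e-3)).detach()
    phase = (weight * (1.0 - torch.cos(phase_err))).mean()

    return l2 + lam * phase
```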
+ Interesting method for binaural synthesis from moving mono audio.
+ Nice insight into why l2 isn't the best loss for binaural reconstructions.
+ Interesting architectural choice with nice results.
+ Nicely motivated and clearly presented idea -- especially after addressing the reviewers' comments.
I agree with the idea of a title change. While I think it's implied that the source is probably a single source, making it explicit would make it clearer for those not working on a closely related topic. Hence, "Neural Synthesis of Binaural Speech from Mono Audio", as suggested in the review process, sounds quite reasonable.
Summary: The paper considers adversarial attacks via a surrogate model constructed using data from a different domain. The authors propose a defense against such attacks via a special kind of adversarial training inspired by the idea of domain adaptation. The idea can be useful but raises a lot of questions, especially when looking at the evaluation of the proposed approach.
##########################################################################
Reasons for score:
I vote for a reject: some findings are intriguing, but the experimental results are questionable. The first major concern is why the authors consider NLP models and attacks in the paper. It is much easier to work with image datasets, and if the general idea is new, I suggest starting from that point to verify that the considered domain adaptation works well in this scenario. Also, the proposed attack is not new. It is just a surrogate-model attack, but using a surrogate model trained on data from a different domain (as the authors suggest, due to the unavailability of the initial domain data). Also, for this new attack, the authors don't compare against a surrogate-model attack trained on same-domain data, which would be an interesting comparison. The authors use only one dataset, which is a bit strange for modern papers. For this dataset, they don't provide a full study, limiting the scope of experiments to particular pairs of source-target domains. From the paper, it is not clear how widely applicable the obtained results are. The comparison is not complete. There are a lot of options to be tuned for alternative approaches like adversarial training or other defenses. The hyperparameter selection for them has a crucial effect on their success. So, we can't say that the proposed approach works better than others.
#########################################################################
Major concerns:
* Only one dataset is considered. I think that the inclusion of additional datasets (at least three) would improve the paper and make the authors' conclusions more solid.
* The usage of surrogate models trained on another dataset is not new for general adversarial attacks [1 (mentioned in the paper), 2] or for adversarial attacks in NLP [3].
* LSTM is not the state-of-the-art model for processing NLP data.
* 4.2: What attack do you use? It is not explicitly specified, so the results can't be verified by replication of the described experiments.
* Table 2 would benefit from adding the after-attack accuracy for the original domain. If it is similar to the presented accuracies, then why bother with a new method?
* The Table 3 comparison is not fair, as we have no details about the training for each approach; e.g., we don't know how many additional examples are added during adversarial training. Also note that the state of the art for adversarial training is different from what is described in the paper. See [4, 5].
* Table 4: for which model is the after-defense accuracy presented? It should be different for the LSTM/GRU/CNN attack models.
* Tables 2, 3, 4: I suggest keeping the list of pairs (target domain, substitute domain) the same for all tables, to be sure that the presented examples are not cherry-picked (also, please consider running your approach on all pairs (target domain, substitute domain) and aggregating all these results).
* Domain adaptation models, from my experience, are not easy to train. It would be interesting to assess the quality of the models for different runs of Learn2Weight (is it stable? etc.).
1. Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow.
Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016a.
2. Cheng, S., Dong, Y., Pang, T., Su, H., & Zhu, J. (2019). Improving black-box adversarial attacks with a transfer-based prior. In Advances in Neural Information Processing Systems (pp. 10934-10944).
3. Fursov, I., Zaytsev, A., Kluchnikov, N., Kravchenko, A., & Burnaev, E. (2020). Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers. arXiv preprint arXiv:2006.11078.
4. Shafahi, A., Najibi, M., Ghiasi, M. A., Xu, Z., Dickerson, J., Studer, C., ... & Goldstein, T. (2019). Adversarial training for free! In Advances in Neural Information Processing Systems (pp. 3358-3369).
5. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. ICLR, 2017.
#########################################################################
Proposed minor improvements:
Table 1 demonstrates one example that breaks the semantics of the attacked sentence. Can you provide good examples of why your approach works?
Definition 1 is not a definition: is X one instance or many instances? The definition also does not specify that X and X' should be similar.
Equation 1: why do you avoid standard equation numbering, i.e. \begin{equation} \label{eq:sample_equation} sample text \end{equation}?
<doc-sep>Summary: This paper is about generating adversarial examples for some target model and protecting against such attacks. The authors consider a setting where an adversary has access to data from a domain "similar to the target" and can use this data to train a surrogate model. Using this surrogate model, an adversary can generate adversarial examples that apparently also fool the target model. The authors then propose a defense mechanism against this type of attack, Learn2Weight. This is a learnt network that, for a given example, returns a perturbation of the target model's weights, which is applied to the target before inference. This model is trained by the defender on synthetic domains generated as perturbations of the target data.
Overall, this type of attack is interesting. The paper is well organized and written, and easy to follow. Enough background is given for a reader to follow without the need to research around or go to the appendix. Well done on clarity! I do have a problem understanding how effective this attack is (compared to other black-box attacks) and how the proposed defense compares to standard domain generalization methods like learning domain-invariant features.
1) One concern I have is about the practicality and availability of such "similar" domains. For testing, the authors used Amazon multi-domain sentiment classification, where domains are easily available. But how would you attack a pre-trained ImageNet model, for example?
- What domains are similar?
- Furthermore, how much data from these similar domains do you need to train a good enough surrogate model?
- Also, you don't really have a way to verify that your data is close to the actual target data.
2) Definition 2: f(A, W_T) = f_S(A) requires access to your model f, so I would not call this type of attack "without access to the target model".
3) How does this attack compare to other black-box attacks that use the target model? This really should be in Table 2.
If other attacks are able to degrade the target model's performance more than this type of attack, it is of less value to defend against a weaker attack.
4) Algo 3 - what are the adversarial perturbations you are talking about?
5) I am not sure Algorithm 2 is the best way of doing this. Why not try one of the domain generalization techniques (e.g., train on all domains with an adversarial head that tries to distinguish between domains, or MMD, or similar)? Maybe this way you won't need the Learn2Weight model at all (since you would already be learning only domain-invariant features).
Minor:
- Table 2: What are you bolding? I would expect the bolded result to be per source model (book) and the worst performance obtained (so the dvd attack gives the lowest after-attack accuracy). You are bolding "baby", which is the weakest domain (on which your attack model is trained) for an attack.
- Algo 2: Compute weights of f trained on TY=W_T-W_T (just assign 0s?)
<doc-sep>In this paper, the authors propose a Learn2Weight framework to defend against similar-domain adversarial attacks. Experimental studies on the Amazon dataset are conducted to verify the proposed Learn2Weight. The paper is not easy to follow. The presentation and organization should be further improved. Here are the detailed comments:
(1) Adversarial attacks are widely used in various application domains, e.g., computer vision [ref1] and reinforcement learning [ref2]. It is necessary to discuss these related works and highlight the difference and importance of adversarial attack methods on NLP tasks.
[ref1] Adversarial Examples that Fool both Computer Vision and Time-Limited Humans
[ref2] Minimalistic Attacks: How Little it Takes to Fool Deep Reinforcement Learning Policies
(2) The authors highlight "domain adaptation theory" several times. Please give a clear description of what it is.
(3) Where is Table 1 used in the main content?
(4) Regarding Definition 2, the following two points are unclear: (1) Is f_S(A) the true label of A? Based on Figure 1(a), only correctly classified source samples are used, while the definition does not show this. (2) Why is f(A,W_T) = f_S(A)? f is the target classifier; are you generating the domain-invariant samples?
(5) The rationale of the similar-domain adversarial attack is confusing. It would be more reasonable to use source data to help generate target adversarial samples X' which confuse the classifier so that the label deviates, f(X) \neq f(X'), where X is the original target sample. However, the paper generates source adversarial samples, which may naturally confuse the target classifier due to the domain divergence. It is unclear why and how these source adversarial samples can contribute to the robustness of the target classifier.
(6) Regarding the accuracy drops in Table 2, they are quite possibly caused by the data shift between different domains. How can the contributions of data shift and adversarial perturbation to the accuracy drops be differentiated?
(7) The technical part is not easy to follow. Sections 5.1 to 5.3 are not linked well. It is necessary to give more content on the motivation and flow of these algorithms instead of just putting them in algorithm charts.
(8) Why is target data used in Algorithm 2 and also in the transfer loss optimization? In the introduction, target domain information is assumed to be unavailable. Moreover, Algorithm 2 is meant to reduce the domain divergence (if I understand correctly). I am quite curious how the proposed method differs from other transfer learning methods.
Update: Thanks for the authors' response.
After reading the response and the other reviewers' comments, I think the paper needs to be further improved, and thus I will keep my score.
The submission considers a new attack model for adversarial perturbation in a framework where the attacker has access neither to the trained model nor to the data used for training the model. The submission suggests a "domain adaptation inspired attack": learn a different model on a similar domain and generate the adversarial perturbations using that model. The authors then also develop a defense for this type of attack and provide some empirical evaluations of the resulting losses on a few NLP benchmark datasets. The paper refers to the literature on domain adaptation theory to motivate the suggested defense, but this analysis remains on an intuitive (rather than a formally rigorous) level. Furthermore, the empirical evaluation does not compare to a variety of attacks, and the defense is only evaluated with respect to the self-suggested attack. This is a very minimal bar for a defense to meet. The reviewers have criticized the submission for the rather minimal extent of the empirical evaluation. Given that the submission also doesn't provide a sound theoretical analysis for the proposed attack and defense, I agree with the reviewers that the submission does not provide sufficient novel insight for publication at ICLR. In contrast to some of the reviewers, I do find it legitimate (and maybe even recommendable) to focus on one chosen application area such as NLP. I don't see a requirement to also present experiments on image data or reinforcement learning applications. However, I would recommend that the authors highlight more explicitly what general lessons a reader would learn from their study. This could be done through a more extensive and systematic set of experiments or a thorough analysis in a well-defined theoretical framework.
This paper presents a detailed analysis of pruning heuristics and their application to early pruning. It thoroughly analyzes magnitude-based pruning, loss-preservation-based pruning, and gradient-norm-based pruning. The paper demonstrates the results on the CIFAR-10 and CIFAR-100 datasets. It is very timely research that guides the audience on which heuristic is better. My major concern is the novelty over existing pruning heuristics, since the techniques have all been proposed before. The other concern is the evaluation and the scale of the dataset. Given that the results in Table 2 differ by less than a percent, and CIFAR training is very noisy, it's hard to tell the difference. Just as the Lottery Ticket hypothesis works on CIFAR but does not work on ImageNet, different pruning heuristics need to be verified on the large-scale ImageNet dataset in order to be convincing.
<doc-sep>## Summary
This paper studies different families of pruning criteria and their impact on training dynamics (especially early training). Stemming from the observations, the authors provide improvements to the 1st- and 2nd-order saliency methods.
## Pros
- The authors provide simple and useful explanations of various pruning criteria that are based on the Taylor approximation of the loss function.
- Even though the authors don't mention this in the contributions, they propose some improved versions of existing criteria, for example the updated Taylor score $\theta^2 g(\theta)$ or absolute-valued GraSP. This is great, and it might be worth focusing on these criteria further and providing more evidence of their usefulness. Currently, they seem a bit arbitrary. For example, why not the third power $\theta^3 g(\theta)$ or additive biasing of magnitude $(g(\theta)+c)\cdot\theta$? I recommend the authors also run their versions in the unstructured setting.
## Cons
- The authors choose to focus on structured pruning since the resulting networks are dense and acceleration is straightforward. However, they miss an important work on structured pruning [1]. This relatively well-known work shows that pruned (structured) networks can be trained to full accuracy from scratch. In other words, their value lies in doing some kind of architecture search over layer widths. The motivation of the work needs to be revisited in light of these results. Since we can retrain pruned networks from scratch, it probably doesn't matter which neuron we choose and therefore which criterion is better. Unstructured pruning doesn't have this training-from-scratch issue, and I recommend the authors at least include, and maybe shift the focus to, unstructured pruning.
- "but requires specially designed hardware (Han et al. (2016a)) or software (Elsen et al. (2020)). While results in this paper are applicable in both settings, our experimental evaluation focuses on structured pruning due to its higher relevance to practitioners." All neural networks require special hardware if you want to accelerate them. I think a better motivation here is to point out the difficulties of accelerating sparse operations and the limited availability/support for such operations in existing frameworks. And I am not sure how useful structured pruning algorithms are given the results of [1].
- "The larger the magnitude of parameters at a particular instant, the smaller the model loss at that instant will be." This is likely to be true in simple settings; however, it is not a sufficient condition, especially for networks with batch norm.
You can arbitrarily scale neurons if there is a batch norm, and you can come up with an arbitrary ordering if needed. I recommend re-phrasing this observation and/or stating the assumptions better (I don't remember seeing any assumption on the network itself). How would regularization or gradient noise affect this statement?
- "Thus, the parameter with the most negative value for Θ(t)g(Θ(t)) is likely to also have a large, negative value for Θ(t)H(Θ(t))g(Θ(t))" This is not clear to me. Assume a 1D case where Θ(t) = -1; g(Θ(t)) = 2; H(Θ(t)) = -1 -> Θ(t)g(Θ(t)) = -2; Θ(t)H(Θ(t))g(Θ(t)) = 2. I can see the correlation in the figure, but it doesn't seem like an obvious thing. Maybe because the Hessian doesn't have many negative eigenvalues?
## Rating
I found the results and analysis interesting; however, the motivation needs to be updated. The work would also benefit from including unstructured pruning experiments.
## Minor Points
- "Recent works focus on pruning models at initialization (Frankle & Carbin (2019);..." The Lottery Ticket paper prunes after training and shows the existence of some initializations that achieve good performance.
- Equations 6/7: in $\frac{dL}{dt} = ||g(\theta)||^2$, assuming gradient descent, shouldn't there be a learning rate?
- "...than magnitude-agnostic techniques." Which methods are these? As far as I see, all methods use magnitude information in their formulas directly or indirectly.
- In Table 1, I recommend the authors bold both scores if they lie within the std of each other, so that we can identify which improvements are significant.
- It would be nice to show how the temperature parameter is used for GraSP.
[1] https://arxiv.org/abs/1810.05270
<doc-sep>Summary: The authors study proposed importance metrics for pruning neurons/channels in deep neural networks and analyze what properties of parameters are favored by each approach by studying the relationship between model parameters, gradients, 2nd-order derivatives, and loss. Through this analysis they develop a rich understanding of the consequences of different pruning criteria and use their understanding to propose modifications to existing techniques that produce higher-quality models across different settings.
Pros: The framework used by the authors is clear and easy to understand but also very general. The authors' mix of empirical results and theoretical analysis makes a convincing case for the accuracy of their observations. The authors go beyond observation and analysis and use their insights to design new approaches to pruning that outperform existing techniques. The paper is well written and well organized.
Cons: This paper has few limitations. The main limitation is that all experiments were conducted on relatively small datasets (CIFAR). Given that it has been shown that some techniques in model compression produce state-of-the-art results on small tasks but fail on larger models and datasets [1, 2], I'd encourage the authors to further validate their insights on a larger dataset (i.e., ImageNet).
Comments: I found that the authors waited a long time to explain the term "gradient flow", which was important in Sections 1-3 but not fully detailed until the start of Section 4. On page 1 the authors say in parentheses that gradient flow is "gradient descent with infinitesimal learning rate", but I found this explanation was not clear. The second sentence of Section 4, "the evolution over time of model parameters, gradient, and loss", was much clearer.
I'd encourage the authors to work some of these details into the text earlier.

References:
1. https://arxiv.org/abs/1902.09574
2. https://arxiv.org/abs/2003.03033

<doc-sep>

The paper contributes to explaining why saliency measures used for pruning trained models may (or may not) also be effective for pruning untrained or minimally trained models, by developing the relationship between those saliency measures and different forms of the norm of the model parameters, based on the evolution of the model parameters via gradient flow (essentially derivatives with respect to time). This result leads to several interesting interpretations that could shed some light on ongoing efforts to understand recent methods for pruning early on (e.g., pruning at initialization or after minimal training) and on potential extensions to existing saliency measures. The idea of employing gradient flow is novel for this purpose and seems to be accurately executed. The main concern is that there is a gap between the flow model and the actual optimization method used in this work (SGD with momentum), or more generally the standard optimization methods for deep learning. In this regard, the claim of "evolution dynamics" seems a bit exaggerated and remains theoretical; strictly speaking, the experiments are not entirely valid to support it either. (Minor) The related work is written as if pruning were only done via saliency-based methods (e.g., "pruning frameworks generally define importance measures"), without taking into account various others, such as optimization-based methods employing sparsity-inducing penalty terms. On a different but related note, the writing becomes a bit loose when it comes to referencing existing methods; it is worth correcting this and clarifying the scope/focus of this work.

Further questions:
- Why do you study structured pruning *only*? The provided reasons ("unstructured pruning requires specially designed hardwares or softwares" or "higher relevance to practitioners") do not seem valid enough if the purpose really lies in analysis. Can you provide any results for unstructured pruning?
- Can you provide evidence to support the claim "GraSP without large temperature chooses to prune earlier layers aggressively" (besides Raghu et al. 2017)?
- Based on Tables 1 and 2, the proposed extension to the loss-preservation method works best, while the differences across methods seem a bit marginal. Is my understanding correct?
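For readers unfamiliar with the gradient-flow formulation discussed in the reviews above, here is a minimal sketch of the standard identity (the sign and scaling conventions used in the paper's Equations 6/7 may differ):

$$\dot{\theta}(t) = -\nabla_\theta L(\theta(t)) \quad\Longrightarrow\quad \frac{dL}{dt} = \nabla_\theta L(\theta(t))^\top \dot{\theta}(t) = -\big\lVert \nabla_\theta L(\theta(t)) \big\rVert^2 .$$

Under discrete gradient descent with learning rate $\eta$, i.e. $\theta_{k+1} = \theta_k - \eta\, \nabla_\theta L(\theta_k)$, the first-order change in the loss per step is approximately $-\eta\, \lVert \nabla_\theta L(\theta_k) \rVert^2$, which is where the learning-rate factor raised by one reviewer would enter.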
This paper proposes a broad framework for unifying various pruning approaches and performs detailed analyses to make recommendations about the settings in which each approach may be most useful. Reviewers were generally excited by the framework and analyses, but had some concerns regarding scale and the paper's focus on structured pruning. However, the authors included new experiments, which mostly addressed reviewer concerns. Overall, I think this is a strong paper that will likely provide much-needed grounding for pruning frameworks, and I recommend acceptance.
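For reference, the families of saliency criteria debated above are commonly written in roughly the following forms for a parameter $\theta$ with gradient $g(\theta)$ and Hessian $H(\theta)$ (a sketch of typical formulations from the pruning literature; the exact definitions in the paper under review may differ):

$$\text{magnitude-based: } |\theta| \text{ (or } \theta^2\text{)}, \qquad \text{loss-preservation (first-order Taylor): } |\theta\, g(\theta)|, \qquad \text{gradient-norm-based (GraSP-style): scores built from } \theta\, H(\theta)\, g(\theta),$$

where the last family ranks parameters by their effect on the gradient norm, with sign conventions varying across papers.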
Summary. This paper aims to explain dropout through the lens of game-theoretic interactions. Let $x$ denote the input of a deep neural network (DNN); intuitively, the interaction between two variables $x_i$ and $x_j$ quantifies how much the presence/absence of the $j$-th variable affects the contribution of the $i$-th variable to the output of the DNN. With this definition in place, the authors show theoretically and empirically that dropout reduces the interactions between input variables of DNNs. As this type of interaction turns out to be strongly correlated with overfitting, the authors suggest that dropout alleviates overfitting by reducing interactions between input variables (or activation units) of DNNs. Based on this understanding of dropout, an alternative regularization technique is proposed, which explicitly penalizes pairwise interactions between variables.

Strengths.
1. The paper is well written and clearly presented.
2. Although it is already well known (or at least widely accepted) in the community that dropout reduces dependencies among activation units in DNNs, the explanation of dropout from the perspective of game-theoretic interactions is interesting, and it is supported both theoretically and empirically.

Weaknesses.
1. Hyperparameter settings (e.g., optimization-related ones) needed to reproduce the results are not provided. It is not clear what dropout rate was used in the experiments. Is it 0.5? In any case, different dropout rates should be investigated before claiming the superiority of the proposed interaction loss (regularization) over dropout.
2. Experiments are carried out on only one task (classification), one type of data (images), and one family of DNNs (convolutional neural networks). However, the paper draws quite general conclusions regarding the understanding of dropout from the perspective of game-theoretic interactions. Therefore, considering at least one more task involving a different type of data and another family of DNNs would reinforce the findings of this paper.
3. A computational-time analysis of the proposed interaction loss and training-time comparisons with dropout are lacking.

Additional comments.
1. Dropout is used both at convolutional and at fully connected layers. However, one can argue that applying dropout to convolutional layers does not make sense owing to the sparsity of connections in this type of layer.
2. I would recommend revising the title of the paper. What is proposed is more an alternative regularization to dropout than an improvement of the latter.

<doc-sep>

*Paper Summary*
The authors provide a novel interpretation of dropout regularization using Banzhaf interactions, a tool from game theory.

*Pros*
* The authors are able to mathematically prove that dropout is capable of suppressing neural co-adaptations, the latter being one of the causes of overfitting. Visualizations are also provided in this respect on a dataset for face analysis.
* Through their mathematical analysis, the authors are able to improve upon classical dropout training by making it more compatible with batch normalization, so that these two classical regularization strategies show better complementarity.

*Cons*
* Some of the results do not read well, like Table 3 or Figure 4, but this is really minor and fixable.

*Preliminary Evaluation*
I believe that the overall analysis provided by the authors is complete and interesting, so I am inclined to call for full acceptance of the paper, which I deem suitable for a venue like ICLR.
To improve their paper, I would encourage the authors to further investigate the following aspect: since the authors establish principled connections between dropout and neural activations in several places, it would be very interesting to discuss the relationship between the present work and another paper [Gomez et al., Targeted Dropout, NeurIPS Workshops 2018], in which a computational variant of dropout is proposed such that the dropout rate depends on neural activations.

*Post-Rebuttal Evaluation*
I have carefully read the response provided by the authors and checked the revised manuscript. I confirm my preliminary acceptance rating.

<doc-sep>

Summary: This paper analyzes the effect of dropout on interactions between units in a neural network. The strength of the interaction is measured using a metric from game theory that quantifies the interaction between players in a cooperative game. The paper shows that dropout reduces high-order interactions (as measured by this metric) and that the reduction in interaction is correlated with better generalization. The paper introduces a new regularizer that explicitly minimizes the metric and claims that using this regularizer instead of dropout has some advantages.

Pros:
- The idea that dropout reduces overfitting by breaking up complex co-adaptations and regularizing interactions is widely believed to be true. However, this paper tries to explicitly quantify the amount of interaction and presents theoretical and experimental evidence that interaction is reduced as a result of having dropout.

Cons:
- The proposed metric is hard to compute exactly, since it requires summing over exponentially many terms, each term requiring a forward pass through the network.
- The assumptions made in computing this metric approximately seem unclear to me (Appendix H). I could not understand what probability distributions are being expressed and why. In particular, how is the term in Eq. 38 approximated by the one in the first line of Eq. 41? The paragraph after Eq. 40 was also unclear.
- It is not discussed how this metric for evaluating interaction strength compares to something conceptually simpler like the Hessian $\nabla^2_{i,j} L$, which directly measures the dependence of the network's loss on pairs of input variables and whose magnitude is proportional to the interaction strength.
- The paper mentions that an advantage of the proposed loss is that the weight $\lambda$ applied to the interaction loss can be explicitly controlled, whereas the strength of dropout cannot be controlled (Section 4, "advantages": "Unlike the interaction loss, people cannot explicitly control the strength of dropout ..."). This does not seem correct: the dropout probability provides such a control mechanism for dropout.
- For the experimental results in Table 3, it is not mentioned what value of the dropout probability was used, whether this value was tuned for each architecture, or which network layers dropout was applied in. These factors can have a significant impact on overall performance. On the other hand, the $\lambda$ parameter for the proposed interaction loss is tuned, so the resulting comparison is not fair.
- It is not clear what additional insight this metric provides about dropout, beyond confirming what is intuitively apparent: that having randomly dropped neurons will make it harder for the network to learn high-order interactions.
Other comments and suggestions:
- The introduction includes a discussion of the Banzhaf value without describing what it means. The concept of the Banzhaf value might be new to many readers in the ML community. I would suggest including a short explanation to give some intuition about what it means before discussing it in more detail.
- "the output of the DNN corresponds to the score f": would it make sense to say that the (negative) loss corresponds to the score f, rather than the output of the network?
- "award" -> "reward" or "utility"? (I am not familiar with the game theory literature, so I am not sure whether "award" is a commonly used term there.)
- The title of the paper is a bit misleading, as it seems to suggest that the paper is about using dropout in game theory (i.e., solving problems in game theory using dropout).

Post-rebuttal: The authors addressed the concerns about the clarity of the paper and added useful additional experiments. I will increase my score to 7.

<doc-sep>

Summary: The paper proves that dropout can suppress the strength of interactions between input variables, from the perspective of game theory. It further improves the utility of dropout by introducing an explicit interaction loss. Experimental results verify the theoretical proof and the effectiveness of the proposed loss.

Strengths:
1. The paper introduces a new game-theoretic perspective for understanding dropout.
2. Experiments are conducted on various datasets to support the theoretical proof and the proposed interaction loss.

Concerns:
1. Although I have no background in game theory, I tried my best to understand the terminology and the analysis. However, I do not have the ability to verify the correctness of the proof, so I cannot evaluate the main contribution of this paper. Regarding the experimental results, the conclusion that dropout suppresses input interactions is not a new story.
2. It would be more interesting if the authors could further explain the disharmony between dropout and batch normalization from the perspective of game theory.
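For readers unfamiliar with the game-theoretic terminology raised in the reviews above, the pairwise Banzhaf interaction between players $i$ and $j$ in a cooperative game with player set $N$ and score function $f$ is commonly written as follows (a sketch of the standard definition; the paper's exact formulation, normalization, and choice of $f$ may differ):

$$I(i,j) \;=\; \frac{1}{2^{|N|-2}} \sum_{S \subseteq N \setminus \{i,j\}} \Big[ f\big(S \cup \{i,j\}\big) - f\big(S \cup \{i\}\big) - f\big(S \cup \{j\}\big) + f\big(S\big) \Big].$$

The sum runs over all subsets of the remaining players, which is why exact computation is exponential in $|N|$ and why, as noted above, each term requires its own forward pass; sampling a manageable number of subsets is the usual way to approximate it.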
The paper introduces a game-theoretic framework to improve our understanding of dropout. All reviewers appreciated the contribution of the paper. While they had a number of questions and suggestions, almost all of them were adequately addressed. Three reviewers are satisfied and recommend acceptance, while a lone reviewer is on the fence and admits to being less knowledgeable about game theory. Overall, I think this paper makes a solid contribution to ICLR.

Dataset Card for the Meta-Review Dataset

Dataset Summary

The Meta-Review dataset is based on the ORSUM dataset proposed in the paper "Meta-review Generation with Checklist-guided Iterative Introspection" by Zeng et al. It was downloaded from the authors' official GitHub repository: https://github.com/Mankeerat/orsum-meta-review-generation
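A minimal loading sketch with the Hugging Face `datasets` library is shown below; the repository identifier is a placeholder, and the split and the `Input`/`Output` column names are assumptions based on the description above, so adjust them to match the actual files.

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual Hub path of this dataset.
dataset = load_dataset("your-username/meta-review")

# Each record is assumed to pair the concatenated reviews ("Input")
# with the corresponding meta-review ("Output").
example = dataset["train"][0]
print(example["Input"][:500])   # first part of the reviews
print(example["Output"][:500])  # first part of the meta-review
```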

Supported Tasks and Leaderboards

Multi-Document Summarization

Languages

English

Dataset Structure

Data Instances

[More Information Needed]

Data Fields

[More Information Needed]

Data Splits

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

[More Information Needed]

Contributions

[More Information Needed]
