| Field | Type |
| --- | --- |
| paper_id | string (length 19-21) |
| paper_title | string (length 8-170) |
| paper_abstract | string (length 8-5.01k) |
| paper_acceptance | string (18 classes) |
| meta_review | string (length 29-10k) |
| label | string (3 classes) |
| review_ids | sequence |
| review_writers | sequence |
| review_contents | sequence |
| review_ratings | sequence |
| review_confidences | sequence |
| review_reply_tos | sequence |
paper_id: nips_2022_t6O08FxvtBY
paper_title: Advancing Model Pruning via Bi-level Optimization
paper_abstract: The deployment constraints in practical applications necessitate the pruning of large-scale deep learning models, i.e., promoting their weight sparsity. As illustrated by the Lottery Ticket Hypothesis (LTH), pruning also has the potential to improve their generalization ability. At the core of LTH, iterative magnitude pruning (IMP) is the predominant pruning method for successfully finding ‘winning tickets’. Yet, the computation cost of IMP grows prohibitively as the targeted pruning ratio increases. To reduce the computation overhead, various efficient ‘one-shot’ pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as IMP. This raises the question of how to close the gap between pruning accuracy and pruning efficiency. To tackle it, we pursue the algorithmic advancement of model pruning. Specifically, we formulate the pruning problem from a fresh and novel viewpoint, bi-level optimization (BLO). We show that the BLO interpretation provides a technically-grounded optimization basis for an efficient implementation of the pruning-retraining learning paradigm used in IMP. We also show that the proposed bi-level optimization-oriented pruning method (termed BiP) is a special class of BLO problems with a bi-linear problem structure. By leveraging such bi-linearity, we theoretically show that BiP can be solved as easily as first-order optimization, thus inheriting its computational efficiency. Through extensive experiments on both structured and unstructured pruning with 5 model architectures and 4 datasets, we demonstrate that BiP finds better winning tickets than IMP in most cases while being computationally as efficient as one-shot pruning schemes, achieving a $2-7\times$ speedup over IMP for the same level of model accuracy and sparsity.
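To make the alternating structure described in the abstract concrete, below is a minimal first-order sketch of one BiP-style round: the lower level takes a single SGD step on the masked weights, and the upper level takes a projected gradient step on continuous mask scores, with a hard-threshold projection enforcing the target sparsity. All names (`project_topk`, `bip_like_round`, the flat-parameter `loss_fn` interface, the learning rates) are illustrative assumptions, and the implicit-gradient correction that the paper derives from the bi-linear structure is deliberately omitted; this is not the authors' implementation.

```python
import torch

def project_topk(scores: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Hard-threshold projection for the upper level: keep the largest
    (1 - sparsity) fraction of mask scores and zero out the rest."""
    k = int((1.0 - sparsity) * scores.numel())
    flat = scores.detach().flatten()
    mask = torch.zeros_like(flat)
    mask[torch.topk(flat, k).indices] = 1.0
    return mask.view_as(scores)

def bip_like_round(theta, scores, lower_batch, upper_batch, loss_fn,
                   lr_theta=0.01, lr_scores=0.01, sparsity=0.9):
    """One alternating round (hedged sketch): a single lower-level SGD step on the
    masked weights, then an upper-level projected-gradient step on the mask scores.
    `theta` and `scores` are tensors with requires_grad=True; `loss_fn(params, x, y)`
    evaluates the network from a flat parameter vector (an assumed interface)."""
    x, y = lower_batch
    mask = project_topk(scores, sparsity)

    # Lower level: retrain the non-pruned weights under the current (fixed) mask.
    lower_loss = loss_fn(mask * theta, x, y)
    g_theta, = torch.autograd.grad(lower_loss, theta)
    theta = (theta - lr_theta * g_theta).detach().requires_grad_(True)

    # Upper level: the gradient w.r.t. the mask scores flows through the bi-linear
    # product scores * theta; project back onto the sparsity constraint afterwards.
    x, y = upper_batch
    upper_loss = loss_fn(scores * theta.detach(), x, y)
    g_scores, = torch.autograd.grad(upper_loss, scores)
    scores = (scores - lr_scores * g_scores).detach().requires_grad_(True)

    return theta, scores, project_topk(scores, sparsity)
```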
paper_acceptance: Accept
meta_review: The reviewers had significantly diverging opinions on this manuscript. The main issue under discussion was whether the framing of this paper as a lottery ticket work was correct, given that the main evaluations use no reinitialization or rewinding. On balance, I think that while one reviewer was very negative about the paper, the disagreement was mostly terminological.

The substantial concern is whether the evaluation comparison (wherein BLO with no rewinding is compared against methods that use rewinding in Figure 3) is fair. The authors respond to this by providing comparisons in Figures 6A and A11 that evaluate rewinding on some tasks. However, these figures seem to show that the accuracy of BLO is completely insensitive to rewinding—and even to complete reinitialization in the original lottery-ticket sense. This raises the natural question: why not just evaluate primarily in the reinitialized case, where there's no need to redefine the term "winning ticket"? That is, the whole presentation seems to be backwards. The way it should be presented/evaluated is:

* First, we show that BLO outperforms other methods in the classic lottery ticket regime, where we reset all the weights to initialization (100% rewind) — this would replace the present Figure 3. This would be a fair comparison, comparing classical-lottery-ticket-setting pruning methods to each other.
* Next, we show that one advantage of BLO is that unlike other winning-ticket-finding methods, its performance is invariant to rewinding. That is, if what we want is just accuracy of the pruned model (and not to do some sort of scientific investigation of the lottery ticket hypothesis) then BLO outperforms other methods when we don't do any rewinding at all.

With this sort of presentation, I think the authors could have avoided the negative reviewer's objections. Despite these presentational/terminological issues, I think that there's enough technical contribution here with Figures 6A and A11 to move forward with acceptance, especially considering the enthusiasm of the other reviewers and the technical novelty of the bi-level approach. The empirical results _are_ there (and Figures 6A and A11 show a clear connection with the lottery ticket work), they're just presented strangely. And I think there are not any fundamental technical issues here that forbid acceptance.
label: val
[ "k0_Hr_4oX9c", "SU9wf_zsJj7", "Q1S0FPLOen_", "iqE3UuIVU6w", "KV5uNe2P1f", "w0uMl2b2CQu", "j4zV7io0Kz5", "7_xIW1Rvdi1", "py64YZVA8fG", "9mgpDJnpnHQ", "xQKR7_vc4kZ", "kvP_reBAY95", "LgRy-aZggvz", "ckz79KyIMNi", "f5j1qts4oPL", "1ClJcuzWCYS", "56-6HcaPDxx", "05nXc4nigBB", "16FWk0si9c8", "jBWSEYbF4ke", "56J1_z-Dcb", "VE5Xepxej7f", "6LCTzm0sycR", "3pfCIFhqNF", "xK2RVWyjAna", "axWLRoemTAz", "_HzCMw77WGT", "yH1lA6wFPk8", "jzC7z6ztw2o", "cQiBjp_bI0P", "OX68iAHfmv", "K-BISb3ND_a", "D-YYStKy9FM" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for posting [Response to authors](https://openreview.net/forum?id=t6O08FxvtBY&noteId=Q1S0FPLOen_). Please see our follow-up clarification and response below.\n\n**Q1: Since novelty of contributions was listed as one of the initial weaknesses — No, that wasn't in my review or my response.**\n\nA1: The comment \n\n> Finally, you are not the first to do bi-level optimisation for pruning. \"Differentiable Network Pruning for Microcontrollers\" by Liberis and Lane would be an example prior art (and I'm sure there are others).\n\n is what we understood as a criticism to our technical contribution. We responded to this comment, and you accepted the novelty regarding IG.\n\n**Q2: Finding Lottery tickets for a pre-trained model has limited practical value. The primary value of LTH is to try to reduce the cost of training models. The original IMP is not supposed to be practical in any real sense. It is not a practical benchmark in any sense other than accuracy.**\n\nA2: We have a different understanding on “winning tickets”. Yes, the trainability is a great merit of the original LTH, while the quality of pruning (finding subnetwork with improved generalization) is a more important property to us and should not be omitted. For example, LTH offers sparsity which not only maintains the great generalization ability but also benefits striking a graceful balance with other performance metrics, e.g., OOD robustness [R1] and transfer learning ability [R2]. IMP, albeit being very computationally expensive, is still the scheme that finds subnetworks with the highest accuracies. Our proposal offers a new optimization basis to find subnetworks as accurate or more than IMP but significantly more efficient than IMP. Those points have been clearly stated in the paper and our previous responses.\n\n> [R1] Diffenderfer, James, et al. “A winning hand: Compressing deep networks can improve out-of-distribution robustness.” Advances in Neural Information Processing Systems 34 (2021): 664-676.\n>\n> [R2] Chen, Tianlong, et al. “The lottery tickets hypothesis for supervised and self-supervised pre-training in computer vision models.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n**Q3: Clearly categorize the baselines and make it clear why they may / may not beat your method. It's not clear as currently written.**\n\nA3: From our point of view, we did categorize our pruning baselines and provided the rationale behind them. As shown in Sec. 2, those one-shot, initialization-based pruning methods are motivated and incorporated from the computation efficiency perspective; see our response on [the full spectrum of the computational efficiency and quality trade-off](https://openreview.net/forum?id=t6O08FxvtBY&noteId=KV5uNe2P1f).\n\n**Q4: I am not required to agree with the other reviewers on everything.**\n\nA4: Citing the other reviewers’ assessment was not to force the reviewer to have to agree with them. Instead, we aim to provide justifications from other reviews why the score of **1 poor** for presentation does not seem a quite fair rating to our work.\n", " **Q1: Since novelty of contributions was listed as one of the initial weaknesses — No, that wasn't in my review or my response.**\n\nA1: The comment \n\n> Finally, you are not the first to do bi-level optimisation for pruning. 
\"Differentiable Network Pruning for Microcontrollers\" by Liberis and Lane would be an example prior art (and I'm sure there are others).\n\n is what we understood as a criticism to our technical contribution. We responded to this comment, and you accepted the novelty regarding IG.\n\n**Q2: Finding Lottery tickets for a pre-trained model has limited practical value. The primary value of LTH is to try to reduce the cost of training models. The original IMP is not supposed to be practical in any real sense. It is not a practical benchmark in any sense other than accuracy.**\n\nA2: We have a different understanding on “winning tickets”. Yes, the trainability is a great merit of the original LTH, while the quality of pruning (finding subnetwork with improved generalization) is a more important property to us and should not be omitted. For example, LTH offers sparsity which not only maintains the great generalization ability but also benefits striking a graceful balance with other performance metrics, e.g., OOD robustness [R1] and transfer learning ability [R2]. IMP, albeit being very computationally expensive, is still the scheme that finds subnetworks with the highest accuracies. Our proposal offers a new optimization basis to find subnetworks as accurate or more than IMP but significantly more efficient than IMP. Those points have been clearly stated in the paper and our previous responses.\n\n> [R1] Diffenderfer, James, et al. “A winning hand: Compressing deep networks can improve out-of-distribution robustness.” Advances in Neural Information Processing Systems 34 (2021): 664-676.\n>\n> [R2] Chen, Tianlong, et al. “The lottery tickets hypothesis for supervised and self-supervised pre-training in computer vision models.” Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n**Q3: Clearly categorize the baselines and make it clear why they may / may not beat your method. It's not clear as currently written.**\n\nA3: From our point of view, we did categorize our pruning baselines and provided the rationale behind them. As shown in Sec. 2, those one-shot, initialization-based pruning methods are motivated and incorporated from the computation efficiency perspective; see our response on [the full spectrum of the computational efficiency and quality trade-off](https://openreview.net/forum?id=t6O08FxvtBY&noteId=KV5uNe2P1f).\n\n**Q4: I am not required to agree with the other reviewers on everything.**\n\nA4: Citing the other reviewers’ assessment was not to force the reviewer to have to agree with them. Instead, we aim to provide justifications like other reviewers why the score of **1 poor** for presentation does not sound like a quite fair rating for our work.\n", " > Since novelty of contributions was listed as one of the initial weaknesses\n\nNo, that wasn't in my review or my response. I will not raise the contribution score unless I am satisfied that the work has been evaluated correctly.\n\n> These variants are well documented and exactly follow the line of research on LTH.\n\nI agree.\n\n> We would like to stress that the definition of winning tickets is evolving with time and is never rigid or constrained to a specific paper, especially in the development of different initialization strategies of non-zero model weights given a pruning mask [R1-R4]. 
Therefore, we disagree that our definition of winning tickets is “far away” from the literature.\n\nIt is misleading to suggest that finding lottery tickets for a pre-trained model is as much as an achievement as it is for randomly initialised models. It has limited practical value, because the primary value of the LTH if we can exploit it in practice is that we can try to reduce the cost of training models. Taking pretrained models and pruning them does not yield this benefit.\n\nThis is crucial. If you have a method for pruning after training -- which is what you do have -- then you should evaluate it like one.\n\n> The original LTH paper also pruned the model starting from pre-trained weights.\n\nFollowing on from the previous point, the original iterative magnitude pruning method is not supposed to be a \"practical\" method in any real sense. It is proposed as a method which gets acceptable results, demonstrating the existence of lottery tickets, but without necessarily being practical. It is not a practical benchmark in any sense other than accuracy.\n\n>  We also respectfully bring the reviewer’s attention to the fact that all the other reviewers rated our presentation either “good” or “excellent”. We strongly disagree that writing is a weakness of this work, although we are unfortunate to learn that the reviewer rated our submission with only a score of 1 (poor presentation).\n\nI am not required to agree with the other reviewers on everything. If we always did, then there would be no point in having multiple reviewers. Other reviewers will also have a different research background to me, which will naturally lead us to have different perspectives on this work.\n\n> comparing methods that prune at random initialization, such as GraSP, was very common in evaluating pruning methods that start from pre-trained models; please see [R3, R7], and Table 2 and Table 4 in [R5] for comparing with the IMP.\n\nThere are two points to be clear about here: 1) just because someone else has done it doesn't make it right. And 2) I am not even suggesting that you don't include the comparisons, but that you clearly categorise the baselines and make it clear why they may / may not beat your method. It's not clear as currently written.\n\n> Finally, please note that the ProsPr [R6] baseline was suggested by the reviewer in your original comment, and we thus added this to our experiment. Therefore, we hope that we will not be criticized by comparing ourselves with ProsPr.\n\nI don't plan to criticise you for including proper baselines. Per my comment above, you should accurately categorise the competing methods.\n\n> A5: We thank the reviewer for the clarification. We believe we had already discussed the justification for the hessian-free assumption in our original submission (line 232-235 in the current revision). Our response to you reiterated the same.\n\nWhen you read my review again you will see that I was not even complaining about this assumption. I agree it is reasonable. I hoped you would find the additional context useful if you weren't already aware of it.\n\n> We would like to point out that the other three reviewers rate the presentation of our work\n\nSee comment above. I am not required to agree at this stage. I respectfully disagree with their opinion, and I am allowed to. 
My complains regarding the evaluation of your work and meaningful comparison to baselines remain.\n\n> Once again, we would appreciate it if the reviewer could be specific about your reason for not being convincing.\n\nSee above regarding the utility of pruning methods that prune before, during, and after training.\n\n> It is confusing to us that addressing the reviewer's comments led them to score our paper as a \"Strong Reject\"\n\nPoor evaluation and limited impact. \n\n====\n\nI'll stay at my score.", " We thank the reviewer very much for your response, but we strongly disagree with the current criticism and the reasons for lowering the score. \n\n**Q1: I will accept the novelty regarding IGs, but it is important to be specific and contextualize your contributions**\n\nA1: It is encouraging for authors to see that the reviewer accepts the novelty of IG. In fact, our response and submission (Lines 211-216 and the section “Optimization foundation of BIP”) have clearly described the technical challenge of IG and our technical novelty. Since **novelty of contributions** was listed as one of the initial weaknesses, we would appreciate it if the contribution score was updated to reflect the fact that we could address the concern regarding the novelty of contributions.\n\n\n**Q2: Your definition of winning tickets is far away from the standard definition used by the literature. The conflation of what a \"lottery ticket\" is deeply concerning to me. Taking pre-trained dense models and assessing the lottery ticket hypothesis in this context is far from the norm in the literature.**\n\nA2: We respectfully disagree with this comment. \n\n**First**, we do not think we have conflated the idea of \"lottery ticket\" in our paper. To our best understanding, the reviewer’s concern remains in the term “winning ticket” (note that we did not use the term “lottery ticket” in our paper). As we have replied in our [previous response](https://openreview.net/forum?id=t6O08FxvtBY&noteId=xK2RVWyjAna), the definition of the “winning ticket” covers the early-epoch rewinding variant [R1] and the no-rewinding (i.e., finetuning) variant [R2] as special cases. These variants are well documented and exactly follow the line of research on LTH. We would like to stress that the definition of winning tickets is evolving with time and is never rigid or constrained to a specific paper, especially in the development of different initialization strategies of non-zero model weights given a pruning mask [R1-R4]. Therefore, we disagree that our definition of winning tickets is “far away” from the literature.\n\n**Second**, we disagree with the comment, “Taking pre-trained dense models and assessing the lottery ticket hypothesis in this context is far from the norm in the literature.” The original LTH paper also pruned the model starting from pre-trained weights. This always occurs at the first pruning stage of IMP used in LTH. To justify our argument, please refer to [the repo of IMP for LTH contributed by Jonathan Franckle](https://github.com/google-research/lottery-ticket-hypothesis) or [IMP for LTH implemented in PyTorch](https://github.com/rahulvigneswaran/Lottery-Ticket-Hypothesis-in-Pytorch/). In the first repo, please see [Line 61](https://github.com/google-research/lottery-ticket-hypothesis/blob/1f17279d282e729ee29e80a2f750cfbffc4b8500/foundations/experiment.py#L61). 
In the second repo, please see [Line119-120](https://github.com/rahulvigneswaran/Lottery-Ticket-Hypothesis-in-Pytorch/blob/34a8c9678406a1c7dd0fec4c9f0d25d017be55fb/main.py#L119). In both repos, the pruning starts from the pre-trained model. \n\nIt is unconvincing to us that the reviewer claimed that our strategy is “far from” the norm in the literature but without providing any evidence of such a claim.\n\n>[R1] Renda, Alex, Jonathan Frankle, and Michael Carbin. \"Comparing rewinding and fine-tuning in neural network pruning.\" arXiv preprint arXiv:2003.02389 (2020).\n>\n>[R2] Chen, Tianlong, et al. \"Long live the lottery: The existence of winning tickets in lifelong learning.\" International Conference on Learning Representations. 2020.\n>\n>[R3] Chen, Tianlong, et al. \"Coarsening the granularity: Towards structurally sparse lottery tickets.\" arXiv preprint arXiv:2202.04736 (2022).\n>\n>[R4] Chen, Tianlong, et al. “The lottery ticket hypothesis for pre-trained bert networks.” Advances in neural information processing systems 33 (2020): 15834-15846.", " **Q3: It is even less clear to me why you have chosen to frame your work this way.**\n\nA3: It is not clear which aspect of our work the reviewer is not clear about. We would appreciate it if the reviewer could be specific about what “frame your work this way” is referring to. \n\nWe have provided a detailed description of the problem and our motivation in Section 2. We wish to advance the optimization basis of model pruning for obtaining high pruning accuracy while being significantly more computationally efficient. To develop such a pruning scheme, we leverage the bilevel framework and show that pruning is a very special class of bilevel optimization problem that can be solved accurately and efficiently. We also respectfully bring the reviewer’s attention to the fact that all the other reviewers rated our presentation either “good” or “excellent”. We strongly disagree that writing is a weakness of this work, although we are unfortunate to learn that the reviewer rated our submission with only a score of 1 (poor presentation). \n\n\n**Q4: Your justification for why you are comparing your method -- which prunes a pre-trained model -- with GraSP / ProsPr which prunes a random initialisation, is unconvincing. It is not sufficiently well explained in the paper. It is not really surprising that early bird is not competitive with your work: it is pruning throughout training, but as you have clarified in the rebuttal, you are actually pruning pre-trained models.**\n\nA4: We respectfully disagree with this comment. \n\nFirst, we would like to make it very clear that, comparing methods that prune at random initialization, such as GraSP, was very common in evaluating pruning methods that start from pre-trained models; please see [R3, R7], and Table 2 and Table 4 in [R5] for comparing with the IMP. This is because GraSP is a representative pruning method with the least computation complexity, and it also aims to find winning tickets; see our motivation section Sec 2 and Fig. 2. Therefore, this practice does not hurt the validity of our approach, but only supports it, as has been argued in aforementioned works. \n\nSecond, we have **not only** compared with methods such as GraSP, but have also compared with methods such as IMP and Hydra, which prune from **pretrained** models. 
By putting all these methods together, we hope to provide a full spectrum of the computational efficiency and quality trade-off of different types of model pruning methods; see the comparison with BiP in Fig. 5.\n\nThird, about the early-bird approach, we have made it very clear in our initial submission (before the rebuttal; see Line 282), since the proposed BiP approach starts from a pre-trained model like IMP, thus, the early-bird approach was not included in our initial submission because we know that comparing with it would be unfair. Knowing that such a comparison is unfair, we nevertheless add it in the revised manuscript **at the request of the reviewer**, and the new results just confirmed our earlier intuition. However, the reviewer seems to criticize us because we followed your request? \n\nFinally, please note that the ProsPr [R6] baseline was suggested by the reviewer in your original comment, and we thus added this to our experiment. Therefore, we hope that we will not be criticized by comparing ourselves with ProsPr.\n\n\n> [R5] Wang, Chaoqi, Guodong Zhang, and Roger Grosse. \"Picking winning tickets before training by preserving gradient flow.\" arXiv preprint arXiv:2002.07376 (2020).\n>\n> [R6] Alizadeh, Milad, et al. \"Prospect pruning: Finding trainable weights at initialization using meta-gradients.\" arXiv preprint arXiv:2202.08132 (2022).\n>\n> [R7] Chen, Xiaohan, et al. \"The elastic lottery ticket hypothesis.\" Advances in Neural Information Processing Systems 34 (2021): 26609-26621.", " **Q5: I agree that this was a sensible assumption in my original review. However, my point was to suggest that you need to justify it better in the text as it's not strictly true.**\n\nA5: We thank the reviewer for the clarification. We believe we had already discussed the justification for the hessian-free assumption in our original submission (line 232-235 in the current revision). Our response to you reiterated the same.\n\n**Q6: I am experienced and published in this field, and I couldn't follow what the paper was doing. I do believe that the authors may be able to write a more convincing and easy to follow paper.**\n\nA6: We have never doubted that the reviewer is well experienced and published in this field, and in fact we believe that all our reviewers are well qualified to review our paper. We would like to point out that the other three reviewers rate the presentation of our work with a 3 (Reviewer [cu5X](https://openreview.net/forum?id=t6O08FxvtBY&noteId=cQiBjp_bI0P)), 4 (Reviewer [c9rK](https://openreview.net/forum?id=t6O08FxvtBY&noteId=OX68iAHfmv)), 4(Reviewer [iDDi](https://openreview.net/forum?id=t6O08FxvtBY&noteId=K-BISb3ND_a)). Reviewer cr9K summarizes our paper as “The Paper is concise and well written.” and “The theory is easy to follow.” Reviewer iDDi appreciates our writing with the comment, “The paper is well organized and easy to follow.” We really do not think that our paper should be scored by “1, poor presentation”.\n\n**Q7: I am not convinced it is interesting to the community either.**\n\nA7: It is unfortunate that the reviewer feels like this work is not interesting to the community. Once again, we would appreciate it if the reviewer could be specific about your reason for not being convincing. \n\nSince we do not know what makes the reviewer unconvincing, we cannot really address your concern. We can only cite other reviewers’ comments and high ratings to support our claim of novelty and contributions. 
As a side note, we have made a substantial effort and tried our best to address each of your previous questions (e.g., the IG-related one you accepted). If there is a positive point, we hope the reviewer could give this credit to our work. \n \n**Q8: I am actually going to lower my score and argue for rejection on the basis that the paper's framing is deeply confusing.**\n\nA8: We are disappointed that the reviewer feels so negatively about our work. We have tried our best to address the concerns the reviewer had in the initial review and addressed the novelty concern (as the reviewer themselves acknowledged), and added additional experiments suggested by the reviewer (with our proposed scheme being significantly better).\n\nIt is confusing to us that addressing the reviewer's comments led them to score our paper as a \"Strong Reject\", which as per reviewer guidance, is defined as \"a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations\". It is not clear from our discussion that we have \"major technical flaw\", \"poor evaluation\", \"limited impact\", \"reproducibility\", or \"unaddressed ethical considerations\" in our paper. If there is, again we would appreciate the reviewer to be specific about it. ", " Dear reviewer cu5X:\n\nWe are glad to see that our previous responses have addressed some of your concerns. For your remaining concern, please allow us to provide the following responses. \n\nFollowing your comments, we have been running additional experiments on (ResNet18, ImageNet) and (MobileNet, TinyImageNet). \n\nWe just finished the experiments on (ResNet18, ImageNet) and show our results in **[Figure](https://ibb.co/9HcGbs1)** (Figure A17 in our revision). As we can see, consistent with the existing results on (ResNet50, ImageNet) in Figure 3, our proposed BiP method remains superior to the IMP baseline. We would also like to mention that pruning on ImageNet over multiple sparse ratios is very time-consuming (especially for IMP). Conducting the suggested experiment has taken 12 V100 GPUs (nearly all of our computing resources). We hope that the reviewer can see our effort in addressing your comment. \n\nWe are running another experiment on (MobileNet, TinyImageNet) and will update this response once the results are ready. \n\nWe kindly point out that although Table 1 only contains CIFAR-10/100 results, it can readily be extended to cover our existing results on **TinyImageNet and ImageNet**. This is because this table is associated with Figure 3 but provides a clearer description of the accuracy of the winning ticket vs. its highest pruning ratio. To make this point clearer, we provide the enriched Table 1 in **[Figure](https://ibb.co/DgWFv94)** and Table A4 in the revision (we did not add the additional results to Table 1 due to the space limitation). \n\nIt is a great comment to consider less redundant models. Yet, we kindly bring the reviewer’s attention to two aspects. \n**First**, we have considered less redundant model architectures ResNet20 and ResNet56, which have only 0.27M and 0.85M parameters and much less than ResNet18 (11M) (see Table A1). As shown in Fig. 3 and 4, our method consistently outperforms the baselines, although less redundant models would be more difficult to prune. **Second**, when we choose model architectures for various datasets, we stick to the dense models widely used in the pruning literature, e.g., the NeurIPS’21 benchmark [R1]. 
We can achieve state-of-the-art accuracy on each dataset (note that the less redundant models typically correspond to lower test accuracy). **Third**, compared to experiments in many recent pruning works [R1-R5], the architectures and datasets considered in our work are indeed comprehensive: We kindly bring reviewers' attention to our results in the appendix, e.g., Figs A6, A7, A9, A10 besides Figure 3 and 4 in the main texts. By contrast, [R2, R3] did not consider VGG16, and [R1, R4, R5] did not consider ResNet56.\n\nIn summary, we highly appreciate the reviewer's effort in providing many useful comments on our submission. Based on our existing experiments (in both the main paper and the supplement) and the newly-added experiments (in both the first-round response and the current response), we sincerely hope that the remaining concern has been alleviated and you could be open to adjusting your rating. \n\nThank you very much,\n\n\n\n> [R1] Ma, Xiaolong, et al. “Sanity checks for lottery tickets: Does your winning ticket really win the jackpot?.” Advances in Neural Information Processing Systems 34 (2021): 12749-12760.\n>\n> [R2] Singh, Sidak Pal, and Dan Alistarh. \"Woodfisher: Efficient second-order approximation for neural network compression.\" Advances in Neural Information Processing Systems 33 (2020): 18098-18109.\n>\n> [R3] Evci, Utku, et al. \"Rigging the lottery: Making all tickets winners.\" International Conference on Machine Learning. PMLR, 2020.\n>\n> [R4] Peste, Alexandra, et al. \"Ac/dc: Alternating compressed/decompressed training of deep neural networks.\" Advances in Neural Information Processing Systems 34 (2021): 8557-8570.\n>\n> [R5] Alizadeh, Milad, et al. \"Prospect pruning: Finding trainable weights at initialization using meta-gradients.\" arXiv preprint arXiv:2202.08132 (2022).\n", " GR1:\n\nI will accept the novelty regarding IGs, but it is important to be specific and contextualise your contributions\n\nGR2:\n\nSee below. The framing of this work strikes me as extremely confusing\n\nQ1/A1:\n\nYour definition of winning tickets is far away from the standard definition used by the literature. Taking a pre-trained model and pruning it is not usually within the realm of the lottery ticket hypothesis. It is even less clear to me why you have chosen to frame your work this way. There's nothing wrong with simply having a technique that prunes pre-trained models -- but the casual reader who isn't as familiar with this part of the literature will be confused that this is actually what you are doing.\n\nQ2/A2:\n\nSee answer to previous question.\n\nYour justification for why you are comparing your method -- which prunes a pre-trained model -- with GraSP / ProsPr which prune a random initialisation, is unconvincing. It is not sufficiently well explained in the paper. I don't see how they can be meaningfully compared with the additional context provided by the rebuttal.\n\nQ4/A4:\n\nIt is not really surprising that early bird is not competitive with your work: it is pruning throughout training, but as you have clarified in the rebuttal, you are actually pruning pre-trained models.\n\nQ8/A8:\n\nI agree that this was a sensible assumption in my original review. However, my point was to suggest that you need to justify it better in the text as it's not strictly true.\n\nQ11/A11:\n\nThis seems like a bit of an excuse. In theory timing the models is not a large amount of work -- I've done it myself for other works. 
It remains an important thing to assess: if only a small number of parameters are pruned early in the model then there will be only minor speedup. You could provide this information in a future version of this work.\n\n====\n\nI am actually going to lower my score and argue for rejection on the basis that the paper's framing is deeply confusing. I am experienced and published in this field, and I couldn't follow what the paper was doing. The submission cannot solely be rated on its technical merits but also on how it is written and how the community will benefit from its publication.\n\nThe conflation of what a \"lottery ticket\" is deeply concerning to me. Taking pre-trained dense models and assessing the lottery ticket hypothesis in this context is far from the norm in the literature. I am not convinced it is interesting to the community either.\n\nMy issue is primarily with the writing and framing. I do believe that the authors may be able to write a more convincing and easy to follow paper if they actually focused solely on post-training pruning, and clearly compared to the state of the art baselines in this area. Magnitude pruning, for example, is an old baseline.\n\nThis is a case of interesting work combined with confusing writing. At other venues I review for I would ask for accept with major revisions -- which I would want to see -- but this is not an option at NeurIPS. As such, I will strongly argue for rejection.", " I would like to thank the authors' feedback and revisions in the manuscript, which could solve my several concerns. But, I still want to keep my initial score because I don’t agree that the experiments are enough to prove this pruning method works well for general architectures. The models have been known as highly redundant models. Unfortunately, I strongly think we need another baseline except for CIFAR-10. At least, I think this paper should include many results by using ImageNet or other tasks. For Table 1, I also keep my opinion on CIFAR-10/100 datasets. I hope I could find this paper as a refined version with various and challenging architectures/dataset in near future, but I also respect other reviewers’ opinions. \n", " Dear Reviewer iDDi:\n\nWe are very grateful for your acknowledgment of our novelty and contributions. We have tried our best to address your questions. We list all the paper revisions in the **[summary of paper revisions and additional experiments](https://openreview.net/forum?id=t6O08FxvtBY&noteId=ckz79KyIMNi)**. As there are only two days left for author-reviewer discussions, we sincerely hope that you can provide us feedback before the discussion phase ends, and we are happy to answer any follow-up questions. Once again, thank you for your time and suggestions on our work.\n\nBest regards,\n\nAuthors", " Dear Reviewer c9rK:\n\nWe are very grateful to your valuable comments. Thank you again for your prompt response. We have made a substantial effort in responding to your questions and have made an additional experiment inspired by your comment. The results and discussions can be found in our **[further response](https://openreview.net/forum?id=t6O08FxvtBY&noteId=1ClJcuzWCYS)**. We also list all the paper revisions and additional experiments in the **[summary of paper revisions and additional experiments](https://openreview.net/forum?id=t6O08FxvtBY&noteId=ckz79KyIMNi)**. 
As only two days are left for author-reviewer discussions, we are happy to answer any follow-up questions.\n\nBest regards,\n\nAuthors", " Dear Reviewer mQEK:\n\nWe are very grateful to your constructive comments. We have made a substantial effort in responding to your questions and listed all the paper revisions and additional experiments in the **[summary of paper revisions and additional experiments](https://openreview.net/forum?id=t6O08FxvtBY&noteId=ckz79KyIMNi)**. As there are only two days left for author-reviewer discussions, we sincerely hope that you could provide us feedback before the discussion phase ends, and are happy to answer any follow-up questions.\n\nBest regards,\n\nAuthors", " Dear Reviewer cu5X:\n\nWe are very grateful to your constructive comments. We have made a substantial effort in responding to your questions and listed all the paper revisions and additional experiments in the **[summary of paper revisions and additional experiments](https://openreview.net/forum?id=t6O08FxvtBY&noteId=ckz79KyIMNi)**. As there are only two days left for author-reviewer discussions, we sincerely hope that you could provide us feedback before the discussion phase ends, and are happy to answer any follow-up questions.\n\nBest regards,\n\nAuthors", " Dear reviewers,\n\nWe are glad to receive your valuable and constructive comments. We have made a substantial effort to clarify your doubts and enrich our experiments in the rebuttal phase. Below is a summary of revisions:\n\nChanges suggested by the reviewers:\n- (Reviewer cu5X): (1) We discuss the importance of unstructured pruning; see Line 30\\~32. (2) We add related work on unstructured pruning; see Line 94. (3) Following Table 1, we add the results for Tiny-ImageNet and ImageNet to Table A4.\n\n- (Reviewer c9rK): (1) We add a vertical separator line for the datasets in Table1. (2) We add related work on optimization-based pruning; see Line 98.\n\n- (Reviewer mQEK): (1) We discuss “Differentiable Network Pruning for Microcontrollers” and talk about our paper’s novelty on bi-level optimization; see Line 162\\~165. (2) We clarify the definition of the winning ticket; see Line 104\\~108. \n\n- (Reviewer iDDi): (1) We add related work on L0-based pruning; see Line 97.\n\n- (All reviewers): Due to paper limitations, we defer the major revisions such as moving ablation studies and algorithm blocks to the main texts in the next version. Thank you again for the suggestions.\n\nBelow is a summary of the additional results:\n\n- Reviewer [cu5X](https://openreview.net/forum?id=t6O08FxvtBY&noteId=cQiBjp_bI0P):\n 1. We conducted new experiments to investigate the performance of BiP if more training epochs are used at a higher pruning rate; see results in **[Figure](https://ibb.co/SfWD49m)** and analysis in [Q4&A4](https://openreview.net/forum?id=t6O08FxvtBY&noteId=56J1_z-Dcb). The results are also included in the revised manuscript (Figure A12).\n 2. We conducted new experiments on (ResNet18, ImageNet); see results in **[Figure](https://ibb.co/9HcGbs1)**. The results are also included in the revised manuscript (Figure A17). \n- Reviewer [c9rK](https://openreview.net/forum?id=t6O08FxvtBY&noteId=jzC7z6ztw2o):\n 1. We verified the convergence of pruning masks by tracking the IoU score between the masks at two adjacent epochs; see results in **[Figure](https://imgbb.com/hcYQQX3)** and analysis in [Q3&A3](https://openreview.net/forum?id=t6O08FxvtBY&noteId=jzC7z6ztw2o). The results are also included in the revised manuscript (Figure A14).\n 2. 
We showed the training dynamics of BiP using different lower-level SGD steps and demonstrated the effectiveness of using one-step SGD in the lower level of BiP; see results in **[Figure](https://imgbb.com/qYdpJHv)** and analysis in [Q4&A4](https://openreview.net/forum?id=t6O08FxvtBY&noteId=jzC7z6ztw2o). The results are also included in the revised manuscript (Figure A13).\n 3. We showed the influence of the different upper- and lower-level data batch schemes on the pruning accuracy of BiP; see results in **[Figure](https://ibb.co/hZhJR0P)** and analysis in [Further Response](https://openreview.net/forum?id=t6O08FxvtBY&noteId=1ClJcuzWCYS). The results are also included in the revised manuscript (Figure A16).\n- Reviewer [mQEK](https://openreview.net/forum?id=t6O08FxvtBY&noteId=D-YYStKy9FM):\n 1. For unstructured pruning settings, we add the baselines Early-Bird and ProsPr under CIFAR-10, CIFAR-100, and TinyImageNet datasets (6 model architecture + dataset combinations). The results can be found in **[Figure](https://ibb.co/TbqXNFc)**. A detailed discussion can be found in [Q3&A3](https://openreview.net/forum?id=t6O08FxvtBY&noteId=axWLRoemTAz). The results are also included in the revised manuscript (Figure A9).\n 2. For structured pruning settings, we add ProsPr as the latest initialization-based baseline (4 model architecture + dataset combinations). The results can be found in **[Figure](https://ibb.co/K0PTPnP)**. A detailed discussion can be found in [Q7&A7](https://openreview.net/forum?id=t6O08FxvtBY&noteId=axWLRoemTAz). The results are also included in the revised manuscript (Figure A10).\n 3. To verify the insensitivity of BiP to rewinding epochs, we conducted additional experiments on 3 more dataset-model architecture combinations (ResNet18 + CIFAR-10, ResNet18 + CIFAR-100, ResNet18 + TinyImageNet). The results can be found in **[Figure](https://ibb.co/Dzf5YQN)**. More detailed discussions can be found in [Q10&A10](https://openreview.net/forum?id=t6O08FxvtBY&noteId=_HzCMw77WGT). The results are also included in the revised manuscript (Figure A11).\n", " Dear reviewer cu5X,\n\nThank you very much for taking the time to review our paper. We cherish your comments very much. In our earlier posted responses, we have made a point-to-point response (see [Part I](https://openreview.net/forum?id=t6O08FxvtBY&noteId=jBWSEYbF4ke), [Part II](https://openreview.net/forum?id=t6O08FxvtBY&noteId=56J1_z-Dcb), [Part III](https://openreview.net/forum?id=t6O08FxvtBY&noteId=VE5Xepxej7f) respectively) to alleviate your concern. If you have additional comments, we are happy to address them.\n\nAuthors\n", " Inspired by the reviewer’s follow-up comments, we conducted some additional experiments to further study the influence of the different upper- and lower-level data batch schemes on the pruning accuracy of BiP (**[Figure](https://ibb.co/hZhJR0P)**) as well as its convergence (**[Figure](https://ibb.co/ggkfxjD)**). We consider two variants of BiP using different data batch schemes, termed BiP (reverse batch) and BiP (same batch), respectively. For BiP with reverse batch (a special mismatch case), the data batches from the same data loader are distributed to the upper-level SPGD and the lower-level SGD in the reverse order per epoch. For BiP with the same batch, the upper- and lower-level always adopt the same data batch throughout the training. By default, BiP refers to our current implementation that uses different random data batches at two levels. 
\n\nAs the results of BiP (reverse order) and BiP suggest, data batch mismatch is beneficial to BiP. Even in the deterministic reverse order setting, the superior accuracy performance of BiP remains (see **[Figure](https://ibb.co/hZhJR0P)**). And its convergence behavior is similar to BiP (our current implementation) and outperforms BiP (same batch) (see **[Figure](https://ibb.co/ggkfxjD)**). We feel that it is a promising research direction to seek the optimal data batch curriculum for BiP’s lower-level and upper-level optimization in the future. \n\nThanks again for this great comment. \n", " We are happy to learn that our response has addressed most of your concerns adequately. Per reviewer’s encouragement, we would like to make a preliminary study to see how BiP performs if the upper-level data batch is in the reverse order of the lower-level data batch. We will update this response once we have these results.", " Most of my raised concerns are adequately satisfied, and I would indeed want to see future works that try with diverse batches, and study if it affects the convergence (either positively or negatively).", " Dear Reviewer mQEK,\n\nThank you very much for sparing your time to review our paper. In the posted response, we have tried our best to (1) clarify possible misunderstandings regarding the novelty of our work (see [General Response Link](https://openreview.net/forum?id=t6O08FxvtBY&noteId=3pfCIFhqNF) ), (2) conduct a series of additional experiments requested in the comments (see [summary of added experiments](https://openreview.net/forum?id=t6O08FxvtBY&noteId=6LCTzm0sycR)), and (3) address your questions point by point (see [Part I](https://openreview.net/forum?id=t6O08FxvtBY&noteId=xK2RVWyjAna), [Part II](https://openreview.net/forum?id=t6O08FxvtBY&noteId=axWLRoemTAz), [Part III](https://openreview.net/forum?id=t6O08FxvtBY&noteId=_HzCMw77WGT) respectively).\n\nWe hope that you can find our effortful response convincing. If you have additional comments, please feel free to let us know. We will try our best to address them.", " We thank the reviewer for the detailed feedback. Please find our detailed responses below. \n\n**Q1: I have a concern on the real effectiveness of unstructured pruning although this method is a novel idea for unstructured pruning.**\n\n**A1:** This is a great comment. First of all, we agree that unstructured pruning may not result in direct acceleration, and it is the reason for considering structured pruning for the actual model deployment. However, this does NOT restrict the novelty of our work. Yes, we considered unstructured pruning as the representative use case, but the proposed BLO (bi-level optimization)-oriented pruning algorithm (that we call BiP) is generic and can be readily extended to the use case of structured pruning. This is also one of our novelties. We have shown the superior performance of BiP in two structured pruning settings, namely, filter pruning and channel pruning across various datasets (see results in Figure 4, A5, A6, A7, and Lines 298-301, 337-343). We would like to kindly stress that the main goal of our work is to re-think the optimization basis of network pruning through the lens of BLO and to attain high pruned model accuracy (like IMP) and high computational efficiency (like OMP). \n\nSecondly, although unstructured pruning cannot bring in realistic acceleration, it still serves as a technology basis for sparse learning of deep neural networks (DNNs). 
Besides efficiency, many other metrics are strongly related to sparsity, e.g., adversarial robustness [R1], out-of-distribution generalization [R2], and model transferability [R3]. All the aforementioned work focused on unstructured pruning. Thus, unstructured pruning is still an important topic to study. We will surely add the above discussion in the revision to improve our motivation for unstructured pruning. Thank you for your great comment!\n\n> [R1] Sehwag, Vikash, et al. \"Hydra: Pruning adversarially robust neural networks.\" Advances in Neural Information Processing Systems 33 (2020): 19655-19666.\n>\n> [R2] Diffenderfer, James, et al. \"A winning hand: Compressing deep networks can improve out-of-distribution robustness.\" Advances in Neural Information Processing Systems 34 (2021): 664-676.\n>\n> [R3] Chen, Tianlong, et al. \"The lottery tickets hypothesis for supervised and self-supervised pre-training in computer vision models.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n**Q2: This method can achieve outperformed results on lower pruning rates less than 50%. But, there is less noticeable improvement on effective pruning rates. So, in my opinion, the resulting accuracy seems to be not the main contribution of this paper.**\n\n**A2:** At the first glance, the most significant accuracy improvement of BiP over baselines stays in the sparse regime of less than 50%. Yet, this is not precise. First, our results in Tab. 1 have shown that BiP consistently identifies a ‘winning’ subnetwork with the highest sparsity level compared to the other baseline methods (see Line 333-335). Second, we agree that the effectiveness of pruning inevitably drops as the network becomes increasingly sparser. Thus, how to improve the effectiveness of pruning in the extremely-sparse regime is an open challenge. Third, we have shown that BiP consistently improves accuracy in both unstructured and structured pruning scenarios. Based on the above, we believe that the accuracy performance of BiP is still the main contribution of this paper. ", " **Q3: The experimental results seem to be so restricted and most of the results on this paper are limited to the ResNet arch. and CIFAR-10 dataset. I think this paper should extend the experimental results to various architectures (ex. MobileNet, Transformer, …) or bigger dataset with smaller model (ex. ResNet-18 on ImageNet). The experimental designs are so restricted. It should be extended to other challenging datasets and architectures.**\n\n**A3:** We respectfully disagree that our experiments are restricted. We have followed the experiment setup in the latest pruning benchmark [R4] to make sure that our improvement is consistent and solid across different settings. Reviewer c9rK also recognized the strength of our experiments in her/his comment “the experiment section is strong involving ablations across different hyperparameters.” To be specific, our experiment plans include **four datasets** (CIFAR-10, CIFAR-100, Tiny-ImageNet (200 classes), and ImageNet (1000 classes)), **five model architectures** (ResNet-20, ResNet-56, ResNet-18, ResNet-50, and VGG-16), and **three pruning settings** (unstructured pruning, filter-wise structured pruning as well as channel-wise structured pruning); see a summary of our experiment setting in Sec. 4.1 and some results in Tab. 1, Figure 3, 4, A6, and A7. 
\n\nThank you for suggesting extending the experimental results to the other architectures (e.g., MobileNet, and Transformer) and ResNet-18 on ImageNet. We plan to include the new experiment on (ResNet-18, ImageNet) as a supplement to (ResNet-50, ImageNet) in Fig. 3. We will report the results once this experiment is finished. If time is allowed, we will add the suggested experiment on MobileNet. Yet, we feel that pruning Transformer could be quite different from pruning CNN-type networks; E.g., token sparsification is a key step of transformer pruning [R5]. We believe this is outside the scope of our work. Thus, we will leave this study for future research. \n\n> [R4] Ma, Xiaolong, et al. “Sanity checks for lottery tickets: Does your winning ticket really win the jackpot?.” Advances in Neural Information Processing Systems 34 (2021): 12749-12760.\n>\n> [R5] Rao, Yongming, et al. \"DynamicViT: Efficient vision transformers with dynamic token sparsification.\" Advances in neural information processing systems 34 (2021): 13937-13949.\n\n**Q4: I’m curious why this method works with consistent time regardless of pruning rates. Or if I put much time for higher pruning rates, then can I gather higher accuracy by using this method?**\n\n**A4:** The reason for constant time regardless of pruning rates is that the proposed optimization scheme is independent of the specific value of the pruning ratio. This differs from IMP, where the number of pruning rounds increases as the pruning rate increases. In BiP (Eq. (1)), the pruning ratio is regarded as an upper-level constraint. Thus, even if a higher pruning rate is applied, the only change made to BiP is using a different projection threshold in the upper-level SPGD; see (m-step) at Line 261. This does not require the increase of optimization steps. \n\nBased on the reviewer’s suggestion, we also conducted new experiments to allow more time (training epochs) for BiP when a higher pruning rate is considered; see results in [Figure](https://ibb.co/SfWD49m). Specifically, we test three datasets and consider three pruning ratios (p=86.58%, 94.50%, 97.75%). For each pruning ratio, we examine the test accuracy of BiP versus the training epoch number from 50 to 500. Note that the number of training epochs in our original experiment setup was 100. As we can see, the performance of BiP gets saturated when the epoch number is over 100. Thus, increasing the training epoch number (over 100) does not improve accuracy at a higher pruning ratio. ", " **Q5: How about bringing the Appendix B (including Figure A1) to the main manuscript?**\n\n**A5:** Thanks for your suggestion. We will add the content of Appendix B to the main manuscript in the revised version.\n\n**Q6: I don’t understand why LTH is referred as a pruning method. LTH paper is a novel paper to represent why we train the over-parameterized neural networks, not represent a new pruning method. I think IMP with gradual pruning rates was proposed in Zhu’s paper. There have been many other results on unstructured pruning methods.**\n\n**A6:** Yes, we agree with the reviewer that IMP is the name of the pruning method that we should call. We have avoided the use of ‘LTH pruning’ in the original submission and will carefully check our statement to make it preciser in the revision. 
Meanwhile, we would like to bring to the reviewer’s attention that the LTH paper and its follow-up work made some modifications/customizations to the vanilla IMP approach, e.g., the use of initialization rewinding at every pruning round [R6, R7, R8]. Thus, when we refer to IMP, it represents the IMP algorithm used to find the best winning tickets in the line of LTH research [R7, R8] rather than the one used in Zhu’s paper [R10]. Sorry for this confusion, and we will cite [R10] and clarify its difference with the LTH work in the revision.\n\nThank you very much for pointing out the additional related work [R10-R13]. We will be sure to cite them and state their differences from ours in the revised paper. Meanwhile, we would like to kindly stress that different from these work, our paper aims to re-think the optimization basis of network pruning through the lens of BLO and to develop a theoretically-grounded and effective BLO solver for model pruning that can attain high pruned model accuracy and high computational efficiency.\n\n> [R6] Frankle, Jonathan, and Michael Carbin. \"The lottery ticket hypothesis: Finding sparse, trainable neural networks.\" arXiv preprint arXiv:1803.03635 (2018).\n>\n> [R7] Frankle, Jonathan, David J. Schwab, and Ari S. Morcos. “The early phase of neural network training.” arXiv preprint arXiv:2002.10365 (2020).\n>\n> [R8] Renda, Alex, Jonathan Frankle, and Michael Carbin. \"Comparing rewinding and fine-tuning in neural network pruning.\" arXiv preprint arXiv:2003.02389 (2020).\n>\n> [R9] Ma, Xiaolong, et al. “Sanity checks for lottery tickets: Does your winning ticket really win the jackpot?.” Advances in Neural Information Processing Systems 34 (2021): 12749-12760.\n>\n> [R10] Zhu, Michael, and Suyog Gupta. “To prune, or not to prune: exploring the efficacy of pruning for model compression.” arXiv preprint arXiv:1710.01878 (2017).\n>\n> [R11] Evci, Utku, et al. \"Rigging the lottery: Making all tickets winners.\" International Conference on Machine Learning. PMLR, 2020.\n>\n> [R12] Singh, Sidak Pal, and Dan Alistarh. \"Woodfisher: Efficient second-order approximation for neural network compression.\" Advances in Neural Information Processing Systems 33 (2020): 18098-18109.\n>\n> [R13] Peste, Alexandra, et al. \"Ac/dc: Alternating compressed/decompressed training of deep neural networks.\" Advances in Neural Information Processing Systems 34 (2021): 8557-8570. ", " We have conducted a series of new experiments based on the reviewer's comments. For ease of reading, we summarize them below. \n1. For unstructured pruning settings, we add Early-Bird(one-shot pruning baseline) and ProsPr (initialization-based baseline) to the CIFAR-10, CIFAR-100, and TinyImageNet datasets (6 model architecture + dataset combinations). The results can be found in [Figure](https://ibb.co/TbqXNFc). A detailed discussion can be found in Q3&A3.\n2. For structured pruning settings, we add ProsPr as the latest initialization-based baseline (4 model architecture + dataset combinations). The results can be found in [Figure](https://ibb.co/K0PTPnP). A detailed discussion can be found in Q7&A7.\n3. To verify the insensitivity of BiP to rewinding epochs in more complicated settings, we conducted additional experiments on 3 more dataset-model architecture combinations (ResNet18 + CIFAR-10, ResNet18 + CIFAR-100, ResNet18 + TinyImageNet). The results can be found in [Figure](https://ibb.co/Dzf5YQN). 
More detailed discussions can be found in Q10&A10.\n\nWe will include the additional experiments' results in the revised version.", " Thank you very much for providing us with very constructive comments. In what follows, we begin by making a few general responses (GRs) based on the reviewer’s comments and then list our point-to-point answers to the raised questions.\n\n**GR1: Possible misunderstanding on our bi-level contribution.**\n\nThanks for pointing out the missing reference [R1] “Differentiable Network Pruning for Microcontrollers” (by Liberis and Lane) and raising the question “you are not the first to do bi-level optimization for pruning (and I’m sure there are others)”. We will be sure to cite [R1] and discuss our **novelties** (vs. [R1]) in the revision. **Yet, we still believe** that a systematic study of BLO for model pruning was lacking in the literature, and ours is the first one in this direction.\n\n1. Reference [R1] claimed using BLO for model pruning, but it refers to the **alternating optimization (AO)** procedure where pruning and training alternatively perform gradient descent. Strictly speaking, this AO process does **NOT** exactly solve a BLO problem since it excludes the derivation of implicit gradient (IG) (see Line 217 - 230 in our submission). The IG challenge is a known problem in BLO; see the optimization literature [71] in our submission. To the best of our knowledge, we, for the first time, derived the closed form of IG for BLO-oriented pruning and showed that the bi-linearity of pruning variables makes the IG-involved gradient Eq. (2) easily solvable. The computational complexity is almost the same as that of computing the first-order gradient just once, as supported by Eq. (6). Our theoretical finding was summarized in Proposition 1. And Fig. 6-(b3) made a sanity check for the importance of IG in BLO-oriented pruning. This is also a key difference from pruning methods that directly call for Darts-like formulation and approach (e.g. [R2], which was cited in [R1]), where the special BLO characteristic–bi-linearity of pruning variables–was not explored and exploited to simplify the IG computation.\n2. The advantage of BLO for model pruning was not fully exploited in the existing literature (including [R1] and [R2]). We respectfully argue that we did not see any prior work to provide the explicit BLO interpretation of IMP and disentangle the non-sparse re-training from pruning using customizable lower-level optimization tasks; see our BLO formulation and its advantages in Line 189-216. **As Reviewer [iDDi](https://openreview.net/forum?id=t6O08FxvtBY&noteId=K-BISb3ND_a) pointed out**, “The proposed BIP pruning algorithm is original and practical.”\n\n> [R1] Liberis, Edgar, and Nicholas D. Lane. “Differentiable Network Pruning for Microcontrollers.” arXiv preprint arXiv:2110.08350 (2021).\n>\n> [R2] Ning, X., Zhao, T., Li, W., Lei, P., Wang, Y., and Yang, H. DSA: More efficient budgeted pruning via differentiable sparsity allocation. arXiv preprint arXiv:2004.02164, 2020.\n\n**GR2: Clarification of baseline method selection** \n\nThe reviewer suggested several **one-shot/initialization-based baselines** (Early bird [R3], ProsPr [R4], Single-shot Structured Pruning [R5]) and questioned us why not consider the **one-shot pruning benchmark** ([R6] “Pruning Neural Networks at Initialization: Why Are We Missing the Mark?” by Frankle). 
Based on those comments, we feel that the reviewer might **mistakenly regard our proposed method as another one-shot or initialization-based pruning method**. This is not the main purpose of our work. Our goal is to seek the proper optimization basis for successful model pruning that can attain high pruned model accuracy (like IMP) without incurring a high computation cost as the model sparsity increases (namely, enjoying computation efficiency like one-shot pruning). Thus, performance-wise, **IMP is our strongest and the main baseline throughout the experiments**. Meanwhile, we also consider comparing BiP with one-shot pruning since the latter gives a lower bound on the computation complexity of model pruning. **We also conducted additional experiments based on the reviewer's suggestion to enrich our baseline methods.** However, the conclusion is consistent: BiP outperforms all the newly added baselines (see [the summary of experiments](https://openreview.net/forum?id=t6O08FxvtBY&noteId=6LCTzm0sycR)).\n\n> [R3] You, Haoran, et al. “Drawing early-bird tickets: Towards more efficient training of deep networks.” arXiv preprint arXiv:1909.11957 (2019).\n>\n> [R4] Alizadeh, Milad, et al. “Prospect pruning: Finding trainable weights at initialization using meta-gradients.” arXiv preprint arXiv:2202.08132 (2022).\n>\n> [R5] van Amersfoort, Joost, et al. “Single shot structured pruning before training.” arXiv preprint arXiv:2007.00389 (2020).\n>\n> [R6] Frankle, Jonathan, et al. “Pruning neural networks at initialization: Why are we missing the mark?.” arXiv preprint arXiv:2009.08576 (2020).", " **Q1: The use of “winning ticket” seems to be overloaded in this work. What is the definition of “winning tickets” used? Lines 35 to 37 give the more widely accepted definition, but line 325 gives a different definition which I don’t agree with.**\n\n**A1:** This is a great comment. We apologize for the imprecise statement about winning tickets at Line 325. Yes, our definition of “winning tickets” follows Lines 35-37 but covers the early-epoch rewinding variant [R7] and the no-rewinding (i.e., fine-tuning) variant [R8] as special cases. To be more concrete, a winning ticket $(\\mathbf m, \\boldsymbol \\theta^\\prime )$ is given by a pair of sparse mask $\\mathbf m$ and model “initialization” $\\boldsymbol \\theta^\\prime$, from which the non-sparse model weights are retrained to achieve the test accuracy greater than or equal to the test accuracy of the original dense model. It is worth noting that in the original LTH work [R9] $\\boldsymbol \\theta^\\prime$ was set by the random initialization $\\boldsymbol \\theta_0$ used in dense model training, namely, $\\boldsymbol \\theta^\\prime =\\boldsymbol \\theta_0$ to realize the isolated training process. However, its follow-up work [R7] found that the early-epoch rewinding strategy (which sets $\\boldsymbol \\theta^\\prime$ as an early-epoch dense model, i.e., $\\boldsymbol \\theta^\\prime =\\boldsymbol \\theta_t$ for t-epoch training) typically yields the best test accuracy than the case of rewinding to the random initialization ($\\boldsymbol \\theta^\\prime =\\boldsymbol \\theta_0$) and the case of no rewinding (i.e., $\\boldsymbol \\theta^\\prime$ is set by the currently non-pruned model weights). Yet, rewinding has a downside as it takes additional computation costs besides pruning. 
We follow the above line of work to define our winning ticket $(\\mathbf m, \\boldsymbol \\theta^\\prime )$ to produce a subnetwork that can match or surpass the performance of the dense model. If the definition of winning tickets has to be aligned with the original LTH paper [R9], we could also use the notion of “matching subnetwork” to reflect the quality of a pruning method following Chen & Frankle’s work [R10].\n\n> [R7] Renda, Alex, Jonathan Frankle, and Michael Carbin. \"Comparing rewinding and fine-tuning in neural network pruning.\" arXiv preprint arXiv:2003.02389 (2020).\n>\n> [R8] Chen, Tianlong, et al. \"Long live the lottery: The existence of winning tickets in lifelong learning.\" International Conference on Learning Representations. 2020.\n>\n> [R9] Frankle, Jonathan, and Michael Carbin. \"The lottery ticket hypothesis: Finding sparse, trainable neural networks.\" arXiv preprint arXiv:1803.03635 (2018).\n>\n> [R10] Chen, Tianlong, et al. “The lottery ticket hypothesis for pre-trained bert networks.” Advances in neural information processing systems 33 (2020): 15834-15846.\n\n**Q2: It seems that the pruning is done throughout training. This cannot reasonably be compared to IMP or Grasp: the pruning mask is set at iteration 0 (or close to 0) and does not change throughout training.**\n\n**A2:** Thank you for raising this great question. However, the comment “pruning is done throughout training” is not precise for our method (BiP). BIP is performed after dense model training as its initialization is given by the (pre-trained) dense model weights (see Line 282). This is also why we adopt the one-step gradient descent to realize model re-training (see Line 285 and Fig. 6(b1)). \n\nWe agree that BIP involves the weight retraining process (i.e., the lower-level optimization task). But this is the same as IMP, which also requires weight re-training. In IMP, the $(t+1)$-th pruning round prunes the nonzero model weights that are retrained at the end of the $t$ pruning round. Thus, the pruning mask is also updated throughout training. In this sense, we feel that it is quite reasonable to compare IMP with BiP. And IMP gives an **upper bound** of the computation complexity of model pruning (see Line 109 - 123). \n\nYes, GraSP is the method of pruning at random initialization. Thus, the pruning mask is fixed and independent of training. However, we feel that it is also necessary to compare BiP with Grasp as the latter provides a **lower bound** of the computation complexity of model pruning.", " **Q3: It is also worth noting that Grasp is not SOTA anymore: comparisons with stronger initialization-based pruning methods, you should compare to ProsPr by Alizadeh et al. (cited by your work already)**\n\n**A3:** Following your suggestion, we conduct additional experiments to compare BiP with ProsPr in unstructured pruning (see [Figure](https://ibb.co/TbqXNFc)) and structured pruning (see [Figure](https://ibb.co/K0PTPnP)). The results show that ProsPr is indeed better than GraSP but is still not as good as IMP and our method in different architecture + dataset combinations. Meanwhile, except for the unstructured pruning settings of ResNet18 pruning over CIFAR10 and CIFAR100, ProsPr, as a pruning before training, can achieve comparable performance to the state-of-the-art implementation of OMP [R11]. However, the gap between this SOTA pruning-at-initialization method and our method still exists. 
As a side note, we have covered more than one initialization-based baseline (SNIP, SynFlow) in Figure A3.\n\n> [R11] Ma, Xiaolong, et al. “Sanity checks for lottery tickets: Does your winning ticket really win the jackpot?.” Advances in Neural Information Processing Systems 34 (2021): 12749-12760.\n\n**Q4: The only valid comparison I can see would be to the Early-Bird work by You et al. – but you don’t provide any in this direction.**\n\n**A4:** Thank you for suggesting “the only valid comparison” with Early-Bird work [R11]. However, this might be a misunderstanding about our work. The suggested early-bird training in [R12] provides us with another one-shot pruning baseline. Additional experiments are conducted in [Figure](https://ibb.co/TbqXNFc) to compare with the early-bird training on 6 different settings. The [Figure](https://ibb.co/TbqXNFc) shows that early-bird training can effectively achieve comparable or even better testing performance than OMP in most different architecture+dataset combinations, which is also the main contribution of [R11]. However, the early-bird training is still not as strong as IMP in testing performance. Thus, we disagree that early-bird training is the only valid baseline for BiP. Please refer to [GR2](https://openreview.net/forum?id=t6O08FxvtBY&noteId=3pfCIFhqNF) for our clarification on the selection of baseline methods. \n> [R12] You, Haoran, et al. “Drawing early-bird tickets: Towards more efficient training of deep networks.” arXiv preprint arXiv:1909.11957 (2019). \n\n**Q5: In addition, you have provided few comparisons to methods which prune after training, and perhaps do some small amount of fine tuning.**\n\n**A5:** We respectfully disagree. Many of our baselines such as OMP and Hydra all prune after model training. This is in contrast to Grasp/SNIP/ProsPr, which are given by pruning at random initialization (i.e., before training). \n\n**Q6: You are not the first to do bi-level optimisation for pruning. “Differentiable Network Pruning for Microcontrollers” by Liberis and Lane would be an example of prior art (and I’m sure there are others).**\n\n**A6:** We respectfully disagree. Please refer to [GR1](https://openreview.net/forum?id=t6O08FxvtBY&noteId=3pfCIFhqNF).\n\n**Q7: “I am not sure you characterize the related work on structured pruning fairly. \"Single Shot Structured Pruning\" by van Amersfoort et al. and ProsPr by Alizadeh et al. both assessed this direction at initialisation.**\n\n**A7:** Thank you very much for pointing out these references. Following the reviewer’s suggestion, we conduct additional experiments to compare BiP with ProsPr in the context of structured pruning; see results in [Figure](https://ibb.co/K0PTPnP). As we can see, BiP consistently outperforms ProsPr and still stands top among all the baselines. During the rebuttal window, we were not able to add the comparison with **“Single Shot Structured Pruning”** as the codes are not released in the paper. In the revision, both aforementioned papers will be cited and discussed in the related work. \n\n**Q8: On L229 you say that we can assume ∇2l=0. I am not convinced this is actually true in practice.**\n\n**A8:** As mentioned in Line 233-235 of our submission, this hessian-free assumption is not strict for ReLU-based neural networks (NNs) as the decision boundaries of NNs with ReLU activations are piecewise linear in a tropical hyper-surface [98]. 
And in practice, this is also a reasonable assumption and has been used in BLO-involved applications such as meta-learning [99] and adversarial learning [84]. \n\n**Q9: Is there a good reason to use reference 18 as the benchmark rather than the benchmark provided by Frankle’s missing the mark work?**\n\n**A9:** We believe this is also a question related to baseline selection; see [GR2](https://openreview.net/forum?id=t6O08FxvtBY&noteId=3pfCIFhqNF). Frankle’s work is the benchmark of all different **one-shot pruning methods**. In contrast, reference [R11] provides the benchmark for the iterative magnitude pruning method.", " **Q10: Figure 6a is not enough to justify the robustness to rewinding that marks a winning ticket. You’ve provided results on the smallest model with the easiest dataset. More difficult examples are needed.**\n\n**A10:** Following the reviewer’s suggestion, we conduct additional experiments over more complex model-dataset combinations; see results in [Figure](https://ibb.co/Dzf5YQN). Our method is insensitive to rewinding epoch numbers, even for larger models (ResNet-18) or more complicated datasets (Tiny-ImageNet). A carefully tuned rewinding scheme does not lead to significant improvements, and thus, it is not necessary for BiP to rewind (and retrain) to achieve superior performance.\n\n**Q11: The final line of the conclusion is a bit concerning: in practice, structured pruning methods could prune the same number of parameters but yield totally different speedups. It is an important thing to measure. How fast are the channel pruned networks?**\n\n**A11:** Thanks for your suggestion. The main purpose of this paper is to advance the optimization foundation of the pruning problem through the lens of BLO. As the hardware acceleration is not the main purpose of this paper, we only mentioned this could be a future research direction to maximize the practical utility of BLO-enabled structured pruning to achieve hardware acceleration when compressing deep models at structural units, such as kernels/filters/channels. ", " We sincerely appreciate your careful review and a great summary of our contributions. And thank you very much for the very constructive comments. In what follows, please see our responses.\n\n**Q1: It would be really helpful to add some discussion about the similarities and differences between the BLO and the L0-based pruning.**\n\n**A1:** This is an insightful question. In terms of similarity, both lines of research parameterize the pruning mask so that it is learnable compared to the heuristics-based pruning methods. The difference between BiP and L0-based pruning lies in the following aspects. First, BiP is not a sparse training algorithm since it calls a pre-trained model (see Line 282) as an initialization like IMP or OMP. This differs from L0-based pruning, which can also be modeled as a sparsity-inducing model training from scratch. See [R1] (Sec. 2) for a systematic classification of pruning methods. Second, BiP and L0-based pruning enjoy quite different optimization foundations. Specifically, bi-level optimization (BLO) centered for BiP needs to tackle the challenge of implicit gradient (see Line 217 - 230 in our submission) due to the hierarchical learning structure of BLO. By contrast, the L0-based pruning is rooted in sparsity-inducing optimization [R2], which is typically formulated as a single-level minimization problem. 
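\n\nTo make this contrast concrete, the two problem classes can be sketched side by side. This is only a generic schematic in simplified notation, not the exact equations of our paper or of the L0 work; here $\\ell$ is a generic training loss, $\\mathbf m$ the binary mask, $\\boldsymbol \\theta$ the weights, $\\mathbf z$ stochastic gates with distribution parameters $\\boldsymbol \\pi$, and $k$ the sparsity budget:\n\n$$\\min_{\\mathbf m \\in \\{0,1\\}^n, \\|\\mathbf m\\|_0 \\leq k} \\ell\\big(\\mathbf m \\odot \\boldsymbol \\theta^*(\\mathbf m)\\big) \\quad \\text{s.t. } \\boldsymbol \\theta^*(\\mathbf m) = \\arg\\min_{\\boldsymbol \\theta} \\ell(\\mathbf m \\odot \\boldsymbol \\theta) \\quad \\text{(bi-level, hierarchical)}$$\n\n$$\\min_{\\boldsymbol \\theta, \\boldsymbol \\pi} \\mathbb{E}_{\\mathbf z \\sim q(\\mathbf z \\mid \\boldsymbol \\pi)}\\big[\\ell(\\mathbf z \\odot \\boldsymbol \\theta)\\big] + \\lambda \\mathbb{E}_{\\mathbf z \\sim q(\\mathbf z \\mid \\boldsymbol \\pi)}\\big[\\|\\mathbf z\\|_0\\big] \\quad \\text{(L0-regularized, single-level)}$$\n\nThe gradient of the upper-level objective in the first problem contains the implicit gradient $\\mathrm{d} \\boldsymbol \\theta^*(\\mathbf m) / \\mathrm{d} \\mathbf m$, which has no counterpart in the second, single-level problem. 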
However, we think the hierarchical learning structure is critical for pruning as this is also implied in IMP, the predominant pruning method to find “winning tickets”. Third, BiP solves a constrained optimization problem, e.g., it calls the projected gradient descent for pruning. Yet, the L0-based pruning adopts a regularization scheme to strike a balance between performance and sparsity. \n\n> [R1] Liu, Shiwei, et al. \"Sparse training via boosting pruning plasticity with neuroregeneration.\" Advances in Neural Information Processing Systems 34 (2021): 9908-9922.\n>\n> [R2] Bach, Francis, et al. \"Optimization with sparsity-inducing penalties.\" Foundations and Trends® in Machine Learning 4.1 (2012): 1-106.\n \n**Q2: In Figure 6(b1) the best accuracy is achieved when lower-level steps N <= 3, which would be unexpected and need some explanation.**\n\n**A2:** In BiP, the (lower-level) $\\theta$-step is initialized by a pre-trained model of high quality (test accuracy) already. Adopting the large lower-level step number (N > 3) will incur aggressive weight updating and lead to overfitting the current mask. In such a case, it is more difficult for the BLO solver to find a better mask. Therefore, the best performance is usually achieved at N <= 3.\n\n**Q3: Could you talk about the schedule of BIP to prune with multiple sparsity ratios? When pruning for multiple ratios, the time complexity of IMP could be amortized and I wonder if BIP could be efficient in this setting.**\n\n**A3:** This is a very constructive comment. We agree that if multiple sparsity ratios are considered, then the time complexity of IMP could be amortized. Yet, it is worth noting that IMP typically calls for a strict sparsity schedule [R3] to achieve state-of-the-art performance (e.g., for the target pruning ratio of 51.2%, the schedule is 80% -> 64% -> 51.2% [R3]). Thus, IMP imposes “constraint” on the achieved “multiple sparsity ratios”. By contrast, our proposed BiP algorithm has no such constraint. In the worst case, one can call BiP multiple times to achieve multiple sparsity ratios. However, BiP could also be accelerated in this scenario using the low-sparsity solution as a warm-up to find the next higher-sparsity solution. This is a great direction to investigate: We will do it in the revised version. \n\n> [R3] Frankle, Jonathan, and Michael Carbin. \"The lottery ticket hypothesis: Finding sparse, trainable neural networks.\" arXiv preprint arXiv:1803.03635 (2018).\n\n**Q4: It would be better to move some ablation studies like Figure A4 to the main manuscript.**\n\n**A4:** Thanks for your suggestion. We will move more ablation studies (Figure A4) to the main manuscript in the revised version.", " We sincerely appreciate your careful review and a great summary of our contributions. Thank you very much for the very constructive comments. Please see our response below. \n\n**Q1: Can the authors include an algorithm block for a better understanding of the overall flow?**\n\n**A1:** Yes, we will add an algorithm block in the revision.\n\n**Q2: How are the diverse batches chosen for the training? Does it involve some kind of submodular optimization to get the schedule? What happens if the batch for SGD is qualitatively different from the batch of SPGD?**\n\n**A2:** The diverse batches for SGD and SPGD are realized by calling different rounds of random batch sampling from the data loader. We did not involve submodular optimization to get the schedule, but we think choosing qualitatively different batches is inspiring. 
For example, it might be interesting future work to define a curriculum for the training data used at the upper-level and lower-level optimization, respectively. We will add this direction to the Conclusion. Thanks for the comment.\n\n**Q3: Does the convergence result happen due to the fact that the masks still keep on changing (that is finding the subnetwork) but all of those subnetworks have similar performance when trained (performed SGD)?**\n\n**A3:** The mask also converges at the end of the training. To verify this argument, we show the training trajectory of the mask similarity between two adjacent-epoch models in [Figure](https://ibb.co/hcYQQX3) at different pruning ratios. Here the mask similarity is measured by the intersection over union (IoU) score of the masks found at two adjacent epochs. The IoU score ranges from 0.0 to 1.0, and a higher IoU implies a larger similarity between the two masks. As we can see, the IoU score converges to 1.0 in the end, which indicates that the mask also converges at the end of the training phase. Also, with a smaller pruning ratio, the mask tends to converge more quickly.\n\n**Q4: However, at the very first step of pruning, why would still having just 1 SGD step suffice?**\n\n**A4:** Thank you for raising this inspiring question. Based on the reviewer's comment, we conducted additional experiments to demonstrate the effectiveness of using one-step SGD in BiP. In our new experiments, we consider 1, 3, and 5 SGD steps. We report the training trajectories of BiP in [Figure](https://ibb.co/qYdpJHv). As we can see, the use of multi-step SGD accelerates model pruning convergence in its early phase. Yet, if we run BiP for a sufficient number of epochs (we used 100), the final test accuracy of the different SGD settings shows little difference. Although the use of multiple SGD steps could improve the convergence speed, it introduces extra computation complexity per BLO (bi-level optimization) step. Thus, from the overall computation complexity perspective, using 1 SGD step (even running for more epochs) is advantageous in practice. \n\n**Q5: Can authors add a vertical separator line for the datasets in Table 1?**\n\n**A5:** Yes, we will add a vertical separator line for the datasets in Table 1 in the revision.\n\n**Q6: Additional references that can be discussed**\n\n**A6:** Thank you for pointing out the additional related works. We will discuss the suggested related works in the revised version of our paper.\n1. Soft Threshold Weight Reparameterization for Learnable Sparsity, ICML’20. The referred work developed an optimization-based pruning method that alternately optimizes the parameters and the pruning thresholds. However, STR does not study the relationship between these two terms through the lens of bi-level optimization. \n2. Effective Sparsification of Neural Networks with Global Sparsity Constraint, CVPR’21. This work also provided an optimization-based pruning method that alternately optimizes the model parameters and their corresponding pruning probabilities. Yet, we kindly stress that our proposed BiP algorithm is built on BLO, which, in contrast to ordinary alternating optimization, requires an in-depth analysis of implicit gradients (see Line 217 - 230 in our submission).\n3. Rethinking Bi-Level Optimization in Neural Architecture Search: A Gibbs Sampling Perspective, AAAI’21. This work considered optimizing the model architecture and parameters in a bi-level formulation, similar to DARTS. 
However, our proposed BLO for pruning is quite different from previous DARTS-like methods, since we show that the bi-linear nature of the pruning variables gives a very special class of BLO problems that can be solved as easily as first-order optimization (see Line 252-254).\n\nWe stress that there exist other methods using non-BLO alternating optimization-based pruning schemes. In contrast to our work, these methods neglect the role of the implicit gradient (IG) in Eq. (2) imposed by BLO. In our ablation study (Figure 6(b3)), we have demonstrated the necessity of the IG enhancement to fully exploit the potential of BLO to improve the performance of model pruning.", " This paper aims to solve unstructured pruning as a bi-level optimization problem. To find pruning masks and values of unpruned weights, they define bi-level equations and perform two SGD processes iteratively. Their goal is to propose a new method that can show accuracy comparable to IMP (iterative magnitude-based pruning) with restricted training time. ### Strengths\n- For unstructured pruning of DNNs, this paper defines a bi-level optimization problem and provides theoretical support for it.\n- This paper shows comparable results of this method using various datasets and architectures.\n- The compression (including re-training) time does not increase with varying pruning rates.\n\n### Weaknesses\n- I have a concern about the real effectiveness of unstructured pruning. As many papers have mentioned, there is no realistic acceleration method for unstructured pruning due to the random locations of unpruned weights. There is no note on this widely-known problem. Even though this method is a novel idea for unstructured pruning, we cannot use this method for practical inference speedups. When using CSR formats (e.g., the cuSPARSE library in CUDA), we need much higher pruning rates to gain faster inference speed. I think that problem is why there have been few studies on unstructured pruning these days, unlike structured pruning.\n- In this aspect, this method can achieve outperforming results at lower pruning rates (less than 50%). But there is less noticeable improvement at effective pruning rates. So, in my opinion, the resulting accuracy does not seem to be the main contribution of this paper. \nMoreover, the experimental results seem quite restricted. We are living in 2022, and neural networks have evolved since Song Han's magnitude-based pruning. I don't think results on the CIFAR-10 dataset can prove the novelty of a pruning method; they can only show that a pruning method works well. Most of the results in this paper are limited to the ResNet architecture and the CIFAR-10 dataset. CIFAR-10 consists of just ten classes. I think this paper should extend the experimental results to various architectures (e.g., MobileNet, Transformer, …) or a bigger dataset with a smaller model (e.g., ResNet-18 on ImageNet). \n- Then, there remains the other main contribution: faster pruning-retraining speed regardless of the pruning rate. To strengthen this contribution, I think there should be more ablation studies and experiments. I'm curious why this method works in consistent time regardless of pruning rates. Or, if I put in more time for higher pruning rates, can I obtain higher accuracy by using this method? - How about bringing Appendix B (including Figure A1) into the main manuscript? At first, it was a little hard to understand the overview of this method. \n- I don't understand why LTH is referred to as a pruning method. 
As far as I understand, the LTH paper just used the magnitude-based pruning method for finding winning-tickets. LTH paper is a novel paper to represent why we train the over-parameterized neural networks, not represent a new pruning method. I think IMP with gradual pruning rates was proposed in Zhu’s paper (described below).\nWhy are the below papers not mentioned in this paper? There have been many other results on unstructured pruning methods.\n - Zhu, Michael, and Suyog Gupta. \"To prune, or not to prune: exploring the efficacy of pruning for model compression.\" arXiv preprint arXiv:1710.01878 (2017).\n - Evci, Utku, et al. \"Rigging the lottery: Making all tickets winners.\" International Conference on Machine Learning. PMLR, 2020.\nSidak Pal Singh and Dan Alistarh. Woodfisher: Efficient second-order approximation for neural network compression. Conference on Neural Information Processing Systems (NeurIPS), 33, 2020.\n - Peste, Alexandra, et al. \"Ac/dc: Alternating compressed/decompressed training of deep neural networks.\" Advances in Neural Information Processing Systems 34 (2021): 8557-8570.\n - This paper argues for an unstructured pruning method, but there is a critical issue on the unstructured pruning. I think this paper has no/less consideration on that.\n- The experimental designs are so restricted. It should be extended to other challenging datasets and architectures.\n", " Search for the winning lottery in the Lottery Ticket Hypothesis (LTH) is of great interest in the Machine Learning (ML) community and this paper aims to find such winning tickets (in most cases). This work formulates the model pruning (primarily unstructured but also extends to structured pruning) as a Bilevel Optimization (BLO) where the lower-level optimization finds the best possible set of weights given the sparse neural network (that is weight masks) and the upper-level optimization is optimizing for the boolean mask (done using continuous relaxation followed by threshold-based rounding). Authors first derive the general expression for the gradient with respect to mask using an implicit gradient, which involves second-order derivatives, matrix inverse, and (n, n) matrices where n in #parameters in the network. However, with the hessian free assumption and given the nature of the problem (bilinear in mask “m” and parameters “theta”), the final expression of gradient turns out to have only first-order derivatives. Having defined the expressions, they finally proceed to describe the algorithm which involves lower-level SGD and upper-level SPGD, done until convergence, which does happen, in practice. Then the paper proceeds to the experiments involving multiple architectures, multiple datasets, and different pruning ratios. Performance metrics involve final accuracy, performance relative to the dense pre-trained model, and overall run time. Strong experiments show that BiP achieves superior performance than the original dense model (i.e. finds winning lottery) and has a comparable (or superior) performance to iterative magnitude pruning (IMP) while having much lesser run-time similar to one-shot pruning methods. Strengths:\n\n1. The Paper is concise and well written. \n2. The theory is easy to follow and the experiment section is strong involving ablations across different hyperparameters. \n3. Charts are well organized. \n\nOverall, I enjoyed reading this work. 
I’ve some suggestions which primarily are additional references to be discussed and some additional visualization that could help to make this a strong submission which I describe in the following weakness section. \n\nWeaknesses:\n1. While the diagram is mentioned in the appendix, can the authors include an algorithm block for a better understanding of the overall flow? \n2. How are the diverse batches chosen for the training? Does it involve some kind of submodular optimization to get the schedule? What happens if the batch for SGD is qualitatively different from the batch of SPGD? \n3. The convergence result shown in the appendix involves final accuracy. Does this happen that the masks still keep on changing (that is finding the subnetwork) but all of that subnetworks have similar performance when trained (performed SGD)? \n4. It was mentioned that SGD steps are kept fixed to 1. However at the very first step of pruning, why would still having just 1 SGD step suffice? \n5. Table 1. Can authors add a vertical separator line for the datasets?\n\nAdditional references that can be discussed:\n1. Soft Threshold Weight Reparameterization for Learnable Sparsity. ICML’20\n2. EFFECTIVE TRAINING OF SPARSE NEURAL NETWORKS UNDER GLOBAL SPARSITY CONSTRAINT. CVPR’21\n3. Rethinking Bi-Level Optimization in Neural Architecture Search: A Gibbs Sampling Perspective. AAAI’21\n As mentioned in the main review. As mentioned in the main review.", " This paper provides a novel reformulation of model pruning as a bi-level optimization (BLO) problem, in which the paradigm of pruning-retraining pruning can be viewed as two optimization levels: (1) finding the pruning mask (the upper-level), and (2) masked model retraining (the lower-level).\n\nThe paper further proposed an algorithm, bi-level pruning (BIP), to be a BLO solver that uses only the first-order gradient, which makes it as efficient as one-shot pruning.\n\nThe experiment results show that BIP equips the high efficiency of one-shot pruning and maintains the high accuracy of iterative magnitude pruning (IMP) in both structured and unstructured pruning schemes. Strengths:\n\n- The idea of the BLO reformulation of the model pruning problem is new, which provides a theoretical basis for BLO algorithms to be explored for model pruning. \n- The proposed BIP pruning algorithm is original and practical. \n- The paper is well organized and easy to follow.\n\nWeakness:\n\n- It would be really helpful to add some discussion about the similarities and differences between the BLO and the L0-based pruning (https://arxiv.org/pdf/1712.01312.pdf). L0-based methods view pruning masks as random variables defined by specific parameters, thus making the pruning masks could be learned together with the models. This is a different perspective compared to BLO. And these two views may be combined and unified, which makes it worth having a discussion and comparison. - It would be really helpful to add some discussion about the similarities and differences between the BLO and the L0-based pruning (https://arxiv.org/pdf/1712.01312.pdf). \n- In Figure 6(b1) the best accuracy is achieved when lower-level steps N <= 3, which would be unexpected and need some explanation. As a larger N tends to find more optimal model parameters that are close to $\\theta^*$.\n- Could you talk about the schedule of BIP to prune with multiple sparsity ratios? IMP could initialize from models with low pruning ratios to produce models with high pruning ratios. 
Thus when pruning for multiple ratios, the time complexity of IMP could be amortized. I wonder if BIP could be efficient in this setting. The authors have addressed the limitations adequately. It would be better to move some ablation studies like Figure A4 to the main manuscript, though the pages may be limited.", " The authors (correctly) identify that iterative magnitude pruning (IMP) is inefficient to extract \"winning tickets\" from neural networks. To this end, they investigate a method utilising bi-level optimisation (BLO) to extract winning tickets -- including structured results which can easily yield real-world speedups. In the paper they formulate pruning as a BLO problem, and subsequently evaluate across CIFAR-10/100, Tiny-ImageNet and ImageNet. The results are consistently better than IMP, and gradient-based saliency methods such as Grasp.\n\n==\n\nSee below for response to authors; score has been updated after rebuttal. I am suspicious of the contributions of this paper, since the use of \"winning ticket\" seems to be overloaded in this work. In the abstract this term is defined as: \"i.e., pruned sparse models with better generalization than the original dense models\". But -- this is not the widely agreed upon definition. Lines 35 to 37 give the more widely accepted definition, but line 325 gives a different definition which I don't agree with. It is crucial that the work is self-consistent, and consistent with other literature.\n\nFollowing on from this observation, it seems that the pruning is done throughout training. This cannot reasonably be compared to IMP or Grasp: the pruning mask is set at iteration 0 (or close to 0) and does not change throughout training. It is also worth noting that Grasp is not SOTA anymore: you should compare to ProsPr by Alizadeh et al. (cited by your work already). The only valid comparison I can see would be to the Early-Bird work by You et al. -- but you don't provide any in this direction. In addition, you have provided few comparisons to methods which prune after training, and perhaps do some small amount of fine tuning. These are also reasonable competitors to this work.\n\nFinally, you are not the first to do bi-level optimisation for pruning. \"Differentiable Network Pruning for Microcontrollers\" by Liberis and Lane would be an example prior art (and I'm sure there are others).\n\nHere are some more minor thoughts:\n\n- Code is included which is very nice since the method is fairly complicated.\n- I really commend this work for paying attention to structured pruning -- it's a really hard problem, and far more relevant than most other pruning directions -- but I am not sure you characterise the related work fairly. \"Single Shot Structured Pruning\" by van Amersfoort et al. and ProsPr by Alizadeh et al. both assessed this direction at initialisation.\n- On L229 you say that we can assume $\\nabla^2 l = 0$. I am not convinced this is actually true in practice: after all, this is how many earlier pruning saliency methods worked. However, I am not saying that this makes the work incorrect; approximations pop up everywhere in DL.\n- Is there a good reason to use reference 18 as the benchmark rather than the benchmark provided by Frankle's missing the mark work? It is very extensive, and other works have built upon it in recent years.\n- Figure 6a is not enough to justify the robustness to rewinding that marks a winning ticket. You've provided results on the smallest model with the easiest dataset. 
More difficult examples are needed.\n- The final line of the conclusion is a bit concerning: in practice, structured pruning methods could prune the same number of parameters but yield totally different speedups. It is an important thing to measure. Many of the questions are motivated from above:\n\n1. What is the definition of \"winning tickets\" used?\n2. Is the pruning done continuously through training?\n3. What is the motivation for not using Frankle's benchmark?\n4. How fast are the channel pruned networks? I don't think there is any explicit discussion of limitations in the main paper. The checklist points us to appendix C4 but there's nothing there regarding limitations (and I have discussed some above)." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 8, 2 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "w0uMl2b2CQu", "Q1S0FPLOen_", "w0uMl2b2CQu", "7_xIW1Rvdi1", "7_xIW1Rvdi1", "7_xIW1Rvdi1", "py64YZVA8fG", "kvP_reBAY95", "LgRy-aZggvz", "K-BISb3ND_a", "OX68iAHfmv", "D-YYStKy9FM", "cQiBjp_bI0P", "nips_2022_t6O08FxvtBY", "cQiBjp_bI0P", "05nXc4nigBB", "05nXc4nigBB", "jzC7z6ztw2o", "D-YYStKy9FM", "cQiBjp_bI0P", "cQiBjp_bI0P", "cQiBjp_bI0P", "D-YYStKy9FM", "D-YYStKy9FM", "D-YYStKy9FM", "D-YYStKy9FM", "D-YYStKy9FM", "K-BISb3ND_a", "OX68iAHfmv", "nips_2022_t6O08FxvtBY", "nips_2022_t6O08FxvtBY", "nips_2022_t6O08FxvtBY", "nips_2022_t6O08FxvtBY" ]
nips_2022__w-ivKc1cj
Learn what matters: cross-domain imitation learning with task-relevant embeddings
We study how an autonomous agent learns to perform a task from demonstrations in a different domain, such as a different environment or different agent. Such cross-domain imitation learning is required to, for example, train an artificial agent from demonstrations of a human expert. We propose a scalable framework that enables cross-domain imitation learning without access to additional demonstrations or further domain knowledge. We jointly train the learner agent's policy and learn a mapping between the learner and expert domains with adversarial training. We effect this by using a mutual information criterion to find an embedding of the expert's state space that contains task-relevant information and is invariant to domain specifics. This step significantly simplifies estimating the mapping between the learner and expert domains and hence facilitates end-to-end learning. We demonstrate successful transfer of policies between considerably different domains, without extra supervision such as additional demonstrations, and in situations where other methods fail.
Accept
All three reviewers have elected to accept the paper, with two weak accepts and one accept. The reviews were thorough and demonstrated an understanding of the paper, and the authors have addressed many of the suggested edits. I find figure 2 of the paper (comparison to XIRL on XMagical benchmark) compelling. Recommendation: accept.
val
[ "mXIxZ5suybP", "iyRUVFt7iW", "fpFap6hXOUX", "bA1NzvlyChe", "N0jb191LUvL", "S8sRJsCAxmF", "DFOQYNOULHkn", "SVXX1dLb-8T", "AdbUDNl_GUb", "FTaTgTMQwfi", "Iwa3uyYtQnm", "VX4wzQPy9Sc" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the thorough response and additional evaluations. I will be keeping the original score (7). ", " Thanks, very thorough response. I don't know if you just made all the changes just because I requested them, or if you actually think it makes the paper better, but regardless, personally I do think they all make the paper much better. Upgrading my score as follows\n\n(note for posteriority):\nprevious overall rating: 5\nupdated overall rating: 6\n\nprevious presentation: 3, good\nupdated presentation: 4, excellent", " Dear reviewer 2, thank you for your feedback on our response. We likewise find that the suggested edits made our paper much better.\n\nDear reviewers 1 and 3, as the discussion period will end tomorrow, we just wanted to ask whether there are still any open questions? \n\nBest wishes,\nThe authors.", " We would like to thank all reviewers for their thorough and fair evaluations, and the valuable feedback. Below, please find the in-line answers to each of the reviews. \n\nWe further revised the manuscript accordingly and marked important changes in red (old version) and blue (new proposed version).\n\nThree new experiments/ baselines requested by the reviewers can be found in the revised appendix, with results shown in Figures 8, 9 and 11.\n\nPlease let us know if any aspects remain unclear.\n\nBest wishes,\n\nThe authors", " >Strengths And Weaknesses: \n> Strengths:\n> - Observational cross-domain imitation learning has grown increasingly relevant as a problem setting, for which the authors propose a novel self-supervised method (to the best of my knowledge). While the individual components -- mutual information based representation learning, adversarial imitation learning -- are not novel, the construction of the method and problem domain seems novel. This problem setting and proposed method has high relevance to the research community, as methods developed in this area opens the door for unsupervised learning of behaviours from widely available online demonstrations (eg. Youtube videos).\n> - The method is well explained and straightforward. The authors carry out a range of evaluations and ablations to show the capabilities of their proposed method, comparing against recent work in this domain, and show that their proposed method performs well.\n> - The paper is very well written, structured nicely, and was a pleasure to read.\n> - Being able to adjust how much of the task-relevant information to retain using the embedding size is an interesting outcome of the method.\n\n> Weaknesses: \n\n> - While not explicitly directed at imitation learning across different embodiments, there are some relevant works in unsupervised methods for domain regularization in observational imitation learning, which should be cited in the paper:\n> - Stadie, Bradly C., Pieter Abbeel, and Ilya Sutskever. \"Third-person imitation learning.\" (2017).\n> - Cetin, Edoardo, and Oya Celiktutan. \"Domain-robust visual imitation learning with mutual information constraints.\" (2021).\n\nThank you for this input. We added these works to the related works section.\n\n> - While the performance in Section 5.1 compared to XIRL seems strong, I would like to see the same evaluations carried out on different combinations of agent embodiments rather than just the two that are most different. In the other embodiment cases, does UDIL still outperform XIRL or is the performance more similar as the embodiment gap closes?\n\nThank you for this suggestion. 
We ran these experiments and added a comparison of the performance of UDIL and XIRL also for the remaining two embodiments (mediumstick and shortstick). Please see section 7.3.1 or Figure 8 for the results in the updated paper. We found that UDIL outperforms XIRL consistently over all tested configurations. Interestingly, the performance gap between UDIL and XIRL was even larger for the newly added scenarios. \n\n> - It also seems like the authors do not use the adversarial imitation learning setup in the comparisons against XIRL. It would be interesting to see if the adversarial setup improves or hurts performance in this case.\n\nThank you for this suggestion. We have added this evaluation in section 7.3.3. (”Results for UDIL with adversarial training.”, see Figure 9). We find that the unchanged implementation of UDIL that uses adversarial training outperforms the XIRL baseline in both scenarios. \n\n> Questions:\n\n> - Have the authors considered / experimented with using multiple expert agents providing demonstrations? One of the stated motivations for this work is wanting to avoid reliance on a large number of different demonstrators, but it would be interesting to see if there is any performance improvement in the multi-demonstrator case (i.e. would the quality of the learned embedding space improve, or is one demonstrator sufficient for capturing the task-relevant features?).\n\nWe have not done such an evaluation yet, but do agree that it would be highly interesting and we share the intuition that demonstrations from distinct demonstrators might improve the task-relevant embedding. In this sense, we see this as interesting direction for future work. \n\n> - In the ablations plot (Figure 5) with Hopper from HalfCheetah, what is the intuition behind the noisy reward curve for UDIL? As adversarial training objectives can be unstable to train, I am curious if the authors have seen any other similar instabilities / difficulties with training.\n\nWe agree with the intuition that the noisy reward curve should be an artefact of the adversarial training objective. A better choice of hyper-parameters for the adversarial imitation learning algorithm might improve this aspect, but we instead used the hyper-parameter settings of the original authors to ensure comparable results.\n\n> - Do the authors have any hypotheses for why the performance increases then decreases for the larger embedding case for Hopper from HalfCheetah in Figure 5?\n\nWe hypothesise that this is due to the different locomotion modes that the Hopper can adopt, i.e. we hypothesise that the hopper changes its locomotion mode to one that is more similar to the expert embedding, which however yields lower returns (travels less far).\n\n\n\n\n", " > 9. Note that the comparison between the present work and Kim et al [17] could be improved. Basically the present work discards [17] because it needs some proxy tasks. But could the authors describe why specifically they need proxy tasks? This is an especially important comparison because Kim et al generally falls in the category of doing cross-embodiment generative advsersarial imitation. So the details really matter here with the comparison, I think.\n\nThe work of Kim et al. uses the proxy task demonstrations to first learn a mapping between the two domains, which is then used to find the policy in the subsequent step. Our approach combines these two steps, thereby omitting the need for demonstrations of proxy tasks in both domains.\n\n> Minor \n\n> 4. 
Lines 104-106 -- they make it seem like an extra-special thing to not know the actions of the expert, but in the cross-embodiment case, the learner has different actions anyway, so it's kind of already implied that the actions won't be that useuf.\n\nThank you for this remark. We pointed this out as related work (e.g. GWIL [9]) also uses the actions of the expert agent, which is different from our approach.\n\n> 7. May consider just using Z for notation, rather than ZE, since the embedding space is used by both learner and expert, rather than just expert.\n\nThank you for this suggestion, we updated our notation accordingly.\n\n> 8. Line 230-233 is confusing -- who is \"we\" here? Are you describing how you modify [40], or what the proposed algorithm uses?\n\nWe follow XIRL here, thank you for pointing this out, we made it more clear now.\n\n> Limitations\n\n> The authors mention only that \"the risk of learning incorrect policies remains\". I encourage the authors to consider and discuss other limitations that I raised in the Weaknesses discussion.\n\nWe now revised our conclusion accordingly.\n\n\n", " > 4. Probably should take one of the contributions off the list? Specifically the first stated contribution, \"We devise a framework to learn the mapping between the learner and expert domains in an unsupervised fashion, i.e., without additional proxy task demonstrations.\" also applies to for example [40] and [9], so I don't think it can be claimed as a contribution of the present work. However, I think the other 2 contributions in the list are strong and sufficient. \n\nThank your for this valid critique. We have rephrased the contributions accordingly.\n\n> 5. Some cases where the proposed algorithm definitely fails. For example consider a case where of simple environment where an agent visits every state, in a particular order, exciting every possible state-next-state (s,s′) transition an equal number of times. In this case, the task-relevant embedding would fail to do anything useful, since although the long-horizon order may be considered what matters, instead the algorithm only uses state-next-state pairs to find the task-relevant embedding. Although this is a simple case, it may in general point to problems for using the proposed algorithm on long-horizon, multi-step tasks.\n\nThank you for pointing this out. We agree that no meaningful embedding can be found if the expert’s policy induces a uniform state-next-state distribution. Even though this specific scenario seems rather unlikely in practice, we included it in our discussion of limitations in the conclusion, as it has theoretical relevance. We also acknowledge that the mutual information objective might yield degenerate solutions in the case of long-horizon tasks, especially if the the environment state is only partially observed or the observations are noisy, and address this in the conclusion. \n\n> 6. Some cases where the proposed algorithm I think would fail. On this point, I may be wrong, but I just don't see how specifically the learning of g(the mapping from learner state to shared embedding space) is required to learn anything useful. I just don't see what's stopping g from deciding that it wants to completely scramble the ordering of all states visited. While Section 4.4 discusses avoiding a degenerate mapping for f, it doesn't discuss it for g. \n\nWe agree that it was unclear why no time-invariance constraint was used to learn the expert encoder f. 
In fact, our problem formulation in section 4.3 results in a time-invariant encoder f, which we now state explicitly.\n\n> 7. Some missing key related work. Another very related work is \"Reinforcement Learning with Videos\" (https://arxiv.org/pdf/2011.06507.pdf). Although I think it still significantly different from the proposed work, for example not adopting the GAIL style formulation, it does also address cross-embodiment imitation and also learns a shared embedding space. Note too that the v2 November 2021 version actually corrected a prior mistake which was that, contrary to the published conference version, the method does not actually need paired data. Accordingly it has, like the other works mentioned too, basically the same data requirements as the presented work. \n\nThank you for this suggestion, we now added this reference to our literature overview. We initially omitted this work in the related work section because it assumes that both the correct reward signal and demonstrations of the task are given (effectively combining reinforcement learning and imitation learning). This setting is different to ours, as we do not assume access to the reward signal. This is also highlighted by the authors of the given work, which state that the demonstrations primarily act to speed up exploration, which is a problem formulation different from ours. However, we have now cited it. \n\n> 8. Issue with time-invariant mapping not discussed. This related to my point #6. If f is time-invariant, it can't know about histories of states mattering in any particular way, which makes long-horizon demonstrations with potentially repetitive tasks not addressable. Also, they don't discuss how the actually make it time invariant. It already looks time invariant in Section 4.3 -- seems like they don't need to do anything else to make it so. \n\nPlease see the answer to question 5. There was also an error in our notation in section 4.3, which we now corrected.\n\n \n\n", " >1. Discussion of relation to existing works\n> - Issue: The discussion of prior work is kind of annoyingly pedantic, and doesn't have to be, in attempting to carve out a novelty statement about what other methods can or can't do. In general, paragraph 4 of the intro (continuing onto page 2), and the \"cross-domain imitation learning\" discussions in the Related Work, and the brief mention of related work in the Abstract, are all annoying. For example, they say that [40] needs multiple demonstrator agents, but as they show themselves, it is pretty trivial to just use [40] but with a single demonstrator agent. Also, [9] is another prior work that effectively addresses the same problem statement. Also, some of the limitations of it, such as the footnote on page 2, also apply to the presented work. To be honest, I'm not sure the authors are able to say any type of clean statement about a type of scenario that their algorithm can address that others can't, and I think they know that. But that's okay! That doesn't mean their work isn't interesting or isn't useful.\n> - Suggested solution: I think the authors would do well to focus on: rather than trying to come up with statement of what other methods can't address (which has some issues as noted above), they might do well to just instead focus on how their proposed formulation is *different*, how it is interesting, and how at least in the shown experiments it may also work better. 
Additionally, they could be more positive towards, and appreciative of, the prior works in this challenging subfield, which have helped carve a path that they can follow and attempt to improve upon. Rather than their current final sentence on lines 81-84, they could basically just say \"We propose a different formulation using X, Y, and Z, and further show that compared to [40] and [9], empirical results suggest our method is more capable than these prior works on the tested settings.\" Or something like that. \n\nThank you for this feedback. It was by no means our intention to understate the importance of prior work in the field. We solely intended to point out the differences between ours and previous work with as much detail as possible, while highlighting potential advantages of our approach. We implemented the given suggestions, trying to balance the required detail needed for readers not familiar with the field.\nMore specifically, we stated more clearly that the problem setting considered by GWIL ([9]) is equivalent to ours and that the approach of XIRL ([40]) can be directly adapted to be applicable to our problem setting.\n\n> 2. Is \"cross-domain\" better terminology to be using here than \"cross-embodiment\"? All of the experiments are focused on cross-embodiment imitation learning, rather than any other notion of difference between expert and learner. I don't think it matters too much either way, but as the authors themselves say, the cross-embodiment case is probably in general the most challenging, and hence why they focus on it. \n\nWe agree that our experiments focus on cross-embodiment imitation, which we now stated more clearly in the introduction. As the approach is not limited to cross-embodiment applications but is stated generally enough to address other domain mismatches, we chose the term cross-domain instead of cross-embodiment (also following the nomenclature in recent literature).\n\n> 3. Probably a different title should be necessary? Per my discussion of point 1 above, there are actually prior works which address the cross-embodiment imitation learning setting, and similarly do so with the same amount of supervision as the present work (only expert demonstrations needed from a different embodiment). Accordingly, I don't think the title is very truthful or accurate. It seems to suggest that this paper introduces the idea of doing cross-domain imitation learning, which as discussed in point 1, isn't the case. Further, it's not really unsupervised... there isn't really such a thing as unsupervised imitation learning. Specifically, the expert is providing supervision. A more accurate and useful title for the community might be something like \"Generative Adversarial Cross-Embodiment Imitation with Task-Relevant Embeddings\" or something like that. That title actually calls out how the paper addresses a known problem, but with a different method.\n\nThank you for this remark, we agree that the term “unsupervised” can be overloaded. We use it in the strictest possible sense afforded by imitation learning: even though in imitation learning demonstrations of expert behaviour are always needed, our approach does not require additional demonstrations from proxy tasks, or a reward signal. We now stated this more clearly in section 4. We hence agree that the previous title could cause confusion and would change it to “Learning what matters: cross-domain imitation learning with task-relevant embeddings”. 
\n\n\n\n", " >Strengths And Weaknesses: \n\n> Strengths \n \n> The paper is well-written and well-motivated. The method and design decisions are explained clearly.\nThe analysis of the size of the task-relevant embedding (dimension d) is interesting and original.\n\n>Weaknesses\n\n> The experiments are limited to environments with small states, where there is a clear distinction between task-relevant dimensions and dimensions which can be discarded. The claims would be strengthened if experiments were extended to environments with visual observations (e.g. atari flavors) or at least more nuanced ones (perhaps MiniGrid). \n\nWe agree that the application of our method to higher-dimensional observation spaces would be the logical next step to broaden its applicability. We believe that if the high-dimensional unstructured observation space is parsed into a meaningful abstract embedding space, our method should work well. At this point, we have run initial experiments with larger observations, which yield promising results but are not yet applicable at larger scale. More specifically, we utilize a pre-trained vision transformer and fine-tune its final attention layer with a loss function based on the mutual information objective in equation 7. In general, we see this as both a limitation and as an interesting direction for future work, and addressed this more clearly in the revised conclusion.\n\n> Questions:\n\n> I wonder if there are additional baselines that could be used. For example, what happens if vanilla imitation learning is done on the task-relevant embedding? I expect this would work poorly since the policy does need some environment-specific information. \n\nThank you for this suggestion. Unfortunately this baseline cannot be implemented in a straightforward way, as the learner’s state is generally not of the same dimension as the task-relevant embedding. We instead added a baseline that is of similar fashion (please see next answer).\n\n> Another baseline could be some form of oracle in which the task-relevant dimensions are hand-picked or make use of the reward in some way. \n\nWe added an additional oracle baseline in appendix 7.3.3. This baseline assumes that an oracle is used to find the learner’s state dimensions that match the task-relevant embedding of the expert state, while the exact order of the state dimensions is unknown. We then run imitation learning directly on the task-relevant embedding, i.e. omitting the learner encoder g. We find that, as hypothesised by the reviewer, this baseline performs poorly. \n\n> Limitations: \n\n> I'd like to see a discussion of limitations, perhaps addressing how the method might extend to more complex observations/environments.\n\nThank you for this remark. We now addressed this more clearly in the conclusion.", " The paper presents UDIL, unsupervised cross-domain imitation learning, a method for learning a policy in one environment using expert demonstrations from another environment. The focus is on learning a task-relevant embedding, which identifies which parts of the state should be used for mapping between learner and expert data. Experiments are presented in two domains: the XMagical benchmark (in which the learner and expert are circular and a long stick or vice versa) and MuJoCo (in which the agents are hopper, halfcheetah, or walker). UDIL outperforms XIRL in the XMagical domain and outperforms GWIL in the MuJoCo domain. Strengths\n* The paper is well-written and well-motivated. 
The method and design decisions are explained clearly.\n* The analysis of the size of the task-relevant embedding (dimension d) is interesting and original.\n\nWeaknesses\n* The experiments are limited to environments with small states, where there is a clear distinction between task-relevant dimensions and dimensions which can be discarded. The claims would be strengthened if experiments were extended to environments with visual observations (e.g. atari flavors) or at least more nuanced ones (perhaps MiniGrid). I wonder if there are additional baselines that could be used. For example, what happens if vanilla imitation learning is done on the task-relevant embedding? I expect this would work poorly since the policy does need some environment-specific information. Another baseline could be some form of oracle in which the task-relevant dimensions are hand-picked or make use of the reward in some way.\n\nMinor (no need to respond):\n* Typo on line 293 (missing period after domain).\n* Typo on line 318 (3 p's in appendix). I'd like to see a discussion of limitations, perhaps addressing how the method might extend to more complex observations/environments.", " The authors propose to address cross-embodiment imitation learning by using GAILfO but with a learned cross-embodiment embedding. They learn the mappings from (i) expert state space to the embedding space, and (ii) learner state space to the embedding space by respectively (i) using a mutual-information objective with state transition pairs and psuedorandom state transition pairs, and (ii ## Strengths\n\n1. The formulation is pretty clean, satisfying, and nice to follow.\n\n2. The results show that the authors are able to get the method to work pretty well on the challenging problem of cross-embodiment imitation.\n\n## Weaknesses\n\n1. Discussion of relation to existing works\n - Issue: The discussion of prior work is kind of annoyingly pedantic, and doesn't have to be, in attempting to carve out a novelty statement about what other methods can or can't do. In general, paragraph 4 of the intro (continuing onto page 2), and the \"cross-domain imitation learning\" discussions in the Related Work, and the brief mention of related work in the Abstract, are all annoying. For example, they say that [40] needs multiple demonstrator agents, but as they show themselves, it is pretty trivial to just use [40] but with a single demonstrator agent. Also, [9] is another prior work that effectively addresses the same problem statement. Also, some of the limitations of it, such as the footnote on page 2, also apply to the presented work. To be honest, I'm not sure the authors are able to say any type of clean statement about a type of scenario that their algorithm can address that others can't, and I think they know that. But that's okay! That doesn't mean their work isn't interesting or isn't useful.\n - Suggested solution: I think the authors would do well to focus on: rather than trying to come up with statement of what other methods can't address (which has some issues as noted above), they might do well to just instead focus on how their proposed formulation is *different*, how it is interesting, and how at least in the shown experiments it may also work better. Additionally, they could be more positive towards, and appreciative of, the prior works in this challenging subfield, which have helped carve a path that they can follow and attempt to improve upon. 
Rather than their current final sentence on lines 81-84, they could basically just say \"We propose a different formulation using X, Y, and Z, and further show that compared to [40] and [9], empirical results suggest our method is more capable than these prior works on the tested settings.\" Or something like that.\n\n2. Is \"cross-domain\" better terminology to be using here than \"cross-embodiment\"? All of the experiments are focused on cross-embodiment imitation learning, rather than any other notion of difference between expert and learner. I don't think it matters too much either way, but as the authors themselves say, the cross-embodiment case is probably in general the most challenging, and hence why they focus on it.\n\n3. Probably a different title should be necessary? Per my discussion of point 1 above, there are actually prior works which address the cross-embodiment imitation learning setting, and similarly do so with the same amount of supervision as the present work (only expert demonstrations needed from a different embodiment). Accordingly, I don't think the title is very truthful or accurate. It seems to suggest that this paper introduces the idea of doing cross-domain imitation learning, which as discussed in point 1, isn't the case. Further, it's not really unsupervised... there isn't really such a thing as unsupervised imitation learning. Specifically, the expert is providing supervision. A more accurate and useful title for the community might be something like \"Generative Adversarial Cross-Embodiment Imitation with Task-Relevant Embeddings\" or something like that. That title actually calls out how the paper addresses a known problem, but with a different method.\n\n4. Probably should take one of the contributions off the list? Specifically the first stated contribution, \"We devise a framework to learn the mapping between the learner and expert domains in an unsupervised fashion, i.e., without additional proxy task demonstrations.\" also applies to for example [40] and [9], so I don't think it can be claimed as a contribution of the present work. However, I think the other 2 contributions in the list are strong and sufficient.\n\n5. Some cases where the proposed algorithm definitely fails. For example consider a case where of simple environment where an agent visits every state, in a particular order, exciting every possible state-next-state $(s, s')$ transition an equal number of times. In this case, the task-relevant embedding would fail to do anything useful, since although the long-horizon order may be considered what matters, instead the algorithm only uses state-next-state pairs to find the task-relevant embedding. Although this is a simple case, it may in general point to problems for using the proposed algorithm on long-horizon, multi-step tasks.\n\n6. Some cases where the proposed algorithm I think would fail. On this point, I may be wrong, but I just don't see how specifically the learning of $g$ (the mapping from learner state to shared embedding space) is required to learn anything useful. I just don't see what's stopping $g$ from deciding that it wants to completely scramble the ordering of all states visited. While Section 4.4 discusses avoiding a degenerate mapping for $f$, it doesn't discuss it for $g$.\n\n7. Some missing key related work. Another very related work is \"Reinforcement Learning with Videos\" (https://arxiv.org/pdf/2011.06507.pdf). 
Although I think it still significantly different from the proposed work, for example not adopting the GAIL style formulation, it does also address cross-embodiment imitation and also learns a shared embedding space. Note too that the v2 November 2021 version actually corrected a prior mistake which was that, contrary to the published conference version, the method does not actually need paired data. Accordingly it has, like the other works mentioned too, basically the same data requirements as the presented work.\n\n8. Issue with time-invariant mapping not discussed. This related to my point #6. If f is time-invariant, it can't know about histories of states mattering in any particular way, which makes long-horizon demonstrations with potentially repetitive tasks not addressable. Also, they don't discuss how the actually make it time invariant. It already looks time invariant in Section 4.3 -- seems like they don't need to do anything else to make it so.\n\n9. Note that the comparison between the present work and Kim et al [17] could be improved. Basically the present work discards [17] because it needs some proxy tasks. But could the authors describe why specifically they need proxy tasks? This is an especially important comparison because Kim et al generally falls in the category of doing cross-embodiment generative advsersarial imitation. So the details really matter here with the comparison, I think.\n\n\n## Minor\n\n1. Line 23 says that \"classic\" imitation learning algorithms do state occupancy matching between learner and expert... this though is the formulation of GAIL? I'm not sure I would call GAIL a \"classic\" IL method yet?\n\n2. Eq. 5 would probably be even more clear if rather than $z_E$ it was called out that this is $f(s_E)$. Then it's clear that both expert and learner state spaces are being mapped to some shared embedding space.\n\n3. Error: line 186, first symbol, should be $g$.\n\n4. Lines 104-106 -- they make it seem like an extra-special thing to not know the actions of the expert, but in the cross-embodiment case, the learner has different actions anyway, so it's kind of already implied that the actions won't be that useuf.\n\n5. Line 109 -- Torabi et al, GAILfO, is a good paper, but as those authors would probably admit themselves, as is highlighted in the name of their algorithm, it is a pretty minor modification to GAIL. Accordingly, it seems justified to give a citation to GAIL as well as GAILfO on line 109.\n\n6. Error: line 154 should say $g: S_L \\rightarrow Z_E$.\n\n7. May consider just using $Z$ for notation, rather than $Z_E$, since the embedding space is used by both learner and expert, rather than just expert.\n\n8. Line 230-233 is confusing -- who is \"we\" here? Are you describing how you modify [40], or what the proposed algorithm uses?\n\n9. Line 241 -- I'm not sure anybody would call hopper, walker, and half-cheetah \"high-dimensional\"? I think walker has a 3D action space, and the others are 6D? 6D isn't small... it's moderately hard... but I don't think \"high-dimensional\". Please see my discussion of Weaknesses for questions.\n The authors mention only that \"the risk of learning incorrect policies remains\". I encourage the authors to consider and discuss other limitations that I raised in the Weaknesses discussion.", " The problem setting under consideration is cross-domain imitation learning, where the goal is to enable an imitation learning agent to learn from expert demonstrations from a different environment or agent embodiment. 
The authors propose a method for cross-domain imitation learning that requires less supervision than prior methods (i.e. without additional proxy tasks or multiple demonstrators/domains), by primarily leveraging a mutual information-based objective to encourage learning a more task-relevant representation of the expert state space. This embedding is then used by the learner for an imitation learning objective. The method further imposes a time-invariance constraint to prevent learning a degenerate embedding space, and the overall method uses an adversarial imitation learning setup to learn solely from observations. The experiments compare against other recent methods in cross-domain imitation learning, across a range of task settings. Strengths:\n- Observational cross-domain imitation learning has grown increasingly relevant as a problem setting, for which the authors propose a novel self-supervised method (to the best of my knowledge). While the individual components -- mutual information based representation learning, adversarial imitation learning -- are not novel, the construction of the method and problem domain seems novel. This problem setting and proposed method has high relevance to the research community, as methods developed in this area opens the door for unsupervised learning of behaviours from widely available online demonstrations (eg. Youtube videos).\n- The method is well explained and straightforward. The authors carry out a range of evaluations and ablations to show the capabilities of their proposed method, comparing against recent work in this domain, and show that their proposed method performs well. \n- The paper is very well written, structured nicely, and was a pleasure to read. \n- Being able to adjust how much of the task-relevant information to retain using the embedding size is an interesting outcome of the method. \n\nWeaknesses:\n- While not explicitly directed at imitation learning across different embodiments, there are some relevant works in unsupervised methods for domain regularization in observational imitation learning, which should be cited in the paper: \n - Stadie, Bradly C., Pieter Abbeel, and Ilya Sutskever. \"Third-person imitation learning.\" (2017).\n - Cetin, Edoardo, and Oya Celiktutan. \"Domain-robust visual imitation learning with mutual information constraints.\" (2021).\n- While the performance in Section 5.1 compared to XIRL seems strong, I would like to see the same evaluations carried out on different combinations of agent embodiments rather than just the two that are most different. In the other embodiment cases, does UDIL still outperform XIRL or is the performance more similar as the embodiment gap closes? \n- It also seems like the authors do not use the adversarial imitation learning setup in the comparisons against XIRL. It would be interesting to see if the adversarial setup improves or hurts performance in this case. - Have the authors considered / experimented with using multiple expert agents providing demonstrations? One of the stated motivations for this work is wanting to avoid reliance on a large number of different demonstrators, but it would be interesting to see if there is any performance improvement in the multi-demonstrator case (i.e. would the quality of the learned embedding space improve, or is one demonstrator sufficient for capturing the task-relevant features?). \n- In the ablations plot (Figure 5) with Hopper from HalfCheetah, what is the intuition behind the noisy reward curve for UDIL? 
As adversarial training objectives can be unstable to train, I am curious if the authors have seen any other similar instabilities / difficulties with training.\n- Do the authors have any hypotheses for why the performance increases then decreases for the larger embedding case for Hopper from HalfCheetah in Figure 5? The paper would be improved with a section more clearly discussing the limitations of the proposed approach -- e.g. if there are difficulties with adversarial training, whether there are cross-domain environments or tasks where the proposed mutual information objective would fail, or how much additional overhead is required to search for the best embedding dimension size for the task you care about." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "N0jb191LUvL", "S8sRJsCAxmF", "nips_2022__w-ivKc1cj", "nips_2022__w-ivKc1cj", "VX4wzQPy9Sc", "Iwa3uyYtQnm", "Iwa3uyYtQnm", "Iwa3uyYtQnm", "FTaTgTMQwfi", "nips_2022__w-ivKc1cj", "nips_2022__w-ivKc1cj", "nips_2022__w-ivKc1cj" ]
nips_2022__VF5QKgXoqt
HumanLiker: A Human-like Object Detector to Model the Manual Labeling Process
Popular object detection models generate bounding boxes in a different way than we humans do. For example, modern detectors yield an object box either by regressing its center and width/height (center-guided detectors) or by grouping paired estimated corners (corner-guided detectors). However, that is not how we manually label an object, due to the high degrees of freedom in searching for centers and the low efficiency of grouping corners. Empirically, humans take two steps to locate an object bounding box manually: 1) click the mouse at the top-left corner of the object and then drag the mouse to the bottom-right corner; 2) refine the corner positions to make the bounding box more precise, if necessary. Inspired by this manual labeling process, we propose a novel human-like detector, termed HumanLiker, which is devised as a two-stage end-to-end detector to simulate the two aforementioned steps. Like humans in manual labeling, HumanLiker can effectively avert both the thorny center searching and the heuristic corner grouping. Different from the mainstream detector branches, i.e., the center/corner-guided methods, HumanLiker provides a new paradigm that integrates the advantages of both branches to balance detection efficiency and bounding box quality. On the MS-COCO test-dev set, HumanLiker achieves 50.2%/51.6% and 53.8%/55.6% AP with ResNeXt-101 and SwinTransformer backbones under single/multi-scale testing, outperforming current popular center/corner-guided baselines (e.g., DETR/CornerNet) by a large margin with far fewer training epochs and higher inference FPS. Code will be available soon.
Accept
This paper received borderline reviews, with one review leaning negative. However, the reviewer acknowledged that their concerns have been addressed but did not update the rating. The paper provides an interesting new take on object detection with strong empirical results. The concerns raised by reviewers were mainly about more experimental results and clarifications, which the authors have adequately addressed in their rebuttal. For the camera ready version, the authors must change the title of the paper to something more informative. "A human-like object detector" is too vague and non-specific, and misleading given the paper's actual contribution. "human-like" can mean many things (e.g. learning from very few examples), but the paper is only "human like" in one aspect. In addition, "humanliker" should be replaced by something else ("humanliker" can mean "something that likes human").
train
[ "A34q6VAIT3", "s-iJaWAViw5", "1x60o2EMVmZ", "1KTFiYY9uU", "HjQJvOG6Tq", "cQe9Q40DrUH", "8iINixQygGc" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nThank you for the detailed responses which address my concerns.\n\nBest", " Thank you very much for your careful review and valuable comments. We address your concerns as follows:\n\n1. This question is interesting. However, we believe the hypothesis that humans may deduce the top-left and bottom-right corners indirectly through the center of the object is not tenable. This is because pinpointing a center requires all four boundaries as reference, whereas a corner only requires two boundaries. It is strong prior knowledge that locating a corner of an irregular object is easier than locating its center. Besides, whether the ordering is top-left => bottom-right or bottom-right => top-left does not affect our ‘human-like’ claim, because what we describe is that manual labeling avoids center searching and corner grouping, which suffer from high degrees of freedom in searching centers and low efficiency in grouping corners, respectively. The top-left => bottom-right ordering in our HumanLiker simply reflects the habit of most people.\n\n\n2. The newest corner-based detector, CPN, also has a second stage, but its accuracy, inference speed, and convergence speed are lower than those of our HumanLiker. For CornerNet and its variants (CenterNet and CentripetalNet), the Hourglass-104 backbone is very friendly for corner estimation; replacing Hourglass-104 with ResNeXt-101+DCN causes them a 2~3% AP drop. Besides, HumanLiker is grouping-free. To further control variables, we tried to adapt the corner estimation/grouping methods of CPN (a binary sub-network) and CornerNet (associative embeddings) to our structure (FPN / category-agnostic heatmap / two-stage refinement / classification). Because they need corner grouping, for each FPN layer we optimized the number of corners to be extracted (64, 32, 16, 8, 4) through ablation experiments. Experiments show that CPN and CornerNet drop about 4% and 7% AP (from 47.1% to 43.2% and 40.4%) compared with our HumanLiker on the Swin-T backbone under the 1x schedule, proving that the proposed HumanLiker is better than other corner-based detectors.\n\n3. We are sorry for the misleading use of words. By 'bionic' we mean human-like. By 'neat', 'intelligible', and 'friendly' we mean that the design is accessible to object detection beginners: a beginner can easily understand at a macro level what each module does and how it corresponds to the manual labeling process. Of course, we will carefully reconsider and are willing to replace these inappropriate words in the next version of the manuscript.\n\n4. Traditional corner detection models need to predict corners by outputting heatmaps. They always use large (x4 output stride) multi-channel heatmaps (80 channels for 80 classes) to predict corners. Adapting FPN thus encounters several problems: (a) more heatmap channels make corner extraction less efficient; (b) more predicted corners make corner grouping more difficult. Our HumanLiker only needs one heatmap channel and runs in a grouping-free way, so it is suitable for FPN. Accordingly, we believe the success of using FPN for a corner-based detector is important and valuable.\n\n5. We think the words 'neat' and 'intelligible' are aimed at helping object detection beginners understand the method at a macro level: a beginner can compare the design of each module of HumanLiker with his or her own labeling process. We really appreciate your guidance in writing, and we will use more appropriate words and more detailed descriptions in the next manuscript.\n\n6. 
We varied the backbone (ResNet-50, ResNet-101, DLA-34, Hourglass-52) and plotted the training losses of center and top-left corner estimation; all the trends are similar to Figure 4: the top-left corner loss is lower than the center loss. Besides, we sample the AP at the 0.5x, 1x, 1.5x, and 2x epochs. HumanLiker achieves 40.1%, 41.8%, 43.0%, and 43.9% AP, and the new center-guided model (replacing the top-left corner with the center) yields APs of 39.5%, 41.3%, 42.7%, and 43.7% on the ResNet-50 backbone, which further supports our claim that a corner is easier to predict than a center, especially at the beginning of training. This is also easy to understand, because a center needs all four boundaries to be located, yet a corner only needs two.\n\n7. Typos and equations: Thanks for these useful comments! We will carefully revise our manuscript.\n", " Thank you very much for your valuable and professional comments! We are sorry that we did not report more experimental results and details in the previous version of the manuscript. This is due to the page limit, and we are very happy to add a page to supplement these contents if this paper is fortunately accepted. We address your concerns below.\n\n1. Comparison with CPN and CornerNet: CPN and CornerNet are two grouping-based models, yet HumanLiker is grouping-free. To group corner pairs more effectively, CPN and CornerNet need to assign corners of different object categories to different heatmap channels (80 for top-left and 80 for bottom-right), which makes the corner extraction process inefficient. HumanLiker, with its grouping-free manner, only needs one category-agnostic heatmap channel for top-left corners. The FPS is 21.7/9.9/5.8 for HumanLiker/CPN/CornerNet under 43.9%/43.8%/41.0% AP on a V100 GPU. What's more, the number of training epochs of HumanLiker is 5x to 10x smaller than that of CPN and CornerNet, which shows that HumanLiker converges more easily. More importantly, the grouping-free design enjoys better accuracy: we tried to adapt the corner estimation/grouping methods of CPN (a binary sub-network) and CornerNet (associative embeddings) to our structure (FPN / category-agnostic heatmap / two-stage refinement / classification). Because they need corner grouping, for each FPN layer we optimized the number of corners to be extracted (64, 32, 16, 8, 4) through ablation experiments. Experiments show that CPN and CornerNet drop about 4% and 7% AP (from 47.1% to 43.2% and 40.4%) compared with our HumanLiker on the Swin-T backbone under the 1× schedule, firmly proving that HumanLiker is stronger than other corner-based detectors.\n\n2. Center vs. Corner: Thanks for your professional comment. We use the comparison of training losses to show that the strong prior knowledge from manual labeling, i.e., that pinpointing a corner is easier than pinpointing a center, also holds for a model. Following your suggestion, we replaced our design in stage one of HumanLiker with the center and width/height (x_c, y_c, w, h) to verify our claim. We sample the AP at the 0.5x, 1x, 1.5x, and 2x epochs. HumanLiker achieves 40.1%, 41.8%, 43.0%, and 43.9% AP, and the new center-guided model yields APs of 39.5%, 41.3%, 42.7%, and 43.7% on the ResNet-50 backbone. The experimental results also support our claim that a corner is easier to predict than a center, especially at the beginning of training. This is also easy to understand, because a center needs all four boundaries to be located, yet a corner only needs two.\n\n3. 
The refinement in stage two: We use a corner decoupling strategy (refining each corner separately) to refine the corners of a proposal box, whereas traditional models use a corner coupling manner (the center is calculated from the two corners). Suppose one boundary of an object is difficult to predict; then the center, which needs all four boundaries as reference, is also hard to refine. However, corner decoupling (refining each corner separately) ensures that a corner is not affected by this boundary and yields higher-quality results (e.g., better AP_80 to AP_90). The AP_70, AP_80, and AP_90 of HumanLiker on COCO val are 55.9%, 45.5%, and 26.1%. In contrast, when we use the traditional center refinement on HumanLiker, the AP_70, AP_80, and AP_90 are 55.8%, 45.1%, and 25.6%. The results show that our human-like refinement strategy more easily provides high-quality detection boxes. \n\n\nFinally, we respectfully emphasize that the main motivation of HumanLiker is to provide a new detector design idea for the detection community. As we know, center-guided models are mature and popular, but they also suffer from a bottleneck. HumanLiker is devised as a new baseline with promising room for improvement, to inspire researchers with the idea that using a corner as the positive sample, as humans do, can also work well. \n\n\n", " Thank you very much for your valuable and professional comments.\nAs you said, for a fairer comparison, we did not report the AP under a large and fixed input size in the previous version of the manuscript. We conducted experiments on Res2Net-101 with fixed sizes (896x896 and 1024x1024) for 4x training epochs. The APs (on COCO val2017) are 51.4% and 52.6%. The experimental results show that a fixed and large input size does improve the accuracy of HumanLiker.\nBesides, we are more than happy to add the fixed input size and YOLO-based SOTAs for comparison in the next version of the manuscript.\n\n\n", " This paper proposes a new object detector based on CornerNet. The proposed detector adopts a two-stage design. 
In the first stage, instead of detecting both top-left and bottom-right corners, and grouping them by embeddings, the authors propose to detect only the top-left corners and predict the x and y distances between the top-left corners and bottom-right corners to generate an initial set of regions. In the second stage, instead of refining the width and height of the bounding boxes, they propose to refine the corner locations and the distances. The authors argue the this is closer to how human would draw a bounding box. Experiments on COCO show that the proposed detector achieves state-of-the-art results with same backbone. Strengths:\n\nThe proposed approach is interesting and novel, and demonstrates good results on the challenging COCO benchmark. I like the idea of refining corner locations instead of the whole box in the second stage.\n\nWeaknesses:\n\nThe main weakness of this paper is that the current experiments do not show sufficiently show that the proposed approach is better than the conventional approach.\n\nThe network generates the proposals by predicting the top-left corners, and x and y distances between top-left corners and bottom-right corners. This is different from the corner proposal network [8] or the associative embeddings used in CornerNet. But there is no ablation study that compares these three approaches.\n\nThe authors claim that it is easier to learn to detect corners than the centers by showing that the top-left corner model achieve a lower training loss than the center-based model. There are two problems with this claim. First only comparing the difficulties of detecting corners and centers is not sufficient because there are other predictions that are used to generate the final bounding boxes, which may be difficult to predict. Second, training loss is not a good indicator of the final performance so it would be better to compare the models by their validation performance.\n\nIn the second stage, the authors propose to refine the corner locations and distances where a conventional object detector would directly refine the center locations. But there are no experiments that support refining corner locations and distances is better and more accurate.\n\nAlthough the experiments show that the proposed approach outperforms other detectors with the same backbone network, it is not clear where the improvement comes from because the proposed approach uses a cascade network which is known to improve performance while others don’t. My main concern is that the current experiments do not sufficiently show that the proposed approach is better than the existing approaches. How does regressing distance compare to associative embeddings or corner proposal network in terms of validation performance? How does detecting corners compare to detecting centers in terms of validation performance? How does refining corner location and distances compare to the conventional refining strategy in terms of validation performance? Overall I like the idea of this paper so if the authors provide satisfactory answers to the above questions, I am happy to reconsider my rating. N/A", " In this paper, the authors study the problem of object detection. To be specific, they introduce a new approach for bottom-up detection of objects in images. They take inspiration from how humans label objects (click top-left corner, drag to the bottom-right corner) and propose a two-stage approach mimicking this. They report positive results on the COCO dataset. 
Strengths:\n+ Taking inspirations from humans in labeling objects is very interesting.\n\n+ A novel mechanism is introduced that mimicks human labeling.\n\n\nWeaknesses:\n\n- The paper talks about how humans label objects and makes certain assumptions about the process but these are based on beliefs and observations of the authors. One can easily argue that humans might look at/around object centers to determine the top-left corner and then the bottom-right corner. It is not clear whether top-left => bottom-right ordering is affected by right-handedness or cultural differences.\n\n- The paper's improvement over the existing corner-based methods is not clearly demonstrated in the experiments. HumanLiker has a second stage, DCN and therefore extra processing compared to e.g. CornerNet and this can have a huge impact. A more controlled evaluation is needed to justify the contribution of the paper.\n\n- Line 71: \"neat and bionic, which makes the model intelligible and friendly to follow.\" => How do you define and measure these? It is not clear how the proposed approach is related to bionics.\n\n- \"2) Instead of single-level feature, we use multi-level features based on FPN [19] to better fit object corners with different size and context;\" => Is this really a contribution?\n\n- The proposed mechanism has too many hyper-parameters. Considering the complicated two-stage approach, I wouldn't call this neat or intelligeble. And I would avoid using unnecessarily positive and/but unjustified adjectives such as powerful, bionic, neat, ..\n\n- Fig 4: The difference is so small between the two lines that I am not sure it is safe to make a conclusion from here. With N different random runs, you might obtain different outcomes.\n\n\nMinor comments:\n- Line 39: \"for object with peculiar geometry\" => \"for objects with peculiar geometry\".\n- Line 158: \"which close to the positive\" => \"which are close to the positive\".\n- Eq 1: \"Objet size\" => \"Object size\". Gk should be defined as a function which takes some input.\n- Eq 2: space after \"if\". \"xy\" => \"x,y\".\n- Line 164: You should cite on this line CornerNet for Eq 2. The reference on line 167 is insufficient & indirect for the source of Eq 2.\n- Eq 2: Please explicitly state that this is class agnostic.\n- Line 174: \"Due to there\" => \"Due to the fact that there\".\n- Line 180: \"offset.\" => \"offset:\".\n- Please read the following guide about writing equations: http://www.ai.mit.edu/courses/6.899/papers/mermin.pdf\n- Line 203: \"detect box.\" => \"detection box.\"\n- Eq 13: Please use different symbols for these hyper-parameters as you used them before.\n- L_ii should be explicitly written in the paper for completeness. (1) More justifications are require to call this approach human like.\n\n(2) More controlled experiments are required to justify the source of improvement over CornerNet. The method is neat, powerful, intelligible, bionic. It doesn't have any limitations." ]
[ -1, -1, -1, -1, 5, 5, 4 ]
[ -1, -1, -1, -1, 5, 5, 5 ]
[ "s-iJaWAViw5", "8iINixQygGc", "cQe9Q40DrUH", "HjQJvOG6Tq", "nips_2022__VF5QKgXoqt", "nips_2022__VF5QKgXoqt", "nips_2022__VF5QKgXoqt" ]
nips_2022_Gpqqm4p91Ez
Towards Lightweight Black-Box Attack Against Deep Neural Networks
Black-box attacks can generate adversarial examples without accessing the parameters of the target model, largely exacerbating the threats to deployed deep neural networks (DNNs). However, previous works state that black-box attacks fail to mislead target models when their training data and outputs are inaccessible. In this work, we argue that black-box attacks can pose practical threats even in this extremely restrictive scenario where only several test samples are available. Specifically, we find that attacking the shallow layers of DNNs trained on a few test samples can generate powerful adversarial examples. As only a few samples are required, we refer to these attacks as lightweight black-box attacks. The main challenge in promoting lightweight attacks is to mitigate the adverse impact caused by the approximation error of the shallow layers. As it is hard to mitigate the approximation error with few available samples, we propose the Error TransFormer (ETF) for lightweight attacks. Namely, ETF transforms the approximation error in the parameter space into a perturbation in the feature space and alleviates the error by disturbing features. In experiments, lightweight black-box attacks with the proposed ETF achieve surprising results. For example, even if only 1 sample per category is available, the attack success rate of lightweight black-box attacks is only about 3% lower than that of black-box attacks with complete training data.
Accept
The paper presents a new method for generating black-box attacks with very limited data, i.e., the no-box case. The attack is based on feature transformations and the paper proposes error transformers (ETF) to alleviate issues with approximation errors. The reviewers believe the paper is technically solid and raised issues mainly to do with clarity and experiments. The authors provided a rebuttal and updated paper that thoroughly addressed those issues. All the reviewers raised their scores (including one that said to do so but did not update it). A good contribution to the field of adversarial learning.
train
[ "czCF6MCk8PL", "Habl8C7tkZR", "om_ke3pRLwQ", "2-AqQIZxYVv", "Dv2wL6MXxhO", "LWjDxYtXngZ", "TlZIPoj8NnvM", "kN7AQTRhHS0", "tj2HEHBedvQ", "Rb-9-tYThQP", "K5IHiMFYZBl", "h8UMB04qJa", "iecSuyuVjp", "-8oOWTl4aT", "VJOfeStDB60", "CCkcYNQjDA8", "PIG1oEVpaOL", "QRzmO9mqp2l", "nAaY9LKs4PQ", "HgYG224Hn9J", "34ajuvTGUsU", "yQUqMFZWfMd", "Hdlih3dePeW", "-Jjj-KeaDj8", "tp0TnbL1DHj", "kALk0MdYBA" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer LA1D:\n\nThank you again for your valuable comments and constructive suggestions on our work. Would you mind checking the response to confirm whether it addresses your questions/concerns?\n\nSince we list many points in the response and the window for discussion is closing, we want to summarize our response here quickly. We hope this can help you quickly go through all feedback.\n\nIn addition to clarifying the approach and experiment details, following your kind suggestions, we have added five experiments in respect to your constructive suggestions. Specifically:\n - highlight the difference between the feature and weight space min-max strategy;\n - verify the effectiveness using CIFAR10;\n - test the robustness of ETF on RobustBench; \n - evaluate the approach with $\\ell_2$-norm perturbation;\n - study the impact of data augmentation.\n\nAll experiments have been added to the revision. Besides, we have revised the paper to improve the clarity and readability following your valuable suggestions.\n\nWe'd be glad to answer any outstanding questions and look forward to any further discussions.\n\nBest regards,\n\nAuthors of #565", " Dear Reviewer kZtY,\n\nWould you mind re-checking the system to confirm whether you have received the latest version of our paper? \n\nWe revised the paper following your constructive suggestions, making the paper more readable and solid. Thus, we look forward to your further outstanding questions/comments. \n\nIf you find that the latest revision meets the bar to change your score, would you mind raising the score? Your support for a novel, simple, and initial attempt to think of the potential threats of black-box attacks is critical, and we sincerely appreciate it!\n\nBest regards,\n\nAuthors of #565", " Dear reviewers,\n\nWe have revised our paper following the insightful suggestions/comments from all the reviewers. The revision is in blue color:\n\n**For the related works** We added explanations to claim the differences between ours and existing works, especially the min-max strategy.\n\n**Further description of approach details** Following the reviewers' suggestion in the method details section, we made changes in Sec. 4 regarding the use of formulae, notation of symbols, and clarification of misunderstandings.\n\n**Addition of experimental details** We add more descriptions related to the experimental details in Sec. 5.\n\n**Visualization of Adversarial Examples** We add adversarial examples generated by our method in Sec 5.3.\n\nWe provide a link of our codes for reproducing the results: https://anonymous.4open.science/r/Error_TransFormer-7495/README.md\n\nBest regards,\n\nAuthors of #565\n\n", " Dear reviewer RUwr,\n\nThank you again for your time in reviewing our paper and your constructive comments on our work. We’d be grateful if you can confirm whether our response has addressed your concerns. Here is a short summary:\n\nInspired by your constructive questions and suggestions, we have added two tables in the revision for verifying the effectiveness of ETF under a) smaller $\\epsilon$ and b) more test images. In addition, we revised our paper to improve the clarity and readability following your valuable suggestions.\n\n\nWe’d be glad to answer any outstanding questions and look forward to any further discussions.\n\nBest regards,\n\nAuthors of #565", " Dear Reviewer kZtY,\n\nWe are so glad that our response clarified your questions and concerns. 
Thanks for your kind reminder about uploading the revision. We have carefully revised the paper according to the reviewers' constructive suggestions and comments. The revised version has been uploaded, and the modified content is marked in blue.\n\nBest regards,\n\nAuthors of #565", " Thank you for the response. I think my questions and concerns were addressed and solved.\nHowever, I can't find the revision version of the paper, have the authors uploaded the revision yet?", " Dear Reviewer SEqv,\n\nGlad to hear that your concerns/questions are addressed well. Thank you for raising the score.\n\nBest regards,\n\nAuthors of #565", " Dear Authors,\n\nThanks for addressing/answering all my questions, and I am satisfied with the responses. I have updated my scores to reflect the same.\n", " **Response to Reviewer LA1D:** \n\nWe sincerely thank you for your constructive comments and positive feedback about our work! Please see our detailed responses to your comments and suggestions below.\n\n**Response to [Weakness]**\n\n> **W1**: \"The novelty is somewhat limited: a) min-max objective similar to ETF in weight space has for example been explored; b) surrogate training of a shallow model also appeared in previous papers.\" \n\n**R1**: \nThanks for your valuable comments. We have added explanations to the revision to highlight our contribution and the differences between our work and previous works [1,2].\n\na) The min-max strategy was proposed to flatten the loss landscape in the weight space [1], whereas our work proposes performing feature-space min-max optimization for approximation error minimization. To further highlight the difference, we conduct experiments using the weight-space min-max optimization in [TABLE 1-1]. \n\nThe results in [TABLE 1-1] demonstrate the superiority of feature-space optimization. We suspect that the performance gain results from the fact that performing the min-max strategy in the feature space is more appropriate than weight-space optimization for the no-box threat model. This is because we know which perturbations are preferred in the feature space, e.g., those towards the features of guide images, but we have no idea which perturbations are preferred in the weight space, i.e., there are no \"guide models\". \n\nb) Surrogate training of a shallow model is widely used in self-supervised learning [2], and recent work [3] demonstrates that models learned from a few images can approximate the shallow layers of models trained on millions of images. However, these works did not focus on adversarial attacks and did not explore an appropriate strategy for crafting compelling adversarial examples.\n\n**TABLE 1-1**: This experiment is conducted on 1000 samples randomly selected from the ImageNet validation set and evaluates the attack performance on seven different pre-trained models (loaded from torchvision). The structure of the lightweight model is ResNet18 [13]. Feature_space refers to our ETF method, and Weight_space is obtained with the min-max strategy in the weight space [1]. $\\varepsilon \\leq 0.1$, $\\ell_\\infty$-norm. 
The best results are in bold.\n\n| Model | VGG19[11] | Inception_v3[12] | RN152[13] | DenseNet161[14] | SENet[15] | WRN[16] | MobileNet[17] | Avg |\n|---------------|------------|--------------|------------|-------------|-----------|------------|------------|------------|\n| clean | 67.43% | 64.35 % | 74.21 % | 73.34% | 51.28% | 73.22% | 65.06% | 66.99% |\n| Weight_space | 29.43% | 32.44% | 40.11% | 41.88% | 10.12% | 35.41% | 19.27% | 29.81% |\n| Feature_space | **14.11%** | **20.22%** | **24.20%** | **24.74%** | **6.96%** | **20.73%** | **10.66%** | **17.37%** |\n\n", " > W2: \"It might be interesting to evaluate the performance of the method using CIFAR10 and the most robust models for example from CIFAR10 RobustBench.\"\n\n**R2:** \nWe conducted extensive experiments in Section 5 and Appendix B, and the results verified the power of our ETF. However, your kind suggestion is also great for further solidifying our evaluation. Thus, we conduct the experiments on the CIFAR10 dataset, see [TABLE 1-2], and evaluate the robustness of models downloaded from RobustBench [4], see [TABLE 1-3]. The conclusion drawn from [TABLE 1-2] and [TABLE 1-3] is consistent with that drawn from Table 1 (in the paper) evaluating on ImageNet dataset.\n\n**TABLE 1-2**: Evaluate the performances of different attacks on CIFAR10. Here, experiments of \"Deep-, Shallow-, ETF-\" are conducted in the no-box threat model. \"Deep*\" means the black-box setting where the surrogate models are trained on the training data the same as the seven target models. \"PGD[24], MI[25], DI[26], TI[27]\" is applied to the different settings and methods. Auto-attack[23] is used for testing the robustness of the target models, so it adopts the white-box setting to mount the seven target model. $\\varepsilon \\leq 0.1$ in $\\ell_\\infty$-norm.\n\n| Model | VGG19[11] | RN56[13] | MobileNet[17] | ShuffleNet[18] | Avg |\n|---------------|------------|------------|------------|-------------|------------|\n| clean | 93.91% | 94.37% | 93.72% | 92.98% | 93.74% |\n| Deep-PGD | 59.45 $\\pm$ 0.34 % | 57.58 $\\pm$ 0.46 % | 45.21 $\\pm$ 0.27% | 52.32 $\\pm$ 0.37 % | 53.64 $\\pm$ 0.78 %|\n| Deep-MI | 53.44 $\\pm$ 0.75 % | 52.17 $\\pm$ 0.65 % | 44.25 $\\pm$ 0.34 % | 49.80 $\\pm$ 0.35 % | 49.91 $\\pm$ 0.58 %|\n| Deep-DI | 60.24 $\\pm$ 0.19 % | 58.63 $\\pm$ 0.34 % | 47.67 $\\pm$ 0.31 % | 54.34 $\\pm$ 0.62 % | 55.22 $\\pm$ 0.52 % |\n| Deep-TI | 64.51 $\\pm$ 0.38 % | 59.85 $\\pm$ 0.60 % | 48.80 $\\pm$ 0.59 % | 56.88 $\\pm$ 0.44 % | 57.51 $\\pm$ 0.42 % |\n| Shallow-PGD | 27.17 $\\pm$ 0.74 % | 31.06 $\\pm$ 0.55 % | 22.83 $\\pm$ 0.66 % | 28.14 $\\pm$ 0.76 % | 27.30 $\\pm$ 0.81% |\n| Shallow-MI | 32.43 $\\pm$ 0.98 % | 36.42 $\\pm$ 1.01 % | 31.84 $\\pm$ 0.79 % | 30.76 $\\pm$ 0.94 % | 32.86 $\\pm$ 0.94 % |\n| Shallow-DI | 25.65 $\\pm$ 0.56 % | 30.27 $\\pm$ 0.51 % | 22.61 $\\pm$ 0.38 % | 27.22 $\\pm$ 0.55 % | 26.43 $\\pm$ 0.45 % |\n| Shallow-TI | 28.66 $\\pm$ 0.45 % | 31.35 $\\pm$ 0.33 % | 27.20 $\\pm$ 0.44 % | 29.48 $\\pm$ 0.63 % | 29.17 $\\pm$ 0.56 % |\n| ETF-PGD | 21.27 $\\pm$ 0.27 % | 25.85 $\\pm$ 0.84 % | **20.03** $\\pm$ 0.65 % | 22.37 $\\pm$ 0.44 % | 22.38 $\\pm$ 0.53 % |\n| ETF-MI | **20.75** $\\pm$ 0.55 % | **24.36** $\\pm$ 0.35 % | 20.51 $\\pm$ 0.34 % | **19.68** $\\pm$ 0.23 % | **21.32** $\\pm$ 0.42 % |\n| ETF-DI | 21.37 $\\pm$ 0.37 % | 26.46 $\\pm$ 0.27 % | 21.11 $\\pm$ 0.69 % | 23.14 $\\pm$ 0.36 % | 23.02 $\\pm$ 0.55 % |\n| ETF-TI | 25.48 $\\pm$ 0.41 % | 30.26 $\\pm$ 0.23 % | 23.37 $\\pm$ 0.51 % | 26.34 $\\pm$ 0.25 %| 26.36 $\\pm$ 0.39 % |\n| 
Deep*-PGD | 4.63 $\\pm$ 0.54 % | 0.81 $\\pm$ 0.74 % | 3.79 $\\pm$ 0.28 % | 3.21 $\\pm$ 0.32 % | 3.11 $\\pm$ 0.47 % |\n| Deep*-MI | 4.72 $\\pm$ 0.20 % | 0.96 $\\pm$ 0.36 % | 4.36 $\\pm$ 0.12 % | 3.78 $\\pm$ 0.25 % | 3.45 $\\pm$ 0.33 % |\n| Deep*-DI | 4.63 $\\pm$ 0.17 % | 0.81 $\\pm$ 0.67 % | 2.38 $\\pm$ 0.53 % | 3.34 $\\pm$ 0.43 % | 2.79 $\\pm$ 0.47 % |\n| Deep*-TI | 4.66 $\\pm$ 0.18 % | 0.84 $\\pm$ 0.25 % | 3.78 $\\pm$ 0.46 % | 3.67 $\\pm$ 0.31 % | 3.23 $\\pm$ 0.32 % |\n| Auto-attack[23] | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |\n\n\n**TABLE 1-3**: The attacks on the most robust models from CIFAR10 RobustBench. The robustness model is trained by the different adversarial defense method,$\\varepsilon \\leq 0.1$ in $\\ell_\\infty$-norm. \n\n| Setting | Model | Gowal2021[19] | Kang2021[20] | Pang2022[21] | Sehwag2021[22] | Avg |\n|:---------:|:-----------:|:----------:|:---------:|:---------:|:-----------:|:--------:|\n| | clean | 89.99% | 92.50% | 87.56% | 86.55% | 89.15% |\n| No-box | ETF-PGD | 72.01% | 72.86% | 72.50% | 67.44% | 71.20% |\n| Black-box | Deep*-PGD | 83.53% | 88.06% | 83.17% | 79.44% | 83.55% |\n| White-box | Auto-attack[23] | 8.05% | 21.13% | 7.46% | 6.53% | 10.79% |\n", " \n> W3: \"In Table 1-3, their should be another row for the ground truth robust accuracy in a white-box setting.\"\n\n**R3:** \nThanks for your valuable suggestion. To make the table clear, we add one raw in all mentioned tables to show the white-box attack performance using AutoAttack [5]. In the revision, we have replaced tables in the old version with the new ones.\n\n> W4: \"It would be easier to understand the general message of the table, which clearly is that ETF is the best method in the no-box setting, by some form of highlighting.\"\n\n**R4:**\nThank you for pointing out the potentially confusing problem. We have highlighted ETF in tables and replaced the confusing tables with new ones.\n\n**Response to [Questions]**\n\n> Q1: \"The paper only considers L-inf perturbations. While they are the most common, does the method work for L0, L1 and L2 too?\"\n\n**A1:** \nWe mainly conduct experiments with L-inf perturbation since it is widely adopted in many previous works [8,9,10]. To further demonstrate the power of our ETF, we follow your kind suggestion in testing $\\ell_2$-norm perturbation as an example. The results are reported in [TABLE 1-4], which further demonstrate the effectiveness of our proposal. Considering that $\\ell_1$ and $\\ell_0$ perturbations require careful design [6,7], it is beyond the scope of our work, so we leave it as our future work. \n\n**TABLE 1-4**: The classification accuracy evaluation on **$\\ell_2$-norm** attacks. The experiment is conducted on the ImageNet validation. 
Following the previous work[25] about **$\\ell_2$-norm** attacks, the maximum disturbance $\\varepsilon$ is set to 16 $\\sqrt[2]{N}$ where N is the dimension of input to attacks.\n\n| Model | VGG19[11] | Inception_v3[12] | RN152[13] | DenseNet161[14] | SENet[15] | WRN[16] | MobileNet[17] | Avg |\n|----------------|------------|--------------|------------|-------------|-----------|------------|------------|------------|\n| clean | 67.43% | 64.35 % | 74.21 % | 73.34% | 51.28% | 73.22% | 65.06% | 66.99% |\n| Deep-PGD | 37.73% | 42.75% | 51.04% | 51.96% | 17.48% | 50.61% | 31.07% | 40.38% |\n| Shallow-PGD | 25.73% | 31.51% | 44.96% | 43.72% | 8.58% | 40.62% | 18.73% | 30.55% |\n| ETF-PGD | 22.16% | 27.03% | 34.87% | 37.94% | 11.28% | 29.63% | 16.17% | 25.58% |\n| Deep*-PGD | 7.65% | 22.88% | 11.44% | 11.23% | 4.56% | 9.69% | 8.03% | 10.78% |\n| Autoattack[23] | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |\n\n\n\n>Q2: \"Which loss (Eq (1) vs (2)) is used for the actual training of the surrogate model?\"\n\n**A2:** \nThank you for pointing out the potentially confusing problem. Eq. (1) is used in most experiments, i.e., Table 1, 2, and 3 (in the paper), as label information is usually available. Eq. (2) is a promising candidate, especially for the scenarios where the adversary cannot access the label information. Thus, we also report the results in Table 4 (in the paper, termed as Unsupervised) to show that we can generate powerful adversarial examples in the no-box threat model, even if the label information is unavailable. We have added the above clarification into the revision. \n\n>Q3: \"You use heavy data augmentation, what is the impact of that and did you experiment with different ones?\"\n\n**A3:**\nWe follow the empirical conclusion suggested in [3], where heavy data augmentation is vital for training appropriate shallow models. Because appropriate shallow models are necessary for mounting lightweight black-box attacks, data augmentation plays a crucial role and is heavily used in our experiments. This is supported by results shown in [TABLE 1-5], where we report the performance of lightweight black-box attacks with and without data augmentation. The results and conclusion have been added to the revision.\n\n \n**TABLE 1-5**:The impact of augmentation to ETF attacks. \"No-Aug\" means the effect of the attack on the ETF using the surrogate model without augmentation for training. This experiment is conducted on the ImageNet validation. The best results are in bold.\n\n| Model | VGG19[11] | Inception_v3[12] | RN152[13] | DenseNet161[14] | SENet[15] | WRN[16] | MobileNet[17] | Avg |\n|--------|------------|--------------|------------|-------------|-----------|------------|------------|------------|\n| clean | 67.43% | 64.35 % | 74.21 % | 73.34% | 51.28% | 73.22% | 65.06% | 66.99% |\n| No-Aug | 34.58% | 39.17% | 46.25% | 50.06% | 10.42% | 45.10% | 22.92% | 35.50% |\n| Aug | **14.11%** | **20.22%** | **24.20%** | **24.74%** | **6.96%** | **20.73%** | **10.66%** | **17.37%** |\n\n", " \n**References** \n\n[1] Adversarial Weight Perturbation Helps Robust Generalization. Wu et al. NeurIPS 2020.\n\n[2] A Simple Framework for Contrastive Learning of Visual Representations. Chen et al. ICML 2020.\n\n[3] A critical analysis of self-supervision, or what we can learn from a single image. Asano et al. ICLR 2022.\n\n[4] RobustBench: a standardized adversarial robustness benchmark. Croce et al. 
NeurIPS 2021.\n\n[5] Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks. Croce et al. ICML 2020.\n\n[6] EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples. Chen et al. AAAI 2018.\n\n[7] One Pixel Attack for Fooling Deep Neural Networks. Su et al. IEEE Trans on Evolutionary Computation, 2019.\n\n[8] Improving transferability of adversarial examples with input diversity. Xie et al. CVPR 2019.\n\n[9] Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks. Dong et al. CVPR 2019.\n\n[10] Black-box Adversarial Attacks with Limited Queries and Information. Ilyas et al. ICML 2018.\n\n\n[11] Very deep convolutional networks for large-scale image recognition. Simonyan et al. ICLR 2015.\n\n[12] Rethinking the inception architecture for computer vision. Szegedy et al. CVPR 2016.\n\n[13] Deep residual learning for image recognition. He et al. CVPR 2016.\n\n[14] Densely connected convolutional networks. Huang et al. CVPR 2017.\n\n[15] Squeeze-and-excitation networks. Hu et al. CVPR 2018.\n\n[16] Wide residual networks. Zagoruyko et al. BMVC 2016. \n\n[17]Mobilenetv2: Inverted residuals and linear bottlenecks. Sandler et al. CVPR 2018.\n \n[18] ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. Ma et al. ECCV 2018.\n\n[19] Improving Robustness using Generated Data. Gowal et al. NeurIPS 2021.\n\n[20] Stable neural ode with lyapunov-stable equilibrium points for defending against adversarial attacks. Kang et al. NeurIPS 2021.\n \n[21] Robustness and Accuracy Could Be Reconcilable by (Proper) Definition. Pang et al. ICML 2022.\n \n[22] Robust learning meets generative models: Can proxy distributions improve adversarial robustness? Sehwag et al. ICLR 2022.\n\n[23] Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. Croce et al. ICML 2022.\n \n[24] Towards deep learning models resistant to adversarial attacks. Madry et al. ICLR 2018.\n\n[25] Boosting adversarial attacks with momentum. Dong et al. CVPR 2018.\n\n[26] Improving transferability of adversarial examples with input diversity. Xie et al. CVPR 2019.", " **Response to Reviewer SEqv**\n\nWe sincerely thank you for your constructive comments and positive feedback about our work! Please see our detailed responses to your comments and suggestions below.\n\n**Response to [Weakness]**\n\n> W1: \"The paper is a bit hard to follow, and the writing needs improvement.\"\n\n**R1**: Sorry for our unclear description, we will definitely improve our writing and paper organization in our revision. \n\n> W2: \"One vital information authors leverage is that we can generate powerful adversarial examples by attacking shallow layers. I would encourage the authors to add appropriate references.\"\n\n**R2:** \nFollowing your valuable suggestion, we have highlighted the reference in the revision to support the statement. According to the conclusion drawn from [1], adversarial examples can be generated by attacking shallow layers, i.e., perturbing representations at shallow layers of deep neural networks. \n\n> W3: \"A vital piece of the technique is missing, like how is the optimization problem in Eq. 6 solved. I would encourage the authors to include such details in the main paper.\"\n\n**R3:** \nThank you for pointing out the confusing problem. We have added the following description to the revision following your valuable suggestion. 
Given the strength of perturbations, we perform a min-max optimization to generate adversarial examples. Specifically, we solve the inner maximization problem by generating perturbations in the feature space, given a perturbed adversarial example. This step aims to mitigate the approximation error. The outer minimization problem is solved by finding adversarial perturbations in the input space, the same as the adversarial example generation. After iterative generation of perturbations, we can consider that adversarial examples are generated by attacking a model with a reduced approximation error.\n\n\n**Responses to [Questions]**\n\n> Q1: \"How imperceptible are the examples generated by the proposed technique?\"\n\n**A1:** \nThank you for your kind suggestion, and we find that the resultant perturbations are truly imperceptible for our ETF. Please refer to Figure 2 (in the revision) for the visualization with deep*-PGD attack (using training images), deep-PGD attack (using test images), and lightweight black-box attack. We have added the figure and analysis to the revision.\n\n> Q2:\"How good are examples generated by the proposed technique in evading the recent class of adversarial example detection methods?\"\n\n**A2:** \nThanks for your constructive suggestion. We employ a recent detection method [7,8] to detect adversarial examples generated by different attack methods, e.g., FGSM, PGD, BIM, and ETF. All settings are the same as that used in the paper, and the results are reported in [TABLE 2-1]. We can see that ETF performs better than the baselines, i.e., having a high probability of evading detection methods.\n\nTABLE 2-1: Performance of adversarial detection against four attacks, metric to evaluate the detection performance can be found in [7,8].\n\n| Mahalanobis[8] | | | | | |\n|---------------|-----------|-----------|-----------|-----------|-----------|\n| Method | TNR | AUROC | DTACC | AUIN | AUOUT |\n| BIM[9] | 99.99% | 99.99% | 99.86% | 99.86% | 99.71% |\n| FGSM[10] | 98.89% | 99.88% | 98.89% | 99.66% | 99.24% |\n| Deep*-PGD | 97.22% | 99.58% | 97.92% | 99.64% | 99.05% |\n| ETF | **96.67%** | **98.73%** | **96.94%** | **98.75%** | **97.98%** |\n\n| LID[7] | | | | | |\n|-----------|-----------|-----------|-----------|-----------|-----------|\n| Method | TNR | AUROC | DTACC | AUIN | AUOUT |\n| BIM[9] | 99.99% | **98.81%** | 98.33% | 99.77% | 99.33% |\n| FGSM[10] | 99.99% | 99.99% | 99.99% | 99.72% | 99.44% |\n| Deep*-PGD | 99.99% | 99.99% | 99.99% | 99.86% | 99.72% |\n| ETF | **97.78%** | 99.58% | **97.22%** | **99.51%** | **98.68%** |\n", " > Q3: \"Does surrogate model architecture impact the success rate of the proposed technique?\"\n\n**A3:** \nThank you for your kind suggestions, and we add the following results and analysis in our revision, where we instantiate the shallow layers with different model architectures containing ResNet, VGG, and SENet. The results are reported in [TABLE 2-2], demonstrating that our EFT is powerful across various model architectures. \n\nTABLE 2-2: Model accuracy under ETF attack with different architectures, containing SENet, VGG11, and ResNet18. 
\n| Model | VGG19[11] | Inception_v3[12] | RN152[13] | DenseNet161[14] | SENet[15] | WRN[16] | MobileNet[17] | Avg |\n|---------|------------|--------------|------------|-------------|-----------|------------|------------|------------|\n| clean | 67.43% | 64.35 % | 74.21 % | 73.34% | 51.28% | 73.22% | 65.06% | 66.99% |\n| SENet[15] | 23.44% | 28.42% | 35.07% | 31.64% | 6.73% | 28.19% | 11.80% | 23.61% |\n| VGG11[11] | 18.20% | 22.65% | 27.24% | 26.33% | **6.47%** | 23.16% | 12.69% | 19.53% |\n| ResNet18[13] | **14.11%** | **20.22%** | **24.20%** | **24.74%** | 6.96% | **20.73%** | **10.66%** | **17.37%** |\n\n> Q4: \"Does the technique also work in other domains like NLP?\"\n\n**A4**: \nFollowing much of previous works [3,4,5,6], we conduct experiments in the area of image classification. We also believe that it is an exciting problem to study the effectiveness of ETF in the field of NLP, but it remains challenging to use ETF for NLP. For instance, it is unclear in the NLP domain whether critical differences exist between those models learned from a few data and those learned from extensive training data, which is beyond the scope of this work. We sincerely appreciate your comment and will explore such an interesting problem in the future. \n\n> Q5: \"The surrogate model is trained in a contrastive manner. Can other self-supervision tasks like rotation be used to train it?(ref. Unsupervised Representation Learning by Predicting Image Rotations)\"\n \n**A5:** \nThanks for your insightful question, we agree that exploring different strategies to train the shallow model is exciting for further improvement of the performance of lightweight black-box attacks, as shallow layers play an important role in lightweight black-box attacks. Thus, we generate adversarial examples using EFT with shallow layers trained with a rotation prediction task [2] and report the results in [TABLE 2-3]. We can see that shallow layers trained with the rotation prediction task is slightly worse than using the contrastive strategy, but the performance can also reduce the model accuracy significantly.\n\n**TABLE 2-3:** The unsupervised representation Learning [2] applies to training the lightweight surrogate model. The experiment is conducted on the ImageNet validation set. \"Classification\" means the result of using cross-entropy loss to train lightweight surrogate models with label information to mount ETF attacks. \n\n| Model | VGG19[11] | Inception_v3[12] | RN152[13] | DenseNet161[14] | SENet[15] | WRN[16] | MobileNet[17] | Avg |\n|----------------|------------|--------------|------------|-------------|-----------|------------|------------|------------|\n| clean | 67.43% | 64.35 % | 74.21 % | 73.34% | 51.28% | 73.22% | 65.06% | 66.99% |\n| SimCLR[18] | 15.32% | **18.54%** | 25.81% | 24.77% | **6.64%** | 22.90% | 11.34% | 17.91% |\n| Rotation[2] | 19.07% | 21.79% | 27.30% | 28.85% | 7.66% | 23.94% | 12.51% | 20.16% |\n| Classification | **14.11%** | 20.22% | **24.20%** | **24.74%** | 6.96% | **20.73%** | **10.66%** | **17.37%** |\n\n> Q6:\"For table 1, were the surrogate model trained using labels or in a contrastive manner?\"\n\n**A6:** \nThanks for pointing out the potentially confusing problem, we have fixed it in the revision. 
All surrogate models except those used in Table 4 (in the paper) are trained via an instance discrimination task, i.e., using labels.", " > Q7: \"How useful are the examples generated by ETF in improving the robustness of the models?\"\n\n**A7:** \nFollowing your kind suggestion, we perform adversarial training using ETF to exploit whether examples generated by ETF can improve model robustness. Results are reported in [TABLE 2-4]. Unfortunately, adversarial training with examples generated by ETF attack cannot enhance robustness. \n\n\n**TABLE 2-4**: The robust accuracy of the normally trained model and the model trained with ETF adversarial examples.\n\n| Model | clean | FGSM[10] | PGD | Autoattack[19] |\n|-----------------------------------|--------|--------|-------|-------------|\n| Adversarial trained model | 88.80% | 15.66% | 0.10% | 0.00% |\n| Normal model | 94.02% | 15.12% | 0.00% | 0.00% |\n\n\n>Q8:\"In addition, as the performance of the surrogate is tied to the samples used for training it, I would encourage the authors to run the experiments on different sample sets and report the variance.\"\n\n**A8:** \nThanks for your valuable suggestion. We have reported the variance in the revision and conducted the experiments on the CIFAR10 dataset, see [TABLE 2-5]. The results demonstrate that ETF is relatively robust to a different set of samples.\n\n**TABLE 2-5:** The accuracy of 4 target models normally trained on the CIFAR10 dataset and evaluated on 1,000 adversarial examples generated by lightweight black-box attacks or existing black-box attacks, under $\\epsilon \\leq 0.1$. The Shallow-(PGD, MI, DI, TI) mean applying PGD, MI, DI and TI to the shallow layers of the model. Deep-(PGD, MI, DI and TI) mean applying PGD, MI, DI and TI to the model’s output. EFT-(PGD, MI, DI and TI) mean applying ETF combined with PGD, MI, DI or TI to the shallow layers. 
\"Deep*\" means the black-box setting where the surrogate models are trained on the training data the same as the seven target models.\n| Model | VGG19[11] | RN56[13] | MobileNet[17] | ShuffleNet[20] | Avg |\n|---------------|------------|------------|------------|-------------|------------|\n| clean | 93.91% | 94.37% | 93.72% | 92.98% | 93.74% |\n| Deep-PGD | 59.45 $\\pm$ 0.34 % | 57.58 $\\pm$ 0.46 % | 45.21 $\\pm$ 0.27% | 52.32 $\\pm$ 0.37 % | 53.64 $\\pm$ 0.78 %|\n| Deep-MI | 53.44 $\\pm$ 0.75 % | 52.17 $\\pm$ 0.65 % | 44.25 $\\pm$ 0.34 % | 49.80 $\\pm$ 0.35 % | 49.91 $\\pm$ 0.58 %|\n| Deep-DI | 60.24 $\\pm$ 0.19 % | 58.63 $\\pm$ 0.34 % | 47.67 $\\pm$ 0.31 % | 54.34 $\\pm$ 0.62 % | 55.22 $\\pm$ 0.52 % |\n| Deep-TI | 64.51 $\\pm$ 0.38 % | 59.85 $\\pm$ 0.60 % | 48.80 $\\pm$ 0.59 % | 56.88 $\\pm$ 0.44 % | 57.51 $\\pm$ 0.42 % |\n| Shallow-PGD | 27.17 $\\pm$ 0.74 % | 31.06 $\\pm$ 0.55 % | 22.83 $\\pm$ 0.66 % | 28.14 $\\pm$ 0.76 % | 27.30 $\\pm$ 0.81 % |\n| Shallow-MI | 32.43 $\\pm$ 0.98 % | 36.42 $\\pm$ 1.01 % | 31.84 $\\pm$ 0.79 % | 30.76 $\\pm$ 0.94 % | 32.86 $\\pm$ 0.94 % |\n| Shallow-DI | 25.65 $\\pm$ 0.56 % | 30.27 $\\pm$ 0.51 % | 22.61 $\\pm$ 0.38 % | 27.22 $\\pm$ 0.55 % | 26.43 $\\pm$ 0.45 % |\n| Shallow-TI | 28.66 $\\pm$ 0.45 % | 31.35 $\\pm$ 0.33 % | 27.20 $\\pm$ 0.44 % | 29.48 $\\pm$ 0.63 % | 29.17 $\\pm$ 0.56 % |\n| ETF-PGD | 21.27 $\\pm$ 0.27 % | 25.85 $\\pm$ 0.84 % | **20.03** $\\pm$ 0.65 % | 22.37 $\\pm$ 0.44 % | 22.38 $\\pm$ 0.53 % |\n| ETF-MI | **20.75** $\\pm$ 0.55 % | **24.36** $\\pm$ 0.35 % | 20.51 $\\pm$ 0.34 % | **19.68** $\\pm$ 0.23 % | **21.32** $\\pm$ 0.42 % |\n| ETF-DI | 21.37 $\\pm$ 0.37 % | 26.46 $\\pm$ 0.27 % | 21.11 $\\pm$ 0.69 % | 23.14 $\\pm$ 0.36 % | 23.02 $\\pm$ 0.55 % |\n| ETF-TI | 25.48 $\\pm$ 0.41 % | 30.26 $\\pm$ 0.23 % | 23.37 $\\pm$ 0.51 % | 26.34 $\\pm$ 0.25 %| 26.36 $\\pm$ 0.39 % |\n| Deep*-PGD | 4.63 $\\pm$ 0.54 % | 0.81 $\\pm$ 0.74 % | 3.79 $\\pm$ 0.28 % | 3.21 $\\pm$ 0.32 % | 3.11 $\\pm$ 0.47 % |\n| Deep*-MI | 4.72 $\\pm$ 0.20 % | 0.96 $\\pm$ 0.36 % | 4.36 $\\pm$ 0.12 % | 3.78 $\\pm$ 0.25 % | 3.45 $\\pm$ 0.33 % |\n| Deep*-DI | 4.63 $\\pm$ 0.17 % | 0.81 $\\pm$ 0.67 % | 2.38 $\\pm$ 0.53 % | 3.34 $\\pm$ 0.43 % | 2.79 $\\pm$ 0.47 % |\n| Deep*-TI | 4.66 $\\pm$ 0.18 % | 0.84 $\\pm$ 0.25 % | 3.78 $\\pm$ 0.46 % | 3.67 $\\pm$ 0.31 % | 3.23 $\\pm$ 0.32 % |\n| Auto-attack[19] | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |", " \n**References**\n\n\n[1] Adversarial Manipulation of Deep Representations. Sabour et al. ICLR 2016\n\n[2] Unsupervised Representation Learning by Predicting Image Rotations. Gidaris et al. ICLR 2018\n\n[3] Towards deep learning models resistant to adversarial attacks. Madry et al. ICLR 2018\n\n[4] Boosting adversarial attacks with momentum. Dong et al. CVPR 2018\n\n[5] Improving transferability of adversarial examples with input diversity. Xie et al. CVPR 2019\n\n[6] Black-box Adversarial Attacks with Limited Queries and Information. Ilyas et al. ICML 2018\n\n[7] Characterizing adversarial subspaces using local intrinsic dimensionality. Ma et al. In ICLR, 2018\n\n[8] A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks. Lee et al. NeurIPS 2018\n \n[9] Adversarial examples in the physical world. Kurakin et al. ICLR 2016.\n\n[10] Explaining and harnessing adversarial examples. Goodfellow et al. ICLR 2015.\n\n[11] Very deep convolutional networks for large-scale image recognition. Simonyan et al. ICLR 2015.\n\n[12] Rethinking the inception architecture for computer vision. Szegedy et al. 
CVPR 2016.\n\n[13] Deep residual learning for image recognition. He et al. CVPR 2016.\n\n[14] Densely connected convolutional networks. Huang et al. CVPR 2017.\n\n[15] Squeeze-and-excitation networks. Hu et al. CVPR 2018.\n\n[16] Wide residual networks. Zagoruyko et al. BMVC 2016. \n\n[17] Mobilenetv2: Inverted residuals and linear bottlenecks. Sandler et al. CVPR 2018.\n\n[18] A simple framework for contrastive learning of visual representations. Chen et al. ICML 2020.\n\n[19] Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. Croce et al. ICML 2022.\n\n[20] ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. Ma et al. ECCV 2018.", " \n**Response to Reviewer RUwr**\n\nWe sincerely thank you for taking the time to review our paper carefully and for your constructive comments and positive feedback about our work. Please find our responses below.\n\n**Response to [Weakness]**\n\n> W1: \"Errors about distance and captions. \"Hope the authors check the paper carefully and fix these errors.\"\n\n**R1:** \nSorry for our unclear description, we will definitely improve our writing and paper organization in our revision.\n\n> W2: \"it is more convincing that perform some experiments under smaller $\\epsilon$.\"\n\n**R2:** \nFollowing your kind suggestion, we conduct experiments under smaller $\\epsilon$, i.e., $\\epsilon=0.05$. The results are given in [TABLE 3-1], demonstrating that ETF can generate powerful adversarial examples even with meeting more strict constraints, i.e., smaller $\\epsilon$. The results and corresponding analysis have been added to the revision.\n\n\n**TABLE 3-1:** The accuracy of 7 normally trained target models evaluated on 1,000 adversarial examples generated by lightweight black-box attacks or existing black-box attacks, under $\\epsilon \\leq 0.05$. The Shallow-(PGD, MI, DI, TI) mean applying PGD, MI, DI and TI to the shallow layers of the model. Deep-(PGD, MI, DI and TI) mean applying PGD, MI, DI and TI to the model’s output. EFT-(PGD, MI, DI and TI) mean applying ETF combined with PGD, MI, DI or TI to the shallow layers. \n\n| Model | VGG19[1] | Inception_v3[2] | RN152[3] | DenseNet161[4] | SENet[5] | WRN[6] | MobileNet[7] | Avg |\n|--------------|------------|--------------|------------|-------------|------------|------------|------------|-------------|\n| clean | 67.43% | 64.35 % | 74.21 % | 73.34% | 51.28% | 73.22% | 65.06% | 66.99% |\n| Deep-PGD | 61.14% | 63.05% | 65.78% | 62.31% | 34.50% | 68.17% | 56.65% | 58.8% |\n| Shallow-PGD | 46.55% | 49.13% | 56.78% | 58.34% | 28.50% | 55.82% | 37.94% | 47.58% |\n| ETF-PGD | **41.76%** | **46.74%** | **48.55%** | **50.79%** | **24.68%** | **53.11%** | **32.65%** | **42.61%** |\n| Deep*-PGD | 16.23% | 36.71% | 25.36% | 24.62% | 18.16% | 31.42% | 13.34% | 23.69% |\n| Autoattack[8] | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |\n", " > W3: \"It is more convincing that perform experiments on more examples since the size of only 1,000 examples is too small.\"\n\n**R3:** \nFollowing your kind suggestion, we evaluate different methods using more samples, i.e., 5000 images, and report the results in [TABLE 3-2]. 
The conclusion drawn from [TABLE 3-2] is consistent with that drawn from Table 1 (in the paper), e.g., EFT outperforms \"shallow'' attack methods, demonstrating that ETF can generate powerful adversarial examples under various scenarios.\n\n\n**TABLE 3-2:** The accuracy of 7 normally trained target models evaluated on 5,000 adversarial examples generated by lightweight black-box attacks or existing black-box attacks, under $\\epsilon \\leq 0.1$. The Shallow-(PGD, MI, DI, TI) mean applying PGD, MI, DI and TI to the shallow layers of the model. Deep-(PGD, MI, DI and TI) mean applying PGD, MI, DI and TI to the model’s output. EFT-(PGD, MI, DI and TI) mean applying ETF combined with PGD, MI, DI or TI to the shallow layers. Auto-attack[23] is used for testing the robustness of the target models, so it adopts the white-box setting to mount the target models.\n\n| Model | VGG19[1] | Inception_v3[2] | RN152[3] | DenseNet161[4] | SENet[5] | WRN[6] | MobileNet[7] | Avg |\n|---------------|------------|--------------|------------|-------------|-----------|------------|-----------|------------|\n| clean | 67.43% | 64.35 % | 74.21 % | 73.34% | 51.28% | 73.22% | 65.06% | 66.99% |\n| Deep-PGD | 55.86% | 56.08% | 64.48% | 65.44% | 35.92% | 63.54% | 51.10% | 56.06% |\n| Deep-MI | 38.02% | 44.70% | 52.56% | 52.98% | 13.22% | 49.74% | 28.92% | 40.02% |\n| Deep-DI | 51.32% | 51.10% | 61.44% | 61.60% | 33.34% | 60.36% | 47.70% | 52.41% |\n| Deep-TI | 55.00% | 54.94% | 64.60% | 64.48% | 36.80% | 63.86% | 51.50% | 55.88% |\n| Shallow-PGD | 19.42% | 25.12% | 31.04% | 31.70% | 9.28% | 29.16% | 16.64% | 23.19% |\n| Shallow-MI | 22.47% | 28.14% | 34.69% | 35.76% | 11.42% | 31.65% | 17.13% | 25.89% |\n| Shallow-DI | 19.68% | 24.62% | 30.26% | 32.17% | 10.02% | 28.24% | 16.08% | 23.01% |\n| Shallow-TI | 20.40% | 23.96% | 29.00% | 31.04% | 9.82% | 28.26% | 17.08% | 22.79% |\n| ETF-PGD | 13.56% | 17.66% | 23.68% | 24.60% | **4.54%** | 20.68% | 9.42% | 16.31% |\n| ETF-MI | 15.94% | 20.32% | 26.28% | 26.74% | 5.52% | 22.72% | 9.70% | 18.17% |\n| ETF-DI | 13.16% | 25.72% | 22.32% | 22.76% | 4.68% | 19.84% | **8.58%** | 15.29% |\n| ETF-TI | **13.30%** | **14.60%** | **20.48%** | **22.38%** | 5.22% | **19.06%** | 9.50% | **14.93%** |\n| Deep*-PGD | 12.43% | 28.15% | 16.54% | 12.61% | 7.09% | 13.33% | 9.64% | 14.25% |\n| Deep*-MI | 11.77% | 25.14% | 18.10% | 13.72% | 4.26% | 14.61% | 8.30% | 13.70% |\n| Deep*-DI | 7.61% | 18.17% | 8.23% | 9.90% | 6.66% | 9.72% | 7.91% | 9.74% |\n| Deep*-TI | 9.55% | 23.48% | 13.51% | 10.63% | 6.46% | 10.92% | 9.55% | 12.01% |\n| Autoattack[8] | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00% |\n\n\n> W4: \"It is better bold the best results in all tables.\"\n\n**R4:** \nFollowing your suggestion, we have bolded the best results in all tables in the revision.\n\n> W5: \"The experiment setup should be more detailed.\"\n\n**R5:** \nSorry for the confusion. Following your kind suggestion, we provide detailed explanations and descriptions of the experimental settings in the revision.\n\n**Reference**\n\n[1] Very deep convolutional networks for large-scale image recognition. Simonyan et al. ICLR 2015.\n\n[2] Rethinking the inception architecture for computer vision. Szegedy et al. CVPR 2016.\n\n[3] Deep residual learning for image recognition. He et al. CVPR 2016.\n\n[4] Densely connected convolutional networks. Huang et al. CVPR 2017.\n\n[5] Squeeze-and-excitation networks. Hu et al. CVPR 2018.\n\n[6] Wide residual networks. Zagoruyko et al. BMVC 2016. 
\n\n[7] Mobilenetv2: Inverted residuals and linear bottlenecks. Sandler et al. CVPR 2018.\n\n[8] Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. Croce et al. ICML 2022.", " \n**Response to Reviewer kZtY**\n\nWe sincerely thank you for taking the time to review our paper carefully and for your constructive comments and positive feedback about our work. Please find our responses below.\n\n**Response to [Weakness]**\n\n> W1: \"I suggest the authors can show at least one visual example/noise generated by their method.\"\n\n**R1:** \nThanks for your valuable suggestion. We have added a figure to the revision, where adversarial examples and perturbations are generated by different methods, containing deep*-PGD attack using training images, deep-PGD attack using test images, and lightweight black-box attack.\n\n> W2: \"I suggest the authors add more explanations to their equations to make them easier to follow. \"\n\n**R2:** \nFollowing your suggestion, we have provided more details to explain and describe all equations in the revision. \n\n\n**Response to [Questions]**\n\n> Q1: \"According to eq. 4, is the Error Transformer simply learning one more transformation matrix $A$? If it is, then is \n$\\Delta_s=Ax$? Or does it shows that the feature space perturbation at the first layer is equivalent to data space perturbation (when using $Ax$ as noise)?\"\n\n**A1:** \nYour understanding is correct. The implicitly learned matrix $A$ connects the feature space perturbation and weight space perturbation, so that we can transform the approximation error ($wA$) in the weight space to the feature (or input) space, i.e., $\\Delta_s=Ax$. Thus, the data space perturbation is equivalent to perturbing the first layer parameters, i.e., correcting the first layer parameters by adding perturbations in the data space.\n\n> Q2: \"In Section 4.3, line 185, the author said, \"Therefore, connecting the parameter space ...\" is there a more intuitive explanation for it? It is clear that eq. 4 shows it, but I don't think it is intuitive with the description in lines 183-184.\"\n\n**A2:** \nThanks for your insightful question! The question makes us aware that exploring which kind of feature perturbations are more preferred is exciting and interesting, which can benefit the attack success rate of lightweight black-box attacks. Therefore, we have added the following description to our revision.\n\nTo alleviate the approximation error of shallow models, we propose transforming the parameter space's approximation error as the feature space's perturbation. The inspiration is borrowed from the feature space attack. Specifically, we have little knowledge to determine which perturbations can point (from the surrogate model) to the target model, making it challenging to alleviate the approximation error in the weight space. In contrast, we have the prior that samples with different labels should have distinguishable representations/features. Thus, we can leverage the prior knowledge to select preferred perturbations in the feature space, i.e., we prefer perturbations that can make representations/features of samples with different labels indistinguishable. Therefore, we design a min-max optimization to identify the \"worst'' model, and then make different image features obtained by the worst model indistinguishable. 
Consequently, we select a guide image for each source image and generate adversarial examples by perturbing the source image to make the guide and source images have the same/similar representation/features. \n\nInspired by the question, we are aware that how selecting a guide image is an exciting direction to improve the performance of lightweight black-box attacks further.", " \n> Q3: \"the author assumes there exists a transformation matrix. How strong is the assumption here? What if the target and surrogate models have different dimensions at the first layer?\"\n\n**A3**:\nYour valuable questions make us aware that the assumption here lacks the necessary explanation, so we have added the following into the revision. \n\nTaking the first layer as an example, let $w^1_t$ and $w^1$ stand for the parameters of the target and surrogate models, respectively. In many practical scenarios, $w^1_t$ and $w^1$ usually have different dimensions, leading to intractable parameters’ discrepancy alleviation. Fortunately, we can find an appropriate low-rank approximation for parameters of deep neural networks [1,2,3]. Specifically, we can approximate either $w^1_t$ or $w^1$ to make these two matrices have the same dimensions, so we can consider that the dimensions of the two models are the same. Consequently, we can find a transformation matrix $A$ such that the approximation error is minimized, i.e.,\n$A = \\arg\\min\\_{\\tilde{A}}|w^1\\_t -w^1-w^1\\tilde{A}|\\_{F}$ , where $|\\cdot|_{F}$ is the Frobenius norm. In this paper, we assume the approximation error is infinitesimal, i.e., $|w^1\\_t - w^1 - w^1A|\\_{F} = 0$. Then, we leverage $w^1$ and $A$ to represent the target model, i.e., $w^1\\_t = w^1 + w^1 A$.\n\n> Q4: \"In the eq. 6 and 9, what does $\\Delta_s$ means? I am unsure if there is a difference between $\\Delta_s$ and $\\Delta_t$.\"\n\n**A4**: \nSorry for the typo. We have fixed it in the revision. $\\Delta_s$ and $\\Delta_g$ stand for the data space perturbation applied to the source and guide images, respectively. These perturbations are designed to mitigate the approximation error in parameter space. \n\n> Q5: \"Where is the $x'$ defined in the paper? It looks like the perturbed version of $x$ in eq. 3, and it looks like source $x$ in eq. 6.\"\n\n**A5**: \nThank you for pointing out the confusing problem. We have fixed it in the revision. $x'$ denotes the perturbed version of source image $x$ in Eq. (3) and Eq. (6). In Eq. (6), $x' + \\Delta_s$ stands for the perturbing operation applied to the perturbed input $x'$, where $x'$ is obtained by perturbing $x$ for optimizing the adversarial loss while $x'$ is perturbed by $\\Delta_s$ to mitigate the approximation error. \n\n**Reference**\n\n[1] Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation. NeurIPS 2014.\n\n[2] Accelerating Very Deep Convolutional Networks for Classification and Detection. TPAMI 2015.\n\n[3] Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications. ICLR 2016.", " The paper targets improved black-box attacks on neural networks, which could be used to attack deployed models in safety-critical settings. As the technical review pointed out, the main issue here is that the paper only addresses the social impact of this work briefly in the appendix. The concern could be addressed by adding the noted considerations to the main paper, and expanding to briefly discuss how more advanced black-box attacks could be misused. 
", " Largely misuse concerns that are addressed in Appendix C Appendix C is not referenced in the main paper body and took some digging to find. The social impact section in Appendix C needs to be referenced in some form in the main paper body, or there should be an explicit section of the paper that addresses misuse. Ideally the social impact section can be fleshed out more and encourage defense method development.", " The paper looks at the interesting task of no-box attacks. This is a setting that is even harder than black-box attacks, where typically the attacker has access to either the training data and can construct a surrogate model or the model outputs to estimate gradients. In the no-box setting considered in this paper, the attacker only has access to a small number of samples (1k on ImageNet) that are correctly classified by the target model. The attack is based on feature space perturbations, as only the shallow layers of DNNs can be approximated with this little data and the authors propose the Error TransFormer to alleviate issues caused by approximations errors. \n\nThey first train a Shallow DNN on the limited data pool. As the shallow layers do not yield a label output, instead of the standard adversarial objective used for FGSM/PGD attacks, their attack objective transforms the feature representation of an image into that of a given image that comes from a different class. To mitigate the difference between the target model and the surrogate model, they employ a min-max strategy that combines feature and parameter space. \nThey evaluate their approach on various ImageNet source models. \n\n Strengths:\n\n- The paper is well written and easy to understand. \n\n- The setting is very interesting. Assumptions from the standard black-box setting are often not realistic and the no-box setting is more applicable in real world scenarios. \n\n- Their method is simple and easy to implement.\n\n- The evaluation is clear. Deep models can not be trained in the no-box setting, shallow models work much better but can be improved by their ETF approach.\n\nWeakenesses:\n\n- The novelty is somewhat limited. A similar min-max objective similar to ETF in weight space has for example been explored already in \"Adversarial Weight Perturbation Helps Robust Generalization\" by Wu et al. Most other ideas like surrogate training of a shallow model also appeared in previous papers. \n\n- The evaluation focuses on Imagenet only. While this is certainly an interesting setting, it might be worth to also add some experiments on other datasets. For example, while CIFAR10 is a much easier dataset, there also exist much more robust models and it might be interesting to evaluate the performance of the method to the most robust models for example from RobustBench.\n\n\n- In Table 1-3, their should be another row for the ground truth robust accuracy in a white-box setting. While this is not computable exactly, AutoAttack should yield a fairly accurate estimate. \n\n- Table 1 is somewhat hard to read. I understand that boldfacing the single best result per column is not optimal as it does not make sense to compare different attacks, but it would be easier to understand the general message of the table, which clearly is that ETF is the best method in the no-box setting, by some form of highlighting. - The paper only considers L-inf perturbations. While they are the most common, does the method work for L0, L1 and L2 too? 
\n\n- In 4.1 you say that the model has access to label information on the limited data pool. From the insufficient supervision information ablation, I also take that this is used during training. However 4.1 also motivates that for shallow attacks we only need to train shallow nets with an unlabeled contrastive loss. Which loss (Eq (1) vs (2)) is used for the actual training of the surrogate model? \n\n- You use heavy data augmentation, what is the impact of that and did you experiment with different ones? Obviously, attack methods can in general be used in a malicious way, especially if we go to more realistic scenarios. I think adding a sentence about that in C would make sense. ", " Recently, there has been a lot of focus on techniques for the black-box generation of adversarial attacks. However, in practice, an adversary might not have unrestricted access to the predictive interface of the target model or unlimited data to generate adversarial examples. In this paper, the authors propose Error TransFormer (ETF) for generating adversarial examples, which works even when the adversary doesn't have access to training data and model outputs. ETF works by constructing a surrogate model with the limited available data and then perturbing the features obtained from its shallow layers. Strengths:\n- There are relatively few techniques that can work under the threat model considered in the paper.\n- Even though the technique appears simple, it performs pretty well in practice.\n\n\nWeaknesses:\n- The paper is a bit hard to follow, and the writing needs improvement.\n- I found the experiments a bit lacking, and please refer to the next section for a list of questions I would like to be answered.\n\nOne vital information authors leverage is that we can generate powerful adversarial examples by attacking shallow layers. I would encourage the authors to add appropriate references. \nA vital piece of the technique is missing, like how is the optimization problem in Eq. 6 solved. I would encourage the authors to include such details in the main paper. - How imperceptible are the examples generated by the proposed technique? For instance, even though most black-box adversarial techniques have higher success rates, they generally forgo imperceptibility. \n- How good are examples generated by the proposed technique in evading the recent class of adversarial example detection methods?\n- Does surrogate model architecture impact the success rate of the proposed technique?\n- Does the technique also work in other domains like NLP?\n- The surrogate model is trained in a contrastive manner. Can other self-supervision tasks like rotation be used to train it? (ref. Unsupervised Representation Learning by Predicting Image Rotations)\n- For table 1, were the surrogate model trained using labels or in a contrastive manner?\n- How useful are the examples generated by ETF in improving the robustness of the models?\n- In addition, as the performance of the surrogate is tied to the samples used for training it, I would encourage the authors to run the experiments on different sample sets and report the variance. I would encourage the authors to list the limitations of the proposed approach.", " The authors propose a \"lightweight\" black-box attack method that uses limited data called Error TransFormer (ETF). Experimental results show that the proposed method achieves decent performance compared to other black-box attacks with few available examples. Strengths:\n+ The method is novel to me, and technically sound. 
\n+ The performance is good. It is interesting that only 2 training images can still drop large accuracy.\n\nWeakness:\n- The writing should be carefully checked. I notice there are many errors that may lead the reader to significant confusion, for instance, the distance should be *maximize* instead of *minimize* for generating adversarial examples in line 173, the similarity should be *decrease* in line 234, the first row in Table 1 has some error, and it should be $\\epsilon=0.1$ in the caption of Table 1. Hope the authors check the paper carefully and fix these errors.\n- The authors only test under $\\epsilon = 0.1$, it is more convincing that perform some experiments under smaller $\\epsilon$, such as $\\epsilon=0.05$.\n- It is more convincing that perform experiments on more examples since the size of only 1,000 examples is too small.\n- It is better bold the best results in *all tables*.\n- The experiment setup should be more detailed. I have no questions for now. I didn't find any potential negative societal impact.", " This paper studies lightweight black-box attacks with limited knowledge of the target model and its output.\nThe author proposed an Error Transformer(ETF) to alleviate the approximation error between the surrogate model and the target model.\nThe experiment results show that using ETF + limited samples is only 3% attack success rate lower than the black-box attack with full access of the training data. Strengths:\n\n 1. The experiment results are compelling; this paper shows some interesting results that a black-box attack can be performed without using the training set.\n 2. To my best understanding, using the shallow layer & ETF for approximation of the target model is novel to the ML security field.\n 3. The motivation of the proposed method and the lightweight black-box attack are well illustrated.\n 4. The author conducted a thorough analysis and ablation study in section 5.\n\nWeaknesses:\n 1. I suggest the authors can show at least one visual example/noise generated by their method.\n 2. Please refer to the questions.\n \nI suggest the authors add more explanations to their equations to make them easier to follow. Overall, I think the experiment results are good and the motivation is clear. Therefore, I tend to vote for acceptance of this paper and am willing to change my score after the author clarifies my questions. 1. According to eq. 4, is the Error Transformer simply learning one more transformation matrix $A$? If it is, then is $\\Delta_s = Ax$? Or does it shows that the feature space perturbation at the first layer is equivalent to data space perturbation(when using $Ax$ as noise)?\n2. In Section 4.3, line 185, the author said, \"Therefore, connecting the parameter space ...\" is there a more intuitive explanation for it? It is clear that eq. 4 shows it, but I don't think it is intuitive with the description in lines 183~184.\n3. In section 4.3, lines 199~200, the author assumes there exists a transformation matrix $A$, $s.t.$ $w_t^1 = w^1 + w^1A$. How strong is the assumption here? What if the target and surrogate models have different dimensions at the first layer?\n4. In the eq. 6 and 9, what does $\\Delta_s$ means? I am unsure if there is a difference between $\\Delta_t$ and $\\Delta_s$.\n5. Where is the $x^{\\prime}$ defined in the paper? It looks like the perturbed version of $x$ in eq. 3, and it looks like source $x$ in eq. 6. The author addressed the social impact in the appendix." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "Hdlih3dePeW", "LWjDxYtXngZ", "nips_2022_Gpqqm4p91Ez", "tp0TnbL1DHj", "LWjDxYtXngZ", "HgYG224Hn9J", "kN7AQTRhHS0", "CCkcYNQjDA8", "Hdlih3dePeW", "Hdlih3dePeW", "Hdlih3dePeW", "Hdlih3dePeW", "-Jjj-KeaDj8", "-Jjj-KeaDj8", "-Jjj-KeaDj8", "-Jjj-KeaDj8", "tp0TnbL1DHj", "tp0TnbL1DHj", "kALk0MdYBA", "kALk0MdYBA", "nips_2022_Gpqqm4p91Ez", "nips_2022_Gpqqm4p91Ez", "nips_2022_Gpqqm4p91Ez", "nips_2022_Gpqqm4p91Ez", "nips_2022_Gpqqm4p91Ez", "nips_2022_Gpqqm4p91Ez" ]
nips_2022_BZ92dxDS3tO
OnePose++: Keypoint-Free One-Shot Object Pose Estimation without CAD Models
We propose a new method for object pose estimation without CAD models. The previous feature-matching-based method OnePose has shown promising results under a one-shot setting which eliminates the need for CAD models or object-specific training. However, OnePose relies on detecting repeatable image keypoints and is thus prone to failure on low-textured objects. We propose a keypoint-free pose estimation pipeline to remove the need for repeatable keypoint detection. Built upon the detector-free feature matching method LoFTR, we devise a new keypoint-free SfM method to reconstruct a semi-dense point-cloud model for the object. Given a query image for object pose estimation, a 2D-3D matching network directly establishes 2D-3D correspondences between the query image and the reconstructed point-cloud model without first detecting keypoints in the image. Experiments show that the proposed pipeline outperforms existing one-shot CAD-model-free methods by a large margin and is comparable to CAD-model-based methods on LINEMOD even for low-textured objects. We also collect a new dataset composed of 80 sequences of 40 low-textured objects to facilitate future research on one-shot object pose estimation. The supplementary material, code and dataset are available on the project page: https://zju3dv.github.io/onepose_plus_plus.
Accept
This paper originally received slightly positive reviews overall, except for one review, which contained many requests for specific clarifications and comments. The main issues concerned the need to clarify some parts of the method and to better position it with respect to the state of the art and prior evaluations. Unclear novelty was another raised problem, as well as the need for an off-the-shelf 2D object detector and real images for training, which might affect the general applicability of the method while weakening the "one-shot" claim of the work. Several concerns were also raised about missing baselines and prior work discussion. The authors provided detailed answers to the comments and engaged in long discussions, especially with the most critical reviewer. In the end, the most positive reviewers seemed satisfied with the answers to their comments, maintaining their original positive ratings, and the critical reviewer was also convinced by the discussion with the authors, raising his/her score to weak accept. Overall, assuming that the comments and discussions are included in the final version, this paper can be considered acceptable for NeurIPS 2022 publication.
val
[ "tsi6AjpEVys", "iYiTEpz2Bde", "mIZZrjxryt", "U3mMiBBhh3D", "KHKY75jO7vu", "k6hTyESR8f4", "gvsHS3HJQdW", "x4p0nN_K--z", "2EYVgYc0E64", "t9b298mKdGs", "0UrqmCnxjU", "WQc0ZgE92MW", "kQVgUrUfsuR", "ZphKQxqijC", "-HwdPtmJMvw", "8HujlfSeJqc", "_tyeP4Sljo1y", "i1x3fLn9X2O", "L-BU2a6v9k9", "Mt2SG4bBbry", "QEJXgCgfevt", "q-Xb_zHOvBJ", "w-rI0KbKqIW", "dmZVK7RrOZ", "OSrNdlttdmZ", "XUE5-0wKa9D" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the detailed description of the relation to prior work.\n\n> Our idea is to directly disambiguate and augment features by encoding their spatial information and relations with others into features with the help of the attention module.\n\nThere is actually prior work on using 3D point relations (in terms of which points are covisible with each other) to disambiguate matches, e.g., see Sec. 4 in [11]. In essence, image retrieval / determining which matching points can be seen together encodes relations between 3D points for disambiguation.\n\n> Moreover, we eliminate the keypoint detection on the query image, which helps pose estimation of low-textured objects. \n\nThere is prior work on eliminating keypoint detection in the query image for the purpose of better handling challenging conditions where features cannot be reliably re-detected. For example, [Germain et al., Sparse-to-Dense Hypercolumn Matching for Long-Term Visual Localization, 3DV 2019] and [Germain et al., S2DNet : Learning Image Features for Accurate Sparse-to-Dense Matching, ECCV 2020] match sparse features extracted from database images against dense features extracted from a query image.", " > Thank you very much for your comments. We clarify that our comment \"renovating the pipeline with a learning-based approach\" refers to the comparison with the previous SfM-based methods[1,2,3,4,5] in the area of 6D object pose estimation. \n\nThanks for the clarification. I misunderstand the previous statement as something more general pertaining to SfM-based pose estimation in general, which would also include the visual localization literature (where learning by now is a central part of the pipeline).\n\nStill, it might be better to not make this statement, as the approaches from the localization literature are also applicable to the object pose estimation case (see your experiments shown above). As such, claiming to renovate the pipeline with learning-based approaches seems still to strong to me.", " > The keypoint relocalization phase in Widya et al. doesn't leverage the two-view constraint.\n\nYou are right. I seem to have confused things with the two-view-based refinement from the InLoc paper [Taira et al., CVPR 2018], which implements the refinement strategy based on local matching discussed in the beginning of Sec. 3.3 of Widya et al.\n\n> SuperGlue+Patch2Pix is not a keypoint-free matcher\n\nThank you very much for the additional results. I think this rounds out the experiment.\n\nI would however argue that SuperGlue+Patch2Pix is a keypoint-free matcher as the Patch2Pix stage refines the SuperGlue matches in terms of their spatial positions. As a result, the matching pixel positions can differ from the original feature detections. ", " Thank you very much for the detailed answers. Please find my comments below.\n\n> Thank you very much for your comments. We will change 'generalizability' to 'no object-specific network training' in the revised version to avoid misleading. \n\nThis is indeed a better description of OnePose, OnePose++, hloc, etc. \n\nNote that not requiring \"object-specific network training\" is not a virtue in itself. For example, the baseline using NeuS in the discussion above does not require object-specific network training. But given the long reconstruction times, it would certainly be feasible to do object-specific network training in the same time. I don't think it matters whether an instance-level method requires network training or not. 
The important question to me seems to be the trade-off between pose accuracy and the time required to adapt an approach to a given object instance (e.g., training network parameters or building a 3D model).\n\n> The differences between PatchFlow are as follows.\n\nBased on the description, I would still argue that the proposed approach is a special case of PatchFlow:\n\n1. Refining matches between pairs of matches is a special case of computing a flow field (as only the flow for a single pixel in the patch around one match is computed) and is conceptually identical to the two-view case of PatchFlow (with the main technical difference that a transformer is used, but I would not consider this too novel).\n2. \"We keep the selected reference node fixed and search around each query node for the fine-level match.\": In essence, this corresponds to the chaining of refined matches discussed in the PatchFlow paper, with the special structure that the resulting graph has a star-like structure. Since potential constraints between query nodes are not taken into account, the resulting graph is a sub-graph of the one used by PatchFlow. Hence, the proposed approach is a special case of the more general PatchFlow framework.\n\nThus, I see limited novelty in this part of the proposed approach.\n\n> It can be seen that the feature distance maps of PixSfM contain large or multiple minimal regions(blue region) incurred by the ambiguous CNN features in low-textured regions, which are not discriminative enough to find real optimal locations in feature-metric optimization. We attribute the accuracy improvement of our method to the discriminative features in fine-level matching. \n\nMy point was that based on the numbers provided above (point cloud and object pose accuracy), the difference between the proposed approach and PixSfM seems to be very small. The new visualizations indeed show a difference, but these do not seem to influence the quantitative results too much. Hence, the statement that PixSfM struggles with low-texture regions seems to be too strong given the similar numbers.\n\n**Q4**: Thank you very much, this answers my question.", " Thank you very much. That answers my question.", " Thank you very much for the answer and the additional experiment.\n\nI would not count this baseline as concurrent work. The idea of getting 3D point coordinates from a dense model instead of a sparse SfM point cloud predates the MeshLoc paper (I think the MeshLoc paper provides multiple references to such prior work). One example are methods evaluated on the InLoc dataset, where database images are very sparse and building a SfM model is thus hard. Since the dataset provides a depth map per database image, these depth maps are used to obtain the 3D points corresponding to 2D positions in the database images. The methods that I am aware of that evaluate on InLoc rely on image retrieval and run pose estimation separately for the 2D-3D matches obtained from a retrieved database image. Running a single pose estimation step over all 2D-3D matches seems like a minor modification to me.\n\nI am not convinced that the results show that the baseline is not feasible in practice. As mentioned in my previous comment, there are multi-view stereo approaches that are very efficient, e.g., Capturing Reality. NeuS does not seem to fall into this category.\n", " > **Q1:** I have a follow-up question on the OnePose++ dataset: How many training and testing images are there? I can't seem to find this information in the supp. 
material.\n\nThank you very much for your comments. As described in Line 247-249, the OnePose-HARD is an evaluation set to supplement the original OnePose dataset. We use all of the objects in the OnePose-HARD dataset for testing. The total number of images in the reference sequences is 35521, and the total number of images in the query sequences is 32477. We will add the missing information in the final version.", " > **Q7:** This is rather vague to me. What does \"renovating the pipeline with a learning-based approach\" mean? Prior work, e.g., SuperGlue, LoFTR, Patch2Pix, NC-Net, Sparse NC-Net, Dual RC-Net, DSAC (++, *), InLoc, D2-Net, has already integrated learning into SfM-based pose estimation. Some of these approaches also claim to better handle weakly textured regions (e.g., InLoc motivates dense matching to better handle such regions).\n\nThank you very much for your comments. We clarify that our comment \"renovating the pipeline with a learning-based approach\" refers to the comparison with the previous SfM-based methods[1,2,3,4,5] in the area of 6D object pose estimation. We further discuss the differences with these methods as follows.\n\nSome previous methods[2,3,5] extract keypoints on the query image firstly and perform matching with reference images or SfM model to obtain 2D-3D matches for pose estimation. Unlike them, which reject ambiguous matches by ratio test in matching, [4] proposes preserving ambiguous matches at the matching stage by vector quantizing and solving ambiguation by hypothesis testing at the outlier filter stage. [1] proposes the spatial feature clustering and multi-prioritized RANSAC to cope with repeated patterns for multiple instances detection.\n\nDifferent from these previous methods, our framework eliminates the keypoint detection for the query image by directly performing matching between the 2D feature map and the 3D model, which benefits pose estimation for low-textured objects. Moreover, we leverage the attention mechanism to disambigute 2D and 3D features for matching, while the direct feature disambiguation is not explored by these methods. The keypoint-free design and the attention mechansim in our 2D-3D matching network bring improvement on low-textured objects, which are challenging for these previous methods.\n\n**References:**\n\n[1] Fenzi, Michele, Ralf Dragon, Laura Leal-Taixé, Bodo Rosenhahn and Jörn Ostermann. “3D Object Recognition and Pose Estimation for Multiple Objects Using Multi-Prioritized RANSAC and Model Updating.” DAGM/OAGM Symposium (2012).\n\n[2] Martinez, Manuel, Alvaro Collet and Siddhartha S. Srinivasa. “MOPED: A scalable and low latency object recognition and pose estimation system.” 2010 IEEE International Conference on Robotics and Automation (2010): 2043-2049.\n\n[3] Gordon, Iryna and David G. Lowe. “What and Where: 3D Object Recognition with Accurate Pose.” Toward Category-Level Object Recognition (2006).\n\n[4] Hsiao, Edward, Alvaro Collet and Martial Hebert. “Making specific features less discriminative to improve point-based 3D object recognition.” 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2010): 2653-2660.\n\n[5] Skrypnyk, Iryna and David G. Lowe. 
“Scene modelling, recognition and tracking with invariant image features.” Third IEEE and ACM International Symposium on Mixed and Augmented Reality (2004): 110-119.", " > **Q8:** The challenge at large scale is that feature descriptors become ambiguous as more and more locally similar structures need to be considered (see the work by Li et al., Svarm et al., and Zeisl et al.). The result is that some form of disambiguation is needed, as is the case for weakly texture regions, which also produce ambiguous matches. These works thus need to deal with a very similar problem. In my opinion, the differences need to be discussed in more detail.\n\nThank you very much for your comments. Previous visual localization methods based on direct 2D-3D matching improve efficiency, accuracy and cope with ambiguous matches in the 2D-3D matching and outlier filtering.\n\nMany previous methods [1,2,8] leverage priors for 2D-3D matching. They define the prioritization criteria, such as co-visibility for the 3D points, and matching is performed by order of descending priorities. This strategy improves efficiency but helps little in disambiguation. Some methods [1,4,11] compress the 3D model by quantizing features to improve matching efficiency. However, the quantization can further lead to ambiguous matches and they rely on outlier filtering for disambiguation. [9] regards 2D-3D matching as a classification problem, but it assumes the known pose prior. Our method also works on the 2D-3D matching phase but focuses on disambiguating features. Our idea is to directly disambiguate and augment features by encoding their spatial information and relations with others into features with the help of the attention module. In this way, both 2D and 3D features are provided the global receptive field and become discriminative for 2D-3D matching. Moreover, we eliminate the keypoint detection on the query image, which helps pose estimation of low-textured objects. Since our module operates on the features, we believe it can be combined with the previous prior-based methods to perform 2D-3D matching.\n\nA number of solutions [1,3,4,5,6,7,10,11] work on the outlier filter stage. Since the ambiguous matches may contain correct matches, some methods relax the matching threshold [3,5,6,7] or quantize features [1,4,11] to preserve ambiguous matches and reject wrong matches at the outlier filter stage. Many approaches[1,4,7,11] use co-visibility priors to filter outliers. The co-visibility encoded in the 3D model is used to select a subset of matches that is more likely to be correct from all putative matches. Some other methods[3,5,6,10] propose efficient geometric verification to filter large amounts of outliers. Since our work focus on the 2D-3D matching phase, these outlier filtering methods are orthogonal to our method, which can be further explored to integrate into our pipeline.\n\n**References**\n\n[1] Sattler, Torsten, B. Leibe and Leif P. Kobbelt. “Efficient & Effective Prioritized Matching for Large-Scale Image-Based Localization.” IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (2017): 1744-1756.\n\n[2] Li, Yunpeng, Noah Snavely and Daniel P. Huttenlocher. “Location Recognition Using Prioritized Feature Matching.” ECCV (2010).\n\n[3] Svärm, Linus, Olof Enqvist, Fredrik Kahl and Magnus Oskarsson. “City-Scale Localization for Cameras with Known Vertical Direction.” IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (2017): 1455-1461.\n\n[4] Liu, Liu, Hongdong Li and Yuchao Dai. 
“Efficient Global 2D-3D Matching for Camera Localization in a Large-Scale 3D Map.” 2017 IEEE International Conference on Computer Vision (ICCV) (2017): 2391-2400.\n\n[5] Camposeco, Federico, Torsten Sattler, Andrea Cohen, Andreas Geiger and Marc Pollefeys. “Toroidal Constraints for Two-Point Localization Under High Outlier Ratios.” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017): 6700-6708.\n\n[6] Zeisl, Bernhard, Torsten Sattler and Marc Pollefeys. “Camera Pose Voting for Large-Scale Image-Based Localization.” 2015 IEEE International Conference on Computer Vision (ICCV) (2015): 2704-2712.\n\n[7] Li, Yunpeng, Noah Snavely, Daniel P. Huttenlocher and Pascal V. Fua. “Worldwide Pose Estimation Using 3D Point Clouds.” ECCV (2012).\n\n[8] Choudhary, Siddharth and P. J. Narayanan. “Visibility Probability Structure from SfM Datasets and Applications.” ECCV (2012).\n\n[9] Donoser, Michael and Dieter Schmalstieg. “Discriminative Feature-to-Point Matching in Image-Based Localization.” 2014 IEEE Conference on Computer Vision and Pattern Recognition (2014): 516-523.\n\n[10] Svärm, Linus, Olof Enqvist, Magnus Oskarsson and Fredrik Kahl. “Accurate Localization and Pose Estimation for Large 3D Models.” 2014 IEEE Conference on Computer Vision and Pattern Recognition (2014): 532-539.\n\n[11] Sattler, Torsten, Michal Havlena, Filip Radenović, Konrad Schindler and Marc Pollefeys. “Hyperpoints and Fine Vocabularies for Large-Scale Location Recognition.” 2015 IEEE International Conference on Computer Vision (ICCV) (2015): 2102-2110.", " > **Q5:** Widya et al. start with coarse matches that are then refined locally: given a match established using features extracted at one layer in the network, the refinement aims at finding more accurate coordinates locally in regions around the initial match (where the region size depends on the receptive field of the features). Isn't this a similar two-view constraint used by the proposed approach?\n\nThank you very much for your comments. The keypoint relocalization phase in Widya et al. doesn't leverage the two-view constraint. In its refinement phase, the keypoint relocalization still operates on a single view by the local feature patch instead of considering relations with other views' patches. It can be regarded as performing keypoint detection within the coarse match local region. In contrast, our refinement leverage two-view patches and transformer to find more accurate matches in the query view relative to the reference view.\n\n> **Q6:** Using Patch2Pix to refine matches found by SuperGlue (denoted as SuperGlue + Patch2Pix in the Patch2Pix paper) leads to state of the art results for the visual localization task. Unfortunately, this stronger baseline is missing.\n\nThank you very much for your comments. Since the SuperGlue+Patch2Pix is not a keypoint-free matcher, we did not include this baseline in the answer of the original Q3. We apologize for misunderstanding the question, and we add the evaluation of SuperGlue+Patch2Pix on the OnePose-HARD dataset as follows.\n\n||1cm1deg|3cm3deg|5cm5deg|\n|:-|:-|:-|:-|\n|Ours|**16.3**|**55.4**|**70.3**|\n|LoFTR(round)|15.4|43.7|53.4|\n|SPP+SPG+Patch2Pix|10.1|37.2|47.6|\n|SPP+SPG|13.8|36.1|42.2|\n|DRC-Net|11.3|37.0|47.8|\n|Patch2Pix|2.42|19.0|30.4|", " > **Q1:** If generalizability is defined as \"the property of eliminating object/category-specific training\", then I don't think that OnePose, HLoc, and OnePose++ are qualify as generalizable. 
They all need to build an object/category-specific scene representation, in the form of a 3D model, from the input images and their known poses. I don't why building these 3D models would not qualify as object/category-specific training as it involves optimizing an objective function and since these 3D models are fundamental parts of the object pose estimation stage.\n\nThank you very much for your comments. We will change 'generalizability' to 'no object-specific network training' in the revised version to avoid misleading. \n\n> **Q2:** Relation to PatchFlow[Dusmanu et al., Multi-View Optimization of Local Feature Geometry, ECCV 2020]\n\nThank you very much for your comments. The differences between PatchFlow are as follows.\n\n- We leverage the fine-level matching module with the transformer to refine matches instead of estimating the dense flow field between patches like PatchFlow. The advantages are accuracy and storage efficiency. Since the multiview refinement of PatchFlow requires flow field interpolation, the fine matching module is not adaptable for its framework.\n- We thus propose a simple yet effective strategy to achieve consistent matches for later 3D model refinement. We keep the selected reference node fixed and search around each query node for the fine-level match. The advantages are that our graph structure is significantly simpler than PatchFlow, which is efficient for matching. And we do not need to store and interpolate flow fields for optimization.\n\nNotably, our graph structure is not a sub-graph of the coarse feature track since it contains connections which not exist in the coarse feature track(e.g., the reference and the query node may not be directly matched in tentative matches).\n\n> **Q3:** Where can I see that PixSfM struggles in \"low-texture regions\"? The reported performance seems rather very similar to me.\n\nThe feature cost maps of the local patch around the coarse matches are visualized [here](https://sites.google.com/view/oneposeplusplus/%E9%A6%96%E9%A1%B5). The feature cost maps are calculated by the distance between the corresponding reference feature of another view and each element within the local patch, and the values are normalized to 0~1 for visualization.\n\nIt can be seen that the feature distance maps of PixSfM contain large or multiple minimal regions(blue region) incurred by the ambiguous CNN features in low-textured regions, which are not discriminative enough to find real optimal locations in feature-metric optimization. We attribute the accuracy improvement of our method to the discriminative features in fine-level matching.\n\n> **Q4:** Looking at Sec. 4.4 of the PixSfM paper (and its supplementary material), the high memory costs can be avoided by pre-computing and storing cost maps rather than the descriptors. This comes at a small loss in accuracy. In the provided table, are the 7.35GB required for storing descriptors or the cost maps?\n\nThe reported memory requirement is for storing descriptors. We also conduct the experiment for PixSfM with cost maps on the OnePose-HARD dataset. 
The results show that although the feature storage cost decreases significantly, the accuracy decreases accordingly.\n||1mm|3mm|5mm|Feature Storage Cost|\n|:-|:-|:-|:-|:-|\n|LoFTR coarse + Our refinement|**29.5**|**73.6**|**85.8**|-|\n|LoFTR coarse + PixSfM|27.6|71.2|84.4|7.35GB|\n|LoFTR coarse + PixSfM(cost map)|25.8|67.4|80.8|0.17GB|", " > **Q1:** Wouldn't the following be a suitable (and rather simple) baseline?...\n\nThank you very much for your comments. We observe the mentioned pipeline is similar to MeshLoc[1], which is a recent concurrent work. We follow the mentioned pipeline to conduct the evaluation on the OnePose-HARD dataset.\n\nWe use the current state-of-the-art object dense reconstruction method NeuS[2] to reconstruct object mesh for each object in the dataset, which takes ~10 hours per object. Then we render depth maps for reference images and estimate the object pose of the query image following the mentioned pipeline.\nResults are shown as follows.\n||1cm1deg|3cm3deg|5cm5deg| Reconstruction Time (per object) | Pose Estimation Time (per frame)|\n|:-|:-|:-|:-|:-|:-|\n|Ours|**16.3**|**55.4**|**70.3**| **347s**|**87ms**|\n|Neus + LoFTR|15.5|49.9|61.8| ~10 hours| 897ms|\n|Neus + Patch2Pix(SPG)|12.5|43.7|55.0|~10 hours| 936ms|\n\nThe results demonstrate that our method achieves higher accuracy, and both the reconstruction and pose estimation are significantly faster.\n\n**Reference:**\n\n[1] Pánek, Vojtěch, Zuzana Kukelova and Torsten Sattler. “MeshLoc: Mesh-Based Visual Localization.” ArXiv abs/2207.10762 (2022): n. pag.\n\n[2] Wang, Peng, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura and Wenping Wang. “NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction.” NeurIPS (2021).", " Thank you very much for the detailed answer. I have some follow-up questions and comments regarding **R2**:\n\n> The main reason is that there is no existing baseline that performs dense reconstruction on the given video and estimates object poses without object-specific training, i.e., identical to our setting.\n\nWouldn't the following be a suitable (and rather simple) baseline?\n* At training time, create a dense 3D model of the object, e.g., using MVS.\n* At test time:\n * Match features between the query image and the training images (as is done, e.g., by hloc) to obtain 2D-2D matches.\n * Rather than obtaining 2D-3D matches using 3D points (from SfM) associated with the features in the training images, corresponding 3D points can be obtained by rendering depth maps of the dense model (the same is done by localization methods evaluating on the InLoc dataset).\n * Do pose estimation with all 2D-3D matches.\n\n> Since our setting aims for efficient pose estimation with the given video, we believe the SfM-based sparse reconstruction is more suitable for the setting because it is more computationally efficient than dense reconstruction.\n\nI am not sure I understand why this argument holds. After all, the proposed approach is based on dense matching between images, as is the case for dense MVS. I don't see why dense MVS would thus be necessarily faster. E.g., according to the supp. mat., the proposed SfM method takes 347 seconds for 193 images at a resolution of 512x512 pixels. 
For comparison, starting with known extrinsics and intrinsics, Reality Capture, a state-of-the-art commercial 3D reconstruction system, takes 30 seconds to build a sparse point cloud from 392 images at size 800x600 for scan 65 of the DTU dataset (including feature extraction and matching). Dense reconstruction, including generating a mesh, then takes 394 seconds and computing per-vertex colors for the mesh takes about another 30 seconds. This shows that dense reconstruction is feasible in a comparable time. ", " I have a follow-up question on the OnePose++ dataset: How many training and testing images are there? I can't seem to find this information in the supp. material.", " Thank you very much for the answers. Please find my comments and concerns below.\n \n> Compared with previous works on SfM-based pose estimation, our pipeline can be regarded as \"renovating the pipeline with a learning-based approach\"[30]. The main contribution of our method is the keypoint-free framework to eliminate the pipeline's reliance on detected keypoints. Thus our method achieves improvements on low-textured scenarios, which are challenging for previous methods.\n\nThis is rather vague to me. What does \"renovating the pipeline with a learning-based approach\" mean? Prior work, e.g., SuperGlue, LoFTR, Patch2Pix, NC-Net, Sparse NC-Net, Dual RC-Net, DSAC (++, *), InLoc, D2-Net, has already integrated learning into SfM-based pose estimation. Some of these approaches also claim to better handle weakly textured regions (e.g., InLoc motivates dense matching to better handle such regions).\n\n> The previous visual localization methods based on the direct 2D-3D matching focus on handling the large-scale problem, while the main challenge in our task is how to match the query image with the low-textured 3D model for object pose estimation. In our 2D-3D matching network, we eliminate the keypoint detection on the query image and leverage the attention module to provide the global receptive field and yield the discriminative features for 2D-3D matching.\n\nThe challenge at large scale is that feature descriptors become ambiguous as more and more locally similar structures need to be considered (see the work by Li et al., Svarm et al., and Zeisl et al.). The result is that some form of disambiguation is needed, as is the case for weakly texture regions, which also produce ambiguous matches. These works thus need to deal with a very similar problem. In my opinion, the differences need to be discussed in more detail.\n", " > The main difference in refinement is that [Widya et al.] only leverages the local information from each matched point to relocalize points. Since the lack of two-view or multiview constraints, the keypoint detection noise exists in its relocalization phase.\n\nWidya et al. start with coarse matches that are then refined locally: given a match established using features extracted at one layer in the network, the refinement aims at finding more accurate coordinates locally in regions around the initial match (where the region size depends on the receptive field of the features). Isn't this a similar two-view constraint used by the proposed approach?\n\n> **R3**: The main reason is that these methods are not state-of-the-art regarding their performance on both two-view matching and visual localization. 
\n\nUsing Patch2Pix to refine matches found by SuperGlue (denoted as SuperGlue + Patch2Pix in the Patch2Pix paper) leads to state of the art results for the visual localization task (results from visuallocalization.net) (higher is better):\n\n| Method | Aachen Day-Night v1.1 | InLoc |\n|----------|----------------------------|--------|\n| LoFTR | day: 88.7 / 95.6 / 99.0, night: 78.5 / 90.6 / 99.0 | duc1: 47.5 / 72.2 / 84.8, duc2: 54.2 / 74.8 / 85.5 |\n| SuperGlue + Patch2Pix | day: 89.3 / 95.8 / 99.2, night: 78.0 / 90.6 / 99.0 | duc1: 50.0 / 68.2 / 81.8, duc2: 57.3 / 77.9 / 80.2 |\n| Patch2Pix | day: 86.4 / 93.0 / 97.5, night: 72.3 / 88.5 / 97.9 | duc1: 44.4 / 66.7 / 78.3, duc2: 49.6 / 64.9 / 72.5 |\n\nUnfortunately, this stronger baseline is missing.", " Thank you very much for the detailed feedback. Please find my comments and follow-up questions below.\n\n> **R1**: Thank you very much for your comments. We clarify that the 'generalizable' baselines in our paper include OnePose, HLoc, Gen6D, which are given the same input and share exactly the same setting as ours. Therefore, the comparison in our experiments is fair and substantial. We follow the naming of previous methods OnePose[30] and Gen6D[19], which denote the property of eliminating object/category-specific training as ‘generalizability’.\n\nIf generalizability is defined as \"the property of eliminating object/category-specific training\", then I don't think that OnePose, HLoc, and OnePose++ are qualify as generalizable. They all need to build an object/category-specific scene representation, in the form of a 3D model, from the input images and their known poses. I don't why building these 3D models would not qualify as object/category-specific training as it involves optimizing an objective function and since these 3D models are fundamental parts of the object pose estimation stage. \n\n> **R2**: We clarify the difference with previous works as follows and promise to add the discussion and reference of these previous methods in the final version.\n\n\n**Relation to [Dusmanu et al., Multi-View Optimization of Local Feature Geometry, ECCV 2020]**\n\nThe closer I look at the proposed refinement, the more it seems a special case of the approach of Dusmanu et al. In their work, Dusmanu et al. deal with refining keypoint positions for feature matches. For the two-view case, they estimate the flow from one keypoint to a position in a patch around the other matching keypoint by matching features and regressing the flow. This seems conceptually the same as the fine matching stage (with probably the main difference being that Dusmanu et al. did not use a transformer). For the multi-view case, Dusmanu et al. state that \"Firstly, since corresponding features are generally observed from different viewpoints and looking at non-planar scene structures, the computed displacement vector is only valid for the central pixel and not constant within the patch [...]. Thus, when refining keypoint locations u, v, w, . . . over multiple views, consistent results can only be produced by forming displacement chains (e.g., du→v + d(v+du→v)→w + . . .) without loops. However, such an approach does not consider all possible edges in the graph and quickly accumulate errors along the chain.\" Rather than computing the offset / flow for a single pixel (the original feature position) to a patch, Dusmanu et al. 
thus compute flow fields between patches and use these fields to jointly refine all keypoint positions (after fixing one keypoint position in one of the images), using as many pairwise matches as possible. The proposed approach is a special case in the sense that (1) it uses a sub-graph of the pairwise matching graph that connects the reference node with the other nodes, but does not include any connections between the other nodes, and (2) only computes a single offset per pair and not a full flow field.\n\nUnless I am overlooking something, I believe that the claim that a novel keypoint-less SfM approach is proposed needs to be adjusted. \n\n**Compare with PixSfM[18]:**\n\nThank you very much for the detailed answer. I have two comments / questions:\n\n> Accuracy. The capability of the two-view transformer module in fine-level matching can be leveraged by our refinement to find accurate matches at low-texture regions, where the CNN feature map used by PixSfM struggles.\n\nWhere can I see that PixSfM struggles in \"low-texture regions\"? The reported performance seems rather very similar to me.\n\n> Storage Efficiency. We do not need to extract and store dense local features around each 2D point and keep them in memory to perform feature-metric optimization like PixSfM. Therefore the storage and memory peak during refinement is low.\n\nLooking at Sec. 4.4 of the PixSfM paper (and its supplementary material), the high memory costs can be avoided by pre-computing and storing cost maps rather than the descriptors. This comes at a small loss in accuracy. In the provided table, are the 7.35GB required for storing descriptors or the cost maps?", " We thank the reviewers for the insightful suggestions. We address the major concerns below:\n\n>**Q1:** The description of the proposed OnePose-HARD dataset is quite abstract. Example images of the new dataset are not included in the paper. Please consider including example images of the OnePose-HARD dataset in the paper.\n\n**R1:** Thank you very much for the suggestions. In fact, we describe the details of the OnePose-HARD dataset, and the example images are included in the supplementary L62-78. We will provide more detailed information and example images in the revised paper.\n\n>**Q2:** In the evaluation, it is not always clear where the results of the other methods come from. Please clarify the source of the results of other methods that you compare your method to.\n\n**R2:** We thank the reviewer for pointing out this missing information. We clarify as follows and promise to add to the final paper.\n\nFor the evaluation of the OnePose dataset, because of the limited space for writing, we report the overall metric by average over the whole OnePose evaluation set. The overall metric results come from OnePose's supplementary material(https://zju3dv.github.io/onepose/files/onepose_supp.pdf, the first row of results in Tab.2). We believe the overall metric won't affect the comparison.\n\nAdditionally, we use underlines to denote the second place results while using bold to denote first place results in Tab.1. We will add the illustrations of these symbols in the caption.\n\nThe results of PVNet on the OnePose-HARD dataset are obtained by running their open-source code. Details are located at L267-268.\nFor the experiments on the LINEMOD dataset, since OnePose contains no such evaluation, we evaluate the OnePose by running their open-source code and using their pre-trained model. 
As described in L274-275, the results of other baselines, including PVNet, CDPN, Gen6D, are from their original paper.\n\n>**Q3:** The authors claim that their method is \"~10× faster\" [line 290] than other methods, but the actual runtimes of the other methods are not listed in the paper. Please consider including the runtimes of your method and the methods you compare to, in order to support the claim that your method is 10× faster.\n\n**R3:** We thank the reviewer for pointing out this missing information. We provide the runtimes of generalizable pose estimators as follows and will add them to Tab.1\n| Ours| OnePose| HLoc(LoFTR)| HLoc(SPP+SPG)|\n|:- |:- |:- |:- |\n| 87ms | 66ms | 909ms | 835ms |\n\nThe runtimes are evaluated on the same server described in L236. As described in L288-291, our method runs ~10× faster than HLoc-based methods. Since we use more 3D points to perform matching with query feature maps in a coarse-to-fine manner, our method is a little slower than OnePose.", " We thank the reviewers for the insightful suggestions. We address the major concerns below:\n>**Q1:** Both OnePose and OnePose++ are misclassified as being generalizable object pose estimation approaches. It is not too surprising that it outperforms the generalizable baselines as it is able to train (in the form of building a SfM model) per object.\n\n**R1:** Thank you very much for your comments. We clarify that the 'generalizable' baselines in our paper include OnePose, HLoc, Gen6D, which are given the same input and share exactly the same setting as ours. Therefore, the comparison in our experiments is fair and substantial. We follow the naming of previous methods OnePose[30] and Gen6D[19], which denote the property of eliminating object/category-specific training as ‘generalizability’.\n\n>**Q2:** Please describe in detail how the proposed keypoint-free SfM approach differs from prior work in this area.\n\n**R2:** We clarify the difference with previous works as follows and promise to add the discussion and reference of these previous methods in the final version.\n\n**Compare with PixSfM[18]:**\nThe main difference in refinement is that we leverage fine-level matching with Transformer to refine the 2D locations of coarse feature tracks and then optimize the 3D model with geometric error, while PixSfM uses pre-stored dense feature maps and feature-metric BA to refine the 3D model and 2D keypoints globally.\nThe advantages of our refinement are\n\n- Accuracy. The capability of the two-view transformer module in fine-level matching can be leveraged by our refinement to find accurate matches at low-texture regions, where the CNN feature map used by PixSfM struggles.\n- Storage Efficiency. We do not need to extract and store dense local features around each 2D point and keep them in memory to perform feature-metric optimization like PixSfM. Therefore the storage and memory peak during refinement is low.\n\nWe report the point cloud accuracy evaluated on OnePose-HARD scanned objects as follows. The results demonstrate that the 3D models reconstructed by our refinement achieve higher accuracy. Our refinement is also more storage efficient in terms of dense features storage cost. Notably, the image resolution in the dataset is 512×512. 
With image resolution increase, the storage cost of PixSfM will rise significantly since keypoint-free matchers will yield much more matches.\n||1mm|3mm|5mm|Feature Storage Cost|\n|:-|:-|:-|:-|:-|\n|LoFTR coarse + Our refinement|**29.5**|**73.6**|**85.8**|-|\n|LoFTR coarse + PixSfM|27.6|71.2|84.4|7.35GB|\n|LoFTR coarse (no refinement)|25.6|68.9|83.6|-|\n\nThe following results evaluated on the OnePose dataset illustrate that our refinement also brings improvement for the object pose estimation.\n||1cm1deg|3cm3deg|5cm5deg|\n|:-|:-|:-|:-|\n|LoFTR coarse + Our refinement|**50.7**|**80.0**|**87.0**|\n|LoFTR coarse + PixSfM|48.9|79.3|86.4|\n|LoFTR coarse (no refinement)|45.5|78.6|86.0|\n\n**Compare with SfM approaches provided by keypoint-free descriptors:**\n\nThe main difference is that all these approaches face the trade-off between point accuracy and repeatability. I.e., they scarface the sub-pixel match accuracy by rounding matches to grid level or merging matches within a grid to obtain repeatable 'keypoints' for SfM. On the contrary, our SfM obtains repeatable features while preserving the sub-pixel matching accuracy by the refinement phase.\n\n**Compare with [Widya et al., Structure from motion using dense CNN features with keypoint relocalization]:**\n\n[Widya et al.] and OnePose++ share a similar pipeline in terms of SfM, which firstly strikes for repeatable matches with low-res dense feature grids, then refines matching positions for higher accuracy.\n\nThe main difference in refinement is that [Widya et al.] only leverages the local information from each matched point to relocalize points. Since the lack of two-view or multiview constraints, the keypoint detection noise exists in its relocalization phase.\nIn contrast, our refinement phase performs multiple two-view dense matching in the local regions based on the coarse feature tracks to refine 2D point locations. Therefore, the detection error is avoided, and the capability of keypoint-free matchers' fine level matching is leveraged to boost the performance on low-textured objects.\n\n>**Q3:** Why were Patch2Pix, Dual RCNet, etc., not considered as baselines?\n\n**R3:** The main reason is that these methods are not state-of-the-art regarding their performance on both two-view matching and visual localization. We also conduct experiments on the OnePose-HARD dataset to compare our pipeline with these methods based on their SfM and localization methods. The results show that our method outperforms them by a large margin. We will add the results and references in the final version.\n\n||1cm1deg|3cm3deg|5cm5deg|\n|:-|:-|:-|:-|\n|Ours|**16.3**|**55.4**|**70.3**|\n|LoFTR(round)|15.4|43.7|53.4|\n|DRC-Net|11.3|37.0|47.8|\n|Patch2Pix|2.42|19.0|30.4|", " >**Q4:** The relation of OnePose++ to prior work on SfM-based object pose estimation and visual localization.\n\n**R4:** We thank the reviewer for pointing out the missing discussions and the references of the prior works. We describe the relations as follows, and we promise to add them to the final version.\n\n**Discussion with prior works on SfM-based pose estimation**\n\nCompared with previous works on SfM-based pose estimation, our pipeline can be regarded as \"renovating the pipeline with a learning-based approach\"[30]. The main contribution of our method is the keypoint-free framework to eliminate the pipeline's reliance on detected keypoints. 
Thus our method achieves improvements on low-textured scenarios, which are challenging for previous methods.\n\n**Discussion with prior works on visual localization**\n\nThe previous visual localization methods based on the direct 2D-3D matching focus on handling the large-scale problem, while the main challenge in our task is how to match the query image with the low-textured 3D model for object pose estimation. In our 2D-3D matching network, we eliminate the keypoint detection on the query image and leverage the attention module to provide the global receptive field and yield the discriminative features for 2D-3D matching.\n\nThe visualizations in Fig. 5 and discussions in Sec. 4.5 demonstrate the attention module plays a critical role in the 2D-3D matching and pose estimation of low-textured objects.\n\n> **Q5:** References for RANSAC and the PnP solver used are missing.\n\n**R5:** We thank the reviewer for pointing out the missing references. We promise to add them in the final version.", " We thank the reviewers for the insightful suggestions. We address the major concerns below:\n>**Q1:** How is the current method's dense reconstruction different from Multi-View Stereo paradigms? The major advantage or the difference between the proposed network and the previous baseline (onepose [30]) is the dense object reconstruction of the objects from videos. The contribution in the dense reconstruction from videos is well studied in the Multi-View Stereo frameworks. So I feel that the method doesn't have a lot of novelty in terms of reconstruction.\n\n**R1:** The comment 'the current method's dense reconstruction' may be a misunderstanding. As described in the paper, our reconstruction part is still an SfM-based method instead of Multi-View Stereo based, and we denote our SfM point cloud as semi-dense since we adapt the keypoint-free image matcher LoFTR, which performs semi-dense matching, to the SfM framework. Therefore our SfM-based pipeline should still be categorized into sparse reconstruction methods, and the reconstructed point cloud is significantly sparser than the dense reconstruction since we do not perform pixel-wise depth estimation such as PatchMatch or PlaneSweep.\n\nCompared with OnePose, our contribution in the reconstruction part is our SfM design to adapt keypoint-free feature matching methods to the SfM. As discussed in L38-42, the keypoint-free matcher LoFTR cannot be directly used for SfM since the inconsistent matches. Our keypoint-free SfM framework solves this problem and yields more complete 3D point clouds compared with the previous keypoint-based SfM framework, which benefits pose estimation.\n\n>**Q2:** Why are evaluations not compared to these paradigms to show the accuracy improvement if the dense reconstruction is very good?\n\n**R2:** The main reason is that there is no existing baseline that performs dense reconstruction on the given video and estimates object poses without object-specific training, i.e., identical to our setting. 
\n\nSince our setting aims for efficient pose estimation with the given video, we believe the SfM-based sparse reconstruction is more suitable for the setting because it is more computationally efficient than dense reconstruction.\n\nMoreover, the reconstructed SfM point clouds are more compact than the dense MVS point clouds because they are sparser and contain mainly informative 3D points, thus more suitable for storing 3D point features and efficient for performing direct 2D-3D matching in our pipeline.\n\nWe believe incorporating the dense reconstruction methods for object pose estimation in our setting can be explored as a direction for future works.\n\n>**Q3:** The paper assumes that the object videos are given aprior, so one-shot object pose estimation might be a misleading term as the method will fail if the objects videos are not provided beforehand.\n\n**R3:** Thank you very much for your comments. We clarify that the 'one-shot' naming indicates the setting that given one video shot of the object with annotated poses, our method can estimate its poses in arbitrary environments without additional pose estimator training.\n\nThis is similar to the \"one-shot\" setting in 2D detection and segmentation [1,2,3], which assumes \"given an example image of a novel, previously unknown object category (the reference), find and segment all objects of this category within a complex scene (the query image)\"[2]\n\n>**Q4:** Literature review of Multi-view stereo needs to be well studied and the difference between these methods and the proposed keypoint-free methods need to be well established. \n\n**R4:** Thank you very much for your comments. We will add the review of Multi-View Stereo methods and the discussion with our keypoint-free SfM framework in the final version.\n\n>**Q5:** The evaluations are not substantial as comparisons to other methods like CAD-model based pose estimation have not been well studied.\n\n**R5:** The comment \"comparisons to other methods like CAD-model based pose estimation have not been well studied\" may be a misunderstanding. In fact, we compare the proposed method with CAD-model-based baselines PVNet and CDPN on multiple datasets, as pointed out in Line 16-17, shown in Tab 2, 3 and discussed in Sec 4.3, 4.4.\n\nThe results demonstrate that our method achieves comparable results with CAD-model-based pose estimation methods, which are trained for each object with the given CAD model.\n\n**References**\n\n[1] Li, Xiang, Lin Zhang, Yau Pun Chen, Yu-Wing Tai and Chi-Keung Tang. “One-Shot Object Detection without Fine-Tuning.” ArXiv abs/2005.03819 (2020): n. pag.\n\n[2] Michaelis, Claudio, Ivan Ustyuzhaninov, Matthias Bethge and Alexander S. Ecker. “One-Shot Instance Segmentation.” ArXiv abs/1811.11507 (2018): n. pag.\n\n[3] Caelles, Sergi, Kevis-Kokitsi Maninis, Jordi Pont-Tuset, Laura Leal-Taixé, Daniel Cremers and Luc Van Gool. “One-Shot Video Object Segmentation.” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017): 5320-5329.", " We thank the reviewers for the insightful suggestions. We address the major concerns below:\n>**Q1:** The proposed method needs off-the-shelf 2D object detector.\n\n**R1:** In practice, the need for an off-the-shelf 2D object detector can be eliminated by leveraging 2D-2D feature matching. 
This issue has been addressed in Section 2 in the supplementary material of OnePose[30] (https://zju3dv.github.io/onepose/files/onepose_supp.pdf).\n\nFollowing OnePose, We first perform multiple 2D-2D feature matching between reference-query image pairs, and then select the image pair with the most inliers to estimate 2D affine transformation. The region of interest(RoI) in the query image is then detected by transforming the corner of RoI in the reference image with the estimated transformation.\n\nTo validate the effectiveness of this method, we present the evaluation results on the OnePose dataset with the feature-matching-based 2D detector below. The results demonstrate that the performance of the proposed method does not degrade significantly.\n\n| | 1cm1deg | 3cm3deg | 5cm5deg |\n|:- |:- |:- |:- |\n| Ours use GT bounding box (reported in the paper) | **50.4** | 80.0 | 87.0 |\n| Ours use bounding box from feature matching 2D detector | 49.6 | **80.4** | **87.2**|\n\n>**Q2:** The proposed method needs real images for training.\n\n**R2:** Our 2D-3D matching network is trained on real images but can be generalized to novel objects. Besides, our method needs an object video for building an SfM model, but we don't think this is a disadvantage, as in most scenarios capturing a video of an object is much easier than acquiring its CAD model or doing object-specific training.\n\n>**Q3:** CAD model or its equivalent can be reconstructed from the movie of target object with surrounded AR markers. This might decrease the advantage of the proposed method.\n\n**R3:** Thank you very much for your comments. Leveraging AR markers can only help solve the camera poses, while dense reconstruction itself requires other modules, and the quality of the dense reconstruction is not guaranteed, especially for low-textured objects. Moreover, dense reconstruction also requires more computation than our SfM-based pipeline. Therefore we believe it is not ideal for our one-shot scenario.\n\n>**Q4:** How about processing time for training and testing?\n\n**R4:** As detailed in L236-238, our 2D-3D matching network is trained on the OnePose training set. The training takes about 20 hours with a batch size of 32 on 8 NVIDIA-V100 GPUs. At test time, our matching module runs at 87ms for a 512 × 512 query image on a single V100 GPU.\n\n>**Q5:** Is the pose estimation network trained for each object?\n\n**R5:** No. As described in L28-29, our method eliminates the need for per-object training and CAD models. Therefore, it is more applicable for AR scenarios.", " The authors proposed 6-DoF object pose estimation algorithms which does not require CAD models of target objects. They require only video sequence of the target object with camera pose. The keypoint-free SfM build 3D models in training and the keypoint-free 2D-3D matching network can estimate the correspondences between the models and image query. The proposed method handles low-textured objects and achieved state-of-the-art accuracy on public dataset. Strengths:\n- The proposed algorithm can handle low-textured objects and does not require CAD model for training, those are useful for real applications.\n- The ablation study shows the effectiveness of each component.\n\nWeaknesses:\n- The proposed method needs off-the-shelf 2D object detector and real images for training. - How about processing time for training and testing?\n- Is the pose estimation network trained for each object? 
- A CAD model or its equivalent can be reconstructed from a video of the target object with surrounding AR markers. This might decrease the advantage of the proposed method.", " The paper looks at the problem of one-shot object pose estimation on textureless objects where previous keypoint-based methods fail to perform well. The main contribution is using the keypoint-free SfM pipeline to create a repeatable semi-dense point cloud, which automatically helps improve 3D-2D correspondences for estimating the object pose. Comparisons show that the method performs better than a previous method that uses sparse point cloud reconstructions. Strengths:\n- Moving away from keypoint-based methods helps with automatic pose estimation of objects in the wild.\n- The semi-dense reconstruction of the objects seems to be very helpful in the pose estimation due to better 2D-3D correspondences.\n- The results show that the method is more robust to occlusions than previous methods and a live demo helps show the method's accuracy.\n- The ablation study shows the advantages of the refinement step and the attention module in the pose estimation network.\n- The OnePose-HARD dataset is useful for research in pose estimation of textureless objects.\n\nWeaknesses:\n- The major advantage or the difference between the proposed network and the previous baseline (OnePose [30]) is the dense object reconstruction of the objects from videos. Dense reconstruction from videos is well studied in Multi-View Stereo frameworks. So I feel that the method doesn't have a lot of novelty in terms of reconstruction.\n- The paper assumes that the object videos are given a priori, so one-shot object pose estimation might be a misleading term as the method will fail if the object videos are not provided beforehand.\n- The literature on Multi-View Stereo needs to be reviewed more thoroughly, and the difference between these methods and the proposed keypoint-free method needs to be clearly established.\n- The evaluations are not substantial as comparisons to other methods like CAD-model based pose estimation have not been well studied. How is the current method's dense reconstruction different from Multi-View Stereo paradigms? Why are evaluations not compared to these paradigms to show the accuracy improvement if the dense reconstruction is very good?\n\n The limitations and potential negative impact are well studied in the paper. ", " The paper considers the problem of object pose estimation in scenarios where CAD models of the objects are not available. The paper describes OnePose++, a variant of the recently proposed OnePose approach that uses densely extracted descriptors (via LoFTR) rather than the SuperPoint keypoints used by OnePose. LoFTR provides matches between pairs of images, where the 2D positions of matching points vary depending on the image pair. As the resulting 2D positions are not repeatable, the paper uses an SfM approach designed to handle this scenario in order to build the 3D model of the object used for pose estimation. 2D-3D matches between a query image and the SfM model are established by directly matching descriptors against the 3D model in a coarse-to-fine manner. In addition to the OnePose++ method, the paper introduces a harder variant of the OnePose dataset, named OnePose-HARD. Experimental results show that OnePose++ outperforms most baselines by a wide margin (in particular, OnePose++ consistently outperforms OnePose).
The paper has multiple strengths:\nS1) The proposed OnePose++ approach is a natural extension of OnePose that shows how to swap keypoint-based features with keypoint-free features. Since the latter have shown promise in challenging scenes, e.g., for weakly textured objects or under strong illumination conditions, this is interesting.\n\nS2) OnePose++ clearly outperforms OnePose and also most of the baseline methods. The strong results are a strength of the paper.\n\nS3) The paper provides a detailed ablation study that analyzes the impact of the individual components of OnePose++.\n\nS4) The proposed dataset, OnePose-HARD, seems very challenging and thus has the potential to drive research in the field. It will be of interest to the community.\n\nOn the negative side, there are also multiple weaknesses:\nW1) Both OnePose and OnePose++ are misclassified as being generalizable object pose estimation approaches. It is true that the underlying LoFTR features and the coarse and fine matching stages generalize beyond the data they were trained on. Yet, OnePose++ requires that \"a video sequence with annotated poses is available for each object\" is available and builds a SfM model for this particular type of object. I don't see how this is not an instance-level method. The SfM model for one object might give reasonable results for another object if both objects have very similar shapes (and textures). But a single model for one particular type of an object, e.g., a particular chair, will not generalize over the full class (e.g., all potential chairs).\nGiven that OnePose++ is an instance-level method, it should be more closely compared to other instance-level methods. It is not too surprising that it outperforms the generalizable baselines as it is able to train (in the form of building a SfM model) per object.\n\nW2) The paper claims proposing a \"keypoint-free SfM framework for accurate and complete semi-dense reconstruction\" as one of its main contributions. It is not clear to my how the described framework contributes novelty to the literature:\na) Besides LoFTR, there are other keypoint-free descriptors, e.g., Patch2Pix [Zhou et al., Patch2Pix: Epipolar-Guided Pixel-Level Correspondences, CVPR'21], Sparse NCNet [Rocco et al., Efficient ´neighbourhood consensus networks via submanifold sparse\nconvolutions, ECCV'20], and Dual RCNet [Li et al., Dual-Resolution Correspondence Networks, NeurIPS'20], that are evaluated in a visual localization setting that requires an underlying SfM model. They thus also provide approaches for keypoint-free SfM. All of them are inherently applicable to object pose estimation (since they do not make any assumption on the type of scenes). Similarly, LoFTR also uses a keypoint-free SfM approach (based on Dual RCNet) (see https://github.com/zju3dv/LoFTR/issues/9). Another approach to keypoint-free SfM based on dense feature matching between images is [Widya et al., Structure from motion using dense CNN features with keypoint relocalization, IPSJ Transactions on Computer Vision and Applications 2018]. Yet, this prior work is not discussed. The differences between this prior work and the proposed approach should be clearly described. Furthermore, comparisons with other keypoint-free approaches, e.g., Patch2Pix or Dual RCNet, are missing.\nb) As far as I can see, the coarse reconstruction stage for keypoint-free SfM is the same as for LoFTR (based on the description provided here: https://github.com/zju3dv/LoFTR/issues/9). The refinement stage seems identical to [18]. 
The paper states that \"Note that our keypoint-free SfM framework is also related to PixSfM [18] but comes with different motivations. PixSfM improves keypoint-based SfM for more accurate 3D reconstructions by refining inaccurately-detected sparse local features with dense feature maps. Different from PixSfM, we aim to adapt the keypoint-free method LoFTR [29] to SfM for object pose estimation.\" However, I disagree with this statement. As stated in the paper, \"Every pixel in the downsampled image can be regarded as a “keypoint” in the original image.\" The goal of the refinement stage is to \"refine the object point cloud with sub-pixel correspondences.\" In other words, the motivation for the refinement stage is to obtain a more accurate 3D model by refining initially inaccurate keypoint positions. This is achieved using dense feature maps to detect more accurate keypoint positions.\n\nW3) As in the case of keypoint-free SfM, there are other directions of highly related work that omitted:\na) Work on object pose estimation using SfM rather than CAD models certainly predates OnePose. Examples include [Gordon & Lowe, What and Where: 3D Object Recognition with Accurate Pose, Toward Category-Level Object Recognition, 2006], [Rothganger et al., 3D Object Modeling and Recognition Using Local Affine-Invariant Image Descriptors and Multi-View Spatial Constraints, 3D Object Modeling and Recognition Using Local Affine-Invariant Image Descriptors and Multi-View Spatial Constraints, IJCV 2006], [Hsiao et al., Making specific features less discriminative to improve point-based 3D object recognition, CVPR 2010[, [Bhat et al., Visual words for 3D reconstruction and pose computation, 3DIM/3DPVT 2011], and [Fenzi et al., 3D Object Recognition and Pose Estimation for Multiple Objects using Multi-Prioritized RANSAC and Model Updating, DAGM/OAGM 2012]. This prior work should be properly acknowledged.\nb) The paper states that \"HLoc is slow during pose estimation because it depends on multiple 2D-2D image matchings as the proxy for building 2D-3D correspondences.\" Yet, there is quite some literature on visual localization algorithms that do not use image retrieval but directly match 2D features against 3D points via associated feature descriptors. Examples include: [Arth et al., Wide Area Localization on\nMobile Phones. 
ISMAR 2009], [Li et al., Location Recognition using Prioritized Feature Matching, ECCV 2010], [Li et al., Worldwide Pose Estimation Using 3D Point Clouds, ECCV 2012], [Choudhary & Narayanan, Visibility probability structure from sfm datasets and applications, ECCV 2012], [Donoser & Schmalstieg, Discriminative featureto-point matching in image-based localization, CVPR 2014], [Cao & Snavely, Minimal scene descriptions from structure from motion models, CVPR 2014], [Lim et al., Real-time monocular image-based 6-dof localization, IJRR 2015], [Lynen et al., Get out of my lab: Largescale, real-time visual-inertial localization, RSS 2015], [Zeisl et al., Camera Pose Voting for Large-Scale Image-Based Localization, ICCV 2015], [Camposeco et al., Toroidal Constraints for TwoPoint Localization under High Outlier Ratios, CVPR 2017], [DuToit et al., Consistent map-based 3d localization on mobile devices, ICRA 2017], [Liu et al., Efficient Global 2D-3D Matching for Camera Localization in a Large-Scale 3D Map, ICCV 2017], [Sattler et al., Efficient & effective prioritized matching for large-scale image-based localization, PAMI 2017], [Svarm et al., City-scale localization for cameras with known vertical direction, PAMI 2017], and [Lynen et al., Large-scale, real-time visual-inertial localization revisited, IJRR 2019]. Many of these approaches are directly applicable to the object pose estimation setting based on SfM models and should thus be discussed.\n\nThe following additional comments did not affect my recommendation:\n* References for RANSAC and the PnP solver used are missing. In order to consider raising my score, I would like to see the following points addressed in a rebuttal:\nQ1) Please describe in detail how the proposed keypoint-free SfM approach differs from prior work in this area.\nQ2) Why were Patch2Pix, Dual RCNet, etc. not considered as baselines?\nQ3) Please describe the relation of OnePose++ to prior work on SfM-based object pose estimation and visual localization (see W3 above). The paper adequately discusses limitations and potential negative social impact.", " The paper proposes an improvement of the one-shot pose estimation system OnePose to better estimate the pose of low-texture objects. In particular, the keypoint-based matching component of OnePose is replaced with the key-point free matching method LoFTR. For the evaluation of the improved functionality, the authors propose the new dataset OnePose-HARD, which contains low-texture objects along with their pose annotations.\n Originality:\n+ The idea of replacing keypoint-based matching with keypoint-free matching for improved low-texture performance is straightforward and a logical extension of the prior work.\n\nQuality:\n+ The paper is well written and structured.\n+ The paper precisely adheres to the formatting requirements and the page limit.\n+ Related work is described sufficiently and it is made clear how the proposed method is positioned in the existing landscape.\n+ The proposed method is evaluated on appropriate datasets and compared to relevant state-of-the-art methods.\n\nClarity:\n+ The language is clear and easy to follow.\n+ Methods used in the paper, e.g., LoFTR, are explained briefly, but well understandable.\n- The description of the proposed OnePose-HARD dataset is quite abstract. Example images of the new dataset are not included in the paper.\n- In the evaluation, it is not always clear where the results of the other methods come from. 
E.g., in Table 1, the results of the OnePose method on the OnePose dataset is given as an overall single value for the whole dataset, while the original OnePose paper lists the results of the three categories \"large\", \"medium\", and \"small\" separately, but no overall single value. Similarly, in Table 3, the paper gives results for OnePose on the LINEMOD dataset, but the original OnePose paper contains no such evaluation.\n- The authors claim that their method is \"~10× faster\" [line 290] than other methods, but the actual runtimes of the other methods are not listed in the paper.\n\nSignificance:\n+ The addressed issue with low-texture objects is relevant in practice.\n+ The results of the proposed method seem to be significantly better than comparable methods, particularly in low-texture scenarios. \n- Please consider including example images of the OnePose-HARD dataset in the paper.\n- Please clarify the source of the results of other methods that you compare your method to: Where are these numbers taken from or did you evaluate the methods yourself? In particular, the results of OnePose on the OnePose dataset and on LINEMOD, as described above. Please also describe why some results are underlined in Table 1.\n- Please consider including the runtimes of your method and the methods you compare to, in order to support the claim that your method is 10× faster.\n\n\nAdditional remarks (typos, suggestions etc.), no need to address in the rebuttal:\n- line 49: \"establish\" -> \"establishes\"\n- line 70: \"OnePose-HARD ,\" -> \"OnePose-HARD,\"\n- line 104: \"keypoints .\" -> \"keypoints.\"\n- Figure 2, caption: \"a reference image sequences\" -> \"a reference image sequence\"\n- Figure 2, caption: \"point cloud which are\" -> \"point cloud which is\"\n- line 131: \"build\" -> \"builds\"\n- line 295: \"number(~ 5000)\" -> \"number (~ 5000)\"\n- On the naming of the proposed dataset: The suffix \"HARD\" is very generic and does not tell what the difficulties actually are. Please consider a more informative suffix, such as \"LowTexture\".\n Yes, limitations and impact were discussed where applicable." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "2EYVgYc0E64", "x4p0nN_K--z", "t9b298mKdGs", "0UrqmCnxjU", "gvsHS3HJQdW", "WQc0ZgE92MW", "ZphKQxqijC", "-HwdPtmJMvw", "-HwdPtmJMvw", "8HujlfSeJqc", "_tyeP4Sljo1y", "kQVgUrUfsuR", "QEJXgCgfevt", "i1x3fLn9X2O", "Mt2SG4bBbry", "_tyeP4Sljo1y", "L-BU2a6v9k9", "XUE5-0wKa9D", "OSrNdlttdmZ", "OSrNdlttdmZ", "dmZVK7RrOZ", "w-rI0KbKqIW", "nips_2022_BZ92dxDS3tO", "nips_2022_BZ92dxDS3tO", "nips_2022_BZ92dxDS3tO", "nips_2022_BZ92dxDS3tO" ]
nips_2022_rY2wXCSruO
DeepInteraction: 3D Object Detection via Modality Interaction
Existing top-performance 3D object detectors typically rely on the multi-modal fusion strategy. This design is, however, fundamentally restricted because it overlooks modality-specific useful information, ultimately hampering model performance. To address this limitation, in this work we introduce a novel modality interaction strategy where individual per-modality representations are learned and maintained throughout, enabling their unique characteristics to be exploited during object detection. To realize this proposed strategy, we design a DeepInteraction architecture characterized by a multi-modal representational interaction encoder and a multi-modal predictive interaction decoder. Experiments on the large-scale nuScenes dataset show that our proposed method surpasses all prior arts, often by a large margin. Crucially, our method ranks first on the highly competitive nuScenes object detection leaderboard.
Accept
This work was evaluated positively overall on technical grounds, with some concerns mainly related to limited experimental validation, the need for additional justification and explanation, and the missing computational cost analysis. The provided rebuttal responded sufficiently well to these concerns, and the overall evaluation is positive.
val
[ "aZXN_Ygp2S0", "T2lWuWxWpGY", "afMsveYEh7r", "fERc0MnS7Dw", "SCb34geObeO", "DV9l9IR0_Nn", "nttt4noTrg", "7CQ3GATLyZn", "I3ixA8LORn0" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The paper describes a 3D object detection architecture for 3D LiDAR point clouds. The question at hand is whether or not the authors violated double-blind reviewing.\n\nHere are the facts:\n - The authors reference their position and test set results on a public leaderboard.\n -The leaderboard is the nuScenes detection task leaderboard [1], where the authors are de-identified. The entry is date-stamped the day before the NeurIPS deadline.\n - The relevant sections of the leaderboard are reproduced and de-identified in the paper. The website link for the leaderboard is not linked anywhere in the paper.\n - Reviewer F1dJ has raised the issue and provided the link in question\n\nBecause I am not familiar with this particular subfield, I have the following questions:\n\n1. How do other teams in this field and on these kinds of tasks handle the de-identification? By visual inspection 40-50% of the leaderboard entries are anonymized. Can teams submit results anonymously and then de-identify after the review period is over?\n\n2. Does NeurIPS have an official policy for authors to not post on leaderboards and mention them in the paper directly? Would it be different if the authors had merely described \"an un-named detection task\" and provided test-set results against several known baselines?\n\n3. Do the NeurIPS reviewers receive instructions to not knowingly seek out and try to de-identify the authors? Again, the leaderboard is not linked in the paper anywhere. The relevant portions of the leaderboard are reproduced in Table 1 in the paper. I can see an argument that this is similar to posting a Github repo in advance where the NeurIPS reviewer could find the Github repo with names if they searched for the exact paper title, but they may be instructed not to. Alternatively, the conference could instruct authors not to post ANY identifiable information on the internet.\n\nIn light of these questions and my own understanding of the situation, I'm inclined to say that there is no double-blind reviewing policy violation, but I am curious about the thoughts from the other reviewers. I particularly want to hear about how other researchers in the field operate with these anonymity constraints as well as how conferences (including NeurIPS!) have set regulations around this.\n\n[1] https://www.nuscenes.org/object-detection?externalData=all&mapData=all&modalities=Any\n See above for open questions about whether or not there are ethical issues. In light of these questions and my own understanding of the situation, I'm inclined to say that there is no double-blind reviewing policy violation, but I am curious about the thoughts from the other reviewers. I particularly want to hear about how other researchers in the field operate with these anonymity constraints as well as how conferences (including NeurIPS!) have set regulations around this.", " Thanks for your updates! After reading other reviews, I'm happy to keep my current rating. ", " A reviewer noted “Research Integrity Issues (e.g., plagiarism), Responsible Research Practice (e.g., IRB, documentation, research ethics)”. However, they did not provide additional information in their comment. Another reviewer noted: “Potential negative societal impact is not discussed (there is a section in the supplementary titled as such but discusses only research impact).” Authors acknowledge there may be increased computing resources to run multimodal algorithms. That may contribute to longer term environmental or sustainability issues. 
The authors might consider including information about potential impact (improvement?) to safety in the supplementary materials. \n", " We thank the reviewer for the detailed review as well as the suggestions for improvement. Our response to the reviewer’s comments is below:\n\n**Q1: The comparison of speed and model size.**\n\nThanks for this valuable suggestion. \n\nWe have presented the latency of DeepInteraction in Table 3d in the main text. As suggested, we have further compared the inference latency and model size between our DeepInteraction and a number of representative multimodal 3D detection methods. This test was conducted on a Tesla V100 GPU. The latency and parameter number of MVP [2] are obtained with its virtual point generation algorithm and object detection algorithm considered. The mAP and NDS of PointAugmentation [1] and MVP [2] are taken from the original papers. We tested latency, FPS, and parameter numbers using their open source code and their published configuration file. The table below shows that our method runs the fastest among the multimodal fusion methods in the table below. The same is observed for the number of parameters. Please note in this work we do not focus on the model efficiency.\n\nTable 1: The comparison of speed and model size.\n| Method |mAP(%)↑|NDS(%)↑|Latency(ms)↓|Pamras(M)↓|FPS↑|\n|:----|:----:|:----:|:----:|:----:|:----:|\n|PointAugmenting [1]|66.8|71.0|455|**28.2**|2.2|\n|MVP [2]|66.4|70.5|830|124.2|1.2|\n|DeepInteraction|**70.8**|**73.4**|**357**|59.4|**2.8**|\n\n**Q2: More visualizations.**\n\nGreat suggestion. We have now provided extra visualization examples in Figure 3 of our revised supplementary material. In particular, with multimodal fusion by our DeepInteraction, the model can successfully identify those objects that are difficult to recall in the LiDAR point cloud only. \n\n\n>[1] Wang, Chunwei, et al. Pointaugmenting: Cross-modal augmentation for 3d object detection. *CVPR* 2021.\n\n>[2] Yin, et al. Multimodal virtual point 3d detection. *NeurIPS* 2021.", " We thank the reviewer for the detailed review as well as the suggestions for improvement. Our response to the reviewer’s comments is below:\n\n**Q1: Ablation on the bilateral feature fusion *vs* unilateral fusion.**\n\nAs clearly presented in Table 3c of the main paper, the bilateral modality interaction we introduce in this paper is a key performance contributor. Specifically, the first row of Table 3c gives the result (only 66.4% mAP) **without** the bilateral interaction encoder. This is clearly inferior to the variants (other rows in Table 3c) equipped with our proposed bilateral interaction encoders with various layers. \n\nTo more precisely demonstrate the superiority of bilateral feature interaction, we have now compared our bilateral interaction (naive version of our DeepInteraction) with the classical unilateral fusion alternative Transfusion [1]. Here we limit our DeepInteraction using the same number of encoder layers as Transfusion for a fair comparison. The table below shows that our bilateral interaction is clearly more effective for modality fusion.\n\nTable 1: Ablation on the bilateral feature fusion *vs* unilateral fusion. 
\n|Method|mAP|NDS|\n|:----|:----:|:----:|\n|Transfusion w/ unilateral|67.5|71.3|\n|Transfusion w/ bilateral|**68.7**|**71.9**|\n\n**Q2: The improvements come from the stronger backbone which seems to have the biggest impact according to Table (e)?**\n\nNo.\n\n**First**, we adopt the widely used Resnet-50 and Voxlnet as our image and Lidar backbone without bells and whistles.\n\n**Second**, as clearly presented in Lines 237-244 and Table 3e, with the same backbone, our DeepInteraction is clearly superior to the previous state-of-the-art method Transfusion [1] and this advantage is generic to the backbone selected (*e.g.*, PointPillars or VoxelNet). It is worthy to note that backbone is orthogonal to our model design novelty (*e.g.*, modality interaction) and not comparable in an apple-to-apple manner (a very basic principle). \n\n**Third**, our DeepInteraction with a ResNet-50 backbone can outperform the latest concurrent work BEVFusion [3] with the stronger Tiny-Swin backbone on the nuScenes dataset, further demonstrating the advantage of our modality fusion strategy rather than the backbone.\n\n**Q3:Results on other datasets?**\n\nAs suggested, during the rebuttal period, we have further evaluated our DeepInteraction on the Waymo and KITTI benchmark. Due to limited time and computational resources, we can not well tune the hyperparameters for optimal performance.\n\n(1) Waymo open dataset\n\nFor the Waymo Open dataset, we used the Transfusion-L [1] trained on Waymo as our LiDAR-only baseline and used the ResNet-50 from the cascade mask RCNN pretrained on the nuImage instance segmentation task as our image backbone (same as on nuScenes in the main paper).\nThe L2 mAPH of our approach are listed below.\n\nTable 2: LEVEL_2 APH of the Waymo validation set (%).\n|Method|APH/L2@Vehicle|APH/L2@Pedestrian|\n|:----|:----:|:----:|\n|PointAugmenting [2]|62.2|64.6|\n|Transfusion [1]|65.1|64.0|\n|DeepInteraction|**65.4**|**64.9**|\n\nFrom this table, we can see that our model can achieve superior performance over the latest alternative approaches, with the biggest gain in the pedestrian category with small size.\n\n(2) KITTI dataset\n\nThe experiment on the KITTI dataset follows the same recipe as on Waymo. The table below presents the comparison of 3D AP on the moderate split of the KITTI validation set. Beyond the LiDAR-only baseline Transfusion-L, we also reproduced Transfusion-LC on KITTI for comparison. We observe from the table below that our method can achieve the best results for both car and pedestrian. This demonstrates that the proposed fusion strategy can consistently bring in benefits for LiDAR with more beams.\n\nTable 3: The performance on the KITTI validation set (%).\n|Method|3D AP@Vehicle|3D AP@Pedestian|\n|:----|:----:|:----:|\n|Transfusion-L [1]| 69.8 |51.9|\n|Transfusion-LC [1]|70.0|52.6|\n|DeepInteraction|**70.2**|**53.5**|\n\n**Q4: Missing related work.**\n\nThanks. The related work of our first submission mainly focuses on previous works for outdoor scenarios. We have now added the suggested work although with less relevance in the revised paper.\n\n**Q5: Typos**\n\nThanks, we have corrected them in the revised paper.\n\n**Q6: Additional discussion of social impacts and limitations**\n\nThanks. We have added more detailed discussion on the limitations and potential impacts in the revised supplementary.\n\n>[1] Bai, Xuyang, et al. Transfusion: Robust lidar-camera fusion for 3d object detection with transformers. *CVPR* 2022.\n\n>[2] Chunwei Wang, et al. 
PointAugmenting: Cross-modal augmentation for 3d object detection. *CVPR* 2021.\n\n>[3] BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation, 2022\n", " We thank the reviewer for the detailed review as well as the suggestions for improvement. Our response to the reviewer’s comments is below:\n\n**Q1: Results on the other datasets.**\n\nThanks. As suggested, we have further evaluated our DeepInteraction on the Waymo benchmark. Due to limited time and computational resources, we have not exhaustively tuned the hyperparameters.\nWe used the Transfusion-L [1] trained on Waymo as our LiDAR-only baseline and used the ResNet-50 from the cascade mask RCNN pretrained on the nuImage instance segmentation task as our image backbone (same as on nuScenes in the main paper).\n\nTable 1: The LEVEL_2 APH results on the Waymo validation set.\n\n| Method | APH/L2@Vehicle| APH/L2@Pedestrian|\n|:----|:----:|:----:|\n|PointAugmenting [2]|62.2|64.6|\n|Transfusion [1]|65.1|64.0|\n|DeepInteraction|**65.4**|**64.9**|\n\nFrom this table, we can see that our model can achieve slightly superior performance compared to the latest alternative approaches, with the biggest gain on the category with small size (*e.g.*, pedestrian).\n\n**Q2: Generality of tiny objects containing only several Lidar points.**\n\nGood question. To understand whether tiny objects with a few pixels in the Lidar points are general in other datasets, we present statistics of the number of points within ground truth bounding boxes in the validation split of each dataset.\n\nTable 2: The frequency (%) of objects with different numbers of LiDAR data. \n|Number of LiDAR point|0~20|20~40|40~60|60~80|80+|\n|:----|:----:|:----:|:----:|:----:|:----:|\n|nuScenes|69.3|10.1|4.4|2.7|13.5|\n|Waymo|24.1|14.5|9.0|6.3|46.1|\n\nFrom the above table, we can observe that tiny objects (*e.g.*, < 20 points) are highly general across different scenarios/benchmarks at varying degrees. Specifically, there are 69.3% and 24.1% of objects containing less than 20 points in nuScenes and Waymo respectively.\n\n\n>[1] Bai, Xuyang, et al. Transfusion: Robust lidar-camera fusion for 3d object detection with transformers. *CVPR* 2022.\n\n>[2] Chunwei Wang, et al. PointAugmenting: Cross-modal augmentation for 3d object detection. *CVPR* 2021.\n\n\n\n\n\n\n", " The paper introduces DeepInteraction, a new approach to fuse rgb and depth input to do 3D object detection. Prior works only add rgb features to the Lidar point cloud, while DeepInteraction treat them equally and fuse them in the transformer layers. Experiments are conducted on nuScenes benchmark and DeepInteraction achieves state-of-the-art performance.\n Strengths:\n\n- DeepInteraction achieves state-of-art performance on nuScenes benchmark. Under the same evaluation setting (without ensemble), it generally improves mAP by 2 points whether test-time augmentation is used or not.\n- The experiments on nuScenes are extensive. It's interesting to see DeepInteraction has reasonable performance whether the 3D backbone is PointPillars or VoxelNet, and VoxelNet works better.\n\nWeaknesses:\n\n- How does the approach work on other automonous driving datasets? I'm convinced DeepInteraction helps recover tiny objects which are just several pixels in the Lidar points. But are such cases really general in other benchmarks? Or, are there any reasons it cannot be run on other benchmarks?\n See weaknesses. 
Limitations are discussed in the paper.", " This work tackles the task of 3D object detection in autonomous driving scenarios. The claimed key contribution is the proper combination of two input modalities: RGB camera images and point clouds from LiDAR scanners (unlike prior methods which treat one of the modalities as auxiliary). The proposed method outperforms prior methods on the nuScenes benchmark. The method is analyzed on the same dataset. - The paper makes a strong point that the bilateral fusion between camera and lidar modalities is the key part that contributes to the strong performance of the proposed model. From the ablation study alone it is not directly clear if the improvements come from the change from unilateral to bilateral modality information flow or from stronger backbones which seem to have the biggest impact according to Table (e).\n\n- The method is evaluated only on a single dataset. There are multiple other popular 3D object detection benchmarks, e.g., [KITTI](http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d).\nEvaluation on more datasets would show the generalization capability of the model.\n\n- Important related work on image depth fusion and transformer-based 3D object detection appears to be missing, e.g., [ImVoteNet](https://arxiv.org/pdf/2001.10692.pdf), [3DETR](https://arxiv.org/abs/2109.08141) [Misra et al. ICCV 2021]. (1) Which experiment in the ablation study shows that the bilateral feature fusion is indeed the key component necessary for the improved scores?\n\n(2) How does the model perform on other datasets e.g. KITTI 3D object detection?\n\n**Minor Details**\n\n- Fig.2: inconsistent notation \\phi <-> \\varphi in (a) and (b).\n- l.59 ‘preform’ → ‘perform’\n- reference [5] and [6] are the same\n Potential negative societal impact is not discussed (there is a section in the supplementary titled as such but discusses only research impact). Limitations are only discussed on an abstract meta level. Overall, the two sections (E, F) in the supplementary are not adding any real value to the paper and seem to exist only to tick the required boxes.", " (1) The paper reveals and gives an analysis of the unilateral limitation of feature aggregation on different modalities for existing methods.\n(2) The paper proposes a new framework for bilateral feature interaction and association on different modalities.\n(3) The paper achieves state-of-the-art results on nuScenes benchmark. Strengths:\n(1) The paper is well-written and easy to follow, especially for demonstrating its motivation in the introduction session.\n(2) The experiments are sufficient and convincing for both comparing with previous methods and ablation studies.\n\nWeaknesses:\n(1) The paper lacks the comparison of speed and memory between different methods. For the autonomous driving scenarios whether the model can run in real-time is also important. I'm curious about whether the DETR-based framework can run as fast as, at least not slower too much compared with previous SOTAs.\n(2) Although the proposed bilateral system is reasonable, it would be great to show some visualizations of the heat map of the features from the two modalities to help readers get a deeper understanding on how the visual information from the two sources is well selected and utilized. 
\n Authors are encouraged to provide additional experiments for the two concerns I list in the weakness part:\n\n(1) The comparison of speed and memory.\n(2) Some visual results (e.g., heatmap of two sources) to demonstrate the effect of the proposed bilateral framework. The authors have presented the potential societal impacts and limitations in their supplementary material. Overall, they look good to me." ]
[ -1, -1, -1, -1, -1, -1, 6, 1, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "nips_2022_rY2wXCSruO", "DV9l9IR0_Nn", "nips_2022_rY2wXCSruO", "I3ixA8LORn0", "7CQ3GATLyZn", "nttt4noTrg", "nips_2022_rY2wXCSruO", "nips_2022_rY2wXCSruO", "nips_2022_rY2wXCSruO" ]
nips_2022_Qb-AoSw4Jnm
MoVQ: Modulating Quantized Vectors for High-Fidelity Image Generation
Although two-stage Vector Quantized (VQ) generative models allow for synthesizing high-fidelity and high-resolution images, their quantization operator encodes similar patches within an image into the same index, resulting in a repeated artifact for similar adjacent regions using existing decoder architectures. To address this issue, we propose to incorporate the spatially conditional normalization to modulate the quantized vectors so as to insert spatially variant information to the embedded index maps, encouraging the decoder to generate more photorealistic images. Moreover, we use multichannel quantization to increase the recombination capability of the discrete codes without increasing the cost of model and codebook. Additionally, to generate discrete tokens at the second stage, we adopt a Masked Generative Image Transformer (MaskGIT) to learn an underlying prior distribution in the compressed latent space, which is much faster than the conventional autoregressive model. Experiments on two benchmark datasets demonstrate that our proposed modulated VQGAN is able to greatly improve the reconstructed image quality as well as provide high-fidelity image generation.
Accept
The three reviewers had significantly diverging final opinions (strong accept, borderline accept and weak reject). The authors addressed many of the concerns in their rebuttal. I read the paper carefully, and I agree with the concerns from one reviewer about why the improvements in stage-1 do not lead to significant improvements in stage-2. I think this concern needs to be properly addressed, because otherwise it is unclear what the benefit of this approach would be for real applications. While previous work has shown that improved stage-1 performance leads to improved stage-2 performance, why was it not replicated in this situation? I also found the analysis of why the spatially conditioned normalization improves reconstruction to be lacking. If the "jagged" structures are addressed by this work, then understanding why with simple examples would have shed more insight into the technical contribution. However, in summary, I think this paper is slightly above the acceptance bar, and addressing the above concerns is recommended for the final version.
test
[ "WlETZSG0I8x", "VLLP3zzno17", "cHtiZr8dowOS", "2yZd9pMfOHKZ", "wCX7KXajPC1", "l35Xob3uDP6", "WlNdWjaJ6BG" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers,\n\nWe first thank you again for your valuable comments and detailed suggestions. In the previous replies, we have tried our best to address your questions and revised the manuscript based on the suggestions.\n\nWe are looking forward to your reply to our responses, and we are open to any discussions to improve this work.\n\nBest wishes!", " Thank you for your constructive and detailed comments. We have revised the manuscript according to the suggestions and elaborated all concerns below.\n\n- **Q1: “...improve many applications…”**\n\nThanks for the suggestion of applying our modulating quantized vectors to many other downstream applications, which are definitely our future work. However, as the first step, we want to focus on the reconstruction and the generation, which is also a common scope for the existing works such as VQ-VAE, VQ-VAE-2, RQ-VAE, ViT-VQGAN. The reconstruction is the cornerstone to learn the compact and expressive representation for downstream tasks. As demonstrated in RQ-VAE (CVPR’22) and ViT-VQGAN (ICLR’22), a better reconstruction naturally results in better editing applications **under the same compression ratio**. In addition, **in the appendix, we show the excellent image completion results with huge masked regions (90% masked ratio)** using our proposed simple and efficient quantizer MoVQ. Our proposed module is flexible to be integrated into other VQ-based architectures and be applied for other applications, and we leave it to the research community for future explorations.\n\n- **Q2: Motivation and methodology**\n\n**a). Motivation**: In most cases, the repeated indexes result in **\"jaggies\" artifacts** in the original VQGAN decoder, which can be observed in many VQGAN visual results. To further elaborate, we encoded 10,000 images on ImageNet using VQGAN (16384 entries), where there are averagely 27.21% repeated indexes. However, this repeated **\"jaggies\" artifact** is difficult to be measured in synthesized images for patch similarity. \n\n**b). Significance**: While the concurrent work RQ-VAE (CVPR’22, unpublished during the submission) mitigates this issue using a residual representation in a recursive way, it requires much more embedding times. Another concurrent work ViT-VQGAN (ICLR’22, unpublished during the submission) applies larger ViT model for the embedding. Compared to these work, our key motivation is to modulate the spatial information for discrete index and preserve the tokens’ special information as in SPADE [27]. \n\n**c). Conciseness**. To achieve the goal, we built our model and code upon the VQGAN baseline by simply replacing the group normalization with the proposed normalization layer, along with different types of initial spatial features in the decoder. \n\n**d). Efficiency**. Our model significantly improves the reconstruction quality than the VQGAN baseline, and the reconstruction performance is even better than the concurrent works RQ-VAE and ViT-VQGAN, suggesting the reconstructed images are closer to the original inputs, which can contribute to many downstream image generation tasks.\n\n**e). Flexibility**. As the proposed normalization layer is simple, efficient and yet effective for preserving the spatial information on discrete representation, we believe this plug-in module would be useful to the community. It can be easily reproduced and integrated into other VQ frameworks. \n\n- **Q3: Significance in terms of the generation results compared to SOTA**\n\na). 
Please see our response to **Reviewer D1KP Q2** regarding the generation results. \n\nb). **Our better compact representation is not due to a larger latent space**. Our model used **the same number of token in latent space ($16\\times16\\times4$ (ours) vs $8\\times8\\times16$ (RQ-VAE) vs $32\\times32$ (ViT-VQGAN)) and even a smaller codebook (1024 (ours) vs 16483 (RQ-VAE) vs 8192 (ViT-VQGAN)) than the concurrent work RQ-VAE and ViT-VQGAN**, and yet achieved much better reconstruction quality. Besides, without a special design for the stage-2, our model achieved better performance than RQ-VAE (a larger model) on image generation.\n\n- **Q4: “Improving only the stage-1 makes the paper less interesting….”**\n\nThe overall VQ-based image synthesis quality depends on both stage-1 quantizer and the stage-2 probability model. Therefore, some works (VQ-VAE-2, RQ-VAE) focus on the stage-1, and some works (VQ-DDM, MaskGiT) focus on the stage-2, and others (VQ-GAN, ImageBART, ViT-VQGAN) address both. In this paper we focus on improving the stage-1 quantizer of the VQ-based image synthesis pipeline. We show that the proposed module leads to better reconstruction quality in stage-1 (Figures 3 and 4, and Table 1) under the same compression ratio (RQ-VAE and ViT-VQGAN), which leads to a better generation quality in stage-2 (Tables 2 and 3) than the baseline VQGAN and the concurrent work RQ-VAE under the similar training setting. \n", " Thanks for your constructive comments and detailed suggestions.\n\n- **Q1:”The novelty is somewhat limited…”**\n\na). Please see our response to **Reviewer hVkj Q2**. \n\nb). Regarding MaskGIT, first of all, we did not claim the adoption of MaskGIT as a contribution. **MaskGIT is only used for the faster sampling and our performance gain is not due to MaskGIT**. We believe it is important to facilitate researchers in this area to do fair comparisons efficiently, for which generating examples faster is critical, since the previous way of autoregressive sampling is too slow. To demonstrate our improvements mainly come from our special design in the stage-1, we additionally report the quantitative results with autoregressive sampling like VQGAN in revision. The autoregressive sampling achieves a slight better FID score (8.52 vs 8.78 on FFHQ, and 7.13 vs 7.22 on ImageNet), compared to the MaskGIT sampling. However, to sample 60,000 examples, the autoregressive sampling takes 10 days with a batch size of 12 in 4 V100 GPU, while MaskGIT takes about 3 hours. \n\n- **Q2: “The performance on image synthesis is not good enough…”**\n\na). It is hard to give a fair comparison with MaskGIT (CVPR’22) and VIT-VQGAN (ICLR’22). MaskGIT is trained with 16 CloudTPUv4 with batch size of 256 for 300 epochs training in 4 days, and VIT-VQGAN is trained with 128 ColudTPUv4 with batch size of 256 for 500,000 steps training in 36 hours. We can only access 4 shared Tesla V100 GPUs, which cannot handle such a large batch size with such long training steps. However, our proposed spatially conditional normalization is a plug-in module that can be easily integrated to those architectures to further improve their performance.\n\nb). During the submission, these excellent works VIT-VQGAN and MaskGIT are not officially published, and the corresponding training codes are not publically released. Thus, we also cannot retrain them according to our setting for a fair comparison. We listed them as baselines simply to report the latest concurrent works.\n\nc). 
As claimed in Line 206-243, **our network architecture, code and hype-parameters are built upon the VQGAN baseline (CVPR’21)**. For MaskGIT, we just adopted their parallel sampling strategy. Compared to the VQGAN baseline, our model significantly improves the performance on image reconstruction and generation. Even compared with the concurrent work RQ-VAE (CVPR’22, unpublished during the submission), our model achieves better generation quality using smaller codebook size with same numbers of tokens. Our model significantly improves the reconstruction quality, which can be very helpful for other downstream tasks such as image inpainting, interpolation and editing.\n\n- **Q3: “The evaluation is somewhat limited…”**\n\nThe scores in Figure 6(c) are for **different channels corresponding to different numbers of tokens**. It is naturally challenging to predict more tokens for the probability models in the stage-2. However, **under the same compression ratio, i.e. the same number of tokens**, a better rFID score indicates a better compact representation, leading to a better generation score. This is verified in the table below. Besides, the concurrent works VIT-VQGAN (ICLR’22) and RQ-VAE (CVPR’22) also claimed that a better quantizer is definitely helpful for the generation task. Moreover, thanks to the suggestion, we added the ablation FID results with or without the proposed spatially conditional normalization for generation (see the table below as well as the updated Fig.6(a)), which clearly demonstrates the superiority of our proposed module.\n\n| | Methods | rFID | FID|\n| -- | -- | -- | -- |\n| $\\mathbb{A}$ | Baseline VQGAN | 4.42 | 11.4 |\n| $\\mathbb{B}$ | + multichannel x4 | 3.78 | 10.6 |\n| | w/ sinusiods | 3.52 | 9.17 |\n| $\\mathbb{C}$ | learned constants | 3.48 | 8.86 | \n| | w/ Fourier features | 2.26 | 8.78 | \n\n- **Q4: Regarding questions**\n\nMost of the questions have been addressed above. As for why the proposed normalization is only used in the first 3 blocks of the decoder, it is simply because features in the first 3 blocks hold the same resolution in VQGAN baseline. We directly apply the conditional discrete map ($16\\times16\\times4$) into these layers. If applying the normalization layer in other blocks, we will need to learn multiscale representations or upsample the discrete map to different resolutions, resulting in a complex predication for stage-2.\n\n- **Q5: Regarding limitations**\n\nThanks for this insightful comment. We indeed have some initial experiments on simultaneously masking and sampling multichannels tokens, but the observed performance is slightly worse. We guess this may because some token in some special channels might be significant, which needs to be sampled first. For instance, we might first generate an eye token and then predict its color. As the stage-2 is not the focus of this paper, we leave it for our future investigation. \n", " Thank you for your constructive comments and recognition of our MoVQ model.\n- **Q1: “...not always careful in its comparison to other methods…”**\n\nThanks for the suggestion. We have reported the codebook size and the latent size in Table 1 for a fair comparison on all models. The same configuration is used in Figure 3. We have added this detail into the revised version. 
Compared to the latest state-of-the-art RQVAE [24] (CVPR’22, unpublished during this submission), our model significantly improves the image quality in the first stage under the same compression ratio, while using much smaller codebook size (1024 entries vs 16384 entries).\n\n- **Q2: “...generalized normalization functions…”**\n\nThanks for this insight comment and interesting discussion. Different from the traditional normalization, which is for better statistical learning, our proposed spatially conditional normalization is designed to improve the current VQ based generation framework, which often embeds similar neighboring patches into the same quantization index and leads to repeated artifact patterns in the generated images. Specifically, the proposed normalization layers provide different scale and shift values according to the different quantized values, as well as adapt to different spatial locations. Currently, this spatial normalization is specific to the discrete token index map. For the conventional network architecture (such as MaskGIT), we need to design the specific conditional map as the spatially-variant input.\n\n- **Q3: Regarding limitations**\n\nThanks for the insightful comment. The observation to the child image in Fig.4 is interesting. Zooming into this particular example, we do agree that the constructed image does have some perceived identity or attribute difference from the original image. This might be due to the FFHQ dataset contains much more young people (34,654 (age:20-40)) than children (9,873 (age:0-10)). On the other hand, we did provide detailed quantitative comparisons in all popular metrics including PSNR (pixel level), SSIM (patch level), LPIPS (feature level), rFID (dataset level) in Table 1. Compared to the existing VQ-based methods, the proposed method has significantly improved the reconstruction quality under the same compression ratio. As for traditional compression techniques, they can indeed be faithful to each individual image, but are unable to utilize the power dataset priors like the data-driven approaches. Thus, their compression ratios are limited, and the compressed images may not be photorealistic.\n", " The author's introduce a new VQ-GAN model with three improvements, spatial normalization of the quantized vectors, MaskGIT for quicker autoregressive reconstruction, and multichannel feature quantization. They show better reconstruction performance with similar code size to other models and methods. This is obviously of interest to the NeurIPS community and the results are impressive. The implication that normalization the quantized code vectors adds substantial improvement is interesting, and definitely opens up interesting areas to follow up on. The use of MASKGit seems to me to be not a large contribution of this work, and while interesting, can be downplayed a bit in comparison to the normalization and use of multichannel quantization.\n\nThe weaknesses are that the paper is not always careful in its comparison to other methods (figure 3 doesn't show the code size or latent size in comparison between several different methods so it is fairly difficult to compare across methods). More careful use of common tools from compression literature (on rate-distortion) would help clarify some of the comparisons across methods. Other papers have found that generalized normalization functions can learn as powerful representations as much deeper traditional neural network architectures without normalization. 
Do the author's think this may be part of the power of the addition of normalization as they state it? Or is it something specific to normalization before the MASKGit? The authors address stock concerns about implications but I believe that larger implications are raised by the image of the child in Figure 4 (4th column). The child's face has changed significantly from the original, but unlike traditional encoding techniques, does not betray any indication to the downstream user that the image has lost information or is in any sense \"uncertain\". Traditional artifacts in simpler compression techniques may look bad to the eye but they at least faithfully convey to the user when information has been lost. I think not enough concern is paid in this paper and in this literature to technologies that produce confident and clear images that are not what was captured and encoded on the other side, and that may fool the end user otherwise. ", " The paper presents a new VQ-based image synthesis method. Based on MaskGIT, the paper proposes spatially conditional normalization and uses multichannel representation to improve the reconstructed image quality of the tokenization stage. The proposed spatially conditional normalization modulates the quantized vectors for a better reconstruction performance by inserting spatially variant information to the VQ decoder. The multichannel representation subdivides the encoded continuous latent along the channel dimension into multiple chunks and quantizes them with a shared codebook, which further improves the reconstruction performance by increasing the latent size. For the generation stage, the paper modifies MaskGIT to sample the multichannel latent. Experimental results on two benchmark datasets show the proposed image synthesis method is efficient and effective for generating diverse and high-quality images. Strengths:\n+ The paper is technically sound.\n+ The paper is well structured.\n+ The citations are extensive.\n\nWeaknesses:\n- The novelty is somewhat limited. The idea to improve the reconstruction ability of the tokenization stage is original and interesting. However, the proposed method heavily relies on existing techniques such as multichannel representation and MaskGIT.\n- The performance on image synthesis is not good enough. As shown in Table 3, the proposed Mo-VQGAN performs worse than MaskGIT for class-conditional image generation on ImageNet in terms of complexity and quality metrics. It weakens the contribution since the proposed Mo-VQGAN is based on MaskGIT.\n- The evaluation is somewhat limited. The main contribution of the paper is incorporate the spatially conditional normalization to modulate the quantized vectors. Figure 6(a) has shown that spatially conditional normalization can improve the reconstruction performance (rFID), especially for Fourier features. However, improving the reconstruction performance (rFID) does not necessarily improve the generation performance (FID), as shown in Figure 6(c). In my opinion, the experimental evaluation should report FID with or without the proposed spatially conditional normalization.\n 1.Section 4.3 says that MaskGIT and VIT-VQGAN performs better on ImageNet because they “use more GPUs for longer training”. I’m interested in the difference of training budget. Can the proposed Mo-VQGAN surpass MaskGIT given the same training budget?\n2.It would be better to conduct more ablation experiments on the generation performance. 
Which initialization of the spatially conditional normalization is better for the generation? Why is the spatially conditional normalization only used in the first three blocks of the VQ decoder?\n The paper discusses an interesting limitation of the proposed method. The model sometimes generates images with a high-frequency appearance without the structure information, which may be attributed to the generation of multichannel representation. I think the choice of mask scheduling function in MaskGIT may be not optimal for the multichannel representation. Maybe the multi-channels of the same location should be masked and generated together, and the sampling can rely on the probability product of the multi-channels.", " The paper discusses a method for generating, reconstructing images using quantized representations. The key difference wrt prior work is that they modulate the quantized representations, that is, they propose to use AdaIN-like modulation of the quantized features. They claim that without this, the results are often repetitive. A further smaller contribution is to use several channels of quantized features, ie when the image is encoded they split features into 4 blocks along the channel dimension and quantize them with the same dictionary. \n\nThey show better reconstruction results and comparable or worse results on image synthesis. The paper is well written and addresses a challenging, very competitive problem. The key contribution---modulating quantized vectors---is interesting although perhaps not sufficient on its own right. Improving stage-1 training is a very important problem, which potentially can improve many applications, including image synthesis, in- and out-painting, text-to-image, video synthesis. The paper, however, needs to be improved to be able to claim that their contribution lead to these improvements.\n\n**Motivation**: The paper is motivated by the claim that quantization results into repetitive structures. I wonder if the authors could support this statement somehow, since it's not easily observable in images if that's the case. Furthermore, if that's one of the motivations, it should be supported by numerical experiments, especially to show that the contributions improve the situation. It's not totally clear how do to this, but a method based on image autocorrelation might do the trick. The only support for this claim in the paper is fig 3. However, I cannot say it shows any repetitive content, besides, this image *can* be repetitive. Finally, even if there is repetitive content, is it due to the exactly the same tokens (a visualization of token indices will help here), or because the decoder collapses. Currently, the support behind the claim is not sufficient. Modulation can \"just\" improve expressivity of the the tokens, allowing the generator to have shorter tokens. \n\n**Methodology**: from a technical standpoint modulation of the tokens is a simple extension of the original quantized schema. The proposed multi-channel quantization is interesting too, but I cannot say that both of these contributions combined together bring the paper above the bar.\n\n**Significance**: Improving only the first stage makes the paper less interesting. Why did previous papers gain attention? I believe because they had good encoder-decoder frameworks, which offered rich latent spaces. These latent spaces can be used to solve a variety of generative tasks, such as reconstruction, generation, completion, outpainting, text-to-image and even text-to-video. 
If the current paper improves stage-1 it can perhaps also show better results at stage-2? In terms of stage-2 the paper reports image synthesis comparisons, in which the numbers are either on par or worse than the state-of-the-art. The authors say that their model is smaller, but the model size is not the main contribution of the work. I believe, there are many ways of making the model even smaller. If you make the model larger will it be better? In the supplement they show class-to-image generation results. According to table 3, there is no improvement over MaskGIT in terms of all of the scores. MaskGIT is even a smaller model. So in terms of significance of the proposed contributions it's hard to tell whether stage-1 improvements lead to improvements in downstream tasks. The numbers show the opposite. An intuition could be that the proposed framework reconstructs images better than others, because it provides larger, more expressive latent space, at the cost of poorer structure. Somehow the authors admit this in the limitations paragraph. It would be great if the paper could prove otherwise. Please see above It would be great to better understand what the proposed stage-1 can do and what it cannot do. Like can it do outpainting, superresolution and other downstream applications. Reconstruction per se is a less interesting application." ]
[ -1, -1, -1, -1, 7, 5, 4 ]
[ -1, -1, -1, -1, 4, 4, 5 ]
[ "nips_2022_Qb-AoSw4Jnm", "WlNdWjaJ6BG", "l35Xob3uDP6", "wCX7KXajPC1", "nips_2022_Qb-AoSw4Jnm", "nips_2022_Qb-AoSw4Jnm", "nips_2022_Qb-AoSw4Jnm" ]
nips_2022_A1yGs_SWiIi
TransTab: Learning Transferable Tabular Transformers Across Tables
Tabular data (or tables) are the most widely used data format in machine learning (ML). However, ML models often assume the table structure keeps fixed in training and testing. Before ML modeling, heavy data cleaning is required to merge disparate tables with different columns. This preprocessing often incurs significant data waste (e.g., removing unmatched columns and samples). How to learn ML models from multiple tables with partially overlapping columns? How to incrementally update ML models as more columns become available over time? Can we leverage model pretraining on multiple distinct tables? How to train an ML model which can predict on an unseen table? To answer all those questions, we propose to relax fixed table structures by introducing a Transferable Tabular Transformer (TransTab) for tables. The goal of TransTab is to convert each sample (a row in the table) to a generalizable embedding vector, and then apply stacked transformers for feature encoding. One methodology insight is combining column description and table cells as the raw input to a gated transformer model. The other insight is to introduce supervised and self-supervised pretraining to improve model performance. We compare TransTab with multiple baseline methods on diverse benchmark datasets and five oncology clinical trial datasets. Overall, TransTab ranks 1.00, 1.00, 1.78 out of 12 methods in supervised learning, incremental feature learning, and transfer learning scenarios, respectively; and the proposed pretraining leads to 2.3\% AUC lift on average over the supervised learning.
Accept
This work introduces and evaluates a general scheme to feature-ize tabular data, and methods for (self-supervised) pre-training over the same, with a focus on learning transferable representations. Reviewers were unanimous that the proposed method constitutes a flexible, practical approach that borrows and brings together existing SOTA techniques. Some questions about the specific settings concerned in the evaluation (and distinctions between them) were sufficiently addressed during the response period. Empirical results show consistent gains over the baselines on the tasks considered. An additional suggestion: one might naively anticipate that transfer learning for tables is not particularly promising given the very different semantics two arbitrary tables might have. However, the scenarios considered here involve settings in which transfer seems a priori reasonable; I might suggest the authors address this upfront, and explicitly outline the conditions under which transfer learning for tables is anticipated to work (and what assumptions are necessary for such cases), and where it is not.
train
[ "c_QwlRb8o_A", "r8pkZwHATj", "A-fU7Ly1dRj", "aLqS_gDTdo", "GnGa9cc7wyy", "xCtMBogBz4I", "0kjRg9OQ2Ey", "LnZ9YFu_mmi", "bXi2zHlJhWY", "e6ZzIaB8AkE" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Most of my queries/doubts/concerns are answered and I also increased the rating to 7.", " Thanks for addressing my questions and concerns and I am happy to raise my score to 7.", " We thank Reviewer THBY for the helpful feedback. Besides our general response above, please see our specific response below.\n\n**Line 109 claim**\n\n> Incorrect claim in line 109: E is not contextualized. \n\nIt is correct that $E$ is not contextualized in terms of feature level when applying interactions like attention. However, *TransTab* treats each feature in two parts: column name and cell value, e.g., value *20* under the column *age*. Here, when we concatenate *20* with *age*, the value is contextualized. Thanks for the suggestion and we rephrase this part in the new version.\n\n\n\n**Feature incremental learning setting**\n\n> Feature incremental learning setting is unclear.\n\nWe clarified the settings in the new version. Please refer to the response to **Reviewer teon**: **Feature incremental learning v.s. transfer learning**, **Line 214-215**, **Experiments regarding feature incremental learning**, and **Establishment of subsets**. We have updated the experiment descriptions in the new version.\n\n\n\n**Baselines in zeroshot learning**\n\n> No baseline results (e.g. VIME and SCARF) for zero-shot setting.\n\nAlthough VIME and SCARF are all self-supervised methods, they are unable to deal with zero-shot prediction because they are trained either with reconstruction or with the contrastive objective. After pretraining, these models need an additional classification head and further supervised fine-tuning. These methods do not handle the case when we only have limited labeled samples because pretraining and supervised learning are infeasible. Moreover, they need to be retrained every time when table structure varies.\n\n\n\n**Related papers in NLP**\n\n> This paper assumes tables are matrix-like and column types are given, which hiders its transferability. Many papers in the NLP community have explored to process tables under a more flexible setting:\n\nThank for providing the literature on table retrieval and table-to-text generation. We will add them to the related works. Nonetheless, these works are on quite different tasks from ours. And it is unclear if they contribute to superior tabular prediction performances. Specifically, both table-text generation and table retrieval methods encode the whole table to an embedding while *TransTab* encodes each row of tables. We agree it is interesting future work to extend *TransTab* to more settings, e.g., nested table.\n\n\n\n**Line 98 binary feature**\n\n> line 98: Why not assign an embedding vector when $x_b \\neq 1$? Do you compare them?\n\n We did consider the implementation that adds an indicator embedding for True or False in binary features. We compared it with the current method and found that did not achieve better results. Moreover, our current method significantly reduces computational cost for high-dimensional sparse tables because all False binary columns are not included in encoding.\n\n\n\n\n**Line 105 value limit**\n\n> line 105: There is no value limit for $x_u$. Do you encounter any problem without scaling $E_u$?\n\nIn practice, we scale $x_u$ by standardization or normalization such that $E_u$ is numerically stable. 
We added this detail in our new version on Line 105.\n\n\n\n**Line 214 typo**\n\n>line 214: (5) -> (2) in Fig1?\n\nThanks, we fixed this typo in the new version.\n\n\n\n**More baselines**\n\n> It would be meaningful to compare the methods focusing on numerical tables with those focusing on text tables.\n\nThis is an interesting future work, which probably needs additional methods to handle different data types and their semantic relations.\nWe could draw ideas from the literature on table generation, table retrieval, and table semantic parsing to tackle this new setting. But it is probably beyond the scope of this paper.\n", " We thank Reviewer teon for the helpful feedback. Besides our general response at the beginning, we provided additional details to address some specific comments.\n\n\n\n**Feature incremental learning v.s. transfer learning**\n\n> Setting for Feature incremental learning and Transfer learning seems very similar.\n\nIt is true that both applications are conceptually similar. However, they are still different:\n\n- For *feature incremental learning* when we have set1, set2, and set3 which built incrementally (set2 includes all columns in set1, set3 include all in set2), the object is to involve all three in the same round of training (w/ supervised loss) and enhance the prediction for set3. \n- For *transfer learning* when we have three equal-sized subsets (columns are assigned randomly into the three sets), the training process has two stages: pretraining + finetuning. In the first stage, the model is trained on set1+set2 using contrastive pretraining, namely VPCL in our paper (supervised or self-supervised); in the second, the model is trained on set3 only (supervised).\n\nWe discriminate between these two settings because they apply to different scenarios. Transfer learning based on VPCL is for learning from a wide range of data to build a foundation model good for adapting to downstream tasks; Feature incremental learning is for making the best of all data from the same domain.\n\n\n\n**Line 214-215**\n\n> line 214-215 is confusing (incomplete)\n\nWe rephrased and extended that part on the new version on Line 213-216. We split the raw dataset into three subsets: set1, 2, and 3. Baseline methods apply to two scenarios: (1) learning from all data that only have features of set1 and (2) learning from data from set3 only. We report the best of the two. *Transtab* applies to learning from all three subsets.\n\n\n\n**Experiments regarding feature incremental learning**\n\n> In Feature incremental learning, no comparisons on how the performance on set1 after training on set1+set2; set1, set2, set1+set2 after training on set1+set2+set3.\n\nIt is feasible to add those additional experiments, but the primary object of feature incremental learning is to enhance the performance on set3. We do not train *TransTab* stepwise on set1, 2, and then 3. Instead, all three sets are used in one training round simultaneously. The reviewer's proposal fits the target of transfer learning: improving performance on each dataset by learning across datasets. We refer the reviewer to Table 4 in the paper which illustrates the results of transfer learning and Table 6 that shows the average improvement led by contrastive pretraining on all datasets.\n\n\n\n**Establishment of subsets**\n\n> How the partitions of set 1,2,3 are created in transfer learning, zero-shot inference, and feature incremental learning settings? Are these sets created by randomly partitioning for every seed? 
Or the partition is fixed for all seeds? Are subsets in feature incremental learning and zero-shot inference settings the same?\n\n\nFor experiments in Sec. 3.2, 3.3, 3.4, we create subsets randomly with a fixed seed, respectively. That is, the subsets vary across these sections.\n\n- Feature incremental learning. The columns are split into three distinct parts ${v_1,v_2,v_3}$. Set1 contains $v_1$, set2 contains $v_1,v_2$, and set3 has $v_1,v_2,v_3$. Three sets have an equal number of samples.\n- Transfer learning. The columns are split into two parts $v_1,v_2$ where $v_1$ and $v_2$ have 50% of elements overlapped. Two sets have an equal number of samples.\n- Zeroshot learning. The columns are split into three distinct parts ${v_1,v_2,v_3}$. Set1 contains $v_1$, set2 contains $v_2$, set3 contains $v_3$. Three sets have an equal number of samples.\n\nWe add this explanation to appendix D of the new version.", " **Ablation on self-supervised pretraining**\n\n> How does the self-supervised pretraining affect the performance? I would like to see an ablation study where you only training the model with the direct supervision signals.\n\nIn Section 3.1, our method uses the direct supervision signals without a pretraining step, which might be the requested ablation study. We further discussed the benefit of pretraining in Section 3.5. In particular, the proposed VPCL pretraining generally leads to better performance on clinical trial mortality prediction datasets. We clarified these settings in the experiment section, on Line 203.\n\n\n\n**Clarity**\n\n> I think section 2.4 could be better explained: what's the definition of vik in line 133? How do you compute ψ in equation 4?\n\nAs defined in Line 133, $v$ are vertical partitions of the table, e.g., $v_1$ are rows under column 1, $v_2$ are rows under column2, and so on. In Line 137, $\\psi$ is the cosine similarity of two vectors. We improved Sec 2.4 to address these writing issues in the new version.\n", " We thank Reviewer MTvW for the helpful feedback to our work. We addressed most of Reviewer MTvW's comments in our general response above. And we will provide additional responses to specific comments below.\n\n\n\n**Justification of phrases**\n\n> In the paper \"existing works only cover vanilla supervised learning and fixed-table pretraining due to the fixed-column assumption.\" In my point of view, this is overclaiming. Not all existing works only cover vanilla supervised learning.\n\nWe agree that our original sentence may cause misunderstanding. We intend to illustrate that due to the fixed-column assumption, most existing works only handle supervised learning or pretraining on the same-structure tables. We rephrased that in the new version.\n\n\n\n**Zeroshot prediction** \n\n> The zero-shot performance in Table 5 seems surprising to me. How do you split the table into three distinct sets? Do you do random split and how many random seeds have you tried? ... Moreover, can you try a setting that you manually control the number of categorical, binary, and numerical feature in both training and testing and see how does the model generalize?\n\nIn the zero-shot learning experiments, we split the table columns into three equal-sized subsets where columns are 50\\% mutually overlapped (e.g., set1 and set2 have 50\\% same columns, set2 and set3 have 50\\% same columns). Likewise, each subset has 1/3 number of samples. We follow the setting specified in Table 2 caption. All the results are averaged over 10 runs with different random seeds. 
During this process, the subset columns are fixed but train/test splits are changed. We add more experiments to dive deep into the zero-shot prediction capability of our method. To be specific, we split the dataset into two subsets with no sample overlaps and test zero-shot performance with column overlap ratio varying from 0 (non-overlap) to 1. \nThe corresponding figure is added in the new version (Fig. 6 in Appendix). Nonetheless, we agree with ZSL that the problem is very challenging when the training tables and testing tables are highly mismatched. \n\n\n| AUC\\overlap ratio | 0 | 0.2 | 0.5 | 0.8 | 1.0 |\n| ----------------- | ------ | ------ | ------ | ------ | ------ |\n| credit-g | 0.5584 | 0.5740 | 0.6241 | 0.6441 | 0.7612 |\n| credit-a | 0.8118 | 0.8128 | 0.8583 | 0.8621 | 0.8697 |\n| dress-s | 0.5640 | 0.5663 | 0.5847 | 0.5740 | 0.7011 |\n| cylinder-b | 0.5279 | 0.5461 | 0.6657 | 0.6509 | 0.6550 |\n\n\n\n**Qualitative analysis of the transferability**\n\n> In addition to the quantitative results, I would also like to see some qualitative analysis of the transferability of the model. \n\nWe added some case studies and discussed why transfer learning and zero-shot prediction across them are feasible. Two clinical trial datasets are illustrated below. We observe several shared columns across the two datasets. Moreover, there are columns named differently but sharing similar meanings, e.g., \"adverse effect: infection\" and \"adverse effect: infection without neutropenia(specify)\". *TransTab* can be effective in both scenarios. \n\n| | adverse effect: nausea | adverse effect: vomiting | adverse effect: asthenia | adverse effect: infection |\n| ---- | ---------------------- | ------------------------ | ------------------------ | ------------------------- |\n| 0 | 0 | 0 | 0 | 0 |\n| 1 | 0 | 0 | 0 | 0 |\n| 2 | 0 | 0 | 0 | 0 |\n\n| | adverse effect: febrile neutropenia | adverse effect: infection (documented clinically) | adverse effect: infection without neutropenia(specify) |\n| ---- | ----------------------------------: | ------------------------------------------------: | -----------------------------------------------------: |\n| 0 | 0 | 0 | 0 |\n| 1 | 0 | 0 | 0 |\n| 2 | 0 | 0 | 0 |\n\n\n", " We thank the reviewers for their thoughtful and constructive reviews. Most comments center around the settings and experiments of **feature incremental learning** and **zero-shot learning**. In response, we have updated our manuscript to include a detailed explanation of the experiment settings. We also ran additional experiments to test the sensitivity of zero-shot learning w.r.t. the overlapping ratio of table columns.\n\nHere is the change summary:\n\n- Sec. 2.2, L105: added a footnote to explain how to avoid the numerical issue when encoding numerical features.\n- Sec 2.2, L109: rephrased the concept w.r.t. the cell embedding contextualization to avoid misunderstandings.\n- Sec 2.4, L133: further explained $v_i^k$ in $x_i$.\n- Sec 2.4, L137: further explained the similarity function $\\psi$.\n- Sec 3.1, L203: further explained the supervised learning setting.\n- Sec 3.2, L213-L216: rephrased the experiment settings of feature incremental learning.\n- Sec 3.4, L242-L244: added an experiment to check the sensitivity of zero-shot learning w.r.t. 
the overlapping ratio of columns of two tables.\n- Sec 4, L287: added related works suggested by Reviewer THBY and discussed the difference to our method.\n- Appendix D, L592: added an introduction on how to build the subsets for three experiments (Sec 3.2, 3.3, 3.4).\n- Appendix, Fig.6: added figures for the sensitivity evaluation of zero-shot prediction.", " This paper proposed to relax fixed table structures by introducing a Transferable Tabular Transformer (TransTab) for tables. They basically convert each row into a feature vector and then apply stacked transformers for feature encoding. There are several advantages of this encoding: (1) it can deal with the tables that have different number of columns; (2) it is easier to transfer the knowledge learned from different columns. They conduct experiments on one clinical dataset and several public datasets under four different settings: supervised learning, feature incremental learning, transfer learning, and zero-shot learning. The empirical results show that the proposed approach outperform the baselines in the literature. They also showed that in the zero-shot learning scenario, they can almost match the performance of pretraining plus fine-tuning. - Originality: The main idea in this paper is a good combination of several ideas proposed in the literature. With modifications and adaptations, it worked and yield promising results on several datasets.\n\n- Quality: The proposed approach is technically sound and the experimental results showed that it outperformed several strong baselines for tabular data prediction. Although the results are impressive, I have several comments:\n - One of the main advantages advertised in the paper is that the proposed method could easily extend to feature incremental learning, pretraining+finetuning, and zero-shot inference. In the paper \"existing works only cover vanilla supervised learning and fixed-table pretraining due to the fixed-column assumption.\" In my point of view, this is overclaiming. Not all existing works only cover vanilla supervised learning. For example, those transformer-based architectures like TabTrans, FT-Trans, can be easily adapted to those settings. \n - The zero-shot performance in Table 5 seems surprising to me. How do you split the table into three distinct sets? Do you do random split and how many random seeds have you tried? I would imagine a split that during the training, the model mostly sees Categorical and Binary features while during test it mainly sees Numerical features. In this way, I don't think the model is able to do zero-shot transfer. Moreover, can you try a setting that you manually control the number of categorical, binary, and numerical feature in both training and testing and see how does the model generalize? \n - In addition to the quantitative results, I would also like to see some qualitative analysis of the transferability of the model. What does the data look like and why the model is able to do the transfer?\n - How does the self-supervised pretraining affect the performance? I would like to see an ablation study where you only training the model with the direct supervision signals. This could help understand how much of the improvement is from the architecture design and how much improvement is from the self-supervised pretraining.\n\n- Clarity: In general, this paper is well-organized and easy to follow. I think section 2.4 could be better explained: what's the definition of $v_i^k$ in line 133? 
How do you compute $\\psi$ in equation 4?\n\n- Significance: This paper achieved strong results across a range of different datasets. Although the experiments are not comprehensive enough for the readers to understand every aspect of the system, I think it still sets a strong baseline and a good reference for the future work in this direction. see strengths and weaknesses for details This paper has sufficiently addressed the limitations.", " This paper presents a tabular learning framework that covers transfer learning across tables, zero-shot inference, feature incremental learning, pre-training, and finetuning. This approach does not assume that columns in the table are fixed and work even with variable column tables. The authors propose two Contrastive Learning-based pre-training approaches by vertically partitioning the tables. This pre-training approach is feasible since the columns can vary across tables, making self-supervised and supervised pre-training possible. The transformer model proposed performs significantly better in all the claimed settings (transfer learning across tables, zero-shot inference, feature incremental learning, pre-training, and fine-tuning). In addition, the authors also introduce *clinical trial mortality prediction* tabular dataset. **Pros**\n* The proposed contrastive learning methods are computationally cheaper.\n* This variable column approach is really useful when the tables have too many columns and encoding them will be difficult in current existing transformers for tabular data (e.g. TaBERT).\n\n**Cons**\n* Setting for *Feature incremental learning* and *Transfer learning* seems very similar. (Dividing the dataset into three sets containing an equal number of columns and first training on set 1&2 then train on set 3 vs the transfer learning setting in the paper)\n* line 214-215 is confusing (incomplete)\n* In Feature incremental learning, no comparisons on how the performance on set1 after training on set1+set2; set1, set2, set1+set2 after training on set1+set2+set3. Will the performance on previous sets decrease? If yes, How to mitigate that? 1. How the partitions of set 1,2,3 are created in transfer learning, zero-shot inference, and feature incremental learning settings?\n2. Are these sets created by randomly partitioning for every seed? Or the partition is fixed for all seeds?\n3. Are subsets in feature incremental learning and zero-shot inference settings the same? NA", " This paper focuses on the transferability of tabular data classification methods. It proposes three novel settings to evaluate the model transferability in terms of columns: column overlapping, column increment, and zero-shot. It also proposed a novel method combining self-supervised and supervised pre-training. ### Strength\n* Three novel settings to evaluate the model transferability on tabular data classification. Transferability is an important research topic.\n* A novel method based on (self-)supervised pre-training for tabular data classification which is more accurate and transferable.\n\n### Weakness\n* Incorrect claim in line 109: $E$ is not contextualized. To get contextualized embedding, the input embeddings should interact with each other, but not simply concatenization.\n* Feature incremental learning setting is unclear.\n* No baseline results (e.g. VIME and SCARF) for zero-shot setting.\n* This paper assumes tables are matrix-like and column types are given, which hiders its transferability. 
Many papers in the NLP community have explored to process tables under a more flexible setting:\n * Wang et al. [Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning](https://arxiv.org/pdf/2205.03972). NAACL 2022\n * Yang et al. [TableFormer: Robust Transformer Modeling for Table-Text Encoding](https://arxiv.org/pdf/2203.00274.pdf). ACL 2022\n * Wang et al. [Retrieving complex tables with multi-granular graph representation learning](https://arxiv.org/pdf/2105.01736). SIGIR 2021 * line 98: Why not assign an embedding vector when $x_b \\neq 1$? Do you compare them?\n* line 105: There is no value limit for $x_u$. Do you encounter any problem without scaling $E_{u}$?\n* line 214: (5) -> (2) in Fig1? * It would be meaningful to compare the methods focusing on numerical tables with those focusing on text tables." ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "aLqS_gDTdo", "GnGa9cc7wyy", "e6ZzIaB8AkE", "bXi2zHlJhWY", "xCtMBogBz4I", "LnZ9YFu_mmi", "nips_2022_A1yGs_SWiIi", "nips_2022_A1yGs_SWiIi", "nips_2022_A1yGs_SWiIi", "nips_2022_A1yGs_SWiIi" ]
nips_2022_1tnVNogPUz9
Towards Efficient 3D Object Detection with Knowledge Distillation
Despite substantial progress in 3D object detection, advanced 3D detectors often suffer from heavy computation overheads. To this end, we explore the potential of knowledge distillation (KD) for developing efficient 3D object detectors, focusing on popular pillar- and voxel-based detectors. In the absence of well-developed teacher-student pairs, we first study how to obtain student models with good trade-offs between accuracy and efficiency from the perspectives of model compression and input resolution reduction. Then, we build a benchmark to assess existing KD methods developed in the 2D domain for 3D object detection upon six well-constructed teacher-student pairs. Further, we propose an improved KD pipeline incorporating an enhanced logit KD method that performs KD on only a few pivotal positions determined by teacher classification response and a teacher-guided student model initialization to facilitate transferring the teacher model's feature extraction ability to students through weight inheritance. Finally, we conduct extensive experiments on the Waymo dataset. Our best-performing model achieves $65.75\%$ LEVEL 2 mAPH, surpassing its teacher model and requiring only $44\%$ of teacher flops. Our most efficient model runs 51 FPS on an NVIDIA A100, which is $2.2\times$ faster than PointPillar with even higher accuracy. Code will be available.
Accept
In this paper, the authors propose a new method for knowledge distillation for 3D object detection in point cloud data. This problem is quite important for self-driving cars and 3D computer vision. The goal of their work is to compress models to achieve reasonable trade-offs in compute performance versus accuracy. The authors explore these questions using two popular forms of 3D detection: pillar-based and voxel-based architectures (a), and focus on extensive experimentation with the Waymo Open Dataset. The authors first examined how to build student-teacher models with good trade-offs between accuracy and computational demand, introducing a new metric, the Cost Performance Ratio (CPR). The authors then systematically explore a series of knowledge distillation methods (e.g. logit KD, label KD, and teacher-guided initialization) to identify their best model. The end result of their search is to identify a student model that is able to outperform the teacher model but with ~2x less FLOPS. The reviewers commented positively on the strength of the experiments and baselines, as well as on the selection of student models and the CPR metric. The main issues surfaced by the reviewers focused on the generalizability of the results outside of 3D object detection, including whether the methods or results may be ported to other 3D detection architectures or other localization tasks. The authors responded with some discussion and new early experiments highlighting that the work may be ported to problems in semantic segmentation. From my perspective, the paper comes across as technically sound with strong experiments and a solid overall result. I am concerned about the generality of these results. As this work largely focuses on 3D object detection with specific detection architectures, I could imagine that this work would be better suited for conferences geared towards the topics of 3D object detection and self-driving cars (e.g. CVPR and associated workshops). In that spirit, I would consider this paper borderline. However, because this work shows promise for other architectures and problems, I view this work as potentially having more generality. It would be incumbent on the authors to revise their manuscript accordingly to include additional discussion on this generality as well as showcase encouraging results on other domains. This paper will be conditionally accepted assuming all of these changes are made to this manuscript. (a) Note that this work seems to exclude the recently popular range-based methods [1] and it would be important for the authors to add discussion to their paper accordingly. E.g. [1] LaserNet: An Efficient Probabilistic 3D Object Detector for Autonomous Driving. Gregory P. Meyer, Ankit Laddha, Eric Kee, Carlos Vallespi-Gonzalez, Carl K. Wellington (2019)
train
[ "tHeuIjZSN7N", "TQWDiCKieY", "bLehisoipfG", "CNNoeZ_Mj-e", "QR4SHHlVVd", "OxCqXnR6moZ", "8J2JZeWqyF", "nrJhpDI99ES", "8QEArRFtRP5M", "ckgnNe9eeau", "42XGlIEtU9P", "TwL3kRC6XhO", "G8s6lR2jpYh", "IkhsI_iIMZq", "NpvwiowPx6C", "yGFXqg8h8RQ", "WPtMIpj8Xqx" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your constructive comments and suggestions. If you have other questions and concerns, please let us know and we are happy to further discuss. Thank you again for your time.", " Thank you for your constructive comments and suggestions. If you have other questions and concerns, please let us know and we are happy to further discuss. Thank you again for your time.", " Thanks for your great effort in helping us to strengthen the paper. We are so humbled to receive such recognition. We will punctiliously revise our paper to include the above discussions and experiments in the final version. ", " Thank you for your thoughtful responses. I am happy with the paper, and look forward to the final version.\n", " We sincerely appreciate your effort in helping us to strengthen the paper and your support for our work! We will punctiliously revise the paper based on the above discussion and experimental results in the next revision. They can definitely make our paper more solid!", " I thank authors for providing insightful comments to my questions. These additional experiments are very promising especially the dense prediction task. My concerns are addressed and I recommend acceptance of this paper. I encourage authors to include these discussions/results in the paper and add the sparse distillation results in the next revision. I believe they will further make this paper stronger. ", " Thank you for your reviews. We provide responses to specific questions as below. \n\n\n**Q1: The ideas in the paper are incremental. The high-level idea of pivotal position KD is also not novel.**\n\n(1) Thanks for your comment. Note that our paper focuses more on *exploring the potential of knowledge distillation for efficient 3D detectors*, which aims to provide a general model-agnostic solution to obtain efficient while well-performed 3D detectors and encourage future research to obtain more efficient 3D detectors by improving the compression strategies or KD manners. In addition, our conclusion drawn from both extensive detector compression investigations and KD benchmark results can benefit future research in the community. In fact, our technical contribution -- improved KD pipeline only takes a small part of this paper, and we hope the reviewer can also pay attention to our other contributions.\n\n(2) *Clarification about the novelty of PP logit KD*: As this question is also asked more concretely in Q6, please refer to our answer to Q6.\n\n\n**Q2: The Generality of the approach.**\n\n(1) *Generality of the selection of student model*: Note that the training and selection of students are agnostic with KD methods. We develop CPR to choose students that have a good trade-off between performance and efficiency. When selecting student models, we consider their CPRs and try to cover a wide range of model capabilities for validating the generality and scaling ability of KD methods. Please note that experimental results presented in the S3.2 of supplementary material demonstrate that CPR correlates well with student models' accuracy and efficiency. Besides, our experiments in the semantic segmentation task further demonstrate the generality of CPR as a comprehensive criterion to assess student models (see the response to Q2 of Reviewer hUXR).\n\n(2) *Why CenterPoint-based model and Waymo dataset*: We focus on the CenterPoint-based method, as its different variants rank 2nd on Waymo [E] and 1st in nuScenes [F], respectively, which demonstrates that it is a top-performing model at the time of submission. 
Besides, our experiments are constructed on WOD as it is the largest annotated public 3D detection dataset, around $5 \\sim 15 \\times$ larger than other 3D detection datasets such as nuScenes, Lyft, Argoverse and KITTI. We believe that KD methods verified on top-performed detectors and large-scale dataset should be more general and beneficial for both future research and industrial applications. \n\n(3) *Generality on another detector and dataset*: To further verify the generalization of our KD methods on both detector-level and dataset-level, we provide the model compression and KD results for KITTI dataset based on the anchor-based detector: SECOND [46]. As shown in the following tables, both our compression conclusion and KD methods can generalize to the new detector and dataset, where SECOND (d) surpasses teacher performance by around 0.5\\% with $3.5\\times$ fewer flops. We will add these results to the revised paper.\n\n| Detector | Width-PBE | Width-BEF | Params (M) | Flops (G) | Acts (M) | Latency (ms) | mAP@R40 | CPR |\n| :-: |:-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | \n| SECOND | 1.00 | 1.00 | 5.3 | 80.5 | 69.3 | 77.4 | 67.24 | - |\n| SECOND (a) | 1.00 | **0.50** | 2.0 | 26.0 | 40.0 | 54.3 | 66.64 | 0.70 | \n| SECOND (b) | **0.50** | 1.00 | 4.6 | 72.4 | 65.2 | 70.6 | 65.70 | 0.50 | \n| SECOND (c) | **0.50** | **0.50** | 1.4 | 20.5 | 35.9 | 46.1 | 64.21 | 0.68 | \n| SECOND (d) | **0.75** | **0.50** | 1.6 | 23.0 | 38.0 | 51.8 | 65.62 | 0.69 | \n\n\n| Detector | No Distill | KD | GID-L | FitNet | Mimic | FG |GID-F | Label KD | Ours | Flops(G) | Acts (M) |\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\n| SECOND | 67.24 | - | - | - | - | - | - | - | - | 80.5 | 69.3 |\n| SECOND (d) | 65.62 | 66.06 | 66.34 | 66.00 | 66.37 | 66.58 | 66.75 | 67.03 | 67.70 | 23.0 | 38.0 |\n\n(4) *Generality on new semantic segmentation task*: We also verified the effectiveness of our proposed KD methods on the 3D semantic segmentation task, which strongly demonstrates the generalization ability of our proposed method (see the answer to Q2 of Reviewer hUXR). \n\n[E] https://waymo.com/open/challenges/2021/real-time-3d-prediction/\n\n[F] Scaling up Kernels in 3D CNNs.", " \n**Q3: Why not simply use measured runtime instead of activations for CPR?**\n\nThank you for the comments. Actually, we have discussed the reason for using activations rather than runtime/latency in both the evaluation metrics part of the main paper (see line 129-131) and supplementary material (see Section S3.1 in the supplemental material). \nThe main reason lies in the fact that the runtime of a detector largely depends on the hardware devices and operation-level optimizations. Besides, the runtime is even not stable on the same machine with different machine statuses such as temperature. Experimental results and discussions can be found in Section S3.1 of the supplemental material. Since the used hardware devices and operation optimization largely vary between different research groups, we use a machine-independent metric -- activations to calculate CPR to benefit more future research.\n\n**Q4: For Section 3: it’s not clear to me how these models are trained — what KD approach was used, what dataset etc?**\n\nThank you for your comment. We clarify it as below:\n\n(1) In Section 3, our objective is to investigate how to design an efficient 3D detector, where we simply train the designed detectors without any knowledge distillation methods as the training schema in OpenPCDet [41] (see line 124-125). 
We will further add a clarification: this part is agnostic to KD methods in the revised paper.\n\n(2) For the dataset, we train those models on Waymo Open Dataset with 20\\% training samples, which is also the default training schema of OpenPCDet [41] on WOD. Related clarifications can be found in: line 78, line 122-123, line 131 as well as the table header of Table 1 and Table 2 as LEVEL 2 mAPH is the specific metric of WOD.\n\n\n**Q5: Table 3: explain that the ''Ours'' column is not re-explained until a later section.**\n\nThank you for pointing out such a confusing expression. We will add an explanation of ''Ours'' in the caption of Table 3 to make it easier to understand in the revision.", " \n**Q6: For PP Logit KD: (1) It is referring to the idea: instance-aware local region imitation. (2) How can it include some background contribution in the loss function? (3) Focal loss is often used in 2d detection. Can focal loss work on KD for 3D detection?**\n\n(1) *Comparison with instance-aware local region imitation*: Actually, we have claimed that PP logit KD is motivated by the imbalance of foreground and background imbalance issue and previous designs in the 2D area to alleviate this problem (see line 246-250 and 271-272). And we have compared the difference between PP logit KD and previous instance-wise imitation methods in line 51-65 of the supplemental material.\n\nAs a 3D detector needs to make predictions on a large $150m \\times 150m \\times 6m$ 3D space, the foreground and background imbalance issue is more extreme than 2D detection with just a single image. Besides, a single position on the BEV pillar image can cover a large region of $0.8m \\times 0.8m \\times 6m$. Thus, this extreme sparsity and informative BEV detection paradigm pose a new challenge to knowledge distillation for 3D object detection. To this end, our teacher-guided pivotal position selection offers sparser selected areas, focusing on more informative areas (see visualization comparison between PP KD and instance-aware KD: https://drive.google.com/file/d/1B6wMRke_Ivy7broikGvXKPCHy7sXEJhR/view?usp=sharing). This design is tailored to 3D object detection.\nOur experimental results also demonstrate that the proposed PP logit KD is more powerful with only $\\frac{1}{5} \\sim \\frac{1}{20}$ attention regions compared to 2D instance-aware local region imitation methods (see Table 3 and Table 7 in the main paper as well as Table S1 in the supplemental material).\n\nBesides, how to select better local regions for imitation is a long-standing research problem in KD for 2D detection areas such as [22, 42, 6, G] from 2017 to present. In this regard, leveraging local region imitation to solve foreground/background imitation is a common motivation and a research direction which is non-trivial and very important to advancing this area.\n\n\n(2) *Background areas*: As our confidence and rank PP logit KD relies on teacher prediction to select pivotal positions, if the background points are predicted with high or top-ranked confidence, our PP logit KD will also apply distillation loss on those background positions. As for the loss function, PP logit KD just set the corresponding positions of $m_{\\text{cls}}$ in Eq. 
(2) to one, which then makes distillation loss active in those positions.\n\n(3) *Focal loss*: As we don't know exactly what the focal loss you are referring to, we discuss both the two scenarios of applying focal loss: ($i$)focal loss on GT supervision loss; ($ii$) focal loss on distillation objective.\n\nScenario ($i$): Focal loss is a defacto selection in 3D object detection to solve foreground/background imbalance and is already equipped in the supervised training objective in all our trained models. \n\nScenario ($ii$): As far as we know, focal loss is not widely employed as a distillation loss for 2D object detection as shown in Mimicking [22], FG [42], FGD [G], etc. Still, we implement a focal distillation loss similar to the supervised loss. The experimental results are shown in the following table. Our PP logit KD is around 0.7\\% higher than focal loss on CP-Voxel-XS. As for CP-Pillar-v0.64, since the capability difference between teacher and student are large, focal loss even suffers performance degradation compared to vanilla KD, while our PP logit KD consistently brings performance boost.\n\n| Detector | Role | No distill | KD | Focal loss | PP Logit KD |\n| :-: | :-: | :-: | :-: | :-: | :-: |\n| CP-Voxel | Teacher | 64.29 | - | - | - |\n| CP-Voxel-XS | Student | 62.23 | 62.81 | 63.48 | 64.16 |\n| CP-Pillar | Teacher | 59.09 | - | - | - |\n| CP-Pillar-v0.64 | Student | 52.81 | 50.78 | 46.11 | 54.32 |\n \nThe reason for the inferior performance of focal loss for distillation is that it will emphasize regions that are most different among teacher and student but not necessarily be information-rich areas. Instead, such prediction difference might be caused by the capability gap between teacher and student. Hence, emphasis on such regions might be a suboptimal strategy and even penalize student learning. We will include these results in the supplemental materials in the revision.\n\n\n[G] Focal and Global Knowledge Distillation for Detectors.\n", " Thank you for your thoughtful reviews. We are grateful for your appreciation and interesting questions. We provide responses to specific questions as below. \n\n**Q1: Whether this proposed sparse distillation still be applicable to sparse Transformer-based models?**\n\nThank you for your thoughtful question, which is actually an open question and an under-studied problem. \nFrom our perspective, sparse distillation is still applicable for sparse transformer-based detectors such as DETR [A], Deformable DETR [B], Object DGCNN [C], etc. \n\n(1) As for sparse transformer-based detectors that directly provide instance predictions, their predictions actually rely on learning to some sparser reference points and corresponding position features. For example, each object query in Deformable DETR or Object DGCNN is decoded into a reference point and $K$ neighboring points in order to focus only on those most informative positions. \n\n(2) Although sparse detectors can directly provide sparse instance predictions, we argue that our sparse distillation (i.e. pivotal position KD) focuses on sparser and more fine-grained position-level information (see visualization comparison between PP KD and instance-level KD: https://drive.google.com/file/d/1B6wMRke_Ivy7broikGvXKPCHy7sXEJhR/view?usp=sharing). 
In this regard, it should still be applicable to sparse models with some specific modifications.\n\n(3) Here, we take Object DGCNN [C] as an example and provide two possible sparse distillation designs:\n\n(3.1) As the transformer encoder and decoder of Object DGCNN are similar to Deformable DETR [B], it can be simply extended to a two-stage variant as Deformable DETR. In the two-stage variant, the transformer encoder will regard each pixel as an object query and construct a dense prediction on it. Top scoring positions are picked as reference points. This is similar to our designed rank PP KD which enforces the student to imitate the prediction of teacher top-rank positions. Therefore, we can directly apply our sparse rank PP KD to those dense scoring predictions between teacher and student. Besides, we will also carry on feature imitation on those teacher top-ranked positions between teacher and student. \n\n\n(3.2) As for the one-stage variant of sparse detectors, learnable object queries will be decoded into reference points and neighboring points, so the sparse distillation can be constructed on those points and their corresponding BEV features. Specifically, we can first match the positive object queries of teacher and student as query pairs by checking whether they are matched to the same GT box. Then, we can enforce the decoded reference and neighboring points of the student to mimic their paired teacher counterparts. Besides, we will construct imitation on BEV features of those reference and neighboring positions between teacher and student.\n\n\n(4) We are trying to construct experimental verifications about our above designs based on Object DGCNN. However, since we need to change to new dataset (i.e. nuScenes), new codebase (i.e. MMDetection3D) and new detection paradigm (transformer-based) with limited time and resources, we have not obtained results now. We will keep attempts and update them in the comments or revised version once we get results. \n\n\n[A] End-to-end object detection with transformers.\n\n[B] Deformable detr: Deformable transformers for end-to-end object detection.\n\n[C] Object dgcnn: 3d object detection using dynamic graphs.", " **Q2: Whether this method is useful for dense tasks like semantic segmentation or flow prediction?**\n\n(1) Thanks for your constructive and interesting question. We agree that dense prediction tasks such as semantic segmentation requires fine-grained supervisions and might hinder the effectiveness of our sparse distillation strategy (i.e. pivotal position KD). However, we argue that since the student model already has dense GTs as supervision in training, dense distillation loss on massive uninformative points and regions, such as road points, might be redundant and can overwhelm the overall distillation loss. Instead, our sparse distillation might help the student focus on more important areas by using teacher prediction as regularization. \n\n(2) Here, we follow the design principle of PP logit KD and adapt it to handle the dense semantic segmentation task. We apply distillation loss on points with predictions that are correct but less confident than the teacher. Our simple design is motivated by three intuitions: ($i$) Points that are correctly predicted with lower confidence are often some challenging cases that the model is struggling but also has the capability to handle. 
By harvesting knowledge from a high-performing teacher model, the student can learn to match the confidence level of the teacher which provides more information than the one-hot GT. ($ii$) Points that are correctly predicted with higher confidence are often easy samples that have very close prediction confidence to the teacher model. Considering that these samples are already handled well by the model, they have low chance to benefit from distillation but might cause redundancies. ($iii$) Points that are incorrectly predicted by the student are often cases that might be out of the ability of student models. \n\nNote that we are only able to verify this simple design following the intuition of our sparse distillation strategy in this rebuttal period. Dedicated designs might further strengthen the results.\nSpecifically, we have the confidence of student predictions $\\text{conf}^s$, the confidence of teacher predictions $\\text{conf}^t$ and a pre-defined threshold $\\tau$. We will only apply distillation loss for student predictions that are correct and have $\\text{conf}^s + \\tau < \\text{conf}^t$. \n\n(3) We also provide experimental results for our design on the 3D semantic segmentation dataset ScanNet. Here, we use a small version of MinkowskiNet [D] for fast verification. As shown in the following table, first, we try both model width and input resolution compression to obtain student models, and select MinkowskiNet14-v0.04 as the student model for KD due to its higher CPR. Then, we validate logit KD, our PP logit KD and our TGI on it. Both our proposed PP logit KD and TGI obtain improvements. In particular, our sparse PP logit KD surpasses the dense logit KD method with around 0.8\\% gains. \nOur statistics also show that our PP logit KD only leverages 19.03% points for distillation at the first epoch and 3.66% points for distillation at the last epoch. These experiments and statistics demonstrate that sparse distillation can also work on the dense prediction task. We will include the above results in the supplemental material.\n\n\n| Model | Width | Voxel Size (m) | Params (M)| Flops(T) | Acts (M) | mIoU | CPR|\n|:-:|:-:|:-:|:-:|:-:|:-:| :-:| :-:|\n|MinkowskiNet14 (teacher)| 1.0 | 0.02 | 1.7|46.2|27.9| 65.77 | -|\n|MinkowskiNet14-w0.5| **0.5** | 0.02 | 0.5 | 18.2 | 17.4| 61.84 |0.60|\n|MinkowskiNet14-v0.04| 1.0 | **0.04** | 1.7 | 5.7 | 8.9 | 62.82 |0.78|\n\n| Model | Role | No distill | Logit KD | PP Logit KD | TGI | Flops (T) | Acts (M) |\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:| :-:|\n|MinkowskiNet14| Teacher | 65.77 |-|-|-| 46.2 | 27.9 | \n|MinkowskiNet14-v0.04| Student | 62.82 | 63.65 | 64.40 | 64.22 | 5.7 | 8.9 |\n\n\n[D] 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks\n\n**Q3: What do authors think of sparse/dense distillation? Can we still do similar distillation on sparse models?**\n\nThank you for your comments. \n\n(1) *Sparse/dense distillation*: In our opinion, as GT labels can provide necessary supervision for training the neural network, dense distillation could waste attention on some uninformative positions that could even overwhelm the distillation loss on pivotal positions. 
On the contrary, sparse distillation can only focus on informative regions/positions, regularizing the training objective of student networks to those informative and improvable regions/positions with teacher guidance.\nOur experimental results on 3D detection and 3D semantic segmentation tasks also support that sparse distillation can be a stronger strategy than dense distillation.\n\n(2) *Similar distillation on sparse models*: As this question is similar to Q1, please refer to our answer to Q1. \n\n\n**Q4: Can authors provide code to facilitate reproducibility?**\n\nAs the claim in the abstract, we promise that we will make our code publicly available once accepted.", " Thanks for your constructive reviews and questions. We are encouraged by your appreciation to our contribution. We provide responses to specific questions as below.\n\n\n**Q1: Discussions on more designs of lightweight student networks are encouraged, to see if the conclusions are still held. It will also increase the difficulty of knowledge distillation.**\n\nThanks for your thoughtful comments. \n(1) As our paper focuses on providing the first systematic study for efficient 3D detection with KD, we investigate the most widely-adopted and easy-to-use compression schemes including input resolution and model width/depth compression. We have not tried sophisticated layer-wise compression or combined different compression methods which require complicated algorithms such as neural architecture search in EfficientNet [40] and is beyond the study of this paper. In fact, we have discussed such limitations in the ''Limitations'' part of supplemental materials, which will be our future work.\n\n(2) Here, we still would like to provide some experimental results to combine model and resolution compression along with some KD attempts. As shown in the following table, we further apply model width compression based on CP-PP-v0.4. The obtained CP-PP (b) and CP-PP (c) both achieve higher CPR compared to CP-PP-v0.4, because coarser-resolution detectors are supposed to have more architecture-level redundancy with less input information. These results also support our claim in the supplemental materials (see line 152-156 in the supplemental material). Furthermore, by comparing CP-PP (b) and CP-PP (c), we can find that our compression conclusion ''PFE has less redundancy to be reduced'' is still valid.\n\n| Detector | Width-PBE | Width-BEF |Width-Head | Voxel Size (m) | Params (M) | Flops (G) | Acts (M) | Latency (ms) | mAP@R40 | CPR |\n| :-: |:-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | \n| CP-PP | 1.00 | 1.00 | 1.0 | 0.32 | 5.2 | 333.9 | 303.0 | 157.9 | 59.09 | - |\n| CP-PP-v0.4 | 1.00 | 1.00 | 1.00 | **0.40** | 5.2 | 212.9 | 197.7 | 103.4 | 57.55 | 0.64 | \n| CP-PP (a) | 1.00 | **0.875** | **0.875** | 0.32 | 4.0 | 260.1 | 267.7 | 134.7 | 58.53 | 0.54 |\n| CP-PP (b) | 1.00 | **0.875** | **0.875** | **0.40** | 4.0 | 163.9 | 175.5 | 92.1 | 57.36 | 0.67 | \n| CP-PP (c) | **0.875** | **0.875** | **0.875** | **0.40** | 4.0 | 163.2 | 173.2 | 91.1 | 56.92 | 0.66 | \n\n(3) In addition, we also provide extra results of applying various KD methods to the newly designed CP-PP (b). As shown in the following table, although CP-PP (b) has similar no distillation results with CP-PP-v0.4, it obtains fewer benefits from distillation among all KD strategies. These results illustrate that more sophisticated compression can increase the difficulty of KD. 
\n\n| Detector | No Distill | KD | GID-L | FitNet | Mimic | FG |GID-F | Label KD | Ours | Flops(G) | Acts (M) |\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\n| CP-PP | 59.09 | - | - | - | - | - | - | - | - | 333.9 | 303.0 |\n| CP-PP-v0.4 | 57.55 | 57.51 | 57.54 | 57.89 | 58.57 | 58.44 | 58.26 | 58.10 | 59.24 | 212.9 | 197.7 |\n| CP-PP (b) | 57.36 | 56.93 | 56.70 | 57.15 | 57.81 | 57.48 | 57.77 | 57.57 | 58.62 | 163.9 | 175.5 |\n\n\n**Q2: In Tab. 3, what is the performance of applying all three kinds of KDs (like Eq. 4). It seems that only the result of each KD (i.e., logit KD, feature KD and label KD) has been analyzed.**\n\nThanks for your comments. We provide the combination results of three KD methods in Tab. 4 based on CP-Voxel-XXS along with the synergy effect analysis. Furthermore, we show the synergy effect results of all designed architectures in Table S1 in the supplemental material due to the page limit of the main paper. Hope these results could address this concern.\n", " Thanks for your thoughtful feedback! We are really glad for your appreciation to our paper. We provide responses to specific questions as below. \n\n\n**Q1: The study was only done on CenterPoint-based models, and there were no point-cloud based models evaluated.**\n\nThanks for your constructive feedback. We agree that including point-based methods would make our work more general and comprehensive. Actually, we also investigate point-based architecture at the initial of this work.\n\n(1) However, point-based methods achieve sub-optimal performance on practical ring-view datasets (i.e. datasets including LiDAR point clouds in all 360 degrees among ego-vehicle) such as Waymo, Lyft, nuScenes, Argoverse, etc. As shown in the following table, the state-of-the-art point-based detector IA-SSD [50] performs inferior than voxel/pillar-based detectors on Waymo. \n| Method | Type | LEVEL2 mAPH |\n| :------: | :----: | :----: |\n| CP-Voxel | Voxel-based | 65.58 |\n| CP-Pillar | Pillar-based | 61.56 |\n| IA-SSD [50] | Point-based | 58.08 |\n\n(2) Besides, the efficiency of point-based detectors decreases quadratically when the number of points increases linearly, as the points-wise feature encoding network (e.g. PointNet++) relies on furthest point sampling with $O(n^2)$ time complexity. On the contrary, voxel/pillar-based detectors are less sensitive to the number of points with voxelization operation. Therefore, point-based methods are rarely employed in the practical ring-view datasets with dense point clouds.\n\n(3) To further verify the generality of our approach, we try our method on KITTI with an anchor-based detector -- SECOND [46] (see our response to Q2 of Reviewer Sm5s). Experimental results show that our compression conclusions are still valid and KD methods also show outstanding performance on the new dataset as well as the new detector. \nIn addition, we are glad to make our work more general by constructing investigations on point-based detectors on more suitable datasets (e.g. KITTI). However, due to the large work overhead and limited time during the rebuttal, we leave it to future research. Once we obtain comprehensive results, we will update them to the paper.\n\n\n\n**Q2: Miscellaneous: L82 vs. L 85: \"Suppl.\" vs \"Appendix\" term mismatch.**\n\nThanks for your careful reading and pointing that out. We will fix this error in the revision.\n\n\n**Q3: In Section 5.3, it is observed that the student model can outperform the teacher model. 
Is there intuition for why this may be? Intuitively, unless the student model has access to labels, it should not have enough information to correct these mistakes.**\n\nThank you for your comments.\n(1) Actually, we use GT labels during the KD process, following the default setting for previous KD methods. As shown in the ''Label KD part'' of Sec. 4.1 (see line 235), the teacher assisted GT set $\\hat{y}^{\\text{KD}}$ is constructed by combining GT labels $y$ and confident teacher predictions $\\hat{y}^t$. Besides, for KD methods other than label KD, student models still leverage GT labels as naive supervised training. In this regard, student models always have access to GT labels and have enough guidance to correct teacher's mistakes. We will add clarification of this part in the revision. \n\n(2) We also provide an extra experiment to investigate how teacher prediction and GT labels influence the performance of Label KD. As shown in the following table, although the student can achieve reasonable performance with only teacher prediction as supervision, it still needs GT labels to obtain gains compared to the no distillation model. We will include these results in the revision.\n\n| Detector | GT | Teacher Pred | LEVEL2 mAPH |\n| :------: | :----: | :----: | :----: |\n| CP-Pillar-v0.48 | $\\surd$ | $\\times$ | 56.24 |\n| CP-Pillar-v0.48 | $\\times$ | $\\surd$ | 54.66 |\n| CP-Pillar-v0.48 | $\\surd$ | $\\surd$ | 57.54 |\n", " This paper studies knowledge distillation for point cloud object detection. Designing efficient architecture to process large scale point clouds is a long-standing challenge for autonomous driving. To that end, this paper tries to improve model efficiency by leveraging knowledge distillation. This paper conducts an extensive study of existing knowledge distillation methods under several settings. It proposes a new metric called Cost Performance Ratio (CPR) to measure the performance and efficiency trade-off. Also, this paper provides two systematical ways to compress both inputs and architectures respectively. In addition, this paper analyzes the feature responses in both teacher network and student network and proposes a novel way to select a sparse set of locations in the feature map to distill. The best performing method using the proposed knowledge distillation achieves very strong performance on the Waymo dataset and even outperforms its teacher counterpart while only requiring less than half of teacher flops. Overall, I think this paper makes good contributions to point cloud knowledge distillation. It provides an adequate study of existing knowledge distillation methods under 3D object detection settings. In addition, it introduces a novel metric to compare different knowledge distillation methods. Moreover, the analysis of feature response leads to a module to select a sparse set of features to distill. I summarize strengths and weaknesses as follows.\n\nStrengths:\n\n1. First, I like the adequate study of existing distillation methods conducted in the paper. These experiments show a clear overview of how knowledge distillation works in the point cloud object detection scenario. Also, the reimplementations of existing methods (e.g., PV-RCNN++) are faithful and very close to SOTA models. \n\n2. The proposed CPR is technically sound. This metric considers both cost and performance and can be potentially used in general knowledge distillation settings. Also, this metric reflects empirical results/observations of models. \n\n3. 
The analysis of feature response is interesting and intriguing. Based on the feature response, this paper comes up with a way to select features to distill rather than distilling the whole feature map to avoid focusing on feature noises. \n\n4. Empirically, the proposed method shows quite strong performance compared to existing knowledge distillation baselines. Its best performing student network even outperforms the teacher counterpart while only requiring 44% of teacher flops. What is more convincing is that the improvements are consist under almost all settings. \n\nWeaknesses:\n\n1. My first concern is about the generalizability of the proposed method. It seems that this paper focuses on two specific architectures -- PointPillars/SparseConvs and PVRCNN. I am curious that whether this proposed method is applicable to Transformer-based model such as Object DGCNN and 3DETR. As those methods are already sparse, can the proposed sparse distillation still be applied in those cases?\n\n2. I understand this paper focuses on object detection, which is a coarse prediction task compared to fine-grained prediction like semantic segmentation. So I am wondering, if this method is useful for dense tasks like semantic segmentation or flow prediction. My worry is that for dense prediction tasks, the feature selection module won't introduce too much improvement compared to dense feature map distillation. It would be great if authors can provide evidences on tasks other than object detection. I would appreciate feedback/comments to the following questions:\n\n1. What do authors think of sparse/dense distillation? If we work on sparse models such as Object DGCNN and 3DETR, can we still do similar distillation as this paper does? \n\n2. Can authors provide code to facilitate reproducibility? I think there are many details/hyperparameters still missing. It is not straightforward to reproduce the work. I don't have concerns about the potential negative societal impact. ", " This paper proposes to use knowledge distillation to train more lightweight 3d detectors (which tend to suffer from heavy computational overhead). For given teacher networks (CenterPoint Pillar and Voxel variants), the authors experimentally determine a collection of student networks (e.g. via reducing width/depth, input size in certain ways) that achieve good efficiency and performance.\n\nThen given these student architectures, the authors experiment with different knowledge distillation schemes (including a proposed scheme - called Pivotal Position Logit KD - that puts more emphasis on foreground regions). There are a few main findings, but one punchline (across a number of experiments mostly done on the Waymo dataset) is that the authors are able to realize a 3d detection model that outperforms the teacher model with ~2x less FLOPS.\n The clear strength of this paper is that it is able to get a clear win over CenterPoint with the Waymo dataset and the paper also offers insights that would help others that would like to do knowledge distillation for a similar architecture.\n\nOn the other hand, the ideas in the paper are incremental — mostly this paper can be viewed as an empirical exploration of existing knowledge distillation ideas (with the exception of the pivotal position KD idea — but even the high level idea here of emphasizing foreground regions is not novel). 
There is a running theme that some KD approaches are synergistic with each other — and these are discovered empirically, but there are no principles offered by which a reader can think about this at a higher level. Finally, it is hard to know how general these findings are. For example, how do we know that the choice of student models does not depend on the specifics of each KD approach? And can we expect any of these findings to generalize beyond CenterPoint based models? Would they generalize to non-Waymo datasets?\n * For the proposed CPR (cost performance ratio) - why not simply use actual measured runtime instead of activations?\n* In the section on selecting student architectures, it’s not clear to me how these models are trained — what knowledge distillation approach was used, what dataset etc?\n* A suggestion for Table 3: have a more descriptive caption --- explain that the “Ours” column is not re-explained until a later section\n* On pivotal position logit KD\n * The authors mention “instance-aware local region imitation” in Sec 4.2 but don’t really talk about it until Sec 5.1 (at which point it’s called something else — pivotal position logit KD). So am I correct that these are referring to the same idea? \n * It still makes sense to include some background contribution in the loss function — how is this incorporated in pivotal position logit KD?\n * Focal loss is often used in 2d detection as a way to deal with foreground/background imbalance — could this idea be adapted for knowledge distillation for 3d detection?\n Yes", " This paper focuses on the task of efficient 3D object detection. It first studies how to obtain student models with good trade off between accuracy and efficiency. Then, it proposes an improved KD pipeline incorporating an enhanced logit KD method and a teacher-guided student model initialization to facilitate transferring teacher model’s feature extraction ability to students through weight inheritance. Extensive experiments on Waymo dataset show the efficiency of the proposed method. $\\textbf{Strength}$\n\n1. The motivation of solid, and the paper is well organized.\n\n2. Extensive experiments are conducted to analyze the designs of efficient student networks and the performance of benchmark knowledge distillation for 3D object detection.\n\n3. Experiments on WOD demonstrate the effectiveness of the improved knowledge distillation.\n\n$\\textbf{Weakness}$\n\n1. The designs of student network are limited to input resolution compression or model width/length compression. And each variant only involves in one compression method. Discussions on more designs of lightweight student network are encouraged, to see if the conclusions are still held. It will also increase the difficulty of knowledge distillation.\n\n2. In Tab. 3, what is the performance of applying all three kinds of KDs (like Eq. 4). It seems that only the result of each KD (i.e., logit KD, feature KD and label KD) has been analyzed.\n Please see the comments above. Yes, the limitations have been discussed in the supplementary file.", " This work comprehensively looks into the state of knowledge distillation as it applies to point cloud 3D object detection. In particular, the authors argue for knowledge distillation’s use case for model compression. They begin by analyzing how different compression techniques, both model- and input-level affect performance by looking into two CenterPoint based detectors. Next, they benchmark a strong suite of current knowledge distillation methods. 
The authors also propose their own KD method combining the strengths of prior KD paradigms (logit KD, label KD, and teacher-guided initialization), showing superior performance over the baselines on the Waymo dataset. The paper is very well organized and clearly written, and each component of the research question seems thoroughly analyzed. Reading through the paper was a self-containing opportunity to learn about the field of KD in 3D object detection. The analysis section is thorough; the work not only benchmarks the different baselines, they also organize them according to the KD paradigms explained in the paper. Interestingly, the authors also explore the combination of different distillation paradigms. The method proposed was also convincing, obtaining superior performance over the baselines with a compressed model. The ablation section is thorough, showing that each part of the final method contributed to the performance. \n\nA minor criticism is that the study was only done on CenterPoint based models, and that there were no point-cloud based models evaluated (i.e., the conclusions were for pillar- and voxel-based inputs only). It would be interesting to see how point-based models (such as PointRCNN) would compress and hold up to these conclusions.\n\nMiscellaneous:\nL82 vs. L 85: \"Suppl.\" vs \"Appendix\" term mismatch\n In Section 5.3, it is observed that the student model can outperform the teacher model. Is there intuition for why this may be? Intuitively, unless the student model has access to labels, it should not have enough information to correct these mistakes. The authors do not explicitly address the limitations in their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 2 ]
[ "NpvwiowPx6C", "yGFXqg8h8RQ", "CNNoeZ_Mj-e", "G8s6lR2jpYh", "OxCqXnR6moZ", "42XGlIEtU9P", "NpvwiowPx6C", "NpvwiowPx6C", "NpvwiowPx6C", "IkhsI_iIMZq", "IkhsI_iIMZq", "yGFXqg8h8RQ", "WPtMIpj8Xqx", "nips_2022_1tnVNogPUz9", "nips_2022_1tnVNogPUz9", "nips_2022_1tnVNogPUz9", "nips_2022_1tnVNogPUz9" ]
nips_2022_Ag3ycrdh6n
Tensor Wheel Decomposition and Its Tensor Completion Application
Recently, tensor network (TN) decompositions have gained prominence in computer vision and contributed promising results to high-order data recovery tasks. However, current TN models are rather being developed towards more intricate structures to pursue incremental improvements, which instead leads to a dramatic increase in rank numbers, thus encountering laborious hyper-parameter selection, especially for higher-order cases. In this paper, we propose a novel TN decomposition, dubbed tensor wheel (TW) decomposition, in which a high-order tensor is represented by a set of latent factors mapped into a specific wheel topology. Such decomposition is constructed starting from analyzing the graph structure, aiming to more accurately characterize the complex interactions inside objectives while maintaining a lower hyper-parameter scale, theoretically alleviating the above deficiencies. Furthermore, to investigate the potentiality of TW decomposition, we provide its one numerical application, i.e., tensor completion (TC), yet develop an efficient proximal alternating minimization-based solving algorithm with guaranteed convergence. Experimental results elaborate that the proposed method is significantly superior to other tensor decomposition-based state-of-the-art methods on synthetic and real-world data, implying the merits of TW decomposition. The code is available at: https://github.com/zhongchengwu/code_TWDec.
Accept
Two reviewers consider that the proposed construction is clearly innovative, and all reviewers consider that the contribution is useful to the tensor learning community. The experiments show that the proposed method yields improved performance. The three reviewers who participated in the discussion with the authors and/or took into account the authors' rebuttal expressed that they were satisfied with the rebuttal. Reviewer Bfwm, who is the only reviewer assigning a score of 4 or lower, did not consider the rebuttal and did not respond to any message after the initial review. The AC considers that their concerns have been well addressed by the authors, and this reviewer states in their initial review that this work "might still brings new knowledge to the area". The authors are encouraged to take into account in particular the fruitful discussion with reviewer Dt2K to enhance their manuscript with additional discussions and insights, and to further strengthen their experiments if possible (consider core tensors in Tucker with different ranks in different modes), given that the results tend to be sensitive to the choice of hyperparameters (and possibly to the hyperparameter search strategy) and to the choice of dataset.
val
[ "PMQzpwvf5Vj", "4dubWirRtz", "8p9CnV5XMJX", "VvMmtjyzHl", "RUt2NBR_bs5", "nkFATfltdMZ", "8Br49Nwuyyr", "8nsh909zVR", "5ShbRcaseLs", "Q36uXfk4GL2", "Fut7h-oeN2N", "achcnnzuIEf", "uop69AIaaTc", "oRLhC4-nR_", "B0bOrAgVWKL", "oGe28T4F2c", "RtMu9CNvSb", "uP_0-6b8xHT" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have detailedly responded to your comments and carefully addressed your concerns. Thanks again for making our results even stronger. Sincerely, we look forward to further communication with you!\n", " Thanks for the reviewer's insightful and valuable comments to our work. We have posted all responses in the OpenView system, including the response to your comments. If you have any other questions, you may find the answers from the response to other reviewers. Thank you very much.", " We appreciate your feedback regarding our rebuttal, and further respond to your one comment, as follows:\n\nComment 1: The proposed method and competing methods are sensitive to hyperparameter selection, which suggests overfitting when tuning hyperparameters on certain data. Could authors provide 50-frame results using 20-frame hyperparameters? Does the performance surpass that of [R2]?\n\nResponse: You are right! Except for the HaLRTC method, the remaining methods are relatively sensitive to the selection of hyperparameters. That is, their optimal parameter configurations vary with different data. Therefore, we fine-tune their parameters to not be static, for better performance and fairer comparison. Following your comment, we provided the results on a 50-frame data (i.e., \"news\" data with SR = 0.1) using those hyper-parameters tuned for 20-frame data (i.e., \"news\" data with SR = 0.1), as follows:\n\nMethod: Observed, HaLRTC, t-SVD, TMacTT, TRLRF, FCTN-TC, TW-TC;\n\nCase A: 9.04, 19.58, 28.75, 29.21, 29.31, 30.27, 32.55;\n\nCase B: 9.05, 20.24, 30.89, 29.48, 32.99, 30.75, 32.89;\n\nCase C: 8.95, 18.51, 28.13, 27.49, 27.82, 29.52, none (because TW-TC is not involved in [R2]);\n\nThe above Case A, Case B, and Case C, represent the results reported in our manuscript (i.e., 20-frame \"news\" data with SR = 0.1 using fine-tuned parameters), the requested experiment (i.e., 50-frame \"news\" data with SR = 0.1 using previously specified parameters), and [R2] (i.e., 50-frame \"news\" data with SR = 0.1), respectively. Compared with Case A, the performance of all compared methods in Case B are improved owing to the increased data redundancy from 50-frame data. Notably, the TRLRF method [R1] achieved the best incremental improvement (an even higher MPSNR value than the proposed TW-TC model) due to its stability for rank selection. Consequently, when executed on 50-frame \"news\" data (i.e., Case B), this experiment still yields superior results than other methods reported in [R2] (i.e., Case C). Regarding the difference between Case B and Case C, we consider that it may be caused by two aspects: 1) the different positions of the 50-frame data inside the original data; 2) the fineness of parameter adjustment. However, we emphasize that the parameters of all compared algorithms are finely tuned in our manuscript, thus encountering laborious hyper-parameter selection work.\n\n[R1] Tensor ring decomposition with rank minimization on latent space: An efficient approach for tensor completion, AAAI, 2019.\n\n[R2] Fully-connected tensor network decomposition and its application to higher-order tensor completion, AAAI, 2021.", " Thank you for the detailed answers. I'm satisfied with the authors' response.", " After reading the authors' response to Q1, I still have concerns about it.\n\nIt looks like the proposed method and other competing methods are quite sensitive to the hyperparameter selection, which suggests some degree of overfitting when tuning hyperparameters on certain data. 
\n\nCould the authors provide the results on 50-frame data using the hyperparameters tuned for 20 frames? Does it still lead to better results than other methods reported in [R2]?\n\n", " We have responded in detail to each reviewer's comments, but we have not received feedback on our rebuttal from the other two reviewers yet. There are only two days left until the rebuttal deadline, and we hope that our rebuttal will receive an objective and fair response.\n", " Thanks for your feedback and the positive comments on our rebuttal. We further respond to your two comments, as follows:\n\nComment 1 (i.e., Weaknesses 2-3): Regarding the answered downsides of FCTN-TC, I would suggest including a discussion in the paper. \n\nResponse: Thanks for your valuable comment! Certainly, these two involved discussions relating to Weaknesses 3–4 deserve to be added to this manuscript, aiming at contributing more insightful analysis. Following your suggestion and considering the page limitations, we have incorporated them into the revised supplementary materials (see Appendix C.2). Again, thanks for your efforts in improving our manuscript.\n\nComment 2 (i.e., Question 1): Could the authors provide some results? \n\nResponse: For this mentioned experiment, we employ a RGB image with structural missing entries, i.e., the random stripes missing for the color Barbara image of MATLAB, to evaluate the performance of all compared methods. Under this case, the so-called sampling rate amounts to 0.8, i.e., SR = 0.8. Numerically, the results are presented as follows:\n\nMethod: Observed, HaLRTC, t-SVD, TMacTT, TRLRF, FCTN-TC, TW-TC;\n\nMPSNR:13.47, 13.47, 13.47, 13.47, 21.55, 21.42, 21.91;\n\nConsequently, we claim that the proposed TW-TC model is invariably superior to other compared methods, and all the performances are diminished. Particularly, since the random stripes missing promote the low-rank property of image data, the three low-rank-based methods, i.e., HaLRTC, t-SVD, and TMacTT, are ineffective. To improve the numerical results, some preprocessing operators, e.g., VDT [R10], or regularizers, e.g., total variation (TV), may be required for these tensor network decomposition-based methods. When we attempted to impose the VDT preprocessing or TV regularizer into the TW-TC model, its performance was improved by 6-8 dB. \n\n[R10] High-dimension tensor completion via gradient-based optimization under tensor-train format. Signal Processing, 2019.", " Thank you for the detailed answers.\n\n- I can understand the authors' point about the deeper analysis.\n\n- About the above downside of FCTN-TC, I'd like to suggest including the discussion in the paper.\n\n- In the rebuttal, the authors said that the performance is decreased for all methods under structured missing conditions. Does TW-TC still achieve the best performance? Could the authors provide some results?\n\n--> I'm not trying to say that this point is crucial for the paper's contribution (i.e., this setting is not only about the choice of factorization but also about that of the regularizer, as the authors also stated), but the authors' answer suggests that there are some results, so I'm curious.", " Thanks for your promote response and the positive comments to our rebuttal. We further respond to your two comments, as follows:\n\nComment 1 (i.e., Weakness 2): Why Tucker algorithm performs so badly on the synthetic data generated using a Tucker model? 
Moreover, each algorithm should be tested on a dataset generated with the same model and, for example, some additive noise.\n\nResponse: Thanks for your question. Mainly, the reason consists in two aspects: 1) the normalization of the tensor data (i.e., “All synthetic data are numerically renormalized into [0, 1]”), and 2) the local rather than global convergence of the Tucker-TC (PAM) algorithm. On Tucker factors-generated unprocessed data, the Tucker-TC (PAM) algorithm definitely can achieve the best recovery error ($10^{-16}$ to $10^{-10}$) compared to other algorithms, e.g., the suboptimal TW-TC (PAM) ($10^{-14}$ to $10^{-8}$). Nonetheless, such an evaluation may be worthless for the Tucker-TC (PAM) algorithm, since the experimental data unduly privileges it, which is usually impractical. Therefore, we renormalized all synthetic data into [0, 1] for a fairer comparison. Although these scaled data can still be accurately characterized by performing appropriate scaling on the Tucker factors, the local convergence of the Tucker-TC (PAM) algorithm only provides local rather than global minimizers owing to an immutable initialization (i.e., the uniform distribution U(0, 1)). After normalization, all the compared models are executed with the same initialization for fair comparison, while the proposed TW-TC model achieves the best performance. \n\nWe regret that our experiments in Fig. 3 did not provide you with significant benefits. Actually, the suggested experiment design has been considered before. However, each model achieves remarkable results on the corresponding dataset generated by the same model, as in the case of Tucker, leading to obstacles in comparison. Moreover, since the experimental data for each model is different, we may not be able to explain whether better RES values indicate better performance. To the best of our knowledge, the synthetic data experiments in many excellent works, e.g., [R1, R2, R3, R4], are performed by comparing different methods on the same noise-free data, yielding an intuitive and comparable numerical result. Consequently, our experiments have a common setting with them and may be more suitable for this manuscript. Again, I appreciate your insightful comment.\n\nComment 2 (i.e., Weakness 3): Insisting on an interesting property, TW can be seen as a TR topology.\n\nResponse: Thanks for your efforts for such a detailed explanation. After careful understanding and thorough discussion, we entirely agree with your viewpoint, especially the statement \"Of course, the obtained TR has larger ranks than the ring in the original TW\". From such a perspective, these tensor topologies having an internal core tensor, e.g., Tucker [R5] (ring rank being 1), and projected entangled state pairs (PEPS) [R6], may be able to establish a relationship to TR decomposition, which has been barely studied in previous works. \nMoreover, we are sorry that we misunderstood your comment earlier and did not emphasize this exciting property in the current version. Inspired by your affirmation, we will exert more effort and look forward to developing a more comprehensive work in the future, thereby making significant contributions to the tensor network community. 
Also, we will revise corresponding description to this part, making it clearer to readers.\nAgain, we gained a lot from your discussions, and sincerely thank you for your remarkable efforts in improving our manuscript.\n\n[R1] Tensor ring decomposition with rank minimization on latent space: An efficient approach for tensor completion. AAAI, 2019.\n\n[R2] Fully-connected tensor network decomposition and its application to higher-order tensor completion. AAAI, 2021.\n\n[R3] Efficient low rank tensor ring completion. ICCV, 2017.\n\n[R4] Adaptive tensor learning with tensor networks. In Proc. NeurIPS 1st Workshop on Quantum Tensor Networks in Machine Learning, 2020.\n\n[R5] Some mathematical notes on three-mode factor analysis. Psychometrika, 1966.\n\n[R6] Matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems. Adv. Phys., 2008.", " Dear authors,\nI really appreciate your efforts improving the paper and providing answers to all my concerns. Se below my comments to your responses.\n\nWeaknesss 1: Thanks for providing detailed responses that helped me to better understand your point of view.\n\nWeakness 2: I agree that for TC and denoising the highest compression ratio is not the main goal although intuitively higher compression ratio usually comes with higher predictability of missing/corrupted entries. Anyway, I accept your argument that in TC it is not necessary to compare different algorithms having the same size model. However, there are some issues with the experiments on synthetic data:\n-\tSynthetic data in Fig. 3 was generated using a Tucker model so it is expected that the algorithm based on the Tucker model outperform any other assumed model in terms of obtained error. Why Tucker algorithm performs so badly? This could be explained by some problem in the implementation of the Tucker algorithm. \n-\tCurrent Fig. 3 is not very helpful to my understanding because each algorithm should be tested on a dataset generated with the same model and, for example, some additive noise.\n\nWeakness. 3: Sorry, I must disagree with your response or maybe there was a misunderstanding. Let me explain a how a mathematical proof can be written to prove my affirmation. By decomposing core C into a TR (this is always possible), we obtain two connected concentric rings. The inner ring has 3rd-order core tensors, while the outer ring has 4th-order core tensors. As a final step, we can contract the nodes connecting both rings which results in a single Tensor Ring topology. Of course, the obtained TR has larger ranks than the ring in the original TW. I am not telling that TW is not an acceptable topology for inferring missing data in tensors, in fact, I think it is a valid proposal. I only wanted to highlight that TW can be seen as a TR topology, which is an interesting property that is missing in the current version of the paper.\n\nWeakness 4: Thank you for adding the parameter configuration for all models in the supplementary material. I think it is important for reproducibility and a more complete evaluation of the presented results.\n\nWeakness 5: Thanks for considering my suggestion of adding tensor completion in the title.\n\n", " Weakness 1: Dissenting descriptions regarding TR limitations.\n\nResponse: In terms of tensor computation or representation, your comment is correct, i.e., more storage parameters will yield better decomposition performance. 
Thus, in efficient tensor computation, the parameter scale is not negligible when evaluating a new decomposition. However, we emphasize that the involved descriptions regarding TR limitations are mainly concerned with the characterization capabilities, which are crucial for TN models in high-order data recovery (line 60, page 2 of the revised manuscript). For high-order data recovery tasks, e.g., TC and denoising, the tensor decompositions are utilized as various predictors or fitters, whose topologies uniquely determine their characterization or fitting capabilities. Instead of aimlessly accumulating the number of parameters, the best results in the TC experiments are obtained by employing the optimal number of parameters. To avoid ambiguity, we have modified the mentioned sentence, see line 56, page 2 of the revised manuscript.\n\nWeakness 2: Lack of a comparison between TR/TT and TW for a fixed size of both models.\n\nResponse: We consider that the experiment using a fixed size of both models may be superfluous. For the TC experiment, the best performance is usually achieved only with the proper rank parameters (i.e., an appropriate number of parameters). According to the revised Fig. 3 and Table 1 of the revised supplementary material, the proposed TW-TC model outperformed both the TT-TC and TR-TC models, while requiring fewer parameters. Therefore, we argue that such a suggested experiment may be redundant. Thanks for your valuable suggestion.\n\nWeakness 3: The proposed TW model is equivalent to TR.\n\nResponse: We will explain this question from the following two perspectives. Firstly, after the tensor topology is determined, any re-decomposition or contraction will change the structural properties of the TN model. That is, such a changed decomposition is no longer the original one, so they are not equivalent. Secondly, even if the core factor C is decomposed into TR factors, TW decomposition cannot be transformed into a TR decomposition after performing the corresponding contraction. This is because the newly obtained tensor topology has two layers of connections on adjacent fifth-order factors, which are different from those of TR decomposition. Consequently, we respectfully disagree with this point. Thanks for your feedback. \n\nWeakness 4: Comparisons to other models are unclear. The models' rank parameters are omitted.\n\nResponse: Thanks for your constructive comment. Following your suggestion, we have supplemented the parameter configurations for all models in the synthetic data experiments (see Table 1 of the revised supplementary material), aiming to more clearly compare the model complexity of the different algorithms. Also, we emphasize and discuss the comparative performance of all methods (see lines 249-256, page 8 in the revised manuscript).\n\nWeakness 5: The title should include the term "tensor completion". The absolute value operation is not needed. Also, an inaccurate sentence.\n\nResponse: Thank you for helping improve our manuscript. For a more rigorous presentation, we have added the term \"tensor completion\" to the title, and removed the absolute value operation (see line 78, page 3 of the revised manuscript). Moreover, we have replaced the ambiguous word \"Apparently\" with \"It is clear that\", leading to a new sentence (see line 164, page 5 of the revised manuscript).\n\nQuestion 1: Could you please provide the details of the ranks used for each model used in Fig. 1 and Fig. 
2 are just two illustration figures that are not related to model ranks, perhaps you meant to ask about Figs. 3 and 4? If so, see the subsequent responses. As answered to Reviewer Bfwm, we cannot list all the rank parameters of different algorithms in Fig. 4, because some of them are non-decompositional, e.g., the HaLRTC [R5] (Tucker decomposition-based convex relaxation method), TMac-TT [R6] (TT decomposition-based parallel matrix decomposition method). However, we have wholly provided the details of the ranks for all models in Fig. 3 (see Table 1 of the revised supplementary material). Thanks for your this good question.\n\nQuestion 2: Can you compute the number of model parameters in each case?\n\nResponse: We appreciate your question. Similar to Question 1, some of the methods used in Fig.4, e.g., the HaLRTC [R5], do not have the core factor tensor. Thus, we didn't compute the number of their model parameters under this case. For the models in Fig. 3, we have provided the number of model parameters for all models (see Table 1 of the revised supplementary material).\n\n[R5] Tensor completion for estimating missing values in visual data, IEEE TPAMI, 2013.\n\n[R6] Efficient tensor completion for color image and video recovery: Low-rank tensor train, IEEE TIP, 2016.", " Question 1: Could the author list the rank parameter or model size for each algorithm in Table 1?\n\nResponse: We appreciate your comment. Regrettably, we are unable to list all the rank parameters of different algorithms in Table 1, because some of them are non-decompositional, e.g., the HaLRTC [R5] (Tucker decomposition-based convex relaxation method), and TMac-TT [R6] (TT decomposition-based parallel matrix decomposition method). Instead, we have supplemented Tucker decomposition as a comparison method to the synthetic data experiments (see the revised Fig. 3), and wholly listed all their rank parameters (see Table 1 of the revised supplementary material), aiming to compare the model complexity of different algorithms. Consequently, your concerns can reasonably be addressed.\n\nQuestion 2: Could the author compare TW and Tucker decompositions?\n\nResponse: Thanks for your valuable comment. Following your suggestion, we have highlighted the difference between TW and Tucker decompositions (line 166, page 5 of the revised manuscript), as follows: \"Unlike Tucker decomposition, TW one considers the potential relationship between adjacent factors and establishes a connection for a higher characterization capacity. Actually, such a strategy also reduces the loadings of core factor, which contributes a smaller ${L_i}, i=1,2,\\cdots,N,$ in TW-ranks than Tucker decomposition, thus alleviating the limitations of high storage and computational complexity.\". Moreover, since the HaLRTC [R5] is a Tucker decomposition-based method in real-world data experiments, we further included Tucker decomposition as a comparison to the synthetic data experiments (see Tucker-TC (PAM) in the revised Fig. 3).\n\nQuestion 3: Will generating data via TW or TR with Tucker decomposition included, makes more sense for algorithm comparison? Will there be any regularization methods to reduce TW performance variance?\n\nResponse: Thanks for your careful review! In the previous version, we construct the synthetic tensors by Tucker decomposition using Tucker factors, mainly considering that the Tucker factors comprise an underlying high-order structure, which may be closer to reality than TR representation. In the revised Fig. 
3, we added the Tucker decomposition as a new competitor for a more comprehensive comparison.\n\nCompared to TR, especially TT decompositions, the proposed TW decomposition has a more complex structure, thus resulting in reduced model stability. When executed on multiple synthetic data using the same parameter configuration, TW-TC inevitably exhibits greater performance variance. Definitely, there are some regularization methods to reduce TW performance variance. As in [R1], by imposing low-rank constraints on the TR factors, the stability of TR decomposition for rank selection is enhanced. Since the high performance variance of TW decomposition is attributable to the TW-ranks, we believe that such a strategy can also be imposed on the TW factors, thereby reducing the performance variance. Notably, the main purpose of this manuscript is to propose a novel tensor decomposition, which shows a more pronounced significance. Hence, we do not consider applying these regularization methods to TW decomposition. They may be more suitable for further work in the future.\n\n[R1] Tensor ring decomposition with rank minimization on latent space: An efficient approach for tensor completion, AAAI, 2019.\n\n[R5] Tensor completion for estimating missing values in visual data, IEEE TPAMI, 2013.\n\n[R6] Efficient tensor completion for color image and video recovery: Low-rank tensor train, IEEE TIP, 2016.", " \nThanks for your time and compliments!\n\nWeakness 1: Only provides the basic algebraic properties rather than deeper ones.\n\nResponse: Thanks for the insightful comment. Certainly, the deeper analysis of a decomposition, e.g., approximation error bounds, can further contribute to the theories of the manuscript. Nevertheless, such an analysis is usually established on the SVD-based version of the decomposition algorithm, e.g., [R7], rather than the ALS-based version, e.g., [R2, R8]. Since our manuscript only proposes an ALS-based decomposition algorithm, we did not consider providing the suggested analysis before. Following your feedback, we will make efforts to perfect more theories in subsequent supplementary materials, aiming to make further contributions to the tensor community.\n\nWeakness 2: Why does the FCTN-TC in Fig. 3 require less time than TW-TC? Its best hyperparameter setting is small?\n\nResponse: You are right! Experimentally, these relatively small parameter configurations almost always yield the optimal performances for the FCTN-TC (see Table 1 of the revised supplementary material), even if the obtained results are unsatisfactory (see the revised Fig. 3), leading to a shorter computing time even than both the TT-TC and TR-TC models. This situation is empirically analyzed in [R2] (lines 31-34, page 3), i.e., the required FCTN rank values are usually far less than Tucker rank values. Moreover, as compared in Fig. 2 of our supplementary material, the proposed TW topology has a more complicated structure than FCTN one under two special cases, i.e., when the operated tensors are third-order or fourth-order. 
Resultantly, the proposed TW-TC model requires incremental computational time, despite its higher recovery accuracy.\n\nWeakness 3: Why does FCTN-TC perform worse than TT-TC and TR-TC despite larger connections?\n\nResponse: According to the optimal rank parameter configuration of the FCTN model, we argue that the inferior performance of the FCTN-TC method may be caused by over-fitting, judged by its several optimal ranks between non-adjacent factors being 2 (see Table 1 of the revised supplementary material). Compared with real-world data, the low-rank characteristics of synthetic data are simpler. That is, not all non-adjacent dimensions have a direct relationship, thus leading the authentic ranks between some non-adjacent factors to be 1 (i.e., without connection). However, the minimum values of FCTN rank among several non-adjacent factors are 2 rather than 1, forcing FCTN topology to maintain its fully-connected structure. Accordingly, some ineffective structures may reduce the performance of the FCTN-TC method, when applied to the synthesized fourth-order and fifth-order data.\n\nQuestion 1: TC experiments use unrealistic i.i.d missing conditions. How will it go if we utilize structured missing conditions?\n\nResponse: This is a good question! To the best of our knowledge, the i.i.d. missing condition is more classical and widely used in tensor completion, e.g., [R1, R2, R4, R8, R9], leading to a more comparative and convincing evaluation, without loss of generality. Actually, the TC experiments using the structured missing condition have previously been verified on RGB images. Although the proposed TW-TC is invariably superior to other compared methods, the more challenging condition leads to inferior performance of all methods. To improve the numerical results, some preprocessing operators, e.g., VDT [R10], or regularizers may be required for these tensor network decomposition-based methods.\n\n[R1] Tensor ring decomposition with rank minimization on latent space: An efficient approach for tensor completion, AAAI, 2019.\n\n[R2] Fully-connected tensor network decomposition and its application to higher-order tensor completion, AAAI, 2021.\n\n[R4] Nonconvex Low-Rank Tensor Completion from Noisy Data, NeurIPS, 2019.\n\n[R7] Tensor-train decomposition, SIAM JSC, 2011.\n\n[R8] Tensor ring decomposition, ArXiv Preprint, 2016.\n\n[R9] Adaptive tensor learning with tensor networks. In Proc. NeurIPS 1st Workshop on Quantum Tensor Networks in Machine Learning, 2020.\n\n[R10] High-dimension tensor completion via gradient-based optimization under tensor-train format. Signal Processing, 2019.", " We sincerely appreciate your efforts and objective reviews!\n\nQuestion 1: Why are Table 1's TRLRF [R1] and FCTN-TC [R2] results different from those of FCTN-TC [R2] on the identical data (container, news, and HSV)?\n\nResponse: Specifically, the cause consists mostly of two factors: 1) different data, and 2) distinct hyper-parameters. Firstly, the \"container\" and \"news\" CVs in [R2] are of size $144\\times176\\times3\\times50$ (50 frames), which is a cropped version of the full size $144\\times176\\times3\\times300$ (300 frames). Although the proposed TW-TC method can also achieve excellent results when tested on an identical 50-frame data, manually tuning parameters for all compared methods is time-consuming, which may cause missing the submission deadline for manuscript. 
Therefore, we only employ the first 20 frames of CVs for a reduced workload, thus forming our \"container\" and \"news\" data sized $144\\times176\\times3\\times20$. Similarly, the HSV in our manuscript is also distinguished from the one in [R2]. Secondly, we have fine-tuned the hyper-parameters of all the compared methods, including TRLRF [R1] and FCTN-TC [R2], resulting in distinct parameter configurations. Consequently, these two aspects collectively contribute to the numerical difference.\n\nQuestion 2: Why does the FCTN-TC model run quicker in Toy data?\n\nResponse: When applied to third-order tensors, the complicated FCTN decomposition degenerates graphically into TR decomposition, i.e. a complete graph with three nodes. Since the \"Toy\" data is a third-order tensor, the FCTN-TC model for \"Toy\" data is essentially the TR-TC model, thus allowing for efficient computation. Compared with the FCTN-TC (i.e., TR-TC) model, the TRLRF [R1] model applies extra regularizers for factor matrix rank minimization, while our TW-TC model increases a core factor structure. Moreover, the TMacTT [R3] algorithm experimentally exhibits slower convergence, resulting in a greater number of iterations. Consequently, the FCTN-TC method runs much faster than others, i.e., TMacTT, TRLRF, and TW-TC, in the third-order tensor data, i.e., \"Toy\".\n\n[R1] Tensor ring decomposition with rank minimization on latent space: An efficient approach for tensor completion, AAAI, 2019.\n\n[R2] Fully-connected tensor network decomposition and its application to higher-order tensor completion, AAAI, 2021.\n\n[R3] Efficient tensor completion for color image and video recovery: Low-rank tensor train, IEEE TIP, 2017.", " This paper presents a new tensor network decomposition model for high-order tensor analysis. The proposed model factorizes a high-order tensor into a set of latent factors connected to a specific wheel topology. This can be viewed as a combination of the classic Tucker decomposition and the tensor ring decomposition. The numerical experiments of tensor completion application on both synthetic data and real-world examples demonstrate the superiority of the proposed method in terms of reconstruction accuracy. [Strengths] \n- Innovative construction of the tensor network model inheriting the merits of the classic Tucker decomposition model\n- State-of-the-art performance on real-world tensor completion applications\n- Linear scaling for the number of hyperparameters with increased tensor dimension\n\n[Weeknesses]\n- Increased computational burden compared with tensor-train and tensor-ring models\n- Expensive storage burden scaled exponentially with the dimension of tensor data -- sacrificing one of the significant benefits of the tensor-train model - Considering the numerical results in Table 1, why are the reported PSNR values for competing methods TRLRF [A] and FCTN-TC [B] different from those reported in Table 1 of [B] on the same data (container, news, and HSV)? \n- Why does the most complicated model FCTN-TC run much faster than others (e.g., 26.27s v.s. 154.67s) in Toy data\n\n[A] Tensor ring decomposition with rank minimization on latent space: An efficient approach for tensor completion, AAAI 2019 \n[B] Fully-connected tensor network decomposition and its application to higher-order tensor completion, AAAI 2021 \n\nIMO, this paper serves as a nice contribution to the tensor decomposition domain and should be accepted if the above question could be properly answered. 
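As a concrete illustration of the wheel structure summarized above (a Tucker-style core whose spokes attach to a ring of TR-style factors), the following is a rough numerical sketch of how an order-3 tensor could be rebuilt from TW factors; the index convention (ring-in, physical, spoke, ring-out) and all variable names are our own assumptions rather than the paper's notation or code.

```python
import numpy as np

# Assumed structure: one order-4 factor per mode with indices
# (ring-in, physical, spoke, ring-out), plus an order-N core tied to the spokes.
I, R, L = (4, 5, 6), 2, 3                              # physical dims, ring rank, spoke rank
G = [np.random.rand(R, I[n], L, R) for n in range(3)]  # ring factors
C = np.random.rand(L, L, L)                            # core tensor

# Contract the ring indices (a, b, c) cyclically and the spoke indices (p, q, r) with the core.
X = np.einsum('aipb,bjqc,ckra,pqr->ijk', G[0], G[1], G[2], C)
print(X.shape)  # (4, 5, 6)
```

Counting entries, the ring factors grow linearly with the tensor order while the core grows as $L^N$, which is exactly the exponential storage concern raised in the weaknesses above.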
\n***\n**post-rebuttal: I keep my original score though I feel the hyperparameter sensitivity issue should be extensively studied --- I'm not sure whether the superior results beyond other competing methods are partially owing to the test data selection and hyperparameter tuning. Missing to reveal this problem would definitely make the results weaker** N.A", " This paper proposes a new tensor decomposition method to improve the downsides of existing ones. The proposed tensor wheel decomposition can be viewed as a hybrid between TR decomposition and Tucker decomposition, i.e., it is similar to TR decomposition but there is an additional core tensor of which the order is the same as the input data and all the factor tensors are connected (tensor-producted) to it. This has some advantages: (1) The Nth order structure of the input data can be preserved. (2) Nevertheless, it is more manageable than FCTN. (3) All the factors are now interconnected by the core tensor, unlike TR decomposition where some factors are far away so it is not easy to find direct relationships between them. Experiments show that, for a few synthetic and real data, the proposed method achieves the best performance. The paper proposes a clever way to bypass the downsides of the existing tensor network type decompositions. Unlike TT and TR decompositions, nonadjacent factors are connected via the core tensor and the original structure of the data is better preserved. At the same time, the number of connections is minimized (unlike FCTN) so that it is more manageable. The idea is quite intuitive and convincing, and accordingly, the superior performance is also convincing.\n\nThe proposed decomposition is found by a PAM-style method, and the convergence proof is provided. One shortcoming of the paper is that there is no deeper analysis of the decomposition. There are some theorems in the paper, however, they are about basic algebraic properties rather than deeper ones such as approximation error bounds.\n\nAnother thing that bothers me is the comparison with FCTN. In Figure 3, Why does FCTN-TC take a shorter time than TW-TC? This seems somewhat unintuitive, considering the main claim (TW decomposition being more manageable than FCTN). Is it because the best performing hyperparameter setting of FCTN happens to be very small? Moreover, it seems that FCTN-TC usually performs worse than TT-TC and TR-TC even though it has heavier connections. Why is this? The TC experiments only use i.i.d missing conditions which are not realistic. How will it go if we use more realistic conditions, e.g., structured missing conditions?\n I believe that this paper does not have any serious potential negative societal impact.", " The paper proposed a new tensor network decomposition method, namely Tensor Wheel Decomposition. Different from prior tensor train or tensor ring decomposition, the tensor wheel decomposition introduced a new core tensor factors C in the factorization. An ALS algorithm (algorithm 1) and proximal solution (algorithm 2) are provided. Empirical study in completion task on synthetic data and real data shows that the new algorithm is better than the TT and TR based factorization methods. The paper extends the existing tensor networks method to a new tensor factorization. The extension is trivial, but might still brings some new knowledge to the area. \n\nThe new tensor wheel factorization seems to be a combination of tensor ring factorization and Tucker factorization, without core tensor taking into considerations. 
While the comparison to Tucker factorization is not included in the paper. [Q1] In the experimental results of Table1, since the rank are learnt different based on different algorithm, could the author list the rank parameter or the model size to understand the model complexity from different algorithm?\n\n[Q2] The tensor wheel seems to be more general as a combination of tucker decomposition and tensor ring decomposition. Could the author highlight the difference between the TW to Tucker decomposition, and include tucker decomposition as a comparable here? \n\n[Q3] In Figure 3, the data is generated via Tucker decomposition while the comparison is mainly between TT, TR, and TW. Will generating data via TW or TR with Tucker decomposition included, makes more sense for algorithm comparison? Additionally, from the systematic data, it seems TW algorithm is mostly slow and having large performance variance, where TR seems to be faster with smaller performance variance. Will there be any regularization methods to reduce TW performance variance? The paper proposed a new tensor wheel factorization method. Although it is a trivial but interesting extension, comparison to Tucker decomposition method is needed to demonstrate the improvement. From the existing results, model complexity and performance trade-off also needs to provided to demonstrate the efficiency of the new factorization method. Last but not least, interpretations on the algorithms efficiency and performance variance is needed to show the advantage of TW as compared to TT and TR. ", " In this paper, a new Tensor Network (TN) topology is proposed, the Tensor Wheel (TW), which can be seen as a combination of Tensor Ring (TR) and Tucker network topologies. The paper explores the properties of TW and develop a Proximal Alternating Minimization (PAM) algorithm to make inference of missing values in tensor data. Experimental results of tensor completion on synthetically generated and real-world signals are presented and compared against some state-of-the-art tensor completion methods. Strengths:\n-\tClear definition of the new topology and description of some of its properties\nWeaknesses\n-\t(major) I don’t agree with the limitation (ii) of current TN models: “At least one Nth-order factor is required to physically inherit the complex interactions from an Nth-order tensor”. TT and TR can model complex modes interactions if the ranks are large enough. The fact that there is a lack of direct connections from any pair of nodes is not a limitation because any nodes are fully connected through a TR or TT. However, the price to pay with TT or TR to model complex modes interactions is having bigger core tensor (larger number of parameters). The new proposed topology has also a large price to pay in terms of model size because the core tensor C grows exponentially with the number of dimensions, which makes it intractable in practice. The paper lacks from a comparison of TR/TT and TW for a fixed size of both models (see my criticism to experiments below).\n-\tThe new proposed model can be used only with a small number of dimensions because of the curse of dimensionality imposed by the core tensor C.\n-\t(major) I think the proposed TW model is equivalent to TR by noting that, if the core tensor C is represented by a TR (this can be done always), then by fusing this TR with the cores G_n we can reach to TR representation equivalent to the former TW model. 
I would have liked to see this analysis in the paper and a discussion justifying TW over TR.\n-\t(major) Comparison against other models in the experiments are unclear. The value of the used ranks for all the models are omitted which make not possible a fair comparison. To show the superiority of TW over TT and TR, the authors must compare the tensor completion results for all the models but having the same number of model parameters. The number of model parameters can be computed by adding the number of entries of all core tensors for each model (see my question about experiment settings below).\n-\t(minor) The title should include the term “tensor completion” because that is the only application of the new model that is presented in the paper.\n-\t(minor) The absolute value operation in the definition of the Frobenius norm in line 77 is not needed because tensor entries are real numbers. \n-\t(minor) I don’t agree with the statement in line 163: “Apparently, the O(NIR^3+R^N) scales exponentially”. The exponential grow is not apparent, it is a fact. \n------------------------------------------------------\nI updated my scores after rebuttal. See my comments below -\tCould you please provide the details of the ranks used for each model used in Fig. 1 and Fig. 2? \n- Can you compute the number of model parameters in each case (sum of number of entries in each core tensor for each model)?\n------------------------------------------------------\nI updated my scores after rebuttal. See my comments below Yes, the authors have stated that the main limitation of their proposed model is its exponentionally grow of model parameters with the number of dimensions." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "RtMu9CNvSb", "RtMu9CNvSb", "RUt2NBR_bs5", "8Br49Nwuyyr", "oRLhC4-nR_", "nips_2022_Ag3ycrdh6n", "8nsh909zVR", "uop69AIaaTc", "Q36uXfk4GL2", "Fut7h-oeN2N", "uP_0-6b8xHT", "RtMu9CNvSb", "oGe28T4F2c", "B0bOrAgVWKL", "nips_2022_Ag3ycrdh6n", "nips_2022_Ag3ycrdh6n", "nips_2022_Ag3ycrdh6n", "nips_2022_Ag3ycrdh6n" ]
nips_2022_JGLW4DvX11F
Optimistic Tree Searches for Combinatorial Black-Box Optimization
The optimization of combinatorial black-box functions is pervasive in computer science and engineering. However, the combinatorial explosion of the search space and lack of natural ordering pose significant challenges for current techniques from a theoretical and practical perspective, and require new algorithmic ideas. In this paper, we propose to adapt the recent advances in tree searches and partitioning techniques to design and analyze novel black-box combinatorial solvers. A first contribution is the analysis of a first tree-search algorithm called Optimistic Lipschitz Tree Search (OLTS) which assumes the Lipschitz constant of the function to be known. Linear convergence rates are provided for this algorithm under specific conditions, improving upon the logarithmic rates of baselines. An adaptive version, called Optimistic Combinatorial Tree Search (OCTS), is then introduced for the more realistic setup where we do not have any information on the Lipschitz constant of the function. Similar theoretical guarantees are shown to hold for OCTS and a numerical assessment is provided to illustrate the potential of tree searches with respect to state-of-the-art methods over typical benchmarks.
Accept
This paper proposes two methods for black box optimization of Lipschitz combinatorial binary functions. The reviewers agree that the paper is well written, the methods are sufficiently novel, and that the results are of interest to the NeurIPS community. The main drawback with the paper is that reviewer n1bW felt that the theoretical results are straightforward (but nevertheless useful). Several reviewers also had hoped for comparisons with Bayesian optimization techniques, but during the discussion period it was decided that this comparison can be omitted due to the much higher computational cost of Bayesian methods. I tend to agree with the reviewers that this paper is above the bar for NeurIPS.
train
[ "fYutebcKKkK", "v8KEiUH9TOn", "ksvNtferRN", "QGxr8kA6WU1", "M0ghQXQ63M", "Bg0E893mOR", "0cOmCIejYB", "Y91yYhD2-5zp", "Ef5r10DluXk", "j6GbWfY5j_y", "Iyc6FPf1-i", "W2D2bB2kGQQ", "SXqhojOaJPB", "vHpY-aULDH7", "PJ672Qkdxk", "VVyfDbYztk5" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Finally, we would like to thank the reviewer for taking the time to read the rebuttal and we are happy that it helped to clarify some of its questions.", " Thank you for pointing out the applicability of the approach.\nFor the indices, we agree that the large budget (considered \nin the current setting) contributes to this phenomenon.\nEven for decision tree, it might not be surprising that\nwe observe a similar behavior by using a large number of samples.\nSimilarly to random trees, we could also imagine to create \non some problems a meta-strategy like random trees or similar techniques \nthat use additional evaluations to further optimize the approach \n(which however seems to have a limited influence\non the smooth problems of the paper in the large budget settings, \nbut are now provided in the Appendix as an extension and cited in the core of the document).\n\nHowever, our view on this topic is the following:\nlike decision trees, our approach is still a valuable tool \nand can even provide very good results without much further tuning\non many problems, and we agree that it could always be further\nimproved on some problems by fine-tuning the ordering or similarly\ndesigning budget dependent strategies or using different \nupper confidence bounds. In our opinion, \nit constitutes interesting extensions to \nthe approach but does not impact the core (and dense)\ncontribution which revolves about\nintroducing a novel approach to solve black-box combinatorial\nproblems. Indeed, it has to be recalled that \nthe approach (1) provides\nstate-of-the-art results on a wide variety of problems, (2) it introduces a novel approach to solve the combinatorial black-box problem\nand (3) we obtain relevant theoretical results as a byproduct.\nWe are sincerely convinced (in our opinion) that it is still a good contribution \nfor the optimization community.\n\n", " Thank you for the thorough response. The table with iteration times is a valuable addition to the paper. I continue to recommend acceptance.", " In the case of contiuous systems the validity of Lipschitz condition is fairly strong. I am not as convinced for combinatorial problems, but probably there are enough such problems so that the algorithm can have wide enough applicability. \n\nI agree that many systems are sufficiently homogeneous, in which case the ordering is not a huge issue. Having a large budget also helps to alleviate the ordering issue, but as the authors pointed out, for a more limited budget the ordering could be a problem. Moreover, even if the overall budget is large, one would prefer an efficient use of the budget. As analogy, one can think of building decision trees, where the feature selection is crucial for the algorithms performance. ", " We thank the reviewer for taking time to read the rebuttal and we are very happy that some of your concerns have been clarified. Finally, we would also like to thank the reviewer for raising its score and for its various remarks which helped to improve the overall quality of the paper. Feel free to point out any additional aspect of our work that could further be improved or clarified.", " First, we appreciate that the reviewer took the time to read the rebuttal and we apologize for the misunderstanding about DIRECT.\n\nHowever, on top of the experiments and theory, it is hard to provide more evidence to debate the use of the Lipschitz constant/ordering. 
Nonetheless, we can give you our opinion/point of view on these topics in case that speaks to you.\n\nIn our opinion, it is often the case that, in practice (like for the problems of the experiments we took from existing standardized benchmarks in various fields), the systems we wish to optimize are generally non-chaotic and have a good conditioning. Indeed, we generally wish to optimize systems that \"represent\" real systems and thus exhibit some smooth structure. Moreover, in applied settings, it is also often the case that the objective function of the problem is renormalized with regards to its various inputs before the optimization which further smoothen the system. It is generally an important aspect of the optimization pipeline. As a result, the systems we wish to optimize are often \"smooth\" in the sense that for similar inputs with large dimensions, they will have similar outputs $f(x) \\approx f(x')$ for $x \\approx x'$. In general, exploiting this smoothness signal can greatly help to optimize the system, e.g., captured in this work through the Lipschitz constant. In our opinion, this is what justified the large adoption and success of Bayesian methods (that use \"smooth\" kernels) and the DIRECT algorithm for continuous problems, although they might have limitations for instance over poorly conditioned systems (where most approaches might fail and a renormalization of the distance might make the trick). Here, we observe similar behaviors in the combinatorial world, which might even be stronger since its is known that all functions are Lipschitz as opposed to the continuous case. Finally, although it might not be the go-to solution for all problems, we point out that it is still important to note that this approach provides several results outperforming competitive benchmarks in various applications.\n\nSecond, for the ordering of the index, it is possible that their effect/impact might be softened due to the large evaluation budget available when optimizing cheap-to-evaluate systems. Roughly speaking, if we have a budget of 100d**2 function evaluations for a given problem, due to the selection strategy of the OCTS, we know that the algorithm will reach the end of the tree around 100d times regardless the ordering (there are roughly d evaluations per round and each time a node at the end of the tree is evaluated). It means that regardless of the ordering, OCTS will go through all the variables (and check both possible combinations for each variable) at least 100d times which might strongly reduce the impact/variance of the ordering since 100d can be very large. However, we agree that the results might be different for expensive-to-evaluate functions where we can only afford few evaluations (like in Bayesian optimization) and the ordering might have a stronger impact, which is however not the setup considered here.\n\nWe hope that it provides some intuition to the reviewer.", " I appreciate that the authors take the time to respond to my comments. I think my concerns -- overclaim of novelty/contributions, baselines (especially Bayesian optimization), benchmark problems, and the intuitions behind the theorems -- are generally clarified by the authors. The paper does contribute to the field of black-box combinatorial optimization regardless of the concerns I have raised. It will be a stronger paper with the issues resolved. 
", " In the review I mentioned DIRECT as a similar strategy to handle the unknown Lipschitz constant case, and not really as an indication of incomplete related research. \n\nI do not have a clear pointer to which combinatorial bandit algorithm could be used directly, just that the problem structure is similar. \n\nI still have some reservations about the validity of Lipschitz condition in real problems, and I am not fully convinced about the ordering of the indices. The addtional experiments do indicate that this is not an issue, but intuitively I would find it surprising for a problem with heterogeneous input space.", " We thank the reviewer for their valuable feedback. We try to provide a concise answer due to space issues:\n\n1 First, we strongly apologize if reading the paper gives the feeling of overclaimed contributions which (to be honest) was not done on purpose. Of course, we agree that the optimistic and tree search machinery is not new. In our opinion, the main contributions of the paper are: 1) the introduction of the first combinatorial solvers that use optimistic searches over specific trees, 2) the development of novel theoretical results (which to the best of our knowledge are the first of their kind and bring insightful results for combinatorial problems). and 3) we obtain novel algorithms that display very strong results on benchmarks. In our opinion, it is valuable to the community from both a theoretical and practical side. However, to prevent this feeling of overclaimed contributions, we made the following modifications:\n- l88, the section \"Optimistic Strategies\" is now called 'Optimistic Tree Search Strategies' in order to stress that tree based searches have already been discussed in previous works (such as UCB and UCT) and we stress that SOO and DOO use tree searches in continuous spaces\n- l31, \"we build upon the works of DIRECT[30] and SOO[41] and show how to use optimistic strategies\"\n- l118, 'To implement optimistic tree search strategies [39], we need a hierarchical partition of the combinatorial space'\n- l160, we start with the \"OLTS implements the optimistic principle [39] over combinatorial trees\". Same for OCTS in l232.\n\n2 Up to our knowledge, all the works that employ optimistic tree searches only focus on continuous spaces (where one can use a continuous $2^d$-ary tree partition of the space [39]). There is simply no equivalent hierarchical representation for combinatorial spaces reported in the literature (even for different goals). This is why we came up with the non-trivial tree structure of the paper satisfying assumptions 3.1 & 3.3. Although the nature of combinatorial (CombTree) and continuous (ContTree [39]) spaces are different, we list some differences:\n- [width] In CombTee, only one coordinate is switched at each split. Doing the same in ContTree would result in losing the decreasing diameter property (Ass. 3.3). As a consequence, CombTrees only have 2 children per node (independently of the dimension) while in continuous trees we have $2^d$ children per node. Thus, ContTrees are much wider/flat trees that exponentially explode with the dimension. Moreover, since CombTree imposes that left child has the same value as the parent node, an import consequence is that one can easily navigate through the tree linearly with $d$ evaluations while it would require $d*2^d$ evaluations in ContTree which explodes with $d$. 
Note that this trick imposes that the size of CombTree is $2^{d+1}$ instead of $2^d$\n- [depth] The depth of CombTree is $d+1$ while the depth of ContTree is infinite. In practice, ContTree are controlled by a parameter $h$ which impacts the performance. In [38] they obtain two very distinct regimes of convergence that depend on $h$ (exponential and polynomial), while we only obtain a single (fast) linear regime of convergence\n- [storage] the nodes of CombTree can be represented as $x_{l,i}= bin_l(i) + 0_{d-l}$. It allows to simply store the index $(l,i)$ of the tree search instead of the full vectors $x_{l,i}$ of dimension $d$ for a better scaling w.r.t. the dimension\n- [theory] most of the analysis boils down to bounding the volume of the sphere $B(x_{l,i},R)$ for some $R>0$ where $x_{l,i}$ is any point in the tree. In continuous space, it is easy to integrate and proportional to $R^d$, while in combinatorial spaces, the results are discrete (hence $l(n)$) and (combinatorically) explode with $R$. To overcome this phenomenon, we introduce specific combinatorial techniques (see Proofs of Lem B3, Lem B4 and Prop B1)\n\n3 Informally, $l(n)$ corresponds to the minimum depth at which the tree search will be after $n$ evaluations, which depends on the sets $I_l$ of potential optima. Similarly, $log_2(n_c)$ corresponds to the level after which the number of potential optima per level stops to explode and the algorithm will get a linear behavior. For example, when $f(x) = -d_H(x, \\vec{1})$ with the tree of Section 3 and OLTS with $k=1.5$ and $d=4$, we have $|I|_0= 1$, $|I_1|= 2$, $|I_2|=3$, $|I_3|=|I_4|=1$. Thus, in this case we have $l(1)=0$, $l(4)=1$, $l(7)=2$ and $l(8)=3$ and we know that after $8$ iterations we will be at least at the level 3. \nThese details are now extended in the main document and examples for the values of $l(n)$ and $n_c$ are illustrated (with a tree) on the previous example in Appendix B with a pointer in the document.\n\n4 We refer to the resp to rev S1XN for Xps/Bayesian methods/societal Impact and we only considered in the benchmark the problems that are (1) non-synthetic, (2) have a free dimension for [18] and (3) cheap-to-evaluate (taking out NAS)\n", " First, we would like to thank the reviewer for its feedback. We try to address the remarks below:\n\n1. The DIRECT algorithm is first cited in the introduction of the document in l31-32 where we introduce the contribution: \"More precisely, we build upon the works of [30, 39] and show how to use optimistic tree searches on combinatorial spaces\". To the best of our knowledge, [30] is the original paper of the DIRECT algorithm. However, to make the connection more clear, we now use \"the DIRECT[30] and SOO[39] algorithms\" and add on top of Definition 5.1 \"we consider a set of potentially optimal nodes similar to the one of the DIRECT algorithm [30]\"\n\n2. To the best of our knowledge, it is the first time fast rates are shown to hold for combinatorial black-box optimization. As a second remark, we point out that obtaining these results is not really straightforward and requires novel work on bounding balls in combinatorial spaces as well as handling combinatorial structures (i.e. proofs of Lemma B3, B4, Prop B1, B4)\n\n\n3. The choice of the tree is interesting. First, we point out that the choice of the ordering does not impact the theoretical results which only require the tree to satisfy Assumptions 3.1 and 3.3. 
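To make the node representation $x_{l,i} = bin_l(i) + \vec{0}_{d-l}$ and the two-children structure described above concrete, here is a minimal sketch; the function names and the optional `root`/`order` arguments (standing in for the root node and variable ordering studied in the ablation below) are our own illustration and not the paper's code.

```python
import numpy as np

def node_vector(l, i, d, root=None, order=None):
    """Node (l, i): the first l coordinates (under the chosen ordering) follow the
    binary expansion of i; the remaining d - l coordinates are copied from the root."""
    root = np.zeros(d, dtype=int) if root is None else np.asarray(root)
    order = np.arange(d) if order is None else np.asarray(order)
    x = root.copy()
    bits = [(i >> (l - 1 - k)) & 1 for k in range(l)]  # bin_l(i)
    x[order[:l]] = bits
    return x

def children(l, i):
    """Each node has exactly two children; the left child keeps the parent's vector."""
    return [(l + 1, 2 * i), (l + 1, 2 * i + 1)]

d = 4
f = lambda x: -np.sum(x != 1)              # f(x) = -d_H(x, vec(1)), maximized at all-ones
print(node_vector(2, 3, d))                # [1 1 0 0]
print([f(node_vector(*c, d)) for c in children(2, 3)])  # left child keeps the parent's value
```

Note how the left child reuses the parent's vector, which is what allows a full root-to-leaf descent at the cost of only $d$ black-box evaluations.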
However, to have a finer understanding of the impact of the ordering in practice, we performed the following additional ablation study where we performed $10$ runs of OCTS with various trees. The results are reported below\n\n| | Ising (20) | CT (20) | LABS (20) | MIS (20) | Ising (50) | CT (50) | LABS (50) | MIS (50) |\n|---------------|:----------:|:-----------:|:-----------:|:--------:|:-----------:|-------------|-------------|-------------|\n| $T + R$ | 20 (00) | 4.00 (00) | 7.33 (0.88) | 10 (00) | 50 (00) | 10 (00) | 5.17 (0.33) | 21 (1.0) |\n| $T+R^*$ | 20 (00) | 4.00 (00) | 7.00 (0.88) | 10 (00) | 50 (00) | 10 (00) | 5.22 (0.46) | 21.2 (1.6) |\n| $\\pi(T) + R$ | 20 (00) | 3.80 (0.12) | 6.97 (0.72) | 10 (00) | 45.2 (2.71) | 8.76 (0.08) | 5.19 (0.22) | 19.2 (2.0) |\n| $\\pi(T)+ R^*$ | 20 (00) | 3.84 (0.32) | 6.24(0.62) | 10 (00) | 46.4 (1.49) | 8.55 (0.25) | 5.22 (0.32) | 20.6 (1.01) |\n| $\\pi^*(T)+ R$ | 20 (00) | 3.8 (0.0) | 5.88(0.00) | 10 (00) | 50 (00) | 9.40 (00) | 4.32 (00) | 21.2 (0.1) \n\nThe table repots the best value observed after n=10*d**2 evaluations (with std) where $T$ denotes the tree of Figure 1 with $x_{l,i}=Bin_l(i) + \\vec{0}_{d-l}$. $R$ denotes a root node sampled uniformly, $\\pi(T)$ denotes a random permutation of the order of variables. Finally, R* star set the root as the best points obtained from a RS with budget of $d$ evaluations. $\\Pi^*(T)$ denotes the ordering where the variables in the tree are ranked according to the best function values recorded by switching the bit corresponding to the given variable and recording it ($d$ evaluations in total to get this ordering). As it can be seen, on most test problems OCTS is robust to the choice of the root node in the sense that for a randomly chosen root (line T+R), the algorithm consistently finds similar optima with low std. Moreover, it is interesting to note that using random permutation ($\\pi(T)+R$) does not improve the stability of the algorithm, which is due to the fact that on some problems (e.g. LABS and CT) there is a sequential link between the variables which is preserved by using $T$ and not when permuting the variables. Thus, in practice, it is recommended to keep the natural ordering of the variables. This is now included in the Appendix with a pointer in the main document.\n\n4. For the literature related to bandits, we honestly exactly had the same thoughts as the reviewer that there might be approaches in the bandit literature that are related to the current work. To the best of our knowledge, we surprisingly only found the works of [12, 31] to the present work. The fact that, somehow, the problem is deterministic changes a lot the nature of the algorithms where a lot of effort in the bandit literature is put to handle stochasticity. However, if you have good references that are related to the work, we can definitely add them in the related work\n\n5. Thank you for pointing out that the empirical performance is a strong argument for the paper, which in our opinion validates the use for the Lipschitz approach of the paper. The details and hyperparameters of the baselines are all provided in Section D.1 of the Appendix (3 pages detailing the algorithms as well as hyperparameters). As far as we can see, all the hyperparameters as well as code for all the algorithms is provided\n\n6. Thank you for pointing out the non-clarity of Algorithm 3 in Appendix. 
In order to fix this issue, we moved Figure 5 (previously in the Appendix), which illustrates the algorithm, into the main document to make the connection with OCTS clearer", " First, we would like to thank the reviewer for their feedback on the paper and the general positive review. We thank you for noticing that the paper aims at optimizing functions with cheap-to-evaluate cost. We try to address the main questions below:\n\n1. We now move the justification as to why Bayesian optimization methods are not included in the computational study directly into Section 6. Moreover, in order to make it clear from the beginning, we now stress in the introduction that we aim at providing algorithms that are tailored to optimize functions with cheap-to-evaluate cost. Recall that the Bayesian methods are not compared to the current methods, as their heavy computational cost to generate the next evaluation point does not make them suitable for cheap-to-evaluate black-boxes. In order to have a better idea, we computed the following time to sample the next evaluation points:\n\n| | OCTS | GA | EA | RS | RLS | GHC | SA | Bayesian |\n|---------------------------------------------|--------|--------|----------------|--------|--------|--------|--------|----------------------------------------------------------------|\n| Complexity to sample $x_{t+1}$ | $O(d)$ | $O(\\lambda)$ | $O(1)$ | $O(1)$ | $O(1)$ | $O(1)$ | $O(1)$ | Solving a BB problem of dim $d$ ($O(2^d)$ for an exact solution) |\n| Memory to compute $x_{t+1}$ | $t+1$ | $\\lambda (30)+1$ | $\\lambda (30)$ | $1$ | $2$ | $2$ | $2$ | $t+1$ |\n| Time to compute $x_{t+1}$ after $t=100$ | 0.001 (s) | 0.004 (s) | 0.0009 (s) | 0.0007 (s) | 0.0008 (s) | 0.0008 (s) | 0.0007 (s) | 62.00 (s) |\n\nWe took COMBO [41] for the Bayesian method as well as their official implementation. The time to compute $x_{t+1}$ is measured on the contamination problem (d=25) on an i7 CPU @ 1.80GHz with 16GB of RAM. As can be seen, OCTS is in the same order of magnitude as the other methods (milliseconds). On the other hand, Bayesian methods literally take 1 minute to query a new point. More precisely, since it takes ~1 hour for the current Bayesian method to perform 100 function evaluations, it might be even faster to perform an exhaustive search than querying a single point in some cases (where the function is cheap to evaluate). 
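As a rough back-of-envelope illustration of this last point (the per-evaluation cost below is an assumed figure, not a measurement):

```python
d = 25
bayesian_proposal = 62.0      # seconds per proposed point, from the table above
t_eval = 1e-6                 # assumed cost of a single cheap black-box evaluation
print((2 ** d) * t_eval)      # ~33.6 s to brute-force all 2^25 inputs, below 62 s
```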
However, to have an idea of a simple comparison of the proposed algorithms with Bayesian optimization, we performed the following experiment: we took the algorithms of the paper with the budget of the experiments set to 100*d^2 evaluations (as set in the paper) and compared them to Bayesian optimization with a budget of 100 evaluations.\n\n| | OCTS | GA | EA | RS | RLS | GHC | SA | Bayesian |\n|----------------|--------------|--------------|--------------|--------------|--------------|--------------|---------------|-----------------|\n| Number of eval | 62500 | 62500 | 62500 | 62500 | 62500 | 62500 | 62500 | 100 |\n| Contamination 0.0 | -21.35 (32s) | -21.42 (43s) | -21.35 (28s) | -21.57 (24s) | -21.52 (24s) | -21.61 (21s) | -21.43 (23s) | -21.57 (88min) |\n| Contamination 0.01 | -21.52 (34s) | -21.60 (45s) | -21.56 (30s) | -21.73 (23s) | -21.68 (22s) | -21.78 (22s) | -21.61 (24s) | -21.74 (91min) |\n| Contamination 0.0001 | -21.35 (35s) | -21.45 (45s) | -21.36 (25s) | -21.57 (23s) | -21.49 (23s) | -21.61 (21s) | -21.45 (23s) | -21.72 (87min) |\n\nAs can be seen, for cheap-to-evaluate black-box systems, we can only query 100 points in more than an hour with Bayesian methods, while with the other methods we can easily query more than 50000 points. Of course, with such a different budget this results in a less competitive algorithm (higher is better). However, Bayesian methods are suitable for problems where the black-box evaluation time is generally significantly larger than the time to generate the next sample point, and we strongly advise using them in this case.\n\n2. Thank you for pointing out the typo in the convergence rate.\n\n3. We apologize for the omission of the section about the potential negative societal impact. It is now in the document and here is an extract: \n\nIn our work, we proposed a novel methodology to optimize binary functions with cheap-to-evaluate cost. These new solvers are mostly agnostic to the specific application, and can be applied to a wide range of optimization problems (ranging from graph analysis to electronic design). Therefore, the societal and ethical impacts of our contribution are heavily dependent on the nature of the problems solved with the algorithm. We start by noting that beneficial applications of OCTS are thick on the ground, ranging from the design of more efficient telecommunication applications to the control of contaminations.", " First, we would like to thank the reviewer for their feedback. We try to provide an answer to the questions below:\n\n1. We agree with the reviewer that the word \"provable\" is indeed not appropriate in the context of black-box optimization. More precisely, we agree that only the algorithms that search all the $2^d$ points could be said to be provable in this setting. To avoid this confusion, we now use the term \"with provable guarantees\" in the paper.\n\n2. The solve/iteration time question is interesting. Since this time heavily depends on the implementation details, hardware and tricks, we only measured the solve time in the paper by recording the number of black-box calls. However, to provide some details about the computational time, we added the following experiments in the appendix. 
\n\n| | OCTS | GA | EA | RS | RLS | GHC | SA | Bayesian |\n|---------------------------------------------|--------|--------|----------------|--------|--------|--------|--------|----------------------------------------------------------------|\n| Complexity to sample $x_{t+1}$ | $O(d)$ | $O(\\lambda)$ | $O(1)$ | $O(1)$ | $O(1)$ | $O(1)$ | $O(1)$ | Solving a BB problem of dim $d$ ($O(2^d)$ for an exact solution) |\n| Memory to compute $x_{t+1}$ | $t+1$ | $\\lambda (30)+1$ | $\\lambda (30)$ | $1$ | $2$ | $2$ | $2$ | $t+1$ |\n| Time to compute $x_{t+1}$ after $t=100$ | 0.001 (s)| 0.004 (s) | 0.0009 (s) | 0.0007 (s) | 0.0008 (s) | 0.0008 (s) | 0.0007 (s) | 62.00 (s) |\n\nThe time to compute $x_{t+1}$ is measured on the contamination problem (d=25) on a i7 CPU @ 1.80GHz 1.99 GHz with 16GB of RAM. As it can be seen, OCTS is in the same order of magnitude as other methods and significantly faster than Bayesian methods. \n\n3. The choice of the tree was initially specified in line 793 of the appendix (e.g. the Tree of Section 3 with random initial point), but it is now detailed directly in Section 6. Moreover, to have a better understanding of the impact in practice, we performed (ten) additional runs of OCTS using the various trees reported below:\n\n| | Ising (20) | CT (20) | LABS (20) | MIS (20) | Ising (50) | CT (50) | LABS (50) | MIS (50) |\n|---------------|:----------:|:-----------:|:-----------:|:--------:|:-----------:|-------------|-------------|-------------|\n| $T + R$ | 20 (00) | 4.00 (00) | 7.33 (0.88) | 10 (00) | 50 (00) | 10 (00) | 5.17 (0.33) | 21 (1.0) |\n| $T+R^*$ | 20 (00) | 4.00 (00) | 7.00 (0.88) | 10 (00) | 50 (00) | 10 (00) | 5.22 (0.46) | 21.2 (1.6) |\n| $\\pi(T) + R$ | 20 (00) | 3.80 (0.12) | 6.97 (0.72) | 10 (00) | 45.2 (2.71) | 8.76 (0.08) | 5.19 (0.22) | 19.2 (2.0) |\n| $\\pi(T)+ R^*$ | 20 (00) | 3.84 (0.32) | 6.24(0.62) | 10 (00) | 46.4 (1.49) | 8.55 (0.25) | 5.22 (0.32) | 20.6 (1.01) |\n| $\\pi^*(T)+ R$ | 20 (00) | 3.8 (0.0) | 5.88(0.00) | 10 (00) | 50 (00) | 9.40 (00) | 4.32 (00) | 21.2 (0.1) |\n\n\nwhere $T$ denotes the tree of Figure 1 with $x_{l,i}=Bin_l(i) + \\vec{0}_{d-l}$, $R$ denotes a root node sampled uniformly, $\\pi(T)$ denotes a random permutation of the order of variables. Finally, $R^*$ set the root as the best points obtained from a RS with a budget of $d$ evaluations. $\\Pi^*(T)$ denotes the ordering where the variables in the tree are ranked according to the best function values recorded by switching the bit corresponding to the given variable and recording it ($d$ evaluations in total to get this ordering). \nAs it can be seen, on most test problems OCTS is robust to the choice of the root node in the sense that for a randomly chosen root (line T+R), the algorithm consistently finds similar optima with low std. Moreover, it is interesting to note that using random permutation ($\\pi(T)+R$) does not improve the stability of the algorithm, which is because on some problems (e.g. LABS and CT) there is a sequential link between the variables which is preserved by using $T$ and not when permuting the variables. Thus, it is recommended to keep the natural ordering of the variables. This is now included in the Appendix with a pointer in the main document.\n\n4 - 5 - 6 - 7. We thank the reviewer for pointing out the confusing statement on p.9: \"no efficient algorithm exists to find maximum independent sets\". It is now replaced with \"there is no known polynomial-time algorithm\" in the final document. 
Similarly, for MaxSAT in p.9, it is now written: \"optimum that no other algorithm could find by other baselines\". Finally, we added both in the introduction and experimental section that our goal is to design algorithms that aim to optimize cheap-to-evaluate functions. All the typos are now corrected.", " The paper proposes two novel methods for combinatorial black-box optimization (i.e. over an unconstrained binary domain) based on optimistic tree search, one based on a known Lipschitz constant (OLTS) and another one when it is unknown (OCTS). The general idea of the OLTS is to evaluate nodes in a tree with large upper bounds in their subtrees, where the upper bound is based on the Lipschitz constant and the diameter of the subtree. This is extended in OCTS when the Lipschitz constant is not known by searching a superset of nodes that would contain the node in OLTS. Both methods are proven to have linear convergence rates (with a dependence on the Lipschitz constant). Computational experiments show that OCTS outperform several other heuristic-based methods. The black-box methods proposed in the paper are very appealing: they are simple to implement, theoretically grounded, and appear to work well in practice. The approach appears to be original as far as I am aware. The computational section is sufficiently extensive, with six different problem classes and one experiment to illustrate the convergence rates, and the method generally outperforms the baselines. I particularly appreciate the theoretical guarantees and their computational analysis in Section 6.1. The paper could have benefited from a comparison with model-based methods, but I believe it is not too unreasonable to omit them given that they typically have more expensive iterations. The presentation is overall clear, but there are several minor issues that need to be addressed below.\n\nMost of my comments below are regarding presentation, which should be fixable. Assuming those are addressed, I recommend acceptance for this paper. These are only presentation issues, but they nevertheless should be addressed.\n\n1. The introduction claims that these methods are \"provable\", which is unfortunately not discussed further. A natural interpretation might be that the algorithm finds the provably optimal solution for some reasonable budget. However, at least OCTS does not return a provably optimal solution without exhaustively searching all of the $2^d$ points. On the other hand, if one would include exhaustive search in the definition of \"provable\", then this is not a meaningful claim. If there is no reasonable explanation for what \"provable\" means in this context (and I believe there is not), this qualification only adds confusion and should not be used.\n\n2. Could you include some discussion about solve/iteration time compared to the baselines, even if briefly? Preferably, if you have computational results on this, that would be best. My impression is that the iterations should be comparable to the baselines in the proposed method, but as is I cannot exactly verify if OCTS is indeed faster than other methods. This might be more suitable for the appendix but I would add a sentence referring to it in the main text.\n\n3. The paper defines the algorithms relative to a class of tree structures. However, the computational section does not specify exactly which tree was used. Could you mention that in the paper? In addition, I would have liked a brief discussion on the effect of using different trees (e.g. 
does the tree really make a difference?), but this is not essential.\n\n4. I would adjust the statement in p.9: \"no efficient algorithm exists to find maximum independent sets\". Although the problem is NP-hard, this is not accurate if this is interpreted as \"efficient in practice\". It is fine to say that there is no known polynomial-time algorithm (although this is already implied by the NP-hardness), but maximum independent set is reasonably tractable in practice.\n\n5. In MaxSAT in p.9, please clarify that the \"optimum that no other algorithm could find\" is with respect to the baselines examined in the paper. Since the previous sentence talks about specialized solvers, this clarification would help avoid the misinterpretation that this statement includes them as well.\n\n6. I am aware that this is mentioned in Section 2.1, but I suggest mentioning in the introduction that this method is tailored to black-box problems where evaluation is not expensive. This is so that, first, readers can quickly identify if this is a good approach for their black-box problem, and second, you immediately establish the scope of your method, i.e. you are not comparing against methods that may have expensive iterates such as model-based methods. If you have space, I would even briefly reiterate this when defining the baselines in Section 6.2.\n\n7. Please run a spell checker for the camera-ready version, there are many typos in the paper. Here are some of them: l.54, \"dimensionalitites\"; l.154, \"a-priory\"; l.163, $x_{l,i}$ and $\\mathcal{X}_{l,i}$ should be indexed by $l_t$ and $i_t$; l.195, \"Moroever\"; l.202, repeated \"order\", l.264: \"bt\", l.265: \"iteration\" should be plural; l.289: \"dimensionailites\"; l.318, \"runing\"; l.570: \"maximimum\"; l.782: \"Real-word problems\"; [18] and [19] are the same reference. No limitations besides the ones discussed above.", " This paper presents an algorithm for solving combinatorial optimization problems where the objective function is a \"black box\" accessible only via an oracle. The algorithm is targeted at problems where this oracle is relatively cheap (as opposed to the standard Bayesian optimization setting), and is accompanied by finite time termination guarantees. The core algorithm relies heavily on Lipschitz constants to guide search and prune the tree; as this constant is often not known, the authors present a variant that instead only relies on the existence of a Lipschitz constant. The authors conclude with a computational analysis of the performance of the algorithms as a function of the number of function evaluations. The paper: presents a novel algorithm in an area of interest to the NeurIPS community, includes interesting theoretical results, and is clearly written. The only weakness I can identify is the lack of a computational comparison against Bayesian optimization techniques (see \"Questions\"). * Please consider moving the qualification as to why Bayesian optimization methods are not included in the computational study from the Appendix to Section 6. I am also not sure how convincing I find the author's argument on this point: even if Bayesian optimization is much slower per iteration, if it nonetheless provides good solutions after relatively few function evaluations then this provides an interesting baseline for comparison with the new algorithms.\n* p22 l756: There is a typo in at least one of the convergence rates (I think the first). 
There is no explicit discussion of potential negative societal impact.", " The paper considers the black-box optimization of combinatorial binary functions. The functions are assumed to obey a Lipschitz condition given some metric on the hypercube. For the optimization problem, the authors propose two algorithms, depending on the knowledge of the Lipschitz constant. Both algorithms rely on tree search and optimistic upper bounds. Theoretical guarantees are provided for the convergence of the algorithms. The empirical work show that the algorithm with unknown Lipschitz constant (OCTS) outperforms the considered baselines on a variety of problems. The proposed algorithm is fairly natural given the Lipschitz assumption. The case of unknown Lipschitz constant is treated in a similar way as the DIRECT algorithm (although it is not referenced).\n\nThe theoretical results are straightforward, but nevertheless useful. \n\nThe binary tree is assumed as provided, but I would assume that the ordering of the indices might have significant influence on the performance. \n\nGiven the optimistic tree search approach, the problem is somewhat related to the combinatorial bandit problem. The main difference here is that the function is deterministic, which allows much stronger bounds, but I would assume some techniques from combinatorial bandits could carry over. \n\nThe empirical performance is a strong argument for the paper. The baselines are difficult to evaluate, since there is little detail provided regarding their implementation and parametrization. \n Algorithm 3 (referenced from Algorithm 2) is only in the supplementary material. This poses some problems in the readability of the paper that should be resolved. It is not clear how meaningful the Lipschitz condition is for the practical problems considered (beyond the constant that results from the discrete nature of the problem). ", " This paper addresses the problem of combinatorial black-box optimization. The solution is built upon a tree-structure search procedure with optimistic search strategy. The contribution of the paper in my opinion is two-fold: 1) Algorithmically, it designs a new combinatorial black-box optimization solver OLTS (and its practical variant, OCTS) by adapting the optimistic strategy applied on tree-search optimizer. 2) Theoretically, it provides convergence analysis on the proposed solver (and its variant OCTS) which is shown to be superior than random search. Strengths: \n1) The structure of the paper is clear and the paper is overall well-written. The clarity is in general good, except for a few points that will be discussed in the weakness part. \n\n2) The problem of combinatorial black-box optimization is an important problem that has vast applicability of various domains, including machine learning. \n\n3) The paper provides the first finite-time linear convergence rates for the problem. It is a significant improvement compared to the logarithmic rates of baselines (random search). \n\n4) The empirical results are promising. The algorithm, though simple, has been shown to be outperforming the baselines on a set of benchmark black-box combinatorial optimization problems, including LABS, MIS, Ising, MaxSAT, and Contamination. \n\n\nWeakness:\n1) The novelty of the proposed solvers, OLTS and OCTS, is limited. Both the tree-based search and the optimistic strategy have been well studied under similar contexts. 
The main critique from me is not that the algorithms are not novel, but that the novelty is somewhat overclaimed.\n\nFor example, the tree based search has been discussed in a few previous papers (e.g., in [39] and also UCT -- UCB for trees). But this has not been acknowledged in the paper. It appears that the tree structure is first proposed in this paper. \n\nAs another example, the optimistic strategy for estimating the potential of the tree nodes is also adapted from [39]. Though the paper lists three major differences of OLTS/OCTS vs [39], it still seems incremental. Also, it is not clearly explained why these differences are made to adapt to the tree structure and what are the advantages. \n\n2) It is not clear what are the intuitions of l(n) and n_c in the propositions and theorems, so that it is hard to understand how tight the derived convergence bounds are in the respective theorems/propositions. At least from a first look, the bounds do not seem tight, and therefore the theory is not as informative. The paper would be stronger if these are better explained/clarified. \n\n3) Bayesian optimization is an important category of methods for black-box combinatorial optimization problems, but it is not included in the set of baselines. Why is it? It would be good to explain.\n\n4) The empirical results are promising in general. One question from me, though, is that, what are the reasons that certain problems are selected for evaluation. For example, reference [18] and reference [41] each provided a set of benchmark problems, but this paper selected a subset from each of these two references instead of evaluating all of the settings in either one of them. It does not seem that the proposed OCTS cannot work on the other problems, e.g., the neural architecture search benchmark which is of potential high interest to the ML community. \n\nMinor aspects: \n1) The introduction well motivates the paper, but is a bit too condensed. Perhaps better to split it into multiple paragraphs. \n2) Line 186: I_h seems to be a typo, it should be I_l\n3) Line 270: Proposition A.3 -- is it a typo?\n Please see some questions in the detailed comments above. Below are some other questions. I would not expect the authors to answer all of them but I am listing them here nonetheless. \n\n1) Is the tree representation of the combinatorial space first proposed in this paper, or was it proposed by some existing papers but perhaps in a different setting and goal?\n\n2) Why is Bayesian optimization not compared as a baseline? \n\n3) The definition of l(n) is a little bit involved, so that it makes it hard to understand how well bounded the solution of OLTS/OCTS would be. Could you illustrate a bit more about the intuition of l(n)? Perhaps using a concrete example to show what is a typical value of l(n). \n\n4) Similar question to n_c in Theorems 4.3 and 5.3: what is the intuition of n_c, and what is a typical value for it? This is important because if n_c is very large (which seems to be from its exponential form), then Theorems 4.3 and 5.3 would be less informative. \n\n5) In the empirical results, the MIS-30 setting yields no good strategy, but in the MIS-70 setting, OCTS and a few other methods achieve optimal performance in the first few iterations. 
Why does dimension make the structure of the problem so different so that the difficulty of finding a good solution is totally different?\n\n\n The authors claimed that they discussed potential limitations (1.b) and negative societal impacts of the work (1.c) in the Appendix, but I cannot see any obvious discussions of this kind. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4, 3 ]
[ "ksvNtferRN", "QGxr8kA6WU1", "W2D2bB2kGQQ", "Bg0E893mOR", "0cOmCIejYB", "Y91yYhD2-5zp", "Ef5r10DluXk", "j6GbWfY5j_y", "VVyfDbYztk5", "PJ672Qkdxk", "vHpY-aULDH7", "SXqhojOaJPB", "nips_2022_JGLW4DvX11F", "nips_2022_JGLW4DvX11F", "nips_2022_JGLW4DvX11F", "nips_2022_JGLW4DvX11F" ]
nips_2022_GKfNB4BegL
Recurrent Video Restoration Transformer with Guided Deformable Attention
Video restoration aims at restoring multiple high-quality frames from multiple low-quality frames. Existing video restoration methods generally fall into two extreme cases, i.e., they either restore all frames in parallel or restore the video frame by frame in a recurrent way, which would result in different merits and drawbacks. Typically, the former has the advantage of temporal information fusion. However, it suffers from large model size and intensive memory consumption; the latter has a relatively small model size as it shares parameters across frames; however, it lacks long-range dependency modeling ability and parallelizability. In this paper, we attempt to integrate the advantages of the two cases by proposing a recurrent video restoration transformer, namely RVRT. RVRT processes local neighboring frames in parallel within a globally recurrent framework which can achieve a good trade-off between model size, effectiveness, and efficiency. Specifically, RVRT divides the video into multiple clips and uses the previously inferred clip feature to estimate the subsequent clip feature. Within each clip, different frame features are jointly updated with implicit feature aggregation. Across different clips, the guided deformable attention is designed for clip-to-clip alignment, which predicts multiple relevant locations from the whole inferred clip and aggregates their features by the attention mechanism. Extensive experiments on video super-resolution, deblurring, and denoising show that the proposed RVRT achieves state-of-the-art performance on benchmark datasets with balanced model size, testing memory and runtime.
Accept
The paper introduces a recurrent video restoration transformer with guided attention, which combines recurrent and parallel methods to some extent. All reviewers found that the proposed method is sound and that the experiments are adequate to demonstrate its effectiveness.
train
[ "9RfLAMTp7KI", "xBF1bnvxURG", "y5FpW7w399k", "YVqmTm2pTS", "UcRo-K6kmQ", "AIohC8Rt0LF", "0rca7LYHs9h", "7V7bRaikglL", "2jogGt8dv29" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your responses. The authors address my concerns.", " Thank the authors for their detailed responses. My concerns have been well addressed. I recommend acceptance of this work. ", " I appreciate your responses. All of my concerns have been addressed by the response. After seeing the rebuttal and other review comments, I would like to keep my initial score.", " Thanks for reviewing our paper. In the following we respond to the questions point by point.\n\n>1, What are the training time and memory cost for different tasks?\n\nWe train all models on 8 A100 GPUs. The training time and memory cost per GPU are provided in the following table, which will be added to the paper.\n\n| Task | Training Time (day) | Training Memory (MB) |\n|:-----------------------|:-------------------:|:--------------------:|\n| Video Super-Resolution | 16.6 | 39019 |\n| Video Deblurring | 9.7 | 29263 |\n| Video Denoising | 9.7 | 29269 |\n\n\n>2, Typo in the caption of Fig. 2.\n\nThanks for pointing that out. We have fixed the typo.\n\n>3, The comparison of temporal modelling ability is interesting and is suggested to be put in the main paper.\n\nThanks for your kind advice. We will move it from the supplementary to the main paper.\n", " Thanks for reviewing our paper. In the following we respond to the questions point by point.\n\n>1, More explanations about the proposed module are expected. \n\n>(1) For example, from table 3, the MLP module *** Why? \n\nAs discussed in L210 and L258 of the paper, MLP plays an essential role since the deformable attention part only aggregates information spatially and does not allow inter-channel interaction. To be more specific, in GDA, features from different locations are added together by weighted sum (the predicted weight for each location is just a scalar). Therefore, one channel cannot interact with other channels. Our solution is to add a MLP after deformable attention, so that information from different channels are aggregated and interact with each other.\n\n>(2) If we introduce the MLP *** of GDA? \n\nWe believe deformable attention and MLP in GDA are inseparable. Deformable attention plays the main role in alignment, because it can aggregate information from multiple relevant locations. As analyzed above, MLP also plays an important role (although less important compared with deformable attention), because it aggregates information from different channels. Both of these two parts are necessary for good performance.\n\nWe further conduct ablation studies in the following table to quantitatively evaluate the effectiveness of these two parts.\nAs we can see, when we remove deformable attention or MLP, the performance drops by 4.17dB and 0.27dB, respectively. This indicates that deformable attention plays the main role in GDA. \n\nBesides, according to your advice, we add the MLP to DCN for better comparison between deformable attention and deformable convolution. DCN + MLP only brings a minor PSNR improvement of 0.02dB and is still 0.15dB worse than GDA (deformable attention + MLP), which demonstrates that the gain of GDA mainly comes from deformable attention.\n\n|Method|Deformable Attention/Convolution|MLP|PSNR|\n|:-:|:-:|:-:|:-:|\n|DCN + MLP|√|√|31.95|\n|DCN(original)|√||31.93|\n|GDA (no MLP)|√||31.83|\n|GDA (no attention)||√|28.68|\n|GDA (proposed)|√|√|32.10|\n\n>(3) In addition, since GDA utilizes flow *** in BasicVSR++.\n\nFor DCN, we used flow guidance for fair comparison (similar to BasicVSR++). 
Removing the optical flow guidance leads to a significant PSNR drop of 0.91dB for DCN. We will clarify that in the paper.\n\n>2, In Eq. (5), the predicted M offsets *** different settings of m?\n\nIn Eq. (5) of the paper, the average of $M$ (offset number) is only used for updating the optical flows in different layers, which can lead to better performance, as shown in Table 3 of the paper. For deformable attention, as indicated in Eq. (7) and (8), we still sample and utilize multiple offsets.\n\nAs described in L234, $M$ is set as 9 in experiments. To investigate the impact of $M$, we conduct an ablation study in the following table. As one can see, larger $M$ may result in better performance at the cost of more testing memory and time, since that GDA can utilize more information from more locations. However, when $M$ is very large (e.g., $M=49$), the performance may become worse, possibly because there might be too many irrelevant locations. Therefore, we choose $M=9$ for a trade-off of performance, memory and speed. \n\n|Offset Number|1|9|25|49|\n|:-|:-:|:-:|:-:|:-:|\n|PSNR (dB)|30.03|32.10|32.21|31.85|\n|Testing Memory (MB)|983|1036|1141|1423|\n|Testing Time (ms)|128|143|194|258|\n\n>3, As shown in Table 7, *** smaller than 20. Why?\n\nWhen sigma is large, the video is heavily corrupted and often requires more spatio-temporal information (or larger receptive fields) for restoration (see Ref. 1 and 2). Based on the recurrent transformer architecture and guided deformable attention, RVRT is effective in long-range dependency modeling. It can utilize information from more frames (temporally) and larger areas (spatially), leading to good performance for large sigma. When sigma is small, it is relatively easy to restore the video from a limited receptive field size. In this case, VRT performs better, possibly due to a larger model size (18.4M) than RVRT (12.8M). To prove it, we show the performance of RVRT when it has a similar number of parameters to VRT (by increasing channel sizes). As one can see, RVRT outperforms VRT for all sigmas under fair comparison. \n\n|$\\sigma$|#Para (M)|10|20|30|40|50|\n|:-|:-:|:-:|:-:|:-:|:-:| :-:|\n|VRT|18.4|40.82|38.15|36.52|35.32|34.36|\n|RVRT|12.8|40.57|38.05|36.57|35.47|34.57 |\n|RVRT (large)|18.3|40.91|38.40|36.82|35.72|34.76|\n\nRef: \n\n[1] Weighted nuclear norm minimization with application to image denoising, CVPR2014\n\n[2] Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising, TIP2017\n\n>4, Can GDA be directly … the impact of this work will be great.\n\nYes. It can be used as a plug-and-play module for alignment in most restoration networks. It can also be used for other tasks such as reference SR and cross-modal learning. The codes will be made publically available.", " Thanks for reviewing our paper. In the following we respond to the questions point by point.\n\n>1.1. From Tab. 1, when N=3, the performance is worse than N=2. When N=3, the frames within each clip can aggregate more information than N=2. Besides, it also aggregates more information from the neighboring clips. Therefore, it is unreasonable that the performance of RVRT saturates when N=3. More detailed analyses are supposed to be provided.\n\nThere are two possible reasons for the performance saturation when $N=3$. First, within each clip, different frames are implicitly aligned by the self-attention mechanism, which may not be good at dealing with large misalignments. 
For the REDS4 testing set, the average optical flow magnitude for neighboring frames (e.g., between frame 1 and 2) and odd/even frames (e.g., between frames 1 and 3) are 3.89 and 6.91, respectively. However, the spatial window size of RVRT is set as $8\\times 8$ due to memory limitation. Therefore, when $N=3$, RVRT may not be able to deal with large misalignments between the first and the third frame in the clip. One solution to this is to increase the spatial window size or add an alignment module for the local transformer part. We leave it as a future work.\n\nThe second reason is that the long-distance optical flows might be inaccurate, as pointed out in L248 of the paper. For $N=3$, we need to compute the optical flow between frame 1 and 6. For the sake of computation efficiency, we derive distant flows based on neighboring flows (e.g., we derive flow between frames 1 and 6 based on intermediate flows $1\\rightarrow 2$, $2\\rightarrow 3$, $3\\rightarrow 4$, $4\\rightarrow 5$ and $5\\rightarrow 6$), which leads to inaccurate optical flows and hinders the performance improvement. To further validate it, we directly estimate all optical flows. In this case, the PSNR rises to 32.21dB for $N=3$, which shows better performance than $N=2$, at the expense of longer testing time (rises from 143ms to 201ms). \n\n>1.2. From Tab. 6, when BasicVSR++ is equipped with RSTB, the performance gains 0.26dB and achieves 32.61dB. Compared to RVTB, its performance is only 0.14dB lower with 1.2M fewer parameters. Therefore, it is hard to distinguish the effectiveness of the core idea. What is the performance of BasicVSR++ + RSTB, whose parameters are the same as RVRT?\n\nWe increase the channel size of “BasicVSR++ + RSTB” from 144 to 156, so that \"BasicVSR++ + RSTB\" has similar parameter numbers to RVRT. As we can see in the following table, directly increasing the channel size brings little improvement. RVRT is still 0.11dB better than \"BasicVSR++ + RSTB\", which proves that the performance gain mainly comes from architectural design rather than model size. \n\n| Method|#Param (M)|PSNR (dB)|\n|:-|:-:|:-:|\n| BasicVSR++ + RSTB (channel=144)|9.3|32.61|\n| BasicVSR++ + RSTB (channel=156)|10.7|32.64|\n| RVRT|10.8|32.75|\n\n>2.1. From Tab. 5, it is clear that the performance of RVRT is better than VRT on Vid4 and REDS. However, on Vimeo-90K-T and UM10 datasets, RVRT achieves inferior performance.\n\nWe argue that it is possibily due to dataset differences in content and motion distributions. First, different datasets may behave differently in testing due to different video contents. Second, as shown in the following table, compared with VRT, RVRT performs differently on fast/medium/slow motion videos in Vimeo-90K. This indicates that the final performance may also vary with motion conditions.\n\n|Method|fast|medium|slow|\n|:-|:-:|:-:|:-:|\n|VRT|41.44|38.42|34.98|\n|RVRT|41.25|38.37|35.07|\n\n>2.2. From Tab. 7, when sigma is 10 and 20, the performance of RVRT is inferior to VRT. However, when sigma is larger than 30, the performance of RVRT is comparable to or better than VRT.\n\nWhen sigma is large, the video is heavily corrupted and often requires more spatio-temporal information (or larger receptive fields) for restoration (see Ref. 1 and 2). Based on the recurrent transformer architecture and guided deformable attention, RVRT is effective in long-range dependency modeling. It can utilize information from more frames (temporally) and larger areas (spatially), leading to good performance for large sigma. 
When sigma is small, it is relatively easy to restore the video from a limited receptive field size. In this case, VRT performs better, possibly due to a larger model size (18.4M) than RVRT (12.8M). To prove it, we show the performance of RVRT when it has a similar number of parameters to VRT (by increasing channel sizes). As one can see, RVRT outperforms VRT for all sigmas under fair comparison. \n\n|$\\sigma$|#Para (M)|10|20|30|40|50|\n|:-|:-:|:-:|:-:|:-:|:-:| :-:|\n|VRT|18.4|40.82|38.15|36.52|35.32|34.36|\n|RVRT|12.8|40.57|38.05|36.57|35.47|34.57 |\n|RVRT (large)|18.3|40.91|38.40|36.82|35.72|34.76|\n\nRef: \n\n[1] Weighted nuclear norm minimization with application to image denoising, CVPR2014\n\n[2] Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising, TIP2017", " This paper tries to integrate the advantages of parallel methods and recurrent methods, and proposes a recurrent video restoration transformer (RVRT). RVRT first divides the video into fixed-length video clips and refines the subsequent clip feature based on the previously inferred clip feature and the old features of the current clip from shallower layers. Within each clip, the authors use a self-attention mechanism to fuse the feature. Across the clips, the authors propose guided deformable attention (GDA) to achieve video-to-video alignment. The proposed RVRT achieves state-of-the-art performance in video super-resolution, video deblurring, and video denoising. - Strengths:\n\n1. Parallel methods and recurrent methods are dominant strategies in the VSR area, and most researchers only adopt one strategy in their work. Therefore, the attempt of adopting these two strategies in a single framework is valuable.\n\n2. From Tab. 5, Tab. 7, Tab. 8, and Tab. 9, RVRT achieves state-of-the-art performance in video super-resolution (BI-REDS/Vid4, BD-Vid4), video deblurring, and video denoising. \n\n- Weaknesses:\n\n1. The effectiveness of the core idea (which takes advantage of parallel and recurrent methods) is not well verified. \n\n 1.1. From Tab. 1, when N=3, the performance is worse than N=2. When N=3, the frames within each clip can aggregate more information (the information of the other 2 frames) than N=2. Besides, it also aggregates more information from the neighboring clips (by aggregating the features of 3 frames). Therefore, it is unreasonable that the performance of RVRT saturates when N=3. More detailed analyses (visualization or experiments) are supposed to be provided.\n\n 1.2. From Tab. 6, when BasicVSR++ is equipped with RSTB, the performance gains 0.26dB and achieves 32.61dB. Compared to RVTB, its performance is only 0.14dB lower with 1.2M fewer parameters. Therefore, it is hard to distinguish the effectiveness of the core idea. What is the performance of BasicVSR++ + RSTB, whose parameters are the same as RVRT? \n\n2. Some results are confusing. Wait for the authors’ explanations/analyses.\n\n 2.1. From Tab. 5, it is clear that the performance of RVRT is better than VRT on Vid4 and REDS. However, on Vimeo-90K-T and UM10 datasets, RVRT achieves inferior performance. \n\n 2.2. From Tab. 7, when sigma is 10 and 20, the performance of RVRT is inferior to VRT. However, when sigma is larger than 30, the performance of RVRT is comparable to or better than VRT. \n See questions in Weaknesses Yes", " This work proposed a recurrent video restoration transformer. It divides the video into multiple clips and the correlations among clips and inside clips are explored jointly. 
Within each clip, different frame features (2 frames in this work) are updated with implicit feature aggregation. Across clips, guided deformable attention is proposed to perform clip-to-clip alignment. Experimental results demonstrate that the proposed method achieves SOTA results on video SR, deblurring, and denoising. Strengths:\n1.\tThe proposed guided deformable attention for clip-to-clip alignment is the main technical contribution of this work. The ablation study also demonstrates the effectiveness of the proposed module. \n2.\tThe proposed method can efficiently utilize long-term correlations. \n3.\tThe proposed method works well on multiple video restoration tasks. \n\nWeaknesses:\nMore explanations about the proposed module are expected. For example, from Table 3, the MLP module is essential for high performance (0.17 dB gain). Why? If we introduce the MLP module to DCN, will it outperform the proposed GDA? In other words, the gain of GDA over DCN comes from which part of GDA? In addition, since GDA utilizes flow for guidance, a fairer comparison with GDA should be the flow-guided DCA, which was utilized in BasicVSR++. \n\nIn Eq. (5), the predicted M offsets are averaged. Why? In DCN, we utilize multiple offsets for the same pixel and it is demonstrated to be more effective than the single-offset-based optical flow (each pixel only has one offset). What is the value of m in experiments? Are the results sensitive to different settings of m? \n\nAs shown in Table 7, the denoising performance of the proposed method is worse than the compared methods when the sigma is smaller than 20. Why? \n\n\n The authors are suggested to give responses to my concerns about the proposed GDA module. Can it be directly plugged into benchmark video restoration networks (i.e., replacing their original alignment modules) to further improve their performance? If yes, the impact of this work will be great. Yes, the authors have addressed their limitations. ", " In this paper, the authors propose a general recurrent video restoration model for various video restoration tasks, including video SR, video deblurring and video denoising. It divides the video sequence into multiple video clips and deals with them by two strategies: globally, it propagates clip features in a recurrent way to reuse model parameters and save memory; locally, it jointly updates different frame features from one clip in parallel. Besides, since it processes the video as clips, it proposes the guided deformable attention for video clip-to-clip alignment. Extensive experiments on various benchmark datasets show the effectiveness and generalizability of the model. The proposed RVRT architecture is novel, effective and technically sound. Unlike previous methods that are either recurrent or parallel, RVRT takes advantage of both directions and alleviates their corresponding problems. As validated in ablation studies and comparisons with existing methods, RVRT makes a good trade-off. It is much smaller, quicker and more memory efficient than parallel methods, and, still, it achieves state-of-the-art performance on benchmark datasets. RVRT also provides a useful way to tackle the information loss and noise amplification problems that are inherent to recurrent models, as proved in the supplementary. Furthermore, it proposes a guided deformable attention module that is directly applicable for clip-to-clip alignment, in which relevant features from different frames are aggregated dynamically and efficiently. 
This module is well illustrated and is proved to be effective in experiments. \n\nOverall, this paper makes a good contribution to video restoration and conducts extensive experiments to support its arguments. It provides an alternative video sequence modelling option, rather than focusing on the design of specific modules. It should also be an interesting paper for the wider community. I think this paper should be accepted.\n\n\nPros:\n\n1, It proposes a novel architecture that extracts features locally in parallel and accumulates information globally in a recurrent way. It has many benefits due to reduced sequence length, larger hidden state and local parallel processing.\n\n2, It proposes a one-stage guided deformable attention module for dynamic and global feature aggregation in clip-to-clip alignment. It fits the overall architecture well.\n\n3, The proposed method is evaluated on most standard video restoration benchmarks (8 different datasets), and the performance is solid and convincing. In particular, on deblurring and denoising, it remains the surprising performance of transformer models and outperforms most of its competitor by up to 2.27~2.37dB. \n\n4, The paper writing is fairly good.\n\nCons:\n\n1, What are the training time and memory cost for different tasks?\n\n2, Typo in the caption of Fig. 2.\n\n3, The comparison of temporal modelling ability is interesting and is suggested to be put in the main paper. See weakness above. The authors have discussed the limitations and potential societal impact." ]
[ -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "0rca7LYHs9h", "7V7bRaikglL", "YVqmTm2pTS", "2jogGt8dv29", "7V7bRaikglL", "0rca7LYHs9h", "nips_2022_GKfNB4BegL", "nips_2022_GKfNB4BegL", "nips_2022_GKfNB4BegL" ]
nips_2022_kCTZt0b9DQz
Prototypical VoteNet for Few-Shot 3D Point Cloud Object Detection
Most existing 3D point cloud object detection approaches heavily rely on large amounts of labeled training data. However, the labeling process is costly and time-consuming. This paper considers few-shot 3D point cloud object detection, where only a few annotated samples of novel classes are needed with abundant samples of base classes. To this end, we propose Prototypical VoteNet to recognize and localize novel instances, which incorporates two new modules: Prototypical Vote Module (PVM) and Prototypical Head Module (PHM). Specifically, as the 3D basic geometric structures can be shared among categories, PVM is designed to leverage class-agnostic geometric prototypes, which are learned from base classes, to refine local features of novel categories. Then PHM is proposed to utilize class prototypes to enhance the global feature of each object, facilitating subsequent object localization and classification, which is trained by the episodic training strategy. To evaluate the model in this new setting, we contribute two new benchmark datasets, FS-ScanNet and FS-SUNRGBD. We conduct extensive experiments to demonstrate the effectiveness of Prototypical VoteNet, and our proposed method shows significant and consistent improvements compared to baselines on two benchmark datasets.
Accept
The paper received mixed reviews. Two reviewers were fairly positive, on the basis of the novelty of the problem, the quality of the results, the introduction of datasets and benchmarks, and the proposed method. Although the method combines two existing solutions to point cloud detection and few-shot learning, these reviewers considered that the paper shows that this combination is not trivial. They liked the implementation of the PVM and PHM modules as a means of disentangling the feature embedding and detection. The negative reviewers raised a number of concerns, including a diverging opinion that the combination of the two strategies is somewhat trivial and that the paper lacks technical novelty, the fact that the datasets are mostly a combination of existing ones, that several baselines could have been borrowed from the 2D few-shot literature for a more extensive evaluation, and other concerns of detail. The authors provided a very thorough rebuttal, which addressed many of these issues. As a result, one of the negative reviewers mentioned that, despite the limitations, the paper is worth publishing, and the other engaged in an extensive discussion with the authors, oscillating between positive and negative positions towards the paper at different points of the interaction. After discussion, there was a sense that no reviewer significantly opposed the publication of the paper. While the limitations above (somewhat limited technical and dataset novelty) hold, the novel nature of the problem, its potential interest for future work by the community, and the results achieved by the method were found to justify publication.
train
[ "du7vQwNT3bV", "6Sv5xU7jwR", "15c0W0nz_fw", "c1FWpz0jfRi", "cp6OeYvJOSs", "ZrwNabRSX3h", "Dp1o2o0djGI", "j2MHVvt1DQk", "Qr0SK31Gl6", "MMSmLz1orKP", "5BaOvaoCXr", "wbtPtMOuEb2", "IU383M4S6Jl", "3CL_rVet-IP", "DKMXV7m6Hsm", "q0i9Wignlw0", "cexKDfbqTwv", "fTI8V3h_jf3", "TOEAOs1OYrC" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer ymyp,\n\nThank you so much for your time and efforts in assessing our paper. Hope our rebuttal has addressed your concerns. We are happy to discuss with you further if you still have other concerns. Thanks for helping improve our paper.", " Thank you for the detailed and through response to my concerns. I have no additional questions, and the results from the extra baselines are quite convincing. Thank you for putting this together in this limited time. I look forward to the final version of the work.", " **Q6. The consistency between the performance of few-shot and unbalance setting.** \n\n**A6.** Thanks for your time and comments. You are very responsible and we appreciate your efforts a lot.\n\nHere we would like to further clarify the difference between the few-shot learning and the imbalance problem in ScanNet and SUN RGB-D, and address the consistency concerns of performance improvement.\n\n1. **Few-shot learning focuses on feature learning under the circumstance of severely scarce data**. For example, in the widely-used 2D few-shot learning benchmark [1], the instance number for novel classes is from 1 to 5, as studied in our paper. When data is extremely scarce, the feature learning process will suffer from serious overfitting. As shown in the following table, when the bathtub class has only a few samples from 1 to 5, the original VoteNet can not learn well, while our proposed method can outperform the baseline VoteNet by a large margin. \nHowever, in the original dataset, which is an imbalanced dataset, with the baseline VoteNet, the performance of the bathtub category is already pretty high (as shown in the table below, and you can check it by this link [2] in epoch 21), approaching perfect results. Therefore, our model can only deliver a smaller improvement.\n| Bathtub performance | VoteNet | VoteNet | Ours | Ours | \n|-----------------------------|:--------:|:-------:|:-----:|:-----:|\n| | AP25 | AP50 | AP25 | AP50 | \n| 1-shot | 0.74 | 0.01 | 9.01 | 7.63 |\n| 3-shot | 12.96 | 1.26 | 22.96 | 8.60 | \n| 5-shot | 17.57 | 3.25 | 30.33 | 12.87 | \n| **113-shot (original dataset)** | **91.86** | **84.48** | **92.57** | **85.36** | \n\n2. The **imbalance problem** focuses on how to learn good representations and classifiers that can deliver good performance for both head and tail categories. **It is designed to address the problem of unequal sample sizes within the dataset, but not necessarily the few-shot problem.** For example, in ScanNet V2, the category (Bathtub) with the minimum samples has 113 instances, and the performance for this class already achieves a very high performance, as shown in the table above. \n**To our best knowledge, all few-shot benchmarks do not use such a large number of samples for novel classes, so this is no longer a problem of few-shot learning.** While our method is specifically designed for few-shot 3D detection, our model may not improve performance on the original dataset if it already contains a sufficient number of training instances (e.g., ScanNet) for all classes. \n\n3. In a widely-used long-tailed learning benchmark [3], it sets the many-shot classes (classes each with over training 100 samples), medium-shot classes (classes each with 20∼100 training samples) and few-shot classes (classes under 20 training samples). Therefore, this is another evidence that we cannot consider a dataset with a minimum sample number greater than 100 as a few-shot problem.\n\n4. 
In terms of **performance consistency**, as shown in the table in Q5, with the imbalance becoming more severe (e.g., 25P, 50P), our approach outperforms the baseline more. In the even more extreme case, the imbalance problem will degrade to few-shot learning, and our proposed method will benefit more. \n\n5. We would also share our understanding on **the practical significance of few-shot learning**. For some scenarios, such as autonomous driving and the medical domain, it’s very challenging to gather many samples (such as car accidents or rare diseases) where few-shot learning will help a lot. \nOn the other hand, in the open-world setting, considering the number of potential classes, few-shot learning will also help a lot to significantly alleviate the burden of data collection and annotation. Besides, an existing study [3] already demonstrated that the open-world long-tailed recognition will encounter the few-shot problem and develop methods inspired by the few-shot domain to address the issue. \n\nThanks again for your time and efforts in assessing our paper. **Our code, as well as the new benchmark, will be released to facilitate future works.**\n\n[1] Sung, Flood, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M. Hospedales. \"Learning to compare: Relation network for few-shot learning.\" In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1199-1208. 2018.\n\n[2] https://download.openmmlab.com/mmdetection3d/v1.0.0_models/votenet/votenet_8x8_scannet-3d-18class/votenet_8x8_scannet-3d-18class_20210823_234503.log.json, Log of the publicly trained VoteNet by OpenMMab. \n\n[3] Liu, Ziwei, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X. Yu. \"Large-scale long-tailed recognition in an open world.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2537-2546. 2019.\n\n\n\n\n", " After re-reading your paper and carefully thinking about it, I still don't understand why your method is not better than Votenet. There should be consistency between the performance of few-shot and unbalance setting. As far as I know, both datasets are very unbalanced, (*e.g.*, the number of chairs is 4357 and 113 of bathtubs. ). That phenomenon makes me confused. Considering the previous bug in the evaluation code, it's very hard for me to judge the reliability of this paper. Terribly sorry, I may consider revoking my previous decision of raising the score.", " Dear Reviewer ZU9c,\n\nWe sincerely thank the reviewer for the constructive feedback and support. We will update the new results in our final version. Thanks again for your time and efforts in assessing our paper. ", " Thanks for your careful reply. You're very responsible and conscientious. Even though the responses didn't completely solve my concerns, such as limited improvement compared to Votenet on the full dataset, I would still raise my score (4->5) for your effort and honesty. By the way, I hope you will pay attention to the normal or long-tail settings rather than few-shot task. Please don't mind, in my opinion, few-shot is of little significance and limited application. Thanks for your careful reply again. Hope our community can have more responsible reviews and responses. Good Luck!\n\nPlease update the new results in the final version. Thanks. ", " Thank you so much for your time and efforts on assessing our paper. Your valuable comments help improve our paper a lot.\n\n**Q4. 
Group-Free vs 3DETR in 3D few-shot detection.**\n\n**A4.** Thanks for your comment. As shown in the following Table (3-shot in split-1 of FS-ScanNet), although Group-Free performs slightly worse than 3DETR on the novel categories, we found that on the base categories (where abundant training samples exist), the Group-Free based method is better than the 3DETR based method. This echoes the performance of the original papers in the fully-supervised setting.\n\nOne possible reason for the lower performance of Group-Free in the few-shot setting is that the learnable proposal candidate generation stage might be biased toward base categories, as elaborated below. \n\nSpecifically, Group-Free first obtains initial object candidates using k-Closest Points Sampling (default implementation) which needs a learned objectness classifier to predict the probability of each point belonging to a ground-truth object candidate. Then, points with high classification scores are further used as queries for the second stage (i.e., decoding process) to predict 3D boxes. Note that in the few-shot setting, the learning of objectness classifiers can be easily dominated by the base classes due to the small number of novel objects. Therefore, the proposal candidate stage might be easily biased toward base classes, which impedes the decoding of novel samples. \n\nAs for 3DETR, a randomly sampling method is used to generate initial object candidates. Therefore, it is not biased towards base classes, which can lead to better generalization ability on the few-shot problem. In the future work, we will equip Group-Free with a randomly sampling method to see whether it benefits Group-Free on the few-shot problem or not. \n\n| | Novel | Novel | Base | Base | \n|---------------------------|:------:|:-----:|:-----:|:-----:|\n| Method | AP25 | AP50 | AP25 | AP50 | \n| GroupFree[3] + FADI [2] | 25.73 | 11.02 | 64.86 | 44.01 | \n| 3DETR[4] + FADI [2] | 26.24 | 11.12 | 62.56 | 42.10 | \n| GroupFree[3] + DeFRCN [1] | 25.22 | 10.90 | 64.95 | 44.28 | \n| 3DETR[4] + DeFRCN [1] | 26.01 | 10.95 | 62.43 | 42.26 | \n\n**Q5. The performance of the imbalance problem.**\n\n**A5.** We are grateful for your careful review. Much appreciate your comments. After receiving your comment, we carefully checked all codes and found one error in our evaluation code for the imbalance problem. The updated results are shown in the following Table. Note that we achieve comparable performance in the original dataset setting. With the imbalance becoming more severe (e.g., 25P, 50P), our approach outperforms the baseline more. \n\nNote that our focus is on few-shot 3D object detection, where representation learning of new categories becomes the top consideration of algorithm design. This few-shot problem is more useful for scenarios where many new categories appear frequently and require the system to quickly adapt to recognize them. \n\nHowever, the long-tailed problem focuses on how to learn good representations and classifiers that can deliver good performance for both head and tail categories. We believe that dedicated designs can further improve the performance of long-tailed 3D object detection. 
We will also add the results and analysis for the long-tailed setting in our paper and hope to inspire more future investigations.\n\nThe new testing logs can be seen at the anonymous link:\nhttps://drive.google.com/drive/folders/18S2SxEEtqYGb1Qb3njylWDqGDG2wv8Mo?usp=sharing\n\n| ScanNet V2 | P (Original Dataset) | P (Original Dataset) | 10P | 10P | 25P | 25P | 50P | 50P |\n|-------------|:--------------------:|:--------------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\n| Method | AP25 | AP50 | AP25 | AP50 | AP25 | AP50 | AP25 | AP50 |\n| VoteNet | 62.34\t | 40.82 | 52.06 | 35.64 | 43.12 | 27.13 | 40.01 | 26.77 |\n| Ours | 62.59 | 41.25 | 52.60 | 36.87 | 44.53 | 29.17 | 41.99 | 29.01 |\n| Improvement | 0.25 | 0.43 | 0.54 | 1.23 | 1.41 | 2.04 | 1.98 | 2.24 |\n\n\n| SUN RGB-D | P (Original Dataset) | P (Original Dataset) | 10P | 10P | 25P | 25P | 50P | 50P |\n|-------------|:--------------------:|:--------------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\n| Method | AP25 | AP50 | AP25 | AP50 | AP25 | AP50 | AP25 | AP50 |\n| VoteNet | 59.78 | 35.77 | 51.09 | 31.81 | 43.68 | 29.08 | 40.46 | 22.23 |\n| Ours | 60.34 | 36.80 | 51.85 | 32.98 | 44.66 | 31.93 | 41.84 | 25.04 |\n| Improvement | 0.56 | 1.03 | 0.96 | 1.17 | 0.98 | 2.85 | 1.38 | 2.81 |\n\n\n", " Thanks for the high-quality rebuttal. If all the experiments were reliable, this would be a heavy rebuttal. However, I am confused about the performance of group-free and 3DETR. In regular 3D object detection, group-free is better than 3DETR with the same backbone (PointNet++). Why is 3DETR better than Group-Free in your experiments (*e.g.*, few-shot setting)? Besides that, i am also confused about the results of Q2. Why is the performance gap the same in map@0.25 and map@0.50. \n|ScanNet V2\t|P (Original Dataset)\t|P (Original Dataset)\t|10P\t|10P\t|25P\t|25P\t|50P\t|50P|\n|:---------------:|:--------------------------:|:---------------------------:|:----:|:----:|:----:|:----:|:----:|:----:|\nMethod\t |AP$_{25}$ |AP$_{50}$ |AP$_{25}$|AP$_{50}$|AP$_{25}$|AP$_{50}$|AP$_{25}$|AP$_{50}$|\t\t\t\t\t\t\t\nVoteNet\t| 62.34\t|40.82\t|53.82\t|34.93\t|45.43\t|28.01\t|39.22\t|24.82|\nOurs\t|64.37\t|42.96\t|57.05\t|38.01\t|50.22\t|33.27\t|45.43\t|31.08|\nImprovment |2.03 |2.14 |3.23 |3.08 |4.79 |5.26 |6.21 |6.26|\n", " Thanks for your valuable comments and efforts in helping make our work better. We will explain your concerns and add them to our revised paper. \n\n**Q1.More baseline approaches widely used in 2D few-shot object detection.**\n\n**A1.** Thank you for the great suggestion. We will add the comparisons into our paper.\n\nWe combine two SOTA 2D few-shot object detection techniques (i.e. DeFRCN [1], FADI [2]) and two SOTA 3D detectors (i.e. GroupFree [3], 3DETR [4]). These two few-shot techniques are plug-in-play modules and can be easily incorporated into the different detection architectures. We conducted this experiment on 3-shot and 5-shot in split-1 of FS-ScanNet. The results below show that our method still surpasses these methods by a large margin. \n\nThis is potential because these 2D few-shot object detection techniques might not be directly transferable to the 3D domain. In the 2D domain, they often build their model upon a large-scale pre-trained model on ImageNet. However, in the 3D community, there does not exist a large-scale dataset for model pre-training, which requires future investigations. 
\n\n| | 3-shot | 3-shot | 5-shot | 5-shot | \n|------------|:------:|:-----:|:------:|:-----:|\n| **Method** | **$AP_{25}$** | **$AP_{50}$** | **$AP_{25}$** | **$AP_{50}$**| \n| VoteNet+ DeFRCN[1] | 23.17 | 9.82 | 25.92 | 13.51 | \n| VoteNet + FADI[2] | 24.08 | 9.93 | 26.03 | 13.47 | \n| GroupFree[3] + DeFRCN [1] | 25.22 | 10.90 | 26.42 | 14.01 | \n| GroupFree[3] + FADI [2] | 25.73 | 11.02 | 27.12 | 14.32 | \n| 3DETR[4] + DeFRCN [1] | 26.01 | 10.95 | 26.88 | 14.45 | \n| 3DETR[4] + FADI [2] | 26.24 | 11.12 | 26.93 | 15.22 |\n| Ours | 31.25 | 16.01 | 32.25 | 19.52 | \n\n[1] Qiao, Limeng, Yuxuan Zhao, Zhiyuan Li, Xi Qiu, Jianan Wu, and Chi Zhang. \"Defrcn: Decoupled faster r-cnn for few-shot object detection.\" In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.\n\n[2] Cao, Yuhang, Jiaqi Wang, Ying Jin, Tong Wu, Kai Chen, Ziwei Liu, and Dahua Lin. \"Few-Shot Object Detection via Association and DIscrimination.\" Advances in Neural Information Processing Systems (NeurIPS), 2021.\n\n[3] Liu, Ze, Zheng Zhang, Yue Cao, Han Hu, and Xin Tong. \"Group-free 3d object detection via transformers.\" In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.\n\n[4] Misra, Ishan, Rohit Girdhar, and Armand Joulin. \"An end-to-end transformer model for 3d object detection.\" In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021. \n\n**Q2. Improvement of the whole class by the proposed model. (Compared with VoteNet).**\n\n**A2.** To analyze the performance of the proposed model on the imbalance problem, we conduct experiments using all the classes. Note that we conduct the experiments not only on the original ScanNet V2 and SUN RGB-D datasets, but also on their more unbalanced counterparts. \n\nWe follow the benchmark [5] to create these counterparts: 1) sorting the classes in descending order according to number of samples in each class, then we have $n_i > n_j$ if $i < j$, where $n$ is the number of samples, $i$ and $j$ denote the index of the classes. 2) reducing the number of training samples per class according to an exponential function $n=n_i*u^i$, where $u \\in (0,1)$. The test set remains unchanged. \n\nAccording to the benchmark [5], we define the imbalance factor of a dataset as the number of training samples in the largest class divided by the smallest. Note that we use P as the value of the imbalance factor in the original ScanNet V2 and SUN RGB-D datasets. Additionally, we add another three sets, whose values of imbalance factor are 10P, 25P and 50P, for both ScanNet V2 and SUN RGB-D datasets. \n\nAs shown in the table below, the experimental results indicate that our proposed method consistently outperforms the baseline VoteNet by a large margin, especially when the dataset is severely unbalanced. This is because the proposed method develops a more generic vote module by learning geometric prototypes, and leverages class-specific prototypes to enhance the discriminative feature learning. Note that on the original dataset (P), our model also outperforms the baseline. 
\n\n| ScanNet V2 | P (Original Dataset) | P (Original Dataset) | 10P | 10P | 25P | 25P | 50P | 50P |\n|-------------|:--------------------:|:--------------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\n| Method | AP25 | AP50 | AP25 | AP50 | AP25 | AP50 | AP25 | AP50 |\n| VoteNet | 62.34\t | 40.82 | 52.06 | 35.64 | 43.12 | 27.13 | 40.01 | 26.77 |\n| Ours | 62.59 | 41.25 | 52.60 | 36.87 | 44.53 | 29.17 | 41.99 | 29.01 |\n| Improvement | 0.25 | 0.43 | 0.54 | 1.23 | 1.41 | 2.04 | 1.98 | 2.24 |", " | SUN RGB-D | P (Original Dataset) | P (Original Dataset) | 10P | 10P | 25P | 25P | 50P | 50P |\n|-------------|:--------------------:|:--------------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\n| Method | AP25 | AP50 | AP25 | AP50 | AP25 | AP50 | AP25 | AP50 |\n| VoteNet | 59.78 | 35.77 | 51.09 | 31.81 | 43.68 | 29.08 | 40.46 | 22.23 |\n| Ours | 60.34 | 36.80 | 51.85 | 32.98 | 44.66 | 31.93 | 41.84 | 25.04 |\n| Improvement | 0.56 | 1.03 | 0.96 | 1.17 | 0.98 | 2.85 | 1.38 | 2.81 |\n\n[5] Cui, Yin, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. \"Class-balanced loss based on effective number of samples.\" In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), 2019.\n\n**Q3.Limitation Discussion.** \n\n**A3.** Thanks for your comments. Indeed, we have already included the limitation discussion in the supplementary material. We also copy it here for your reference.\n\n“Although the 3D cues of point clouds are more stable since they can get rid of some visual distractors, such as lighting and perspectives, some factors still impede the model from better generalization. For instance, in 3D scene understanding, if the point cloud in the training set is dense and that of the test set is sparse, a model often performs poorly, which can be treated as a cross-domain problem. Regarding few-shot 3D object detection, the performance might degrade if there is such a large domain gap between base classes and novel classes. Even though the basic geometric features are learned in the base classes, they might not be generalized well to the novel classes due to the difference in point cloud sparsity. The performance of this model has much room for improvement. One way to achieve better performance is large-scale pre-training. Large-scale pre-training enables the model to learn more generic features for transfer learning using limited samples, which benefits the community of 2D few-shot learning (i.e., ImageNet Pretraining). For future works, we might resort to the pre-training models in the 2D domain to facilitate the few-shot generalization on 3D few-shot learning and how these techniques can be combined with our method.“\n\n", " Thanks for your valuable comments and efforts in helping make our work better. We will explain your concerns and add them to our revised paper. \n\n**Q1. As a general problem when a few-shot detection benchmark is composed, do we have novel classes in the training set but annotated as background? What is the ratio?**\n\n**A1.** Because of the separation between instance and background in the 3D point cloud, the removal of the novel samples does not affect the global scene. To ensure that there are only a few (k) instances for novel categories, we artificially remove those samples. Therefore, considering the large sample size of base classes, this ratio is that is k-shot $\\times$ number of novel categories $/$ total number of samples including base and novel, which is nearly zero. \n\n**Q2. 
Can you visualize some typical structures of geometric prototypes? For example, the neighborhood of the point with features close to each prototype?**\n\n**A2.** The anonymous link for visualization: \nhttps://drive.google.com/file/d/1vu4qMcsmYlau-518PYSbqbm-6OYnrBVM/view\n\nThank you for this insightful suggestion. Here, we visualize the relation between the learned geometric prototype and the 3D points by searching points with features that are similar to a given geometric prototype. First, we feed object point clouds to a trained Prototypical VoteNet. Second, for each point feature, we can search for its most similar prototype. If the similarity is above a threshold, we can assign the point to that prototype. Third, we use a density-based clustering algorithm to cluster the point groups, and we draw the minimum 3D bounding box around each point group. \n\nAs shown in the figure, all the red bounding boxes within each subfigure belong to the same prototype. The result shows that in each subfigure, the enclosed geometric structures are similar. For example, subfigure (a) illustrates that the prototype learns the feature of corners, while subfigure (b) shows that the prototype learns the long stick. \n\n**Q3. The abbreviation FSOD is usually used for few-shot object detection (2D). As a novel task for 3D, the name could be more specific, such as FS3D.**\n\n**A3.** Thanks for the valuable suggestion. We have changed the abbreviation to FS3D in our rebuttal revision.\n\n**Q4. Limitation Discussion.**\n\n**A4.** Due to limited space, we have included limitation discussion in the supplementary material Section A.7. We are sorry for not stating it in the manuscript. We copy the limitation analysis here for your reference.\n\n“Although the 3D cues of point clouds are more stable since they can get rid of some visual distractors, such as lighting and perspectives, some factors still impede the model from better generalization. For instance, in 3D scene understanding, if the point cloud in the training set is dense and that of the test set is sparse, a model often performs poorly, which can be treated as a cross-domain problem. Regarding few-shot 3D object detection, the performance might degrade if there is such a large domain gap between base classes and novel classes. Even though the basic geometric features are learned in the base classes, they might not be generalized well to the novel classes due to the difference in point cloud sparsity. The performance of this model has much room for improvement. One way to achieve better performance is large-scale pre-training. Large-scale pre-training enables the model to learn more generic features for transfer learning using limited samples, which benefits the community of 2D few-shot learning (i.e., ImageNet Pretraining). For future works, we might resort to the pre-training models in the 2D domain to facilitate the few-shot generalization on 3D few-shot learning and how these techniques can be combined with our method. “ \n", " Thanks for your valuable comments and efforts in helping make our work better. We will explain your concerns and add them to our revised paper. \n\n**Q1. The technical components are largely adapted from existing papers, so technical novelty is somewhat limited. How are the new components different from existing work, such as [37]?**\n\n**A1.** Our technical contribution majorly lies in how to make 3D object detection work when only a few training samples are available for a novel class. 
This is the first investigation in this area, and a challenging problem as few samples are not sufficient for learning useful feature representations. To this end, we propose two modules to enhance feature representation learning from a local and global perspective: 1) based on our motivation that 3D primitives to constitute objects can be shared among different categories, PVM is developed to learn robust class-agnostic geometric prototypes from base categories with abundant training samples, which are further transferred to enhance local feature learning of novel categories; 2) to improve discriminativeness of class categorization, we design class-specific prototypes which can be treated as a template for classifying novel categories and are used to refine the global features of samples. \n\nOur work is different from existing work [37] (Attention is All you Need) in the following folds: 1) [37] focuses on a new method for NLP tasks and proposes to use attention as the core representation learning block for the whole network, i.e. stacking many attention layers. The network is trained in a data-rich setting. Later on, multi-head self-attention becomes a generic block just like the residual block in resnet. 2) In contrast, our focus is on few-shot 3D object detection, that is how to enhance representation learning when only a few samples are available. Our core insight is to develop class-agnostic geometric prototypes and class-specific prototypes to enhance local and global feature representation learning, respectively. To make prototypes interact with feature representations, we leverage the multi-head attention block which computes affinity and aggregate prototypes to refine local and global features. \n\nNote that our highlight is not on the design of the multi-head attention module (which is the contribution of [37]) but how we develop prototypes and employ them to improve feature representation learning in the few-shot setting. \n\n[37] Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. \"Attention is all you need.\" Advances in neural information processing systems (NeurIPS), 2017.\n\n**Q2. The categories chosen for few-shot learning are not really rare categories, so it is not sufficiently meaningful. The performance is also rather low for practical use.**\n\n**A2.** This paper aims to investigate the problem of 3D object detection with a small number of samples for novel categories. Therefore, we randomly set some base/novel splits in ScanNet and SUNRGBD, which are the well-known datasets in 3D object detection. This is also the widely-adopted splitting method in the few-shot community. \n\nPerformance:\n* Please note that although few-shot object detection in the 2D community is relatively well-studied, the performance of 2D few-shot object detection [1, 2] has a large gap ( i.e., around 30% mAP) compared to the full-supervision counterparts. \n* As an exploratory work, we are the first attempt to study Few-Shot 3D Point Cloud Object Detection and set up the basic benchmark for future studies. We believe that there are more chances in 3D few-shot learning since it suffers less influence on distortion, scale ambiguity, and texture variations. We hope to inspire more future studies. \n* The few shot learning setting that we study also has high practical impacts. The recent success of 3D detectors relies heavily on a huge amount of training data with accurate bounding box annotations. 
However, in many practical applications such as self-driving vehicles and robot manipulations, recognition systems need to rapidly adapt and recognize some never-before-seen objects from a very limited number of examples. We believe few-shot 3D recognition is one important step toward recognition in the open world as there are so many categories in our 3D world that we cannot afford to annotate them all with abundant samples. \n\n[1] Wang, Xin, Thomas E. Huang, Trevor Darrell, Joseph E. Gonzalez, and Fisher Yu. \"Frustratingly simple few-shot object detection.\" In Proceedings of the International Conference on Machine Learning (ICML), 2020. \n\n[2] Qiao, Limeng, Yuxuan Zhao, Zhiyuan Li, Xi Qiu, Jianan Wu, and Chi Zhang. \"Defrcn: Decoupled faster r-cnn for few-shot object detection.\" In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.", " **Q3. The datasets are straightforward adaptations of well-known datasets, so they should be declared as a contribution.**\n\n**A3.** Thanks for the comments. We would like to emphasize that our contribution is not to collect and annotate datasets, but to standardize a few-shot dataset setting and set up a benchmark where several baseline methods have been implemented. This benchmark can become the basis for future investigations and inspire follow-up works.\n\n**Q4. The paper only compares the baseline VoteNet, not follow-up improvements which could give better performance.**\n\n**A4.** Thank you for the comments. We conduct experiments on more advanced object detectors (i.e., GroupFree [3], 3DETR [4]) and SOTA 2D few-shot detection techniques (i.e., DeFRCN [5], FADI [6]). We conducted this experiment on 3-shot and 5-shot in split-1 of FS-ScanNet. The experimental results are shown below. Our method still surpasses these methods with a large margin. \n\nThis is potential because these 2D few-shot object detection techniques might not be directly transferable to the 3D domain. In the 2D domain, they often build their model upon a large-scale pre-trained model on ImageNet. However, in the 3D community, there does not exist a large-scale dataset for model pre-training, which requires future investigations. \n\nMoreover, comparing the performance of different backbone detectors, we observe that a better detection architecture does not bring large performance gains in the few-shot 3D detection scenario. The most challenging issue for few-shot 3D object detection still lies in how to learn effective representation if only a few training samples are provided. The architecture don’t help much if the model cannot effectively extract features to represent novel categories with only a few samples.\n\n\n| | 3-shot | 3-shot | 5-shot | 5-shot | \n|------------|:------:|:-----:|:------:|:-----:|\n| **Method** | **$AP_{25}$** | **$AP_{50}$** | **$AP_{25}$** | **$AP_{50}$**| \n| VoteNet+ DeFRCN[5] | 23.17 | 9.82 | 25.92 | 13.51 | \n| VoteNet + FADI[6] | 24.08 | 9.93 | 26.03 | 13.47 | \n| GroupFree[3] + DeFRCN [5] | 25.22 | 10.90 | 26.42 | 14.01 | \n| GroupFree[3] + FADI [6] | 25.73 | 11.02 | 27.12 | 14.32 | \n| 3DETR[4] + DeFRCN [5] | 26.01 | 10.95 | 26.88 | 14.45 | \n| 3DETR[4] + FADI [6] | 26.24 | 11.12 | 26.93 | 15.22 |\n| Ours | 31.25 | 16.01 | 32.25 | 19.52 | \n\n[3] Liu, Ze, Zheng Zhang, Yue Cao, Han Hu, and Xin Tong. \"Group-free 3d object detection via transformers.\" In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.\n\n[4] Misra, Ishan, Rohit Girdhar, and Armand Joulin. 
\"An end-to-end transformer model for 3d object detection.\" In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021. \n\n[5] Qiao, Limeng, Yuxuan Zhao, Zhiyuan Li, Xi Qiu, Jianan Wu, and Chi Zhang. \"Defrcn: Decoupled faster r-cnn for few-shot object detection.\" In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.\n\n[6] Cao, Yuhang, Jiaqi Wang, Ying Jin, Tong Wu, Kai Chen, Ziwei Liu, and Dahua Lin. \"Few-Shot Object Detection via Association and DIscrimination.\" Advances in Neural Information Processing Systems (NeurIPS), 2021.\n\n**Q6. How are the hyperparameters such as alpha_1, alpha_2, etc. set?**\n\n**A6.** We follow the implementation of the released code [7] and don't make any adjustments to these hyperparameters in all our experiments.\n\n[7] Qi, Charles R., Or Litany, Kaiming He, and Leonidas J. Guibas. \"Deep hough voting for 3d object detection in point clouds.\" In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.\n\n**Q7. Limitation Discussion**\n\n**A7.** Thanks for pointing this out. Beyond resolution difference, the relatively low performance is another limitation. Besides only a few training samples available, another reason for this problem is the lack of large-scale pre-training in the 3D domain. Large-scale pre-training enables the model to learn more generic features for transfer learning using limited samples, which benefits the community of 2D few-shot learning (i.e., ImageNet Pretraining). For future works, we might resort to the pre-training models in the 2D domain to facilitate the few-shot generalization on 3D few-shot learning, and how these techniques can be combined with our method.\n \nWe will add this discussion in our paper.\n\n\n\n\n\n\n\n\n\n\n\n\n", " Thanks for your valuable comments and efforts in helping make our work better. We will explain your concerns and add them to our revised paper. \n\n**Q1. Visualize whether it is learning basic geometric shapes.**\n\n**A1.** The anonymous link for visualization: \nhttps://drive.google.com/file/d/1vu4qMcsmYlau-518PYSbqbm-6OYnrBVM/view.\n\nThank you for this insightful suggestion. Here, we visualize the relation between the learned geometric prototypes and the 3D points by searching points with features that are similar to a given geometric prototype. First, we feed object point clouds to a trained Prototypical VoteNet. Second, for each point feature, we can search for its most similar prototype. If the similarity is above a threshold, we can assign the point to that prototype. Third, we use a density-based clustering algorithm DBSCAN to cluster the point groups, and we draw the minimum 3D bounding box around each point group. \n\nAs shown in the figure, all the red bounding boxes within each subfigure belong to the same prototype. The result shows that in each subfigure, the enclosed geometric structures are similar. For example, subfigure (a) illustrates that the prototype learns the feature of corners, while subfigure (b) shows that the prototype learns the long stick. \n\n**Q2. KNN assignment and other detectors.**\n\n**A2.** Thank you for this great suggestion.\n* We apply KNN assignment to VoteNet and two SOTA 3D detectors GroupFree [1] and 3DETR [2]. We conducted this experiment on 3-shot and 5-shot in split-1 of FS-ScanNet. 
The KNN assignment is realized by calculating the distance between each object feature and features of all training objects in the classification step, and assigning the sample to the class based on voting from its k-nearest objects of the training set. Here, we take k as one since we find increasing the value k doesn’t improve performance. The results are shown in the following Table. \n* Comparing the performance of “Baseline VoteNet”, “VoteNet + KNN” and “ours”, we see that the non-parametric KNN classifier will not help improve few-shot learning much (“VoteNet” vs “VoteNet+KNN”).\n* Comparing the performance of different backbone detectors (“VoetNet+KNN”, “GroupFree + KNN”, and “3DETR+KNN”), we observe that a better detection architecture does not bring large performance gains in the few-shot 3D detection scenario. \n* The most challenging issue for few-shot 3D object detection still lies in how to learn effective representation if only a few training samples are provided. The classifier and architecture don’t help much if the model cannot effectively extract features to represent novel categories with only a few samples.\n* We will add the comparison and analysis in the paper.\n\n\n| | **3-shot** | **3-shot** | **5-shot** | **5-shot**|\n|:---------------:|:------:|:-----:|:------:|:-----:|\n| **Method** | **$AP_{25}$** | **$AP_{50}$** | **$AP_{25}$** | **$AP_{50}$**| \n| VoteNet | 22.64 | 9.04 | 24.93 | 12.82 |\n| VoteNet + KNN | 23.07 | 9.56 | 25.58 | 13.51 | \n| GroupFree[1] + KNN | 24.22 | 9.97 | 26.33 | 13.92 | \n| 3DETR[2] + KNN | 24.08 | 10.21 | 26.01 | 14.36 | \n| Ours | 31.25 | 16.01 | 32.25 | 19.52 | \n\n[1] Liu, Ze, Zheng Zhang, Yue Cao, Han Hu, and Xin Tong. \"Group-free 3d object detection via transformers.\" In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021.\n\n[2] Misra, Ishan, Rohit Girdhar, and Armand Joulin. \"An end-to-end transformer model for 3d object detection.\" In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021. \n\n\n**Q3. Were other methods of updating the prototypes tried ?**\n\n**A3.** Thank you for pointing this out. Indeed, we included another method for updating the prototypes in the supplementary material. It calculates the similarity between a point feature with all geometric prototypes, and updates all geometric prototypes in a soft manner considering the similarity scores between a point feature and the geometric prototypes. The details can be found in Section A.3 in the supplementary material. \n\n**Q4. Does setting the prototype at the end (no updates) perform well ?**\n\n**A4.** As shown in the table below, for the proposed Prototypical VoteNet, if we don’t update the prototype in PVM, the performance would degrade significantly. Without updating, the randomly initialized prototypes can not learn the geometry information from base classes in the training phase. In this case, it is hard to transfer the basic geometry information from base classes to the novel classes as the prototypes are meaningless.\n\n| | 3-shot | 3-shot | 5-shot | 5-shot | \n|------------|:------:|:-----:|:------:|:-----:|\n| **Method** | **$AP_{25}$** | **$AP_{50}$** | **$AP_{25}$** | **$AP_{50}$**| \n| No updates | 28.05 | 13.89 | 28.51 | 14.51 | \n| Updates | 31.25 | 16.01 | 32.25 | 19.52 |", " **Q5. The authors did not discuss any limitations.**\n\n**A5.** Due to limited space, we have included the limitation discussion in the supplementary material. We are sorry for not stating it in the manuscript. 
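\n\nTo make the update schemes discussed in A3/A4 more concrete, below is a minimal PyTorch-style sketch of the hard-assignment moving-average update of the geometric prototypes (the soft variant would instead weight each prototype's update by its similarity score). The momentum value, the use of cosine similarity, and the tensor shapes are illustrative assumptions for this sketch rather than the exact implementation.\n\n```python\nimport torch\nimport torch.nn.functional as F\n\n\n@torch.no_grad()\ndef update_geometric_prototypes(prototypes, point_feats, momentum=0.99):\n    \"\"\"EMA update of K geometric prototypes (K, D) from point features (N, D).\n\n    Hard assignment: each point only contributes to its most similar prototype.\n    \"\"\"\n    sim = F.normalize(point_feats, dim=1) @ F.normalize(prototypes, dim=1).t()  # (N, K) cosine similarity\n    assign = sim.argmax(dim=1)  # index of the closest prototype for each point\n    for k in range(prototypes.shape[0]):\n        mask = assign == k\n        if mask.any():\n            mean_feat = point_feats[mask].mean(dim=0)\n            prototypes[k] = momentum * prototypes[k] + (1.0 - momentum) * mean_feat\n    return prototypes\n```\n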
We copy the limitation discussion in Section A.7 here for your reference.\n\n“Although the 3D cues of point clouds are more stable since they can get rid of some visual distractors, such as lighting and perspectives, some factors still impede the model from better generalization. For instance, in 3D scene understanding, if the point cloud in the training set is dense and that of the test set is sparse, a model often performs poorly, which can be treated as a cross-domain problem. Regarding few-shot 3D object detection, the performance might degrade if there is such a large domain gap between base classes and novel classes. Even though the basic geometric features are learned in the base classes, they might not be generalized well to the novel classes due to the difference in point cloud sparsity. The performance of this model has much room for improvement. One way to achieve better performance is large-scale pre-training. Large-scale pre-training enables the model to learn more generic features for transfer learning using limited samples, which benefits the community of 2D few-shot learning (i.e., ImageNet Pre-training). For future works, we might resort to the pre-training models in the 2D domain to facilitate the few-shot generalization on 3D few-shot learning and how these techniques can be combined with our method.“\n\n**Q6. Miscellaneous.**\n\n**A6.** Thanks for pointing these out. We have corrected these problems in our rebuttal revision. \n\n\n\n\n", " This work explores the problem of few-shot learning in 3D object detection. It appears to be the first work in the field, and contributes 2 dataset-settings, based on SUNRGB and ScanNet, as well as 4 benchmarks, mostly based on VoteNet. The authors contribute their own method, which extends VoteNet to incorporate prototypes in each of the feature representation “stages”. The prototypes are moving averages of the closest features. The method is simple, yet effective, showing strong performance gains compared to the other VoteNet baselines. This work, to the best of my knowledge, is one of the first works to tackle the problem of few-shot learning in a 3D object detection setting. In addition to their own method, the authors also contribute a set of benchmarks and a proposed dataset setting to evaluate future few-shot learning methods. This is a good contribution to the community as a whole. The method proposed by the author appears to achieve strong performance compared to proposed benchmarks. The method appears to be informed, and empirical evidence backs up each module’s necessity. By initializing the prototypes used for the features (“geometric”) and for the classes with the feature-space centroids and updating with a moving average, it seems to prevent the few-shot learner from overfitting as much (table 1,2). Experimentally, the ablation section is detailed and I do not have any questions after reading it. Nice to see the prototypes didn’t collapse (fig 3).\n\nThe conclusion in section 3.2, L162 that the prototypes are learning basic 3D geometric shapes is a bit misleading, since distribution of features in any pre-trained detection model should be similar for similar objects as well (fig 3). Considering the prototypes are updated in the feature space, there is nothing to indicate that there is 3D geometric interpretation; one way to visualize if it truly is learning basic geometric shapes is to optimize a shape input to maximize one prototype. 
This would give a better “geometric” interpretation of what that prototype is representing. However, this is not necessarily a weakness, since feature-space prototypes are just as useful, but I would recommend either changing the claim, or backing up the claim with visualized representations of what each prototype corresponds to in 3D input space. \nOne weakness is the benchmarks proposed are all built on top of VoteNet. A benchmark that would be good to include is KNN assignment to the closest support class. This would be able to leverage other detectors.\n\nMiscellaneous:\nL121: “Taking a point cloud scene P_i as input… localize and categorize…” → “VoteNet takes a point cloud scene P_i as input and localizes and categorizes…”\nFig 4 caption: “Tsne” → “t-SNE”\nL225: “Till now” → “Until now” Were other methods of updating the prototypes tried? Does setting the prototype at the end (no updates) perform well? (not necessarily a weakness, just a curious possible experiment) The authors did not discuss any limitations.", " The paper presents a method for few-shot 3D point cloud object detection. The method extends VoteNet by incorporating two new modules, prototypical vote module (PVM) and prototypical head module (PHM). The experimental results show that the proposed prototypical VoteNet improves the performance for few-shot t 3D point cloud object detection. Strengths:\n\n- The paper addresses a new problem, which is potentially useful.\n- The method is plausible and seems to work better than existing methods.\n\nWeaknesses:\n\n- The technical components are largely adapted from existing papers, so technical novelty is somewhat limited.\n- The few-shot setting is not sufficiently convincing. The categories chosen for few-shot learning are not really rare categories, so it is not sufficiently meaningful. The performance is also rather low for practical use.\n- The datasets are straightforward adaptations of well-known datasets, so they should be declared as a contribution. \n- The paper only compares the baseline VoteNet, not follow-up improvements which could give better performance. \n\n - How are the new components different from existing work, such as [37]?\n- How are the hyperparameters such as alpha_1, alpha_2, etc. set?\n The limitations of the paper are not sufficiently discussed. Given the relatively low performance, the paper should include more discussions of limitations beyond just resolution difference. ", " This paper proposes a new task, few-shot 3d point cloud object detection. It is a combination of two well-studied topics, few-shot learning and 3d point cloud detection. And naturally, two well-known methods from both sides, prototypical learning and VoteNet, are combined to tackle the new task.\n\nWhile the combination seems straight forward, the paper further shows that it is not trivial. The implementation of PVM and PHM disentangles the feature embedding and detection. And successfully It is overall a good work. Strengths are already covered in summary. Here are some weaknesses.\n\n1. As a general problem when a few-shot detection benchmark is composed, do we have novel classes in training set but annotated as background? What is is ratio?\n\n2. Can you visualize some typical structures of geometric prototypes? For example, the neighborhood of the point with features close to each prototype?\n\n3. The abbreviation FSOD is usually used for few-shot object detection (2D). As a novel task for 3D, the name could be more specific, such as FS3D. 
Questions are listed in the weaknesses section. The limitations of the work have not been addressed.", " This paper proposes a few-shot framework for 3D point cloud indoor object detection. They propose Prototypical VoteNet to recognize\nand localize novel instances based on the PVM and PHM modules. PVM leverages class-agnostic geometric prototypes learned from base classes to refine local features of novel categories. PHM is designed to utilize class prototypes to enhance the global feature of each object. This paper also provides two new benchmark datasets, FS-ScanNet and FS-SUNRGBD. They conduct extensive experiments to demonstrate the effectiveness of their method, which shows promising performance compared to several self-designed baselines on the two benchmark datasets. Strengths:\n\n-This paper is well-written and easy to follow.\n\n-The idea is interesting. This paper considers utilizing geometric learning and object prototypes to assist few-shot 3D object detection, which will help our community. \n\n-The experimental results are promising, with a remarkable margin over the baselines.\n\nWeaknesses:\n\n-The baselines of this paper are not well-designed. Few-shot object detection in the 2D community is well-studied. Why didn't this paper compare against some classical few-shot approaches from the 2D community? I don't think it's difficult to transfer some 2D techniques to the 3D domain for this task. It's difficult to judge their contribution based on these weak baselines. \n\n-If you compute statistics on them, both of these datasets, i.e., ScanNet V2 and SUN RGB-D, are unbalanced. In ScanNet V2, the number of chairs is 4357 while that of bathtubs is 113. If this paper focused on solving the class imbalance problem by considering prototype learning, it would contribute more to the 3D community. So I want to see the improvement of full training based on VoteNet and the proposed PVM & PHM. That means the base classes are the whole set of classes (18 classes) with no novel classes. All the questions are mentioned in Weaknesses.\n\n-I want to see more baseline approaches widely used in 2D few-shot object detection.\n\n-I want to see the improvement over the whole set of classes by the proposed model (compared with VoteNet).\n\nIf the authors solve my concerns, I will consider raising my score. No" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 5 ]
[ "cexKDfbqTwv", "DKMXV7m6Hsm", "c1FWpz0jfRi", "ZrwNabRSX3h", "ZrwNabRSX3h", "Dp1o2o0djGI", "j2MHVvt1DQk", "Qr0SK31Gl6", "TOEAOs1OYrC", "TOEAOs1OYrC", "fTI8V3h_jf3", "cexKDfbqTwv", "cexKDfbqTwv", "q0i9Wignlw0", "q0i9Wignlw0", "nips_2022_kCTZt0b9DQz", "nips_2022_kCTZt0b9DQz", "nips_2022_kCTZt0b9DQz", "nips_2022_kCTZt0b9DQz" ]
nips_2022_dVXO3Orjmxk
Decoupling Classifier for Boosting Few-shot Object Detection and Instance Segmentation
This paper focuses on few-shot object detection~(FSOD) and instance segmentation~(FSIS), which require a model to quickly adapt to novel classes with a few labeled instances. The existing methods severely suffer from biased classification because of the missing label issue, which naturally exists in an instance-level few-shot scenario and is first formally identified by us. Our analysis suggests that the standard classification head of most FSOD or FSIS models needs to be decoupled to mitigate the biased classification. Therefore, we propose an embarrassingly simple but effective method that decouples the standard classifier into two heads. Then, these two individual heads are capable of independently addressing clear positive samples and noisy negative samples which are caused by the missing label issue. In this way, the model can effectively learn novel classes while mitigating the effects of noisy negative samples. Without bells and whistles, and without any additional computation cost or parameters, our model consistently outperforms its baseline and the state of the art by a large margin on the PASCAL VOC and MS-COCO benchmarks for FSOD and FSIS tasks.\footnote{\url{https://csgaobb.github.io/Projects/DCFS}.}
Accept
**Summary**: This paper aims to address the missing label issue in few-shot object detection (FSOD) and instance segmentation (FSIS). In these tasks, some foreground examples (more specifically, classes) are not labeled in the training images, which causes classification bias for the conventional classification head. This paper proposes a simple yet effective algorithm that treats the foreground and background proposals differently in the classification loss. The proposed method is evaluated on benchmark datasets and shows competitive performance against SOTA methods for both FSOD and FSIS. **Strength**: The paper is well-written. The observation is interesting and inspiring. The proposed method is novel, simple, effective, well-motivated, and compatible. The empirical study is solid and achieves SOTA results. The proposed method introduces no additional computation costs and hyper-parameters and is thus practical. **Weakness**: The problem being solved is not well articulated. The inherent problem may come from poor task/protocol design. The paper might propose a solution to deal with the bias of existing benchmark datasets rather than the problem (FSOD/FSIS) themselves. Some of the paper's claims are not well-grounded. This missing label issue should be comprehensively analyzed from a broad perspective. **Recommendation**: The paper received mixed review opinions from four reviewers. On the one hand, reviewers found the observation interesting and the proposed method simple, effective, and well-motivated. On the other hand, reviewers also have concerns that the missing label issue may not be the true problem of FSOD/FSIS, but a side effect of the poorly designed benchmark. Indeed, this was the main concern of the two reviewers (mATA and 1weo) who gave 3 and 4. After the rebuttal, two reviewers are satisfied with the authors' responses to their concerns and raised their scores to 7. However, the above-mentioned two reviewers did not participate in the discussion. The AC carefully read the paper and thought that the missing label issue can be a natural/practical issue in FSOD/FSIS. Specifically, when one wants to further detect an object class that is not in the base classes, one natural way to collect data is to find images that contain that object class and just annotate it (and ignore annotating other classes). Given this, the strengths of the paper, and the acceptance suggested by two reviewers, the AC suggests “acceptance.” With that being said, the AC strongly suggests that the authors take the comments by reviewers mATA and 1weo seriously. Those questions, though might be beyond the scope, are quite valuable. For example, reviewer 1weo asks 1) if it is beneficial to collect more images (but each with partial labels) than fewer images (but each with full labels); 2) if other FSOD/FSIS methods work better or worse when only a few fully annotated images are provided. The AC respectfully thinks that the authors misunderstood the questions and unrelated responses. Also, the authors' response to reviewer mATA can be improved. The authors said, “Second, the community accepts the current FSOD/FSIS benchmarks because they are more challenging due to incomplete or partial annotations (i.e., missing labels).” However, more *challenging* does not mean that the problem setting is appropriate/proper/practical. Similarly, *commonly used* benchmarks do not mean that they don't have problems in the setting. 
In response to reviewer mATA’s question, the authors could have focused more on why this setting is valid, practical, etc. Overall, the AC sees the strength and value of the paper. To make the paper stronger and more impactful, the AC has the following suggestions. First, the AC suggests that the authors incorporate all the reviewers' comments and the authors' rebuttal into their final version. Second, the authors should add content that they promise to the reviewers. Third, the authors should add a paragraph to clarify the missing label issue. Fourth, the authors should further discuss the relationship to semi-supervised learning (e.g., the pseudo-label methods), which could potentially handle missing label issues.
train
[ "3dJRGRYjIf2", "SDYgVSjO7Kz", "73yc0_OcuT9", "l3Y_GvHI0hg", "kU5llO77TxE", "HkWyDEvLtOv", "LKO50POFLGu", "cq0PzGZ2RzD", "sjz1gANqVzk", "Zmy0bUdNGO8", "cFwD7idc_lu", "2GBkvDgkclE", "mLL6HxGSYaZ", "HOSUAdkyu6J", "uh1_KGH7ZkQ", "SC2oWgFdn_i", "Ya4tNviElV" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer LjLJ,\n\nThank you for taking the time to review our work. We have provided corresponding responses, which we believe have covered your questions and concerns. We want to further discuss with you whether or not your concerns have been addressed. \n\nBest,", " Dear reviewer 1weo,\n\nThank you for taking the time to review our work. We believe that our rebuttal has addressed your questions and concerns. We would be more than happy to discuss with you if you have any further questions about the paper or our rebuttal.\n\nBest,", " Dear reviewer mATA,\n\nThank you for taking the time to review our work. We hope our rebuttal has addressed your questions and concerns. We would be more than happy to discuss with you if you still have any unresolved concerns or additional questions about the paper or our rebuttal.\n\nBest,", " We greatly appreciate the strong support for our work (increasing your score) and the positive comments for contributing a simple but effective FSOD and FSIS method from a new missing label perspective. We will certainly include discussions of the missing label issue from a broad perspective in the updated version of our paper.", " I appreciate the authors for their responses to my concerns. This paper contributes a simple but effective method (decoupling classifier) for few-shot object detection and few-shot instance segmentation from a new missing label perspective, which is also approved by other reviewers. The main concern is the training and evaluation protocol (benchmarks) in few-shot detection and instance segmentation, which is well clarified in the rebuttal after carefully reading the comments of other reviewers and the feedback of the authors.\n \nIn summary, I will raise my score from weak accept to accept, in expectation of your inclusion of some discussion of the missing label issue from a broad perspective in the final version.", " We sincerely thank you for your detailed comments and constructive suggestions, especially your appreciation for our work, \"**this new perspective might be inspiring to others in the field**\", \"**novel classifier decoupling idea that is quite interesting and strongly motivated**\", \"**really simple and easy to understand and follow**\", \"**outperforms its baseline and state-of-the-art by a large margin**\", \"**well analyzed and interpreted**\", \"**experiment results are quite convincing**\" and \"**well written and very clarified**\". Next, we respond to your concerns as follows.\n\n**Q1:This missing label issue should be comprehensively analyzed from a broad perspective, e.g., between base and novel classes stages, the novel fine-tuning stage itself.**\n\nThanks for your suggestions. We agree that DeFRCN also can be interpreted from a missing label perspective between base and novel classes. Based on this, we could view fine-tuning few-shot learning paradigm as a domain adaption procedure from base to novel. In this procedure, few-shot detector may suffer from foreground-background confusion because one proposal (potential novel object) belongs to background (negative class) in the base learning stage and becomes foreground (positive class) in the novel fine-tuning phase. To mitigate the label conflict between two domains, DeFRCN decouples RCNN and RPN by stopping gradient backpropagation of RPN in Faster-RCNN. Different from the missing label of cross-domain in DeFRCN, we focus on the missing label issue in the novel (or balanced base-novel) fine-tuning stage. 
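\n\nAs a concrete illustration of the decoupling mentioned above, the stop-gradient idea can be written as an identity layer that scales the backward signal; the snippet below is only a schematic sketch of this idea (the scale value and placement are illustrative assumptions, not DeFRCN's exact implementation), where a scale of zero fully stops backpropagation through that branch.\n\n```python\nimport torch\n\n\nclass ScaleGradient(torch.autograd.Function):\n    \"\"\"Identity in the forward pass; multiplies the gradient by `scale` in the backward pass.\"\"\"\n\n    @staticmethod\n    def forward(ctx, x, scale):\n        ctx.scale = scale\n        return x.view_as(x)\n\n    @staticmethod\n    def backward(ctx, grad_output):\n        return ctx.scale * grad_output, None\n\n\ndef decouple(x, scale=0.0):\n    # scale = 0.0 corresponds to stopping gradient backpropagation entirely\n    return ScaleGradient.apply(x, scale)\n```\n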
We will comprehensively analyze the missing label issue from a broad perspective and add this discussion to the next version.\n\n**Q2: The inference pipeline (using the standard classifier or the decoupling classifier) should be clarified further.**\n\nThanks for your suggestions. Your analysis is correct. At inference time, we only use the positive head, in the same way as the standard classifier, for all samples, while the proposed decoupling classifier is used only during training. We will add this clarification to the next version.\n\nWe thank you again for your time and efforts in reviewing our paper. Furthermore, we would be more than happy to discuss with you if you have any concerns about our responses.", " We sincerely thank you for your detailed comments and positive feedback on our paper, “**the idea of the paper is easily reproducible**” and “**the benchmark, evaluation, and comparison are solid**”. Next, we respond to your concerns in a point-by-point manner as follows.\n\n**Q1: I agree with the paper that this is an issue with these benchmarks (the standard evaluation protocol in FSOD/FSIS) and needs to be addressed. However, I do not think this (missing label) is an inherent issue with FSOD/FSIS problems.**\n\nThanks for your endorsement of the FSOD/FSIS benchmarks (training and evaluation protocol) used in our paper. Here, we want to add some explanations below and hope to address your concerns about the missing label issue in FSOD/FSIS.\n\nFirst, we follow state-of-the-art FSOD/FSIS methods and use the standard benchmarks, which have been widely accepted and used in the community of machine learning and computer vision, for a fair comparison. To the best of our knowledge, recently published FSOD/FSIS papers almost all use the same benchmarks that consider an object instance as a “shot”. As pointed out by you, there are generally multiple instances in an image for instance-level FSOD and FSIS, which is different from image-level few-shot image classification.\n\nSecond, the community accepts the current FSOD/FSIS benchmarks because they are more challenging due to incomplete or partial annotations (i.e., missing labels), and the number of labeled instances is well controlled in each class for fair comparison of few-shot instance-level recognition methods. The missing label issue requires that learning algorithms deal with training images each associated with multiple instances, among which only partial instances are labeled, which is also similar to partial label learning. As we know, missing label (partial label) learning is more difficult and challenging than conventional fully-supervised learning, especially in few-shot scenarios.\n\nLast but not least, we think that it is generally expensive and time-consuming to label all instances in many real-world applications. Fully-supervised object detection or instance segmentation typically assumes that all instances of interest are labeled in the given training images. In some real-world applications, e.g., open-vocabulary object detection, however, it is generally challenging to label all instances, and thus some instances are still left unlabeled, although we agree that it is possible to label all instances given few-shot images. In addition, it is friendlier and more convenient for users to label partial instances than all instances, even in few-shot scenarios.", " **Q2: The proposed solution could create more bias rather than solving the problem.**\n\nWe are sorry for causing this misunderstanding.
Here, we try to explain this issue as follows.\n\nFirst, the proposed decoupling classifier is simple but effective; it significantly boosts FSOD and FSIS performance (Tables 1, 2, 3, 4, 5 and 6). Meanwhile, we also analyze why it works from two perspectives, including gradient optimization (Sec 3.3 and Fig. 2) and the generalization ability of the learned classifier (Sec 4.3 and Fig. 3). The above points are also recognized by you (\"**the idea is easily reproducible**\", \"**evaluation, and comparison are solid**\") and other reviewers (\"**simple yet effective**\", \"**SOTA results**\", \"**solid quantitative evidence**\", \"**a plugin for many FSOD methods**\" and \"**well analyzed and interpreted**\").\n\nSecond, it would be unfair to conclude that our method creates more bias without any evidence. In contrast, our proposed method significantly mitigates biased classification in missing-label FSOD and FSIS (Sec 4.3 and Fig. 3). What is more, our method still achieves comparable performance even if the missing label rate is small or zero (bottom two rows in Table 5), which indicates its robustness. We have discussed it (lines 293-300) and taken it as a limitation of our method (lines 346-348).\n\nThird, it is important to note that our method works under a premise, i.e., missing-label few-shot scenarios, and it is unnecessary to use our method in a fully-annotated scenario, whether large-scale or few-shot. Here we give some explanations for clarification. \n- As shown in Fig 1(c), only the “dog” instance is labeled and the other two \"person\" instances are missing labels. Once an unlabeled “person” instance is sampled, it will be mistakenly taken as the background class by a standard classifier. Using the proposed decoupling classifier, the person instance will be fed to the negative head and its optimization is restricted to “dog” and “background” (Eq. 13). Therefore, the biased classification is mitigated.\n- Consider the special case you mentioned: multiple instances in an image belong to the same class, e.g., the person class, and only one person instance is annotated. We agree with you that it is difficult for our decoupling classifier to correct the learning for this type of missing instance. It is worth noting that the standard classifier cannot solve this type of case either. Our decoupling classifier only degenerates into the standard classifier in this special case, but this does not imply that our method creates bias. This case may be present in the current benchmarks because of the random sampling mechanism. However, the overall performance has been significantly improved by using our method.\n\nIn summary, our decoupling classifier does not introduce additional wrong backpropagation compared to the standard classifier. On the contrary, the negative head of our decoupling classifier corrects or mitigates the wrong backpropagation. This is also why the decoupling classifier works.\n\n**Q3: Some of the paper's claims are not well-grounded. For many few-shot learning papers (MAML and Prototypical Network), base-class performance is not a concern. Could the author motivate more on why we 'always expect the few-shot model to remember base classes'?**\n\nWe agree with you that many previous few-shot learning methods focus on the performance of novel classes only. However, recent **generalized** few-shot learning considers the performance **not only on novel classes but also on base classes** [1,2].
The papers [1,3] stress that a good few-shot learning system should adapt to new tasks rapidly while maintaining the performance on previous knowledge without forgetting. Recently, some state-of-the-art methods (e.g., TFA and DeFRCN) evaluate their performance not only with FSOD setting but also with generalized FSOD (gFSOD). We follow these works and report performance for FSOD and FSIS under these two settings (FSOD/FSIS and gFSOD/gFSIS).\n\n[1] Dynamic Few-Shot Visual Learning without Forgetting, CVPR 2018.\n\n[2] Generalized Few-Shot Object Detection without Forgetting, CVPR 2021.\n\n[3] Gradient episodic memory for continual learning, NeurIPS 2017.\n\nWe thank you again for your time and efforts in reviewing our paper. Based on the above clarifications, we sincerely hope from the heart that you could re-evaluate our work. Furthermore, we would be more than happy to discuss with you if you have any concerns about our responses.", " We sincerely thank you for your appreciation of our work “**very simple yet effective**”, “**achieving better performance and no negative impact on base classes**”, “**become a plug-in for many FSOD methods**” and “**well written and easy to follow**”. Next, we respond to your concerns in a point-by-point manner as follows.\n\n**Q1: If the novel class training samples are provided in a way that every example is labeled in each image but only a small group of images are labeled (basically the strategy I mentioned in Cons), do you think the proposed method works better or worse?**\n\nThanks for your suggested setting that all instances are fully annotated for given few-shot training images.\n\nThe suggested setting means the missing label rate is zero. In fact, our method still achieves comparable performance even if the missing label rate is small or zero (bottom two rows in Table 5), which indicates the robustness of our method. The detailed discussion can be found in Sec. 4.2: few-shot object detection on the PASCAL VOC and MS-COCO (lines 293-300). And we have discussed it and taken it as a limitation of our method (lines 346-348). \n\nIn this paper, our work mainly focuses on the missing label few-shot scenario (current and popular benchmark) and achieves significant improvements under two different settings (FSOD/FSID and gFSOD/gFSIS). We leave fully annotated (building new benchmarks) FSOD and FSIS as our future work.\n\n**Q2: Also, will other FSOD/FSIS methods work better or worse in this case?**\n\nWe try to take the TFA as our baseline and replace the standard classifier in TFA with the proposed decoupling classifier. And we find that the performance improvements are consistent with that of DeFRCN. The detailed discussions refer to our response to Q6 of reviewer LjLj.\n\n**Q3: Do you think using the proposed method with the current labeling strategy (i.e. randomly labeling a single class and ignoring others in one image) is still more beneficial as it sees more data?**\n\nYes, it is still beneficial when using the randomly sampling mechanism (randomly labeling a single class and ignoring others in one image). \n\nIn this paper, we follow standard FSOD/FSIS training and evaluation protocol which have been widely used in the community of machine learning and computer vision. In the standard protocol, the labeled instances are generated, in fact, by randomly sampling. The number of sampled instances is decided by giving shot numbers. 
For example, 1-shot implies that only one instance is sampled in giving an image and other instances will be ignored in this image. We have reported 1-shot results based on multiple random seeds for PASCAL VOC and MS-COCO in Tables 1, 2, 3, 4, 5 and 6. It can be seen that the improvements in our method are consistent with a 1-shot setting.\n\nWe thank you again for your time and efforts in reviewing our paper. Furthermore, we would be more than happy to discuss with you if you have any concerns about our responses.", " We sincerely thank you for the detailed comments and positive feedback such as “**solid framework**”, “**SOTA results**”, “**solid quantitative evidence to support the main claim**”, and \"**clear writing**\". For each detailed question, we provide responses below.\n\n**Q1: Contribution is rather shallow and incremental.**\n\nThe main contribution of our paper has been well summarized and recognized by other reviewers (1weo and DeAx), and here we want to emphasize them again as follows:\n\nFirstly, we rethink FSOD and FSIS tasks and discover that the existing fine-tuning-based few-shot FSOD and FSIS methods severely suffer from biased classification because of the observed missing label issue. To the best of our knowledge, this is **the first time to propose the missing label issue in FSOD and FSIS from a label completeness perspective**. **This new perspective might be inspiring to others in this field** as pointed out by reviewer DeAx. For example, benchmark itself (commented by other reviewers) in FSOD and FSIS.\n\nSecondly, we propose a **very simple** (core implementation **only with one line code**, Eq. 8) **but effective** (e.g., **5.6+** AP50 improvements for detection and **4.5+** AP50 improvements for segmentation on challenging MS-COCO with 5-shot setting in Table 2) method that decouples the standard classifier into two parallel heads (positive and negative heads) to independently process clear positive samples and noisy negative samples for mitigating the biased classification in FSOD and FSIS and thus improving the generalization ability of few-shot model on novel classes (and novel and base classes for generalized setting).\n\nLast but not least, our method **consistently outperforms its baseline and state of the arts by a large margin** recognized by you and other reviewers on few-shot benchmarks PASCAL VOC and MS-COCO for both FSOD/FSIS and gFSOD/gFSIS tasks without any additional computation cost and parameters.\n\nIn summary, based on the above clarifications, we sincerely hope from the heart that you could re-evaluate the contribution of our work.\n\n**Q2: It is unclear why one of the person instances is mislabeled in Fig. 1.**\n\nThank you for pointing out this issue. And we have some corresponding observations and discussions in Sec. 3.2: Missing label issue (lines 126-145). Here we provide more explanations. As we know, FSOD/FSIS tasks require instance-level recognition, which is different from image-level few-shot image classification. Note that the number of instances varies considerably because an image may consist of multiple instances. In the standard benchmarks, a few labeled instances are randomly sampled for given training images and shot numbers, and thus other unsampled instances (e.g., the person in Fig. 1) will be mislabeled. ", " **Q3: Is it possible to avoid label noise? 
How does it happen that the objects that are not supposed to used as supervision signals end up being used as such?**\n\nYes, it is possible to avoid label noise in a few-shot setting. For example, the community could build new benchmarks that all interest instances are fully annotated in given few-shot training images. However, designing a new benchmark is beyond our current research because we mainly focus on few-shot with missing label scenarios. Furthermore, we argue that current benchmarks are more challenging than the fully-annotated ones. The reason is that, as we know, missing label (partial label) learning is more difficult and challenging than conventional fully-supervised learning, especially in few-shot settings. In addition, we want to emphasize that it is often expensive and time-consuming to completely annotate all interest instances in practical applications. So it is more friendly and convenient for users to utilize the miss-labeled protocol (the current benchmarks).\n\nThe missing label (noisy) instances may bring biased classification and finally result in missing detection. And we have some corresponding discussions and analyses in Sec. 3.2: Biased classification issue (lines 146-163). Here we provide more explanations. In a two-stage object detection framework, e.g., Faster RCNN, positive (foreground) and negative (background) samples are generated by computing IoU scores between all RPN proposals and ground-truth bounding boxes. Under this strategy, those missing labeled instances will be mistakenly assigned as negative labels, but they are truly positive. Therefore, the model will be misguided by these noisy negative samples during training and biased towards background at inference time. This potentially limits the generalization ability of the few-shot model on novel classes.\n\n**Q4: It is unclear why this problem is solved architecturally by introducing additional heads while it seems like it is really a problem with the data. I would much prefer if the problem could be solved in the source.**\n\nOur method decouples the standard classifier into two parallel heads (positive and negative) to independently process clear positive samples and noisy negative samples for mitigating the biased classification for FSOD/FSIS and gFSOD/gFSIS. We analyze why it works from two aspects, gradient optimization (Sec 3.3) and the generalization ability of the learned classifier for foreground objects (Sec 4.3). We understand that it will not involve the missing label issue if all instances are fully annotated. Unfortunately, this requires a well-annotated dataset which leads to a chicken-and-egg problem–we need a fully-annotated dataset to train a good few-shot model, but we need a good few-shot model with a partly-annotated dataset.\n\n**Q5: Missing qualitative examples.**\n\nThanks for your suggestion. We show qualitative results including success and failure cases and compare our method with its baseline in the supplementary material (Fig. 1) due to the page space limitation. As shown in Fig. 1, we can observe that the baseline method may tend to incorrectly recognize foreground objects as background due to the biased classification (middle part in Fig. 1). In addition, our method also may produce failure results due to small objects, occlusion, and misclassification of similar appearance objects (bottom part in Fig. 1).", " **Q6: This paper only shows results based on Mask-DeFRCN.**\n\nWe take DeFRCN as our baseline and extend it from FSOD to FSIS in our initial submission. 
One main consideration is its simple framework and state-of-the-art performance for FSOD. Here we report the performance that replaces the baseline DeFRCN with TFA on MS-COCO with 1-, 5-, and 10-shot. Specifically, we only replace the standard classifier in TFA with the proposed decoupling classifier and keep other components and hyper-parameters unchanged. As shown in the Table below, our re-produced results (denoted as TFA*) are close to that of the original TFA paper both on novel and base classes. What is more, the proposed decoupling classifier helps to improve the baseline TFA by about 1 point in AP (2-3 points in AP50) on novel classes and 3-4 points in AP (5-6 points in AP50) on base classes across different shots settings. These results are consistent with that of plugging our decoupling classifier into DeFRCN.\n\n| Shot | 1 | 5 | 10 |\n|---|:---:|:---:|:---:|\n| **Novel**| AP AP50 | AP AP50 | AP AP50 |\n| TFA (reported in paper) | 1.9 3.8 | 7.0 13.3| 9.1 17.1 |\n| TFA* (re-produced by us) | 1.6 3.5 | 6.8 13.3| 9.0 17.2 |\n| **Ours** | **2.5** **5.6** | **7.8** **15.9**| **10.0** **20.0** |\n\n| Shot | 1 | 5 | 10 |\n|---|:---:|:---:|:---:|\n| **Base** | AP AP50 | AP AP50 | AP AP50 |\n| TFA (reported in paper) |31.9 51.8 | 32.3 50.5| 32.4 50.6 |\n| TFA* (re-produced by us) |31.9 51.3 | 31.7 49.7| 31.9 49.9 |\n| **Ours** |**34.6** **56.9** | **35.2** **56.0**| **35.0** **55.6** |\n\n\n**Q7: The notation in equations 11-13 is confusing: equation 11 is the derivative of Lclsfg, equation 12 is the derivative of Lclsfg and equation 13 is the derivative of Lclsbg, while on the LHS there is always Lcls.**\n\nWe thank you for pointing out the typos in Eq. 12 and 13. Note that the $L_{cls}$ consists of two heads, $L_{cls}^{fg}$ and $L_{cls}^{bg}$ in Eq. 6. Therefore, the derivate of $L_{cls}$ with respect to $\\theta_{cls}$ will also consists of two parts, i.e., Eq. 12 and 13. Note that the name of $L_{cls}$ may be misleading in Eq. 12 and 13, thus we will modify it as $L_{cls}^{fg}$ in Eq. 12 and $L_{cls}^{bg}$ in Eq. 13 in next version.\n\n**Q8: How easy it is to extend other existing backbones with the proposed technique? What does it take in terms of actual code changes? Can you provide a clean code example?**\n\nIt is easy to use other backbones with the proposed method because we only simply modify the classifier of DeFRCN (Faster-RCNN framework) which originally supports various backbones. Considering state-of-the-art DeFRCN reports results based on the ResNet-101 backbone, we follow it and also report experimental results based on the same backbone for fair comparisons.\n\nThank you very much for your attention to the actual code implementation. As pointed out by other reviewers, our method is very simple and easy to understand/follow/reproduce. In addition, we also promise that the code will be available (line 15) in our submission. Here, we provide a PyTorch style code for the proposed decoupling classifier as follows. 
\n```python\ndef dc_loss(x, y, m):\n \"\"\"\n Compute the decoupling classifier loss.\n Return scalar Tensor for a single image.\n\n Args:\n x: predicted class scores in [-inf, +inf],x’s size: (N, C+1), (N is the number of region proposals of each image)\n y: ground-truth classification labels in [1, C+1], where [1, C] represent foreground object classes and C+1 represents the background class, y’s size (N,1)\n m: image-level label vector and its element is 0 or 1, m’s size: (1, C+1)\n\n Returns: \n loss\n \"\"\"\n # background class index\n N = x.shape[0]\n bg_label = x.shape[1]-1\n\n # positive head\n pos_ind = y!=bg_label\n pos_logit = x[pos_ind,:]\n pos_score = F.softmax(pos_logit, dim=1) # Eq. 4\n pos_loss = F.nll_loss(pos_score, y[pos_ind], reduction=\"sum\") #Eq. 5\n\n # negative head\n neg_ind = y==bg_label\n neg_logit = x[neg_ind,:]\n neg_score = F.softmax(m.expand_as(neg_logit)*neg_logit, dim=1) #Eq. 8\n neg_loss = F.nll_loss(neg_score, y[neg_ind], reduction=\"sum\") #Eq. 9\n\n # total loss\n loss = (pos_loss + neg_loss)/N\n return loss\n```\nIt can be seen that the main change is to only introduce an image-level label vector ($\\vec m$)on the standard softmax function for the negative head and others keep unchanged like the positive head, but the performance improvements are consistent.\n\nWe thank you again for your time and efforts in reviewing our paper. Furthermore, we would be more than happy to discuss with you if you have any concerns about our responses.", " We thank all reviewers for reviewing our paper and their feedback, especially that they found the proposed method “**very simple yet effective**” (1weo, DeAx) and “**novel idea, quite interesting and strongly motivated**” (DeAx), “**solid framework**” (LjLJ), “**competitive performance** and **state-of-the-art results**” (LjLJ, 1weo, DeAx), “**adequate comparisons and evaluation**” (mATA), “**well analyzed and interpreted**” (DeAx) and “**clearly or well written**” (LjLJ, 1weo, DeAx). The main concern is the training and evaluation protocol (benchmarks) in few-shot detection and few-shot instance segmentation. We firstly clarify the training and evaluation protocol issues as follows:\n\nFirst, we strictly follow the [standard training and evaluation protocol (benchmarks)](https://github.com/ucbdrive/few-shot-object-detection/blob/master/datasets/README.md) for FSOD/FSIS and gFSOD/gFSIS tasks which have been widely accepted and used in the community of machine learning and computer vision. To the best of our knowledge, recent published FSOD/FSIS (or gFSOD/gFSIS) papers almost utilize the same benchmarks that consider an object instance as a “shot”. The reason is that there are generally multiple instances in an image for instance-level FSOD and FSIS, which is different from image-level few-shot image classification.\n\nSecond, the community accepts the current benchmarks because they are more challenging for FSOD/FSIS (or gFSOD/gFSIS) tasks due to incomplete or partial annotations (i.e., missing labels) in a few-shot setting. The missing label issue requires that learning algorithms deal with training images each associated with multiple instances, among which only partial instances are labeled, which is also similar to partial label learning. 
As we know, missing label (partial label) learning is more difficult and challenging than conventional fully-supervised learning, especially few-shot scenarios.\n\nLast but at least, we think that it is generally expensive and time-consuming to label all instances in many real-world applications. Fully-supervised object detection or instance segmentation typically assumes that all interest instances are fully labeled for given training images. In many real-world applications, such as open-vocabulary object detection, however, it is generally difficult to label all instances, and thus there still exists some instances left to be missing labeled. In addition, it may be more friendly and convenient for users to label partial instances than all instances even in few-shot scenarios, although we agree that it is possible to label all instances given few-shot images. The fully-annotated FSOD and FSIS are beyond our current research because we mainly focus on FSOD and FSIS in missing labeled few-shot scenarios. We leave fully-annotated FSOD and FSIS (creating new benchmarks) as future work.\n\nWe thank all reviewers for their time and efforts. Next, we respond to the concerns of each reviewer one by one.", " The paper proposes a method of correcting the missing label issue in the context of few-shot instance segmentation and few-shot object detection problems. Strengths:\n\n- The paper is very well focused on solving one specific issue\n- reasonably clearly written\n- empirical framework is solid\n- SOTA results\n- Section \"Why DC works?\" seems to provide solid quantitative evidence to support the main claim of the paper.\n\nWeaknesses:\n\n- Contribution is rather shallow and incremental, the paper almost reads as a workshop paper in some respects. It is hard to evaluate the significance of the work, because problem being solved is not well articulated (see continuation below)\n- problem being solved is not well articulated\n - In Fig 1 it is unclear why one of the person instances is mislabeled\n - it is unclear if the label noise is due to the dataset deficiency or it is due to the incorrect sampling scheme used to train few-shot methods. Is it possible to rectify the sampling scheme for few shot to avoid label noise? How does it happen that the objects that are not supposed to used as supervision signals end up being used as such?\n - It is unclear why this problem is solved architecturally by introducing additional heads while it seems like it is really a problem with the data. I would much prefer if the problem could be solved in the source, while in this case it looks like the data problem remains and instead of solving it directly, a heuristic is added at the model level to account for it\n- Missing qualitative examples showing when the proposed technique succeeds in solving the stated problem and when it fails\n- Since the proposed technique is claimed to solve a high level issue, applying it to a few base methods and showing that it works seems very important. However, this paper only shows results based on Mask-DeFRCN. - The notation in equations 11-13 is confusing: equation 11 is the derivative of $L_{cls}^{fg}$, equation 12 is the derivative of $L_{cls}^{fg}$ and equation 13 is the derivative of $L_{cls}^{bg}$, while on the LHS there is always $L_{cls}$\n- How easy it is to extend other existing backbones with the proposed technique? What does it take in terms of actual code changes? Can you provide a clean code example? 
N/A", " This paper addresses the missing label issue in few-shot object detection (FSOD) and few-shot instance segmentation (FSIS). Some foreground examples are not labeled in the few-shot scenario, which causes classification bias for the conventional classification head used in object detectors. This paper proposes a simple yet effective algorithm that treats the foreground and background proposals differently in the classification loss. The proposed method is evaluated on benchmark datasets and shows competitive performance against SOTA methods for both FSOD and FSIS. Pros:\n\n1. The proposed method is very simple yet effective. It decouples the loss function for foreground and background objects to remedy the effect on the classification head from the missing labels for potential foreground objects. The evaluations on benchmark datasets demonstrate its effectiveness in both FSOD and FSIS.\n2. The authors show that the proposed loss can be incorporated into other FSOD methods to achieve better performance. It also has no negative impact on base classes per the evaluation results. It might become a plug-in for many FSOD methods.\n3. The paper is generally well written and easy to follow.\n\nCons:\n\n1. While I acknowledge the issue of missing labels in the FSOD, it is more like a bad design in the current FSOD evaluation protocol rather than a real-world problem. It is weird when you have a lot of images, but you only label a single class and ignore others, especially when you know it will have a negative impact. If you have limited resources (as in a few-shot setting), shouldn't you focus on labeling a small group of images to ensure every class is labeled so that the total # labels are roughly similar? I understand the protocol is designed to control the number of shots, but it just seems unrealistic. 1. If the novel class training samples are provided in a way that every example is labeled in each image but only a small group of images are labeled (basically the strategy I mentioned in Cons), do you think the proposed method works better or worse? Also, will other FSOD/FSIS methods work better or worse in this case? Do you think using the proposed method with the current labeling strategy (i.e. randomly labeling a single class and ignoring others in one image) is still more beneficial as it sees more data? The authors discussed the limitations", " This paper proposes a decoupled classifier to deal with positive and negative proposals separately when learning general few-shot object detection (FSOD) and few-shot instance segmentation (FSIS) networks. Specifically, the paper argues that for FSOD/FSIS problem, missing labels are a common issue. For an image, some classes could be ignored during annotation. The missing annotation problem would lead to a biased classifier, especially when learning with a few images. To solve this problem, the paper proposes to deal with positive proposals with normal cross-entropy loss and deal with negative proposals by only considering the positive class and background of the images.\n\nThe proposed method is implemented based on the architecture of DeFRCN [21] and evaluated on benchmark datasets VOC 15/5 and COCO 60/20. The method is compared with several SOTA methods, including DeFRCN, and appears to be better. \n\nOverall, the idea of the paper should be easily reproducible. The benchmark, evaluation, and comparison are solid. 
However, I think there are several major issues with the premise, and the proposed method could lead to more bias in certain situations. Details can be found in the Strengths And Weaknesses section. The main strengths of the paper are:\n\n1. The paper is relatively easy to follow. The proposed method is described in detail and should be easily reproducible.\n2. The comparisons and evaluation are adequate, and the results are positive on the benchmark.\n\nThe main weaknesses of the paper are:\n\n1. My major concern with the paper is the premise. I think the paper proposes a solution to deal with the bias of existing benchmark datasets rather than the problem (FSOD/FSIS) themselves. In particular, in Fig 1(c), the paper mentions that for a one-shot image, 'dog' could be annotated and the 'person' could be ignored. This situation is mainly due to how existing benchmarks (VOC/COCO) are proposed. Prior works [14, 26] proposed the benchmarks that consider an object instance (bounding boxes) as a 'shot'. In this case, the training images could just be sampled with one instance, even though there are multiple instances in the same image. I agree with the paper that this is an issue with these benchmarks and needs to be addressed. However, I do not think this is an inherent issue with FSOD/FSIS problems. \n2. Even if the problem is inherent with FSOD, the proposed solution could create more bias rather than solving the problem. With the same example in Fig 1(c). If it is a one-shot scenario for the class 'person', then the other person might not be annotated. In fact, this is a very common problem for object detection, even for large-scale datasets. There are plenty of examples of 'group' bounding boxes with a single example instance in the OpenImages dataset (https://storage.googleapis.com/openimages/web/index.html). These missing annotations would be considered negative ('background') and the proposed method restricted the learning to between ['person', 'background'], and backpropagate wrong losses. I think in many situations, the proposed method can lead to more bias and confusion.\n3. Some of the paper's claims are not well-grounded. For example, Line 94-95, the paper claims that \" It is impractical from the perspective of many real-world applications because ones always expect that few shot model is capable of not only recognizing novel classes but also remembering base classes\". However, for many few-shot learning papers (MAML [a] and Prototypical Network [b] for example), base-class performance is not a concern. Could the author motivate more on why we 'always expect the few-shot model to remember base classes‘?\n\n[a] Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, Finn et al, ICML 2017\n[b] Prototypical Networks for Few-shot Learning, Snell at al, NeurIPS 2017 Please refer to the weakness section for questions. N/A", " The paper formally proposes the missing label issue which naturally happens in few-shot scenarios including few-shot object detection (FSOD) and few-shot instance segmentation (FSIS). The missing label issue may result in biased classification (mistakenly recognizing novel class objects as background, i.e., missing detection) and thus reduce the generalization performance of many current FSOD/FSIS models. From the missing label perspective, the paper presents a simple but effective method that decouples the standard classifier into two parallel heads which are capable of addressing clear positive samples and noisy negative samples, respectively. 
Using the proposed decoupling classifier, the model can effectively learn novel classes because the effects of noisy negative samples are well mitigated. The effectiveness of the proposed method is evaluated on standard few-shot benchmarks PASCAL VOC and MS COCO for both FSOD/FSIS and generalized FSOD/FSIS tasks. Strengths:\n+ The paper observes the fact that the missing label issue naturally exists in few-shot scenarios such as FSOD and FSIS. Furthermore, the authors analyze the missing label issue may result in biased classification and reduce the generalization performance of FSOD and FSIS. This new perspective might be inspiring to others in the field. \n\n+ In order to mitigate the effects of noisy negative samples and reduce the biased classification in FSOD and FSIS, the paper proposes a novel classifier decoupling idea that is quite interesting and strongly motivated. It can individually address clear positive samples and noisy negative samples and thus mitigate the biased classification issue. The proposed method is really simple and easy to understand and follow.\n\n+ The performance of the proposed method outperforms its baseline and state-of-the-art by a large margin on PASCAL VOC and MS-COCO benchmarks for both FSOD/FSIS and generalized FSOD/FSIS tasks. Meanwhile, the proposed method doesn’t introduce any additional computation costs and hyper-parameters because it only simply decouples the standard classifier into two parallel classifiers. Therefore, it seems like a practical algorithm that can be put immediately into practice.\n\n+ The proposed method is well analyzed and interpreted from the gradient optimization perspective. On the other hand, the authors evaluate the performance of the learned classifier using the Recall metric for all ground-truth foreground objects of all testing images and demonstrate that the proposed decoupling classifier indeed mitigates the biased classification. \n\n+ Ablation studies cover all the crucial components of the proposed approach. The experiment results are quite convincing.\n\n+ The manuscript is well written and the organization is very clarified.\n\nWeaknesses:\n- Relationship with some existing works~(e.g., DeFRCN). The missing label issue also, in fact, exists between base and novel classes. For example, the novel class objects may potentially present in base images at the base learning stage but these potentially novel objects are viewed as background and it is opposite to the base learning stage at the novel fine-tuning stage. Therefore, the missing label issue also leads to foreground-background confusion between base and novel learning stages because of this two-stage fine-tuning mechanism. I understand that this paper focuses on the missing label issue at the novel fine-tuning stage. However, it is more helpful for the readers to understand the proposed method if these two types of missing label issues can be comprehensively analyzed.\n\n- It is unclear how to use the decoupling classifier at the inference stage because the image-level few-shot label ($m_i$ in Eq. 7) is agnostic for any testing images. I guess that the standard classifier is used at the inference stage and the proposed decoupling classifier only is employed during training time. 
The authors should clarify it.\n - This missing label issue should be comprehensively analyzed from a broad perspective, e.g., between the base and novel learning stages, and within the novel fine-tuning stage itself.\n\n- The inference pipeline (using the standard classifier or the decoupling classifier) should be clarified further.\n Yes. The authors have discussed the limitations of the proposed methods in Sec.4 (Experiments) and Sec.5 (Conclusion). As stated in the paper, it may not be suitable when the missing label rate is small. However, the proposed method is still comparable to its counterpart even if the missing label rate is zero, which indicates the robustness of the proposed method. Considering that the missing label issue is unavoidable in few-shot FSOD and FSIS, especially in multi-category scenarios, it is ok for me." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "HOSUAdkyu6J", "uh1_KGH7ZkQ", "SC2oWgFdn_i", "kU5llO77TxE", "HkWyDEvLtOv", "Ya4tNviElV", "SC2oWgFdn_i", "SC2oWgFdn_i", "uh1_KGH7ZkQ", "HOSUAdkyu6J", "HOSUAdkyu6J", "HOSUAdkyu6J", "nips_2022_dVXO3Orjmxk", "nips_2022_dVXO3Orjmxk", "nips_2022_dVXO3Orjmxk", "nips_2022_dVXO3Orjmxk", "nips_2022_dVXO3Orjmxk" ]
nips_2022_BYLysbfdJOd
Planckian Jitter: countering the color-crippling effects of color jitter on self-supervised training
Several recent works on self-supervised learning are trained by mapping different augmentations of the same image to the same feature representation. The data augmentations used are of crucial importance to the quality of learned feature representations. In this paper, we analyze how the color jitter traditionally used in data augmentation negatively impacts the quality of the color features in learned feature representations. To address this problem, we propose a more realistic, physics-based color data augmentation – which we call Planckian Jitter – that creates realistic variations in chromaticity and produces a model robust to illumination changes that can be commonly observed in real life, while maintaining the ability to discriminate image content based on color information. Experiments confirm that such a representation is complementary to the representations learned with the currently-used color jitter augmentation and that a simple concatenation leads to significant performance gains on a wide range of downstream datasets. In addition, we present a color sensitivity analysis that documents the impact of different training methods on model neurons and shows that the performance of the learned features is robust with respect to illuminant variations.
Reject
Training representations in computer vision typically requires systematic augmentations of the input training set such as crops, reflections, translations, and color jitter. As illumination invariance tends to be a desired property in visual object detection tasks, the design of specific color augmentations is of great interest. In this context, this paper proposes to improve the arguably inferior color representation of standard Color Jitter with a realistic and convincing physics-based color augmentation scheme, called Planckian Jitter, and provides some experimental support. The discussion about the work concentrated around the overall effectiveness and experimental validation. The reviewers have mixed conclusions about the work. In light of the mixed opinions, I tend to agree with the concerns raised by reviewer W5kb, who suggests to -- Repeat experiments with higher resolution, as 32 x 32 seems to be rather small for a conclusion of effectiveness. -- Include some obvious baselines (e.g., no augmentation). -- Clarify the take-home message and conclusions from the ablations. I feel that the paper would benefit from a further iteration and that the current manuscript is not ready yet for publication.
train
[ "ODr6pL6Asm", "m5VGggNCPYe", "bV9UV5lmFwW", "Yx4hzyrofw", "3aIwtksGXpW", "EabjOKKQ-Eg", "eE8kdZBRC3K", "9OHoJ9chkuR", "wc5BD_KSj9T", "fuIHpzs9vSr" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for this discussion and for appreciating our experimental evaluation. We address your concerns below:\n\n>**Q3.1** I think the author may have misunderstood my question. My concern is that the experiment is misleading--it seems to suggest that the illuminant change proposed here is accurate and causes the model trained with it to be stable across illuminant variation. However, if the train and test are being perturbed by the same illumination, it does not matter if the model is correct. The training will allow the resulting model to be stable, regardless of whether the illumination change model is correct. This is why I suggest the author to evaluate using real dataset. While I agree that there is no large dataset that will allow the author to perform this exact experiment, varying temperature of the black body, there are several multi-illumination dataset out there. Murmann et al's \"A Multi-Illumination Dataset of Indoor Object Appearance\" is an example (1000 scenes, 25 \"illuminations\" each).\n\n We agree that confirming results on a real dataset with illuminant changes and object classes would be better. However, we have not found such a dataset. The proposed dataset has no illuminant variation but only the illuminant position is changing. We will also adapt the text to better reflect that the tested augmentations resemble those of the training and will change line 229 ‘Planckian Jitter obtains a remarkably stable performance from around 4000-14000K,’ to ‘The results confirm that Planckian Jitter encourages learning invariance to the augmentations (from 4000-14000K) it has been exposed to during training.\n\nWe would like to stress that Figure 3 does show that the CJ representation is not robust to illumination changes and shows a significant drop in performance on the sites of the graph.\n\n\n>**Q3.3** In my opinion 32x32 images would barely have enough structure for the model to learn \"shape\" representation, and it is too contrived to draw meaningful conclusion for the reader. I understand that computing resource can be limited, but that should lead to different prioritization of the experiment. For example, the authors could choose to focus their augmentation model to just CJ and PJ, rather than adding H&S, B&C, and so on into the mix.\n\nThe results on 32x32 pixel images in our opinion already demonstrate that CJ learns a suboptimal color representation, since PJ obtains significantly better performance on the Flower dataset. We agree that rescaling hurts shape/texture representations more than color representations. This is also shown in our additional experiments done for R2 (see Q2.1 where we show that CJ performance drops much more than PJ performance when lowering the resolution)\n\nAs far as larger images are concerned, we are not sure what result the reviewer would like us to add (for the 224x224 images). In Table 3 we first show CJ, PJ alone and then their combination trained on ImageNet (with 224x224 pixel images) and then evaluated on downstream tasks with 224x224 pixel images. These results confirm that also for higher resolution (224x224) the combination of [CJ,PJ] obtains the best results on all four datasets.\n\nIt is true that the performance gain is smaller than for 32x32, but especially for CUB-200, T1K and VegFru the gain with respect to [CJ,CJ] is still considerable (in all cases over 4%). 
Note that also in Table 5 of the Supplementary Material we show results on TinyImageNet (with 64x64 pixel images) and our results again show that the combination of [CJ,PJ] obtains superior results.\n ", " I have looked at the other review and carefully read the rebuttal.\n\nQ3.1: I think the author may have misunderstood my question. My concern is that the experiment is misleading--it seems to suggest that the illuminant change proposed here is accurate and causes the model trained with it to be stable across illuminant variation. However, if the train and test are being perturbed by the same illumination, it does not matter if the model is correct. The training will allow the resulting model to be stable, regardless of whether the illumination change model is correct. This is why I suggest the author to evaluate using real dataset. While I agree that there is no large dataset that will allow the author to perform this exact experiment, varying temperature of the black body, there are several multi-illumination dataset out there. Murmann et al's \"A Multi-Illumination Dataset of Indoor Object Appearance\" is an example (1000 scenes, 25 \"illuminations\" each).\n\nQ3.2: I think the author response is fair, and it is true that the most interesting part about this work is not necessarily the performance of PJ. The combination of CJ and PJ tend to lead to higher performance than either alone (Table 3). However, it is unclear if this conclusion would hold in the real world (see comment below).\n\nQ3.3: In my opinion 32x32 images would barely have enough structure for the model to learn \"shape\" representation, and it is too contrived to draw meaningful conclusion for the reader. I understand that computing resource can be limited, but that should lead to different prioritization of the experiment. For example, the authors could choose to focus their augmentation model to just CJ and PJ, rather than adding H&S, B&C, and so on into the mix. \n\nQ3.4: The author is correct. In the simple scaling model, gamma correction only amounts to different scaling parameter. However, I will say that due to the compression at low range, and quantization to uint8, much of the dynamic range is loss and this affect the ability to do proper white balance. The datasets used in this experiment does not seem to contain a lot of night scenes, so this problem is alleviated. However, this is a crucial fact that should be added to the paper.\n\nI will also mention that I had missed in the first review that the author has done a very comprehensive set of experiments, as other reviewers have noted. With this insight, I have upgraded the rating for this paper to 4.", " >**Q3.4** The color jittering model missed one crucial fact about the sRGB space, which is that it is gamma-encoded. Gamma compression is lossy, and resulted in far inferior color correction result compared to images captured in RAW. Nonetheless, I do not see the mentioning of gamma correction prior to color correction (and then reapplication to get back into the original color space)\n\nOne way to account for gamma correction is, as the reviewer suggests, to linearize the sRGB image, apply a RAW-like illuminant, and then delinearize the resulting image (restoring gamma correction). 
An alternative approach, which is used in our method, is to apply gamma correction to the RAW-like illuminant and then apply the gamma-corrected illuminant directly to the sRGB image.\n\nThe two approaches are mathematically equivalent, as shown in the following:\n\nLet $[a,b,c]$ indicate a RAW-like illuminant, $[R,G,B]$ an sRGB image pixel, $\\gamma$ a delinearization value (gamma), and * a pointwise multiplication.\n\nThen:\n\n$([a,b,c]*[R,G,B]^\\gamma)^{(1/\\gamma)}= ([a,b,c]^{(1/\\gamma)} * [R,G,B])$\n\nThe confusion arises from the fact that we did not explicitly state in our manuscript that the RAW-like illuminant is gamma-corrected $([a,b,c]^{1/\\gamma})$. This was in fact \"hidden\" in the lab-to-sRGB conversion described at the end of page 4, line 149. We apologize for the misunderstanding and will make this explicit in the revised manuscript.\n", " Thank you for your review. We address your concerns (weakness) below (in two separate comments).\n\n>**Q3.1** Fig 3 shows a more stable result, but it seems that the testing dataset also undergoes the simulated jittering rather than a real captures of different illumination. Such artificial manipulation of the test set is going to yield the same result regardless of how poorly the jittering performs.\n\nThe experiment behind Figure 3 is driven by two motivations. One, as mentioned by the reviewer, is to verify that we obtain invariance to the transformation applied during training. This is indeed verified. We agree that this experiment would offer additional insights if performed on real data, however this would require a dataset labeled for both illuminant estimation and image classification. Unfortunately, we were not able to find such a dataset. The second motivation is to characterize the degradation of different non-Planckian training modalities. How do these non-Planckian curves vary under synthetic illuminant changes (this is often used as a predictor for variations under real illuminant). This, in fact, shows different test curves, as can be seen in the figure 3. We will make this double motivation more clear in the revised manuscript\n\n>**Q3.2** Planckian jitter do not always perform the best, and can underperform by a significant margin (see Table 1).\n\nFigure 1 (left) in the paper shows how the standard color jitter transforms an example image. In this paper, we argue that these unrealistic variations (easily verified by the human eye) impede the learning of good color representations. Importantly, this does allow it to learn a good shape/texture representation (since it cannot rely on color to solve the matching problem, see also Figure 5 in [4]). Our proposed color augmentation, in Figure 1 (right), yields more realistic color variations which resemble those caused by changing illuminants in the real world. However, since the color augmentations are less drastic, the shape/texture representation is not as strong as the one learned with standard color jitter. Combining both representations is therefore required to get excellent color/shape/texture features, as is confirmed in our experiments on several datasets.\n\nOur aim is not for Planckian Jitter to be better than Color Jitter (a method used by hundreds of papers). Instead, we argue that the color representation of standard Color Jitter is inferior and that when combined with Planckian Jitter (with superior color representation) results should improve. 
We show that in general this is the case, and more notably on datasets where color plays an important role -- the combined results [CJ,PJ] consistently outperform [CJ,CJ] and often by a large margin.\n\nMore in detail about the results in Table 1. We argue that the low performance of Planckian data augmentation on the CIFAR-100 dataset is due to the intrinsic nature of the task – i.e. one in which chromatic variations are irrelevant. For example, the \"car\" class in CIFAR-100 has a wide variability of car body colors, and thus color information is not useful for correctly identifying the class. Conversely, each flower/bird/vegetable has one characteristic “reflectance” color, and any observed color variation is primarily due to the illuminant. On CIFAR100 shape/texture is of more importance than color, and consequently the results of CJ are much better than those of PJ. However, when combined [CJ, PJ] obtains good results on CIFAR100 and excellent results on Flowers-102.\n\n>**Q3.3** The motivation for resizing images to 32x32 is unclear. In most practical application, such small patch sizes are unlikely to contain enough information to be useful. More investigation is needed on more realistic patch size of 200+pixel. In the present study, only Table 3 shows this result, and the performance of PJ is mixed on different dataset\n\nWe structured our experiments so that the explorative part of our research can be conducted in an agile manner by exploiting low-resolution images (computationally, even on 32x32 images the training of self-supervised representations is very demanding). The most significant configurations are then transferred to a higher-resolution setup, with the goal of either confirming or refuting the initial hypotheses. This will be made more explicit in the revised version.\n\n32x32 pixel images are admittedly small for some applications where shape and texture are particularly important, but they contain significant information for color-sensitive applications which are the focus of this manuscript. The results of PJ alone in Table 3 are indeed mixed, but the joint combination LSC [CJ,PJ] yields consistent improvement over any individual form of jittering across all analyzed datasets. This, in our opinion, demonstrates the value of Planckian Jitter in practice\n\n\n", " > **Q2.2** Although the LSC combination is a good start to fuse both types of color augmentation, what about a way to make it more class-specific? In other words, for even a coarse dichotomy such as natural vs man-made images, apply some complementary weighting/distribution of Planckian/default color jitter to get the best of both worlds?\n\nWe would like to thank the reviewer for suggesting this interesting approach for a more refined combination of different types of data augmentation. A data-driven optimization of the ideal weight for each component as a function of a macro-level classification will definitely be considered as future developments. We did try training the network simultaneously with both Planckian jitter and Color jitter (switching between these at each minibatch). In this case, we set only part of the latent space to be invariant with respect to Planckian Jitter and another partially overlapping part to be invariant with respect to Color jitter. The advantage of this method was that a single network could be used at inference time. However, results were not outperforming our current naive concatenation method. We are planning to investigate this in more detail in future work. 
\n", " Thank you for your review. Also thank you for endorsing the empirical results and making good points for discussion.\nWe address your concerns below (in two separate comments).\n\n> **Q2.1** Although the LSC combined augmentation results show that Planckian and random jitter are somewhat complementary, the higher-resolution, 224x244, results in Table 3 do cause the reader to question whether the low resolution case is somehow special with respect to color augmentation. Here Planckian jitter can actually be worse for the target data set of Flowers-102 which isn't fully explained why this happens. Although the idea seems ok for CUB-200 and better for T1K+, again for the natural image data set of VegFru, performance is worse or similar than default random color jitter. The main explanation is a one sentence claim that for higher resolution shape is very representative and color may add little discrimination, but more investigation here is necessary. For example, perhaps higher resolution is actually causing texture to be more discriminative, rather than shape? Intuitively, shape seems reasonably preserved even at lower resolution, unless this is correlated with the actual self-supervised architecture of hyperparameters.\n> As mentioned above, the main question for the reader is the somewhat counter-intuitive results in Table 3, for the same data sets but with higher resolution at 224x224. One would expect a similar trend as the low-resolution data, but the numbers show otherwise. Also, the explanation on better shape discrimination on higher resolution bears more analysis and detail, e.g., is it more about texture than shape at higher resolution?\n\nThe reviewer is correct in pointing out this phenomenon. We agree with their analysis that this is probably due to more access to texture information. Our paper focuses on color representations, and the non color-part of the representation consists of shape and texture. Sometimes we differentiate these three factors (‘color’, ‘shape’, ‘texture’, e.g., lines 27, 165) but often we use the term ‘shape’ to refer to the non-color part, being both shape and texture (e.g., lines 43, 53, 120, 292, etc). We agree that this leads to some inaccuracies, especially in the lines cited by the reviewer, where shape is indeed less high-frequency and texture would be mostly high-frequency. We will carefully revise the text and will explicitly include reference to ‘texture’ where this term is more accurate.\n\nAs an additional experiment, we took the representations and classifiers learned at high-resolution (224x224) and investigated their sensitivity to high-frequency information in images. At inference time, we down-sample (down-sample resolution is given in Table) and then up-sample all images. In this way, we can compare the dependence on high-resolution information of different methods. Note, that here we do not retrain the classifier but use the one trained at 224x224. The results clearly show that CJ suffers more from down-sampling than PJ. For the Flowers datasets, the results of CJ at resolution 224 are better than PJ. However, when we down-sample to 64, the results change and results for PJ are already significantly better than CJ. As suggested by the reviewer, we think the texture information (important for CJ) is removed and this hurts performance. 
For PJ, which is more dependent on color information, down-sampling hurts results less (note PJ is also using texture, shape but to a lesser degree, so results still deteriorate for smaller resolutions). \n\n**Table 1:** _Classification accuracy as a function of down-sampling size on Flowers. Results confirm that PJ is less sensitive to down-sampling than CJ._\n\n| Method | 32 | 64 | 128 | 224 |\n|----------|-------|-------|-------|-------|\n| CJ | 7.74 | 49.23 | 91.43 | 91.90 |\n| CJ+PJ | 16.30 | 66.43 | 93.01 | 93.45 |\n| PJ | 23.11 | 70.13 | 89.04 | 89.22 |\n\nTo verify that CJ uses less color information than PJ, we did a simple experiment where at inference time we changed the input images from sRGB to gray-scale images. These results are provided in Table 2. These results clearly show that PJ is much more dependent on color than CJ. PJ has a drop of over 67.6% whereas CJ only drops 3.2 percentage points. \n\nWe will incorporate this analysis in any final version of this paper. \n\n**Table 2:** _Methods evaluated with color and gray images on Flowers dataset._\n\n| Method | Color | Accuracy |\n|----------|---------|------------|\n| CJ | COLOR | 92.73 |\n| CJ | GS | 89.51 |\n| PJ | COLOR | 88.97 |\n| PJ | GS | 21.38 |", " We would like to thank the reviewer for the appreciation of our manuscript and for the interesting points of discussion.\n\n>Questions: \n>Are there other color distributions, or ways of obtaining color distributions, that might be better than the Planckian illuminants? There are illumination situations, such as light filtering through trees, that are non-Planckian but still non-zero in terms of likelihoods.\n\nConsidering a wider distribution of illuminants could be a good alternative, possibly referring to real-world data or to synthetic datasets, as rendering engines are getting ever more realistic. As a general observation, we add that accounting for non-uniform illuminant distributions might yield diminishing returns. If a given illuminant is known to be less likely to occur naturally, we might want to sample it less frequently. But then its sparse appearance during data augmentation might turn out to have a small impact on the learned features. Nonetheless, we consider this an interesting direction to evaluate for future research.\n\n>Limitations: \n>There is a difference between color augmentation of materials and color augmentation of illumination. Illumination color is one color augmentation that can be done easily through a color rotation of the pixels. While on a jpg-compressed/color-enhanced image the effect is not perfect, it's close enough that the augmentation can allow the network to learn color-based rules that follow a more typical distribution of colors. Regardless of whether color is a useful feature for an object, this type of augmentation matches what is likely to occur in the real world. \n>Color augmentation for materials, on the other hand, is challenging to do properly as an augmentation. First, it is class-based, as every class will have a different material color distribution. Second, it is not appropriate to change the colors of the entire image in order to modify the material color. The uniformly random color augmentation seems like it is intended to try and cover both types of color changes (illumination and material). \n>It would be nice to have some discussion of this in the paper, though not necessary as it is not directly relevant to the author's contribution but an an additional limitation of the standard color augmentation methods. 
But it would give a little more context to the issue in the introduction.\n\nThank you for suggesting this additional limitation of default color jittering. We agree that the default color jitter can be interpreted as applying material and illuminant color changes. We also agree that uniformly applying the same color augmentation to entire patches would not yield realistic material color changes, but would require knowledge of object boundaries. In contrast, Planckian Jitter is more realistic in practice because it is based on a physical model of illumination, and it is in fact a transformation uniformly applied over all pixels in the image. We will incorporate this discussion in any final version of the paper.", " The paper examines the color data augmentation used to train many deep networks for object classification tasks. It proposes that the existing color augmentation method does not permit the network to make effective use of color information when appropriate. They propose using a color augmentation that follows Planckian illuminants, which are physically-realistic. The paper shows that using the combination of a network trained using the standard color jitter and a network using the Plankcian jitter perform significantly better on classification tasks that can take advantage of color information. The ablation study and analysis of the results support the hypothesis that a physics-based color jitter enables the network to more effectively use color information. However, there is no direct analysis of the network's learned features/structure to confirm that. The author's assume that color is a useful feature on the flower data set--but is not necessarily a useful feature on other data sets--and use that assumption to make the claim. It is clear that the Planckian color jitter is responsible for the bump in performance on the flower data set, and the assumption is reasonable.\n\nIt's important to look at data augmentation from a physics-based perspective, because uniformly randomized data augmentation ignores the fact that illumination does have typical distributions. The existing data sets usually do not sample extensively from illumination distributions, especially given that most of the images are have white-balancing applied by default.\n\nIt would be nice to see more extensive use of physics-based principles, however, since almost all standard data sets use jpg-compressed and color enhanced images (designed for human viewing), most of the physical rules that govern illumination and materials are lost. The Plackian jitter at least replaces the uniformly random color data augmentation with expected results on a data set where color is likely to play a strong role in classification. Other examples of data sets where that might be the case are the bird and animal identification data sets. Are there other color distributions, or ways of obtaining color distributions, that might be better than the Planckian illuminants? There are illumination situations, such as light filtering through trees, that are non-Planckian but still non-zero in terms of likelihoods. There is a difference between color augmentation of materials and color augmentation of illumination. Illumination color is one color augmentation that can be done easily through a color rotation of the pixels. While on a jpg-compressed/color-enhanced image the effect is not perfect, it's close enough that the augmentation can allow the network to learn color-based rules that follow a more typical distribution of colors. 
Regardless of whether color is a useful feature for an object, this type of augmentation matches what is likely to occur in the real world.\n\nColor augmentation for materials, on the other hand, is challenging to do properly as an augmentation. First, it is class-based, as every class will have a different material color distribution. Second, it is not appropriate to change the colors of the entire image in order to modify the material color. The uniformly random color augmentation seems like it is intended to try and cover both types of color changes (illumination and material).\n\nIt would be nice to have some discussion of this in the paper, though not necessary as it is not directly relevant to the author's contribution but an an additional limitation of the standard color augmentation methods. But it would give a little more context to the issue in the introduction.", " This paper adds to the data augmentation transforms in the literature by proposing a more physics-based approach to generate more color-realistic augmentations compared to the standard more random color jitter. This is well-demonstrated in Figure 1 for natural image classes such as flowers where although some classes may have strong color variations, in general the variations are more realistic when due to illumination, compared to man-made classes such as cars. The approach is motivated by and follows a standard black-body radiation model, which again as shown in Figure 1 constrains the color jitter to be more realistic.\n\nThe experiments are performed in a self-supervised fashion as that nicely focuses the weight on data augmentation. Both natural and man-made data sets are tested such as Flowers-102, CUB-200, and CIFAR. Overall the results summarized in Figure 3, and Tables 1, 2, 3 show the improvements in downstream classification with Planckian jitter on natural image classes and the potential for combining Planckian and random color jitter in general. The paper is very direct in the exact problem and contribution, namely more realistic data color augmentation in a self-supervised setting, especially for natural image classes such as birds and flowers. For the targeted data sets such as Flowers-102, at low resolution 32x32, downstream classification can be improved by 5%. Also by combining Planckian jitter with default random color jitter, similar or even larger improvements are shown more generally across data sets such as CIFAR and T1K+. Overall the paper keeps it simple and clear for the reader and the overall idea is sound and the experiments very directly justify the basic idea of more realistic color augmentation for data sets with natural images.\n\nFigure 3 analyzing the color sensitivity of the results and the general results showing where for man-made imagery data sets, Planckian Jitter can be worse is useful to point out the limitations and details of the approach. Finally it's an added bonus to verify that this basic general idea is applicable to multiple self-supervised models as shown in Table 4. Although this isn't particularly surprising, it does add to the completeness of the experiments to support the basic idea in the paper.\n\nAlthough the LSC combined augmentation results show that Planckian and random jitter are somewhat complementary, the higher-resolution, 224x244, results in Table 3 do cause the reader to question whether the low resolution case is somehow special with respect to color augmentation. 
Here Planckian jitter can actually be worse for the target data set of Flowers-102 which isn't fully explained why this happens. Although the idea seems ok for CUB-200 and better for T1K+, again for the natural image data set of VegFru, performance is worse or similar than default random color jitter. The main explanation is a one sentence claim that for higher resolution shape is very representative and color may add little discrimination, but more investigation here is necessary. For example, perhaps higher resolution is actually causing texture to be more discriminative, rather than shape? Intuitively, shape seems reasonably preserved even at lower resolution, unless this is correlated with the actual self-supervised architecture of hyperparameters. As mentioned above, the main question for the reader is the somewhat counter-intuitive results in Table 3, for the same data sets but with higher resolution at 224x224. One would expect a similar trend as the low-resolution data, but the numbers show otherwise. Also the explanation on better shape discrimination on higher resolution bears more analysis and detail, e.g., is it more about texture than shape at higher resolution?\n\nAlthough the LSC combination is a good start to fuse both types of color augmentation, what about a way to make it more class-specific? In other words, for even a coarse dichotomy such as natural vs man-made images, apply some complementary weighting/distribution of Planckian/default color jitter to get the best of both worlds? The paper has an explicit section on limitations regarding the overhead with the combined color augmentation approach and also mentions the incomplete investigation into how much and in what detail the proposed Planckian jitter improves color realism why discouraging shape discrimination.", " This paper proposes to use a realistic white balancing as a way to augment datasets for self-supervised learning approach. The balancing was based on Planckian blackbody radiation. The author used this to train a siamese model in conjunction with standard color jittering that involves hue and saturation perturbation, and showed that the combination of the two color jittering method works best. Furthermore, the planckian jittering seems to lead to better performance on tasks where color is an important features such as flower classification task. Strength:\n- Realistic image processing is complicated and is often skipped in ML literature. The color model used in this paper is largely correct and well-grounded in the color-conversion theory\n\nWeakness:\n- My main concern for this paper is that I am remain unconvinced of the benefit of the planckian jittering as presented:\n - Fig 3 shows a more stable result, but it seems that the testing dataset also undergoes the simulated jittering rather than a real captures of different illumination. Such artificial manipulation of the test set is going to yield the same result regardless of how poorly the jittering performs.\n - Planckian jitter do not always perform the best, and can underperform by a significant margin (see Table 1).\n - The motivation for resizing images to 32x32 is unclear. In most practical application, such small patch sizes are unlikely to contain enough information to be useful. More investigation is needed on more realistic patch size of 200+pixel. 
In the present study, only Table 3 shows this result, and the performance of PJ is mixed on different dataset.\n- The color jittering model missed one crucial fact about the sRGB space, which is that it is gamma-encoded. Gamma compression is lossy, and resulted in far inferior color correction result compared to images captured in RAW. Nonetheless, I do not see the mentioning of gamma correction prior to color correction (and then reapplication to get back into the original color space). Please see my weakness section. I would like to see the response to my weakness section, particularly around the benefits of PJ, which is my main concern for this paper. Not quite. The author mentioned that their PJ reduces the quality of shape representation, but I do not see convincing evidence of this in the text. No harm to society is anticipated for the current manuscript." ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "m5VGggNCPYe", "Yx4hzyrofw", "Yx4hzyrofw", "fuIHpzs9vSr", "EabjOKKQ-Eg", "wc5BD_KSj9T", "9OHoJ9chkuR", "nips_2022_BYLysbfdJOd", "nips_2022_BYLysbfdJOd", "nips_2022_BYLysbfdJOd" ]
nips_2022_0ltDq6SjrfW
Efficient Knowledge Distillation from Model Checkpoints
Knowledge distillation is an effective approach to learn compact models (students) with the supervision of large and strong models (teachers). As empirically there exists a strong correlation between the performance of teacher and student models, it is commonly believed that a high performing teacher is preferred. Consequently, practitioners tend to use a well trained network or an ensemble of them as the teacher. In this paper, we make an intriguing observation that an intermediate model, i.e., a checkpoint in the middle of the training procedure, often serves as a better teacher compared to the fully converged model, although the former has much lower accuracy. More surprisingly, a weak snapshot ensemble of several intermediate models from a same training trajectory can outperform a strong ensemble of independently trained and fully converged models, when they are used as teachers. We show that this phenomenon can be partially explained by the information bottleneck principle: the feature representations of intermediate models can have higher mutual information regarding the input, and thus contain more "dark knowledge'' for effective distillation. We further propose an optimal intermediate teacher selection algorithm based on maximizing the total task-related mutual information. Experiments verify its effectiveness and applicability.
Accept
While this paper has 4 accept recommendations among the four reviewers, I have serious misgivings about the content of this paper. The main experimental insight, that a less trained teacher sometimes performs better, is already known and unsurprising. In fact, it's the very point of KD that makes that result interesting --- why should it be better to aim for a noisy target than the true target? On any dataset, if we train the teacher for long enough, we will eventually recover the exact labels to arbitrary precision. In that sense, all KD is with an "intermediate" trained model, and the only question is (has always been) just how early to stop. The next issue with the paper is that the authors claim to "explain" theoretically why KD works from the perspective of information bottleneck theory; however, what they offer falls short. The “theory” is more like a story, with significant gaps. Most significantly, there is no logic to carry the leaps from stories of how mutual information evolves to why knowledge distillation should work. Moreover, what the authors call "mutual information" in their experiments is not actually mutual information, and the surrogates they use seem odd choices that are not consistent. For I(F;Y), the authors look at the output of the teacher model, but for I(X;F), the authors look at an intermediate layer of F, training a decoder to predict X from the last convolutional layer of F. Why should the information contained in this middle layer of the teacher model matter when the student only accesses the teacher's output? My ambivalence with this paper is two-fold: (i) that the experimental findings are the main contribution and they are by themselves not sufficient for publication and (ii) that the IB component of this paper is misrepresented as a theoretical explanation of the efficacy of KD but actually it falls short. Unfortunately, I’m discovering these concerns and expressing them after the discussion, hence my recommendation to accept the paper on the basis of the reviewers' initial recommendations. If the work is accepted, I expect the authors to edit it responsibly to remove all misleading claims that suggest that the paper provides a proper theoretical account for why KD works (they certainly have not), versus a speculative intuition, and to be much more careful to disambiguate the quantities that they track from actual mutual information.
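For readers unfamiliar with the surrogates criticized above, they are roughly of the following form: an output-based estimate for I(F;Y) and a reconstruction-based proxy for I(X;F) computed with a decoder attached to an intermediate feature map. The sketch below is my own hedged approximation of such proxies (the `decoder`, `features`, and the uniform-prior H(Y) are assumptions); as the meta-review stresses, these quantities are plug-in surrogates, not true mutual information.

```python
import math
import torch
import torch.nn.functional as F

def proxy_mi_fy(teacher_logits, targets):
    """Surrogate for I(F; Y): H(Y) - H(Y|F), with H(Y|F) estimated by the
    teacher's cross-entropy on the labels and H(Y) assumed uniform over classes."""
    h_y_given_f = F.cross_entropy(teacher_logits, targets).item()
    h_y = math.log(teacher_logits.shape[1])  # nats, uniform label prior assumed
    return h_y - h_y_given_f

def proxy_mi_xf(decoder, features, images):
    """Surrogate for I(X; F): negative reconstruction error of a decoder that
    was trained to predict the input X from an intermediate feature map F.
    `decoder` and `features` are placeholders for such a pre-trained module."""
    with torch.no_grad():
        recon = decoder(features)
    return -F.mse_loss(recon, images).item()
```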
train
[ "NJeUqWZaZn2", "jxDM95r5b3U", "zz2Mwj6-3Ei", "Fid-rh3xJ4M", "Oq8R2nYFYc", "4L2qCccfUa", "kjf3IHGqUqwd", "XSgcqr0P_QG", "kAJl-9lNR4f9", "eGRSC3pLjp9Y", "TWXE3Sy67cq", "HyYOh1maYBa", "FX7S9D9qyJ2", "cfitxZnPw8g", "IOxq5X5tgVP", "oYAYWWHmKuC", "XuRnXiClpmQ", "AGRNAXj1rzh" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks very much for your continued support. It encourages us a lot.\n\n(1) We sincerely accept the constructive suggestion of discussing contribution #2 more prominently. Some related study contents were put in the supplementary materials (such as, Section 2 and Section 4). We are considering to supplement and reorganize this part in the final version.\n\n(2) For commonly used models, we suggest that researchers can easily find the proper settings including the training epochs for convergent models, from the published papers or a number of open source sites, such as Github.\n\nWe thank the reviewers again for all their help so far in improving the paper!", " Thank you for the clear answers and the added experiments. The responses to my comments as well as others resolve my question about the differences between this work and other model distillation in the NLP regime. Thus, I am raising my score significantly, from 4 to 6.\n\n", " The response clearly solves my concerns about the difference between this work and teacher-free KD. Thus, I improve my final rating from 5 to 6. ", " I would like to thank the authors for their detailed responses and updating the draft in response to some of the suggestions. I enjoyed reading through other reviewers' comments and suggestions, and the authors' responses. Going over the discussion, I had a follow up suggestion and a small clarification. First, given results from [36], I would suggest the authors discuss more prominently their contributions related to linking the KD observation and IB theory (contribution #2 as per the general response from the authors), rather than emphasizing so much on the observation that a fully-converged teacher is suboptimal (which is already known). Second, I would like to clarify how would a practitioner know about $T_{0.5}$ (a half-point) checkpoint without having trained the model (with early stopping)? That is, how would one know if it may take 117 epochs or just 93 to converge, without actually training it. \n\nI particularly appreciate the additional experiments to include the error bars, and validate the original hypothesis with early stopping. The experiments with respect to early stopping alleviate my concerns. (Additionally, Reviewer 6Nks's concern and alternative hypothesis likely doesn't apply here as the authors' don't use pretrained teacher/student models). In that light, I am happy to update my score and recommend this paper for an acceptance. ", " Dear Reviewer 6Nks:\n\nThank you for raising the weaknesses and questions. We have tried to address them in our responses below and rebuttal revision pdf. We would like to know if our responses address your concerns. If not, we would be happy to provide more explanation. Additional suggestions or discussions are also welcome.\n\nBest wishes.", " Dear Reviewer encu:\n\nThanks for raising the concerns and weaknesses. We have tried to address them in our responses and rebuttal revision pdf. We would like to know if our responses address your concerns. If not, we would be happy to provide more explanation. Additional suggestions or discussions are also welcome.\n\nBest wishes.", " \nDear Reviewer CyDK:\n\nThank you for raising the concerns and questions. We have tried to address them in our responses below and rebuttal revision pdf. We would like to know if our responses address your concerns. If not, we would be happy to provide more explanation. Additional suggestions or discussions are also welcome.\n\nBest wishes.", " Thanks for raising your score. 
It encourages us a lot.\n\n>How to represent the comparision results.\n\nFollowing your advice, we bold the results within 0.95 confidence interval of the top results, as shown in Table 1 and Table 2 above. Generally, the bold results come from intermediate checkpoints rather than convergent models, and Snapshot Ensemble has better distillation performance than Full ensemble.\n\n>How to show the significance of the contribution.\n\n(1) It's a good suggestion to show the comprehensive comparisons of different teacher models by plotting the total computational cost (teacher and student together) vs the distillation performance. Figure 3 in the paper only showed this time-performance comparisons between the Snapshot Ensemble and the Full Ensemble. To show it more comprehensively , we further illustrate the results of computational cost vs distillation performance for five teacher models ($T^{inter}$, $T^{full}$, $T^*$, $T_1^{inter}+T_1^{full}$, $T_1^{full}+T_2^{full}$) on four teacher-student pairs. These results have been provided in the supplementary materials (see Section 4 and Figure 4). As shown in Appendix Figure 4, the curve of $T^{inter}$ is to the upper left of the curve of $T^{full}$, and the curve of $T_1^{inter}+T_1^{full}$ is to the upper left of the curve of $T_1^{full}+T_2^{full}$. In the figure, \"upper left\" means lower computational cost but higher distillation performance. $T^*$ has higher distillation performance than $T^{full}$ and $T^{inter}$, but $T^*$ needs larger computational cost (though lower than $T_1^{full}+T_2^{full}$). As we said in \"Shared responses\", the algrithm of $T^*$ needs to be further improved in the future work.\n\n(2) We agree that our method can be better than many existing variations of KD. In fact, \"Contrastive representation distillation [ICLR 2020]\" [1] evaluated many algorithms and found that \"KD works pretty well and none of the other methods consistently outperforms KD.\" (see Table 1 and Table 2 of [1]) Many works adopted the best hyperparameters to highlight their own methods, which is unfair to normal KD. However, we did the optimal hyperparameter search for every teacher-student pair in this paper. Our work is only based on normal KD because we try to obtain a common and useful conclusion and avoid any unnecessary trick.\n\n---\n[1] Tian, Yonglong, Dilip Krishnan, and Phillip Isola. \"Contrastive Representation Distillation.\" International Conference on Learning Representations. 2020.", " I appreciate the extended results table, and I have increased my soundness rating accordingly. (For clarity, you should bold all results where the mean lies within the, for example, 5% lower-confidence-bound of the top result. This corresponds to results that are not less than 5% likely to be better than your top result, and is the correct way to ensure that model comparisons are not spurious.)\n\nNow that the standard deviations have been added to the paper, I am more interested in the significance of the contribution. There are two ways (that I can think of) to establish this; if you can think of a different way feel free to pursue that:\n\n1. Show a noticeable improvement in the limited compute case, perhaps by plotting the total amount of compute (for teacher and student training together) vs the final model performance and showing how your method, with the optimal intermediate model selection algorithm, achieves similar results in less time than full ensemble KD. 
This uses data already in your tables, so you won't need to run any additional experiments, but presenting it in a coherent way will make your case much more convincing.\n\n2. Show a noticeable improvement in performance over the baselines identified by reviewer encu. This would show that your method is better than the many existing variations of KD.\n\n(Also, after reading the other reviewer's responses, I have decreased my own confidence score.)", " Thanks for your positive comments. We carefully respond to the questions.\n\n> Q1: how would the results in Table 1 and Table 2 look like if $T^{full}$ is a model based on early stopping rather rather than a model after 120 or 200 epochs?\n\nOverall, training the teacher models on CIFAR for 200 epochs and ImageNet for 120 epochs does not lead to obvious overfitting. The numbers of training epochs are not significantly affected by using early stopping. We have tested the common early stopping strategy (patience=10) on all teacher models. Table 1 shows the numbers of training epochs for teacher models with or without early stopping. We also show the curves of validation accuracy versus epoch for all teacher models in the supplementary material (see section 3, Figure 3). **It shows that whether we use early stopping has no effect on the results of our paper.**\n\nTable 1. The numbers of training epochs for teacher models with or without early stopping.\n\n|Teacher model|$T^{full}$ without early stopping|$T^{full}$ with early stopping|the optimal $T^{inter}$|\n|:---------:|:---------:|:---------:|:---------:|\n|WRN-40-2|200|197|160|\n|ResNet-110|200|189|120|\n|ResNet-50|120|115|80|\n|ResNet-34|120|119|70|\n\n> Q2: the figure and table captions in the paper are often incomplete and sow confusion.\n\nThese detailed suggestions are very useful in polishing our paper. We have fixed Figures 1, 2, and 3, such as amending the X-axis and Y-axis captions (see the rebuttal revision pdf). We will keep improving the presentation of figures and tables till submitting the final version.\n\n> Q3: the paper title, albeit catchy, doesn’t really concern the paper’s key observations, methods or results and is rather a snappy message which could apply to several papers.\n\nThanks, we will reconsider the title of the paper.\n\n> Q4: it would be great to have an intermediate model-selection process that doesn’t require training the teacher model to convergence.\n\nThis is a good suggestion. In the case of limited computing resources, we empirically suggested that the half-way teacher model can suffice for KD. In the case of sufficient computing resources, we proposed the optimal intermediate model selection algorithm to find an appropriate checkpoint to achieve better performance. The current algorithm indeed requires training the teacher model to convergence, which does not save training cost. In the next, we will improve the algorithm and also hope that the followers can propose wiser algorithms.\n\n> Q5: some typos need fixes.\n\nWe have fixed them (see the rebuttal vision pdf).", " Thanks for your valuable comments. We carefully respond to the weaknesses you have pointed out.\n\n> how significant are the results?\n\nWe apologize for the lack of standard deviations. In fact, we had calculated the standard deviations, but we removed them from the manuscript in consideration of the fact that the limited width of the table causes the data font to be too small. Due to limited space, we have supplemented the whole standard deviation data in the rebuttal revision pdf. 
We show some typical data in Table 1 and Table 2. **Overall, the standard deviation data does not obscure the advantage of the intermediate model over the convergent model. Even if some of the improvements are not significant, the significant decrease in training cost is still valuable.**\n\nTable 1. Evaluation of distillation performance of all checkpoints on CIFAR. Results within 95% confidence interval of the best results are in bold.\n|Teacher|Student|$T^{20}$|$T^{40}$|$T^{60}$|$T^{80}$|$T^{100}$|$T^{120}$|$T^{140}$|$T^{160}$|$T^{180}$|$T^{200}$|\n|:---------|:---------|:---------|:--------|:---------|:---------|:---------|:---------|:---------|:---------|:---------|:---------|\n|WRN-40-2|WRN-40-1|71.65±0.28|72.21±0.13|72.34±0.1|72.85±0.15|72.76±0.24|72.83±0.06|73.08±0.05|**73.26±0.03**|72.91±0.27|72.68±0.1|\n|ResNet-110|ResNet-32|70.68±0.49|70.74±0.18|70.98±0.07|71.44±0.13|72.49±0.32|**72.63±0.13**|**72.56±0.30**|72.49±0.28|**72.53±0.13**|72.48±0.22|\n|WRN-40-2|MobileNetV2|67.77±0.26|67.86±0.22|68.21±0.33|**68.94±0.3**|**68.99±0.12**|68.74±0.21|68.54±0.07| 68.58±0.34|68.19±0.35| 68.03±0.34|\n|ResNet-110|MobileNetV2|66.38±0.29|67.88±0.17|67.84±0.26|68.66±0.07|68.79±0.17|**68.99±0.33**|**69.01±0.20**|**69.05±0.27**|**68.84±0.52**|68.63±0.35|\n\nTable 2. Comparison results of ''Full Ensemble vs. Snapshot Ensemble'' on CIFAR. Distillation results within 95% confidence interval of the best results are in bold. The best ensemble results are italic.\n\n|Teacher|Student|Normal KD|Full Ensemble KD|Snapshot Ensemble KD|Full Ensemble|Snapshot Ensemble|\n|:---------|:---------|:---------|:--------|:---------|:---------|:---------|\n|WRN-40-2|WRN-40-1|71.68±0.10|73.05±0.16|**73.70±0.04**|*79.44*|76.29|\n|ResNet-110|ResNet-32|72.48±0.22|**72.88±0.14**|**73.03±0.24**|*76.92*|73.23|\n|WRN-40-2|MobileNetV2|68.03±0.34|68.69±0.32|**69.20±0.31**|*79.44*|76.29|\n|ResNet-110|MobileNetV2|68.63±0.35|69.69±0.25|**70.19±0.25**|*76.92*|73.23|\n\nIn Table 3 of the paper, $T^*$ has an average performance of 70.87%, which is better than $T^{0.7}$ at 70.77%. The improvement looks minor because $T^{0.7}$ is a strong baseline. As stated in our paper, the optimal checkpoints are usually found in the upper right corner of the IB curve. We think it is more valuable that both $T^{*}$ and $T^{0.7}$ are clearly better than $T^{full}$ (70.46%). \n\nAs you said, \"consider investigating how a smaller training budget for the teacher network is enough to give good performance in student networks. That may be a useful result.\" We agree with it and have claimed that in the conclusion section. **In the case of limited computing resources, we empirically suggested that the half-way teacher model (i.e., $T^{0.5}$) can suffice for KD. In the case of sufficient computing resources, we further proposed an optimal intermediate model selection algorithm to find an appropriate checkpoint to achieve better performance.**\n\n> Some minor points on clarity.\n\nWe have polished the paper, including changing the color of the lines in Figure 2, replacing footnotes with citations, and eliminating spelling errors. We use some floating figures to ensure the close connections between texts and images so that readers easily read them.\n", " Thanks for your valuable comments. We carefully respond to the weaknesses and questions.\n\n> W1: what if it is just the fact that the pretraining objective helps in model distillation?\n\nSome mentioned terms (\"pretraining objective\", \"closer weights\", \"pretrain-then-finetune\", etc.) 
are not so suitable to our task setting, because the settings of several tasks: classical KD, language model pretrain-finetune and language model distillation are quite different. We try to use mathematical language to explain the differences. Firstly, we define $M_{t}$ as the teacher model, $M_s$ as the student model, $T_I$ as the image classification task, $T_{LP}$ as the language model pretraining task, $T_{LF}$ as the language model fine-tune (downstream) task.\n\n(1) Classical KD (proposed by Hinton [13], our setting). It follows a \"pretrian-distillation\" paradigm. Firstly, a randomly initialized teacher model $M^0_t$ is trained on $T_I$ with the cross entropy loss $L_{ce}$ to obain $M^1_t$. Secondly, a randomly initialized student model $M^0_s$ is also trained on $T_I$ with $L_{ce}$ and the distillation loss $L_{dis}(M^1_t)$ to obain $M^1_s$. This task can be formalizad as: $M^0_t\\xrightarrow[T_I]{L_{ce}} M^1_t, M^0_s\\xrightarrow[T_I]{L_{ce}, L_{dis}(M^1_t)} M^1_s$. (1)\n\n(2) Language model pretrain-finetune task. Firstly, a randomly initialized model $M^0_t$ is trained on $T_{LP}$ with a self-supervised pretrain objective $L_{pre}$ to obtain $M^1_t$. Secondly, $M^1_t$ is fine-tuned on $T_{LF}$ with the downstream task-sepcific objective $L_{down}$ to obtain $M^2_t$. There is no student model in the task. This task can be formalizad as:\n$M^0_t\\xrightarrow[T_{LP}]{L_{pre}} M^1_t, M^1_t\\xrightarrow[T_{LF}]{L_{down}} M^2_t$. (2)\n\n(3) Language model distillation task (such as DistillBert, TinyBert). There has been a pretrained large teacher $M^1_t$ (a full Bert), a lightweight student $M^1_s$ with fewer layers of the same weights. In the pretrain distillation stage, $M^1_s$ is distilled by $M^1_t$ on $T_{LP}$ to obtain $M^2_s$. In the fine-tune distillation stage, $M^1_t$ is fine-tuned on $T_{LF}$ to obtain $M^2_t$, and then $M^2_s$ is ditilled by $M^2_t$ on $T_{LF}$ to obtain $M^3_t$. This task can be formalizad as:\n$M^1_s\\xrightarrow[T_{LP}]{L_{pre},L_{dis}(M^1_t)} M^2_s, M^2_s\\xrightarrow[T_{LF}]{L_{down},L_{dis}(M^2_t)} M^3_s$. (3)\n\n**Comparing the formulas (1)(2)(3), the three task settings are clearly different.** Our setting does not follow the \"pretrain-then-finetune\" paradigm. The \"pretraining objective\" in our task is the cross-entropy loss which obviously doesn't help distillation.\n\n> W2: the fact that IB theory shows correlated behavior with distilled model performance does not mean IB is the cause or the explanation behind the result.\n\n**In our view, IB theory is tailored to explain KD, and it is not farfetched to link IB with KD.** Looking at formula (1), $L_{ce}$ contains complete information of the category label Y, which means sufficient $I(F; Y)$. $L_{dis}(M^1_t)$ helps to improve students' performance because it contains information about input X, i.e., $I(X; F)$. Therefore, It is logical that using $I(X; F)$ to explain \"dark knowledge\". A good distillation means a good balance between $I(X; F)$ and $I(F; Y)$. Therefore, It is reasonable to connect IB with KD and search better teacher checkpoints by IB cures. Figure 4 are not trivial and obvious because our task setting does not follow \"pretrian-then-finetune\" paradigm.\n\n> Q1: evaluate all checkpoints.\n\nIn fact, the distillation performance of all checkpoints had been shown in Figure 2. We also show it in table style (Due to limited space, please see Table 2 in \"Responses to Reviewer encu\"). 
**The distillation performance of teacher checkpoints generally increases first and then decreases. This trend is consistent with the trend of I(X; F) in the IB curve, which is one of the intuitive reasons why we connect IB theory with KD.**\n\n> Q2: lack of the standard deviation.\n\nIt is a good suggestion to show the standard deviation data. Due to limited space, we have supplemented the whole standard deviation data in the rebuttal revision pdf. You can see some typical data in Table 2 of \"Responses to Reviewer encu\". **Overall, the standard deviation data does not obscure the advantage of the intermediate model over the convergent model. Even if some of the improvements are not significant, the significant decrease in training cost is still valuable.**\n\n> Q3: worth exploring checkpoint distillation with language models.\n\nAlmost all of the papers included in our related work did not investigate language models. We agree that it is tempting to apply our approach to large-scale language models. However, exploring language models is too costly for us in terms of computational resources and training time, because we need to train large-scale language models from scratch, such as Bert. It is too difficult to investigate the results in rebuttal time.", " Thanks for your positive comments. We carefully respond to the weaknesses.\n\n> W1: the differences between \"Revisiting Knowledge Distillation via Label Smoothing Regularization\" [36] and our work. The novelty of the intermediate model part.\n\nThe previous work [36] is one of the most related works to our research. We had done in-depth researches and analyses on this article, and obtained some conclusions. \n\n(1) We agree that the exploring of Defective KD which uses an early teacher checkpoint to distill the student is similar to our first exploring experiment. **However, they only claimed that Defective KD could improve students' performance, but could not achieve or exceed normal KD.** Table 1 deriving from [36] (see table 1 and 4 of [36]) supports our viewpoint. Furthermore, [36] adopted almost the same temperature setting (temperature 20, see table 10 and 11 of [36]), which was detrimental to show the performance of baselines. In contrast, we investigated the distillation performance of all teacher checkpoints by searching the optimal hyperparameters (see section 1 of our supplementary material), and found that some specific intermediate models performed better than the convergent model, which was not discovered by [36]. The reversed-KD and teacher-free KD are orthogonal to our exploring.\n\nTable1. Partial results of De-KD (Defective KD) from [36]\n\n| Teacher |Student |Normal KD |De-KD|\n|----------------|-------------------------------|-----------------------------|-----------------------------|\n|ResNet18: 75.87|MobileNetV2: 68.38 |71.05±0.16 |70.65±0.35|\n|ResNet18: 75.87 |ShufflfleNetV2: 70.34 |72.05±0.13 |71.82±0.11|\n|ResNeXt29: 81.03 |MobileNetV2: 68.38|71.65±0.41|71.52±0.27|\n|ResNeXt29: 81.03 |ResNet18: 75.87 |77.84±0.15 |77.28±0.17|\n\n(2) In fact, **we had evaluated the performance of De-KD in our paper.** For example, the distillation performance of all checkpoints has been shown in Figure 2, where the distillation using earlier teacher checkpoints (fewer than 60 epochs) can be considered as De-KD. Obviously, De-KD can not achieve the performance of normal KD. We also list the distillation performance of all checkpoints in Table 2 (after optimal hyperparameter search). 
**Obviously, the best checkpoints do not appear in the early stage. It means De-KD < Normal KD < Ours.**\n\nTable 2. Evaluation of distillation performance of all checkpoints. The best results are bold.\n\n|Teacher|Student: acc|$T^{20}$|$T^{40}$|$T^{60}$|$T^{80}$|$T^{100}$|$T^{120}$|$T^{140}$|$T^{160}$|$T^{180}$|$T^{200}$|\n|:---------|:---------|:---------|:--------|:---------|:---------|:---------|:---------|:---------|:---------|:---------|:---------|\n|WRN-40-2|WRN-40-1:70.38|71.65±0.28|72.21±0.13|72.34±0.1|72.85±0.15|72.76±0.24|72.83±0.06|73.08±0.05|**73.26±0.03**|72.91±0.27|72.68±0.1|\n|ResNet-110|ResNet-32:70.16|70.68±0.49|70.74±0.18|70.98±0.07|71.44±0.13|72.49±0.32|**72.63±0.13**|72.56±0.30|72.49±0.28|72.53±0.13|72.48±0.22|\n|WRN-40-2|MobileNetV2:64.49|67.77±0.26|67.86±0.22|68.21±0.33|68.94±0.3|**68.99±0.12**|68.74±0.21|68.54±0.07| 68.58±0.34|68.19±0.35| 68.03±0.34|\n|ResNet-110|MobileNetV2:64.49|66.38±0.29|67.88±0.17|67.84±0.26|68.66±0.07|68.79±0.17|68.99±0.33|69.01±0.20|**69.05±0.27**|68.84±0.52|68.63±0.35|\n\n(3) The main contribution of [36] is establishing the connection between KD and LSR (Label Smoothing Regularization), and propose that KD is equivalent to LSR when the temperature value is large. Different from their contribution, **we find that specific intermediate models are more effective than the convergent model from the perspective of retaining the category correlation information, further use IB to explain this phenomenon and establish the connection between IB and KD.**\n\n\n**Therefore, [36] did not fully discover the importance of intermediate models in KD.** On the contrary, we also explained the connection between LSR and KD by using IB theory (see section 2.3 of our supplementary material), which further supported and complemented [36]. We accept the good suggestion of \"put more effort into the IB theory section\", consider to reorganize and expand this part.\n\n> W2: The error bar is missing.\n\nWe apologize for the lack of standard deviations on CIFAR dataset. In fact, we have calculated these data, but we removed them from the manuscript in consideration of the fact that the limited width of the table causes the data font to be too small. Due to limited space, we have supplemented the whole standard deviation data in the rebuttal revision pdf. Partial data has been shown in the Table 2 above. **Overall, the standard deviation data does not obscure the advantage of the intermediate model over the convergent model, while the decrease in training cost is quite clear and valuable.**", " # Shared responses to all ACs and reviewers\n\nWe thank all reviewers for their careful review and valuable comments which help us improve our work. We are delighted to see that **all four reviewers approve our novelty**. This encourages us a lot. \n\nR1: \"I enjoyed reading the paper and think that the paper would be of interest to other members in the research community.\" R2: \"The idea of applying mutual information to understand KD and “dark knowledge” is very novel to the community.\" R3: \"The connections between the information bottleneck theory and model distillation seem intriguing.\" R4: \"The work is reasonably original. It is the first application of the IB principle to Knowledge Distillation (KD), yielding an observation that improves the state of KD.\"\n\nHere, we wish to restate **the inspiration and potential implications of our work**. 
Several recent works [8,20,21,32] in the KD field have found a similar phenomenon: high performing teachers may not necessarily lead to better students. Some researchers guessed that the model capacity gap between strong teachers and weak students degrades knowledge transfer. However, they did not explain theoretically why gap exists and how gap affects KD. It has been troubling researchers of KD field.\n\nIncidentally, we found that **a half-trained teacher gained distillation ability beyond that of a convergent teacher**. We then investigated the distillation performance of all teacher checkpoints and extended more model types, yielding consistent results. Further, we proposed and validated that **the snapshot ensemble distillation dramatically surpasses the widely-used full ensemble distillation**, which may change the state of ensemble distillation (**Contribution 1**).\n\nWe thought about why the optimal checkpoints tend to be in the middle, neither too early nor too late. Fortunately, the interpretation of deep networks by IB theory [26] gave us inspiration. They claimed that deep networks tend to obtain an efficient representation of the input, capturing the features relevant to the output and compressing those irrelevant. However, those features that are not relevant to the output (non-target category features) are exactly what KD needs. This prompted us to explore the connection between IB and KD. Formally, the objective function of KD usually includes two terms: the cross-entropy loss $L_{ce}$ (related with the category label $Y$) and the distillation loss $L_{dis}$ (related with the output of the teacher model). $L_{ce}$ contains complete information of the category label Y, which means sufficient $I(F; Y)$. $L_{dis}$ helps to improve students' performance because it contains information about input X, i.e., $I(X; F)$. Therefore, It is logical that using $I(X; F)$ to explain \"dark knowledge\". **From our view, the reason leading to the decline of KD ability of high-performing teachers may not be the model gap, but the excessive compression of non-target category information to the teacher models (i.e., $I(X; F)$).** Our proposed intermediate checkpoint distillation is a simple and effective way to avoid the excessive compression of $I(X; F)$. The link between IB and KD can help community researchers think further about the fundamentals of KD, analyse the unexplained phenomena in the past, and choose or design efficient teacher models in KD scenarios (**Contribution 2**).\n\nA good distillation method means a good balance between $I(X; F)$ and $I(F; Y)$. Therefore, we proposed to search the optimal teacher checkpoints by IB curves. Honestly, the current algorithm is not satisfying to us. It considers the information entropy of the teachers but ignore the variation of the student structures, which can not ensure the optimal KD performance for all teacher-student pairs. In the next step, we will improve the algorithm and also hope that the followers can propose more general algorithms. In addition, we also empirically suggested that the half-way teacher model can suffice for KD in the case of limited computing resources, which can be viewed as a practical trick for KD applications. (**Contribution 3**)\n\nAlthough our current research work is not perfect, we hope it will be helpful to the research community of KD. 
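For concreteness, the objective referred to above as formula (1) is the standard distillation loss: a cross-entropy term tied to the label information I(F;Y), plus a temperature-softened KL term that transfers the teacher's non-target class structure, which the response associates with I(X;F). A minimal sketch follows; the temperature and weight are illustrative placeholders (the rebuttal notes these hyperparameters were searched per teacher-student pair).

```python
import torch.nn.functional as F

def kd_objective(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """L = (1 - alpha) * L_ce + alpha * L_dis, the classical KD loss of formula (1).
    T and alpha here are illustrative values, not the ones used in the paper."""
    l_ce = F.cross_entropy(student_logits, targets)
    l_dis = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                     F.softmax(teacher_logits / T, dim=1),
                     reduction="batchmean") * (T * T)
    return (1 - alpha) * l_ce + alpha * l_dis
```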
", " **A brief summary**: The paper makes an interesting observation that an intermediate checkpoint of the (teacher) model is just as good for the purposes of knowledge distillation (i.e., training a student model). The paper provides ample evidence to support their observations. They explain this observation using information theory: intermediate checkpoints retain more mutual information (I(X, F)) between the input and the representations than the converged models, which have compressed I(X, F) to maximize target-related information (I(Y, F)). Put simply, intermediate models retain more information about the inputs and other non-target classes. The paper further provides a simple scheme to identify intermediate checkpoints most suited for distillation and demonstrate the efficacy of checkpoints identified through their protocol.\n\nEDIT: Updated my score from 6 $\\rightarrow$ 7, based on the responses during the discussion phase. **My assessment**: Overall, I enjoyed reading the paper and think that the paper would be of interest to other members in the research community. Their results also suggest that one could significantly reduce training cost of the teacher models. The key result is intriguing: an intermediate teacher model—checkpoint corresponding to 50% of full convergence—is less accurate by about 8-14 points, **yet yields comparable distillation performance**. This result is supported through a number of experiments on various student-teacher pairs. Their protocol to select the optimal checkpoint is also simple, reasonable and effective.\n\nDespite my overall positive assessment, I have one major and a few minor concerns. My biggest concern is that selecting 200 or 120 epochs upfront is unsettling. In practice, most training runs use an early stopping criterion, where models are trained to a point till they improve validation performance. I would be more confident in the results and my recommendation if the T_{full} is a model based on early stopping rather rather than a model after 120 or 200 epochs. I am curious to see how the key results presented in Table 1 and 2 change upon making this change?\n\nSome of my minor concerns—largely concerning the presentation of the paper—are as follows:\n\nThe figure and table captions in the paper are often incomplete and sow confusion. For instance, after reading Figure 2 the impression a reader gets is that we are looking at teacher performance over epochs. Only later in section 4, it becomes clear that the y-axis is the distillation performance of student models for various intermediate teacher models trained for that many epochs (in x-axis). Similarly, Figure 1 is a cartoon representation of the key result and can be misleading (the KD performance is not necessarily as high as figure 1 would have you believe. Further, it is never clarified if the figure stems from real values or is just a high-level cartoon depiction). Similarly, Figures 4 and 5 leave out important details. \n\n\nThe paper title, albeit catchy, doesn’t really concern the paper’s key observations, methods or results and is rather a snappy message which could apply to several papers. \n\n\nThis is more of a suggestion than a concern: it would be great to have an intermediate model-selection process that doesn’t require training the teacher model to convergence (which means that we don’t save any training cost). This could be future work, and therefore could be discussed in the conclusions and limitations section. 
\n\n\nSome typos that I noticed and need fixes:\n\n\n- Line 73, Rethinking → rethinking\n- Line 171, comparable or even better (the word or is missing) \n- Line 329, s in missing from the limitations\n- Line 345, We → we\n How would the results in Table 1 and Table 2 look like if T_{full} is a model based on early stopping rather rather than a model after 120 or 200 epochs?\n\nPlease also see the main review for other suggestions on presentation and optimal model-selection algorithm. (Copy pasting my concerns from my main review).\n\nSome of my minor concerns—largely concerning the presentation of the paper—are as follows:\n\nThe figure and table captions in the paper are often incomplete and sow confusion. For instance, after reading Figure 2 the impression a reader gets is that we are looking at teacher performance over epochs. Only later in section 4, it becomes clear that the y-axis is the distillation performance of student models for various intermediate teacher models trained for that many epochs (in x-axis). Similarly, Figure 1 is a cartoon representation of the key result and can be misleading (the KD performance is not necessarily as high as figure 1 would have you believe. Further, it is never clarified if the figure stems from real values or is just a high-level cartoon depiction). Similarly, Figures 4 and 5 leave out important details.\n\nThe paper title, albeit catchy, doesn’t really concern the paper’s key observations, methods or results and is rather a snappy message which could apply to several papers.\n\nThis is more of a suggestion than a concern: it would be great to have an intermediate model-selection process that doesn’t require training the teacher model to convergence (which means that we don’t save any training cost). This could be future work, and therefore could be discussed in the conclusions and limitations section.\n\nSome typos that I noticed and need fixes:\n\n- Line 73, Rethinking → rethinking\n- Line 171, comparable or even better (the word or is missing)\n- Line 329, s in missing from the limitations\n- Line 345, We → we\n\n\n", " This paper observes that the intermediate models (checkpoints in the middle of the training procedure) can serve as better teachers than fully trained teacher models in knowledge distillation (KD). \nMeanwhile, this paper proposes the Snapshot Ensemble, which ensembles several intermediate models and one full model, and observes that the snapshot ensemble has better KD performance and low training cost than the full teacher ensemble. \nFurthermore, this paper applies the Information Bottleneck (IB) theory to understand KD and “dark knowledge”. Moreover, this paper proposes an algorithm to find the optimal intermediate teacher based on IB theory. \n Strengths. \n- The idea of applying mutual information to understand KD and “dark knowledge” is very novel to the community. \nThe curves of mutual information between the input and learned features clearly reveal the training process of a neural network and dark knowledge. \n\n- Experiments are sufficient. \n\n- This paper is in general well written and easy to follow. \n\nWeaknesses\n\n- The findings of the intermediate checkpoints are not very exciting to the community. \nMy most concern is about the novelty of the intermediate model part. This paper spends a lot of effort demonstrating the advantages of the intermediate checkpoints. 
However, previous works [36] already suggest that the Reversed KD (“student teach the teacher”) or even Defective KD (“poorly-trained teacher teach the student”) can achieve comparable results with normal KD. Thus, the observations on intermediate models in this paper should be naturally correct as well and not very exciting. Although this paper claims that the training cost of a teacher is significantly lower, the defective teacher [36] can further reduce training effort. To this end, I would suggest this paper compare their methods with methods in [36], such as defective-KD, reversed-KD, and teacher-free KD. Specifically, this paper can compare the accuracy and total training cost with [36] on the experiments in Tables 1, 2, and 3. With these additional experiments, this paper can convincingly claim the advantages of the intermediate checkpoints. Otherwise, I would suggest this paper turns the tune down on the intermediate models and put more effort into the IB theory section. \n\n- The error bar is missing. \nAlthough this paper claims that the results is evaluated in five independent experiments in CheckList 3(c), this paper only reports the average value, while the standard deviation (error bar) is missing. Since the improvement in many experiments is relatively small (less than 0.5), the experimental results can not be significant until this paper demonstrate that the improvement is large than the error bar. \n\n\n\n The idea of applying IB theory to KD is in general novel. Nevertheless, the above weaknesses refrain me from giving a better score. I would be happy to improve my rating as long as the authors address the above weaknesses in the rebuttal. \n Yes, this paper does mention some limitations in Section 6. ", " The authors propose a new model distillation method utilizing teacher's training checkpoints, showing performance gain over a handful of benchmarks. The authors connect their methods with information bottleneck theory and hypothesize that the performance gain partly comes from the fact that checkpoints save a lot of representation information about inputs, without overfitting to the task. Although the findings are somewhat interesting, I have some concerns with the core methods I discuss below. **Strength**:\n- The connections between the information bottleneck theory and model distillation seem intriguing. However, I am not convinced that this is the real cause of the performance gain.\n- It is interesting to see how checkpoints (i.e., not overfitting the training data) generalize better in model distillation.\n\n**Weakness**:\n- **The fact that the information bottleneck theory shows correlated behavior with distilled model performance does not mean IF is the cause or the explanation behind the result**\n\nThe core contribution of the paper is providing a checkpoint selection method based on IF for the teacher model. The core claim behind this contribution is that IF offers a way to measure how good a checkpoint is in preserving \"dark information\" which leads to better image representations for model distillation. However, it is not convincing to me that there is a causal link between IF and checkpoint distillation. Fig.4 is somewhat trivial to any pretrain-then-finetune paradigm, i.e., in the initial phase of finetuning, the pretrained model begins with poor in task performance but with a good representation to reconstruct outputs; at the end, the model becomes overfitting to the task with sacrificed reconstruction accuracies. 
The fact that IF seems to correlate with downstream distilled model performance does not necessarily prove IF is the reason behind this phenomenon. I layout another very simple theory which could be another explanation behind this, which is in fact, proved in the literature as well.\n\n\n- **What if it is just the fact that the pertaining objective helps in model distillation? i.e., earlier checkpoints simply provide closer weights to the pretrained model, which provides more information about the pertaining objective**\n\nUsing the checkpoints to distill can simply be viewed as a way of preventing the student to overfit the downstream finetuning task. Specifically, it asks the student model to essentially forget task-specific information and asks the student model to recover pretrained information. This is indeed proved by some recent works in language model distillation, i.e., while distilling with the task objective, the student model is also co-trained with the teacher's pretraining objective. With this, there could be many other baselines to run, what if we simply distill with lower learning rates, or with larger learning rate decays? What if we do weighted distillation loss by combining task-specific loss as well as pertaining loss? - **Standard deviation is really useful for gaining real insights about performance improvements**\nThe authors show that multiple experiments over 5 random seeds are conducted. Can you include the standard deviation in all main results tables? Since the performance increment looks small numerically, grounding them with SD could be very useful in interpreting the results.\n\n- **Instead of showing the T^{0.3} T^{0.5} and T^{0.7}, would it be possible to evaluate all checkpoints as in Fig. 4**\nIt would be still interesting to observe that IF curves completely correlate with the downstream distilled model performance.\n\n- **Worth exploring checkpoint distillation with language models**\nIf time permitted, I would be interested to see if this works with language models as well. The authors adequately addressed the limitations and potential negative societal impact.", " The paper deals with knowledge distillation, which is the training of smaller “student” models from larger “teacher” ones in an effort to reduce the model size while preserving as much performance as possible.\n\nIn this paper, the authors challenge the notion that better performing (and closer to convergence) models make better teachers, instead showing that checkpoints sampled throughout training make better teachers. This counterintuitive finding is explained using the information bottleneck (IB) principle, which suggests that the mutual information between the parameters of a neural network and the model input increase at the start and then decrease. Informally, this corresponds to the observation that models begin by learning correlates at the start, then learn to discard information about class correlates that would be helpful for knowledge distillation. The authors show that teachers are improved by constructing them from “intermediate” models (i.e. checkpointed versions of a model sampled as it is trained) instead of using converged models.\n\nFurther, they use the IB principle to identify the optimal intermediate model for knowledge distillation. 
They show that selecting a model that maximizes the sum of the mutual information between the parameters and input and the mutual information between the parameters and output yields better intermediate teachers than simpler schemes that sample models at fixed training intervals.\n # Originality\nThe work is reasonably original. It is the first application of the IB principle to Knowledge Distillation (KD), yielding an observation that improves the state of KD.\n\n# Quality\nThe interpretation of IB presented in the paper is compelling, but the experiments don’t provide enough information to confirm the significance of the results. I explain this in the Limitations section of the review.\n\n# Clarity\nThe writing is clear and proceeds logically from section to section. I particularly like the intuitive introduction to IB (lines 44-63), which clearly gives readers sufficient intuition to understand the work. I also really appreciate how the related work section summarizes the rather extensive KD literature as a coherent narrative.\n\nMinor points:\n - In Figure 2, the yellow line (ResNet-110/MobileNetV2) is unreadable. Consider sampling colors of different hues with a fixed saturation and value/lightness to prevent such issues.\n - Avoid floating figures on one side of the page, use the default setting of a full-width figure instead. This greatly improves the flow of text.\nFootnote 2 (link to tiny-imagenet) can be replaced by a citation – bibtex supports URL citations. This should also save you a little vertical space for the publication-ready version of the paper.\n - Line 152 “Fistly” should read “Firstly”\n\n# Significance\n\nThe results are sufficiently significant, as it is likely to spawn a line of follow-up work expanding our understanding of KD using IB principles. This is conditional on the results being statistically significant, though.\n How significant are the results? (Equivalently, if you were to randomly initialize your experiments and run them again and again, how often would $T^*$ be better than the alternatives.) Improvement is shown in percentage points, but there is no information presented on the standard deviation of these results. This means that we cannot conclude that the change in results is statistically significant. To make this concrete: in Table 3, $T^*$ has an average performance of 70.87%, which is presented as better than $T^{0.7}$ at 70.77%. It is not clear to me that this difference of 0.1% is better, particularly on the CIFAR-100 dataset.\n\nThe way to fix this is to run each experiment multiple times, present the result and standard deviation, and bold the results that are within some confidence interval of the top result. I suspect that repeating the analysis with this addition will show that the results are less significant than claimed.\n\nUnfortunately, the experiments showing that the use of snapshot teachers/ensembles is better than full, and the experiments establishing the usefulness of their proposed algorithm are also vulnerable to this criticism. I believe this is a fatal flaw in the paper, and needs remediation before publication.\n\n(Similar changes should also be incorporated into the construction of Figure 4. It is not clear to me how significant the shift in the blue vs gray lines are unless some measure of deviation can be plotted in the chart.)\n\nConsider running these experiments again to establish that they are significant. 
If you cannot show that, consider investigating how a smaller training budget for the teacher network is enough to give good performance in student networks. That may be a useful result." ]
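A minimal sketch of the reporting protocol requested above, i.e., mean ± standard deviation over random seeds with results bolded when their mean lies within the confidence bound of the best mean; the function name and dictionary layout are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy import stats

def summarize_runs(results, level=0.95):
    """results: {method name: list of accuracies over random seeds (>= 2 seeds)}.
    Returns per-method (mean, std, CI half-width) and a flag marking methods
    whose mean lies within the confidence bound of the best mean."""
    summary = {}
    for name, accs in results.items():
        a = np.asarray(accs, dtype=float)
        mean, std = a.mean(), a.std(ddof=1)
        half = stats.t.ppf(0.5 + level / 2, len(a) - 1) * std / np.sqrt(len(a))
        summary[name] = (mean, std, half)
    best = max(summary, key=lambda k: summary[k][0])
    lower_bound = summary[best][0] - summary[best][2]
    bold = {name: summary[name][0] >= lower_bound for name in summary}
    return summary, bold
```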
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 2 ]
[ "Fid-rh3xJ4M", "HyYOh1maYBa", "4L2qCccfUa", "eGRSC3pLjp9Y", "XuRnXiClpmQ", "oYAYWWHmKuC", "IOxq5X5tgVP", "kAJl-9lNR4f9", "TWXE3Sy67cq", "IOxq5X5tgVP", "AGRNAXj1rzh", "XuRnXiClpmQ", "oYAYWWHmKuC", "nips_2022_0ltDq6SjrfW", "nips_2022_0ltDq6SjrfW", "nips_2022_0ltDq6SjrfW", "nips_2022_0ltDq6SjrfW", "nips_2022_0ltDq6SjrfW" ]
nips_2022_wS23xAeKwSN
PolarMix: A General Data Augmentation Technique for LiDAR Point Clouds
LiDAR point clouds, which are usually scanned by rotating LiDAR sensors continuously, capture precise geometry of the surrounding environment and are crucial to many autonomous detection and navigation tasks. Though many 3D deep architectures have been developed, efficient collection and annotation of large amounts of point clouds remain one major challenge in the analytics and understanding of point cloud data. This paper presents PolarMix, a point cloud augmentation technique that is simple and generic but can mitigate the data constraint effectively across various perception tasks and scenarios. PolarMix enriches point cloud distributions and preserves point cloud fidelity via two cross-scan augmentation strategies that cut, edit, and mix point clouds along the scanning direction. The first is scene-level swapping which exchanges point cloud sectors of two LiDAR scans that are cut along the LiDAR scanning direction. The second is instance-level rotation and paste which crops point instances from one LiDAR scan, rotates them by multiple angles (to create multiple copies), and paste the rotated point instances into other scans. Extensive experiments show that PolarMix achieves superior performance consistently across different perception tasks and scenarios. In addition, it can work as a plug-and-play for various 3D deep architectures and also performs well for unsupervised domain adaptation.
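A minimal sketch of the scene-level swapping described above: cut the same azimuth sector out of two scans and exchange it. Array shapes, the assumption theta0 < theta1, and the function name are illustrative and not taken from the authors' implementation.

```python
import numpy as np

def scene_level_swap(points_a, labels_a, points_b, labels_b, theta0, theta1):
    """Exchange the azimuth sector [theta0, theta1) of scan B into scan A.
    points_*: (N, 4) arrays of x, y, z, intensity; labels_*: (N,) arrays.
    Assumes -pi <= theta0 < theta1 <= pi (no wrap-around handling here)."""
    def in_sector(pts):
        azimuth = np.arctan2(pts[:, 1], pts[:, 0])  # scanning angle of each point
        return (azimuth >= theta0) & (azimuth < theta1)
    keep_a = ~in_sector(points_a)
    take_b = in_sector(points_b)
    new_points = np.concatenate([points_a[keep_a], points_b[take_b]], axis=0)
    new_labels = np.concatenate([labels_a[keep_a], labels_b[take_b]], axis=0)
    return new_points, new_labels
```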
Accept
The proposed augmentation method for LiDAR scans is to crop, cut, and mix two 3D scans at both the scene level and the instance level. The approach is not novel and is a simple extension of the idea of mixing 3D scenes and rotating bounding boxes. Another limitation is that the method cannot be applied to general 3D scenes. The reviews are A(7), WA(6), BA(5), and two BR(4). After carefully checking the rebuttals and discussions, I recommend the paper be presented to the NeurIPS community.
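The instance-level "rotate and paste" referred to here can likewise be sketched as a plain rotation about the sensor's vertical axis applied to cropped instance points before appending each copy to the target scan; the angles, array layout, and function name below are assumptions for illustration only.

```python
import numpy as np

def rotate_and_paste(inst_points, inst_labels, scan_points, scan_labels,
                     angles=(0.5 * np.pi, np.pi, 1.5 * np.pi)):
    """Rotate cropped instance points about the vertical (z) axis by several
    angles and append each rotated copy to the target scan.
    inst_points/scan_points: (N, 4) arrays of x, y, z, intensity."""
    pts_out, lbl_out = [scan_points], [scan_labels]
    for a in angles:
        rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                        [np.sin(a),  np.cos(a), 0.0],
                        [0.0,        0.0,       1.0]])
        copy = inst_points.copy()
        copy[:, :3] = copy[:, :3] @ rot.T  # dot product rotates x, y, z
        pts_out.append(copy)
        lbl_out.append(inst_labels)
    return np.concatenate(pts_out, axis=0), np.concatenate(lbl_out, axis=0)
```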
test
[ "Fii_gLvQY0G", "IFPM5-Ca2z0", "SQJ6foE6CGX", "4_wir9XSdJS", "xdoU4mGVBM68", "SFPolvdomAj", "4SIJU6RtI_", "LJiO8y_AcmN", "9a_hAvigzUb", "tIhr0vlkEIi", "dRuVEqcSCh9", "Z-gJyKUxw1_", "IvWJFbiKQo", "EPrBDQzp3qC", "Sw7wN2jXODE", "Hm4TOQrbev", "RfG9A7s1Y6q" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer gjSm:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest regards, \nAuthors", " Dear Reviewer cqy3:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest regards, \nAuthors", " Dear Reviewer NzUE:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest regards, \nAuthors", " Thanks for your further comments. Below please find our clarifications.\n\nQ4-1: Is this dataset setup widely used in other UDA papers?\n\nDomain adaptive LiDAR point cloud segmentation is a relatively new research task. To the best of our knowledge, the dataset setup has been adopted in three studies [a, b, c]. Note we didn't review [b] and [c] as they were publicly accessible after the submission deadline of NeurIPS 2022. In addition, [b] focuses on UDA and [c] focuses on model adaptation. Our proposed PolarMix instead focuses on point cloud augmentation under fully supervised setups. The UDA experiment is just one possible extension of PolarMix as presented in our submission.\n\n[a] Transfer Learning from Synthetic to Real LiDAR Point Cloud for Semantic Segmentation. AAAI 2022.\n\n[b] CoSMix: Compositional Semantic Mix for Domain Adaptation in 3D LiDAR Segmentation. ECCV 2022.\n\n[c] GIPSO: Geometrically Informed Propagation for Online Adaptation in 3D LiDAR Segmentation. ECCV 2022.\n\n---\n\nQ4-2: Implementation details?\n\nWe adopted the typical self-training strategy which treats confident pseudo labels of target data as ground truth (i.e., the top 20\\% of the highest prediction scores initially) and then applies them to re-train the segmentation networks. The whole training repeats the two processes for five rounds (2 epochs in each round) with a gradually increasing confidence threshold (adding 5\\% after each round).\n\n---\n\nQ5: Revised manuscript?\n\nThe current manuscript is what we originally submitted and we didn't upload any revised versions after the submission deadline. The rebuttal contains several new experiments that we just conducted to respond to your comments. We plan to incorporate them into the manuscript/appendix to be updated later. Thanks.", " Thank you for the precious rebuttals. I have some further questions after reading the rebuttals.\n\n4. Domain setup\n\nI checked the manuscript of xMUDA [H] and the proposed dataset setup (Table H of this rebuttal) is different from the one that xMUDA proposed. I also agree that there is a modality difference between xMUDA and PolarMix, which makes it difficult to exactly align the same experimental setup. However, still, it does not necessarily change the dataset setup. \n\n4-1. Is this dataset setup widely used in other UDA papers? \n\n4-2. 
Can authors further specify how they train the PolarMix in the Table H of this rebuttal? In L240-L242 of the manuscript, the authors use pseudo labels to train the network using target domain data. Did you naively treat pseudo labels as ground truth labels without any filtering process?\n\n5. Revised manuscript?\n\nI downloaded the manuscript after reading the rebuttals but have no idea whether this is the 'revised' manuscript. Did the authors revise the manuscript regarding the reviewers' comments? Commonly, as an intermediate revision, the authors can colorize the modified sentences during the discussion period.", " W2-Q1: Is PolarMix a superior DA method?\n\n- Yes PolarMix is a superior DA method which is model-agnostic and achieves excellent point segmentation performance gains. As shown in the table below (extracted from Table 1 of the submitted manuscript), SPVCNN w/ PolarMix achieves a mIoU of 66.2\\% mIoU over the validation set of SemanticKITTI dataset, which outperforms the state-of-the-art Cylinder3D [41] at 65.9\\% as reported in Table 3 in [41]. The experiments show that PolarMix is indeed a superior and SOTA augmentation method for LiDAR point cloud learning.\n- As commented by other reviewers, we tested over many baselines, datasets, and tasks to verify the generalization of PolarMix, which involved a huge amount of training resources. Since training Cylinder3D takes a very long training time (more than 2 weeks as in Lines 178-180), we only report the performance gain of PolarMix over Cynlinder3D with 10\\% of training data, as shown in Table 3 in the submitted manuscript. We will report the performance of Cylinder3D+PolarMix with full training data in the updated manuscript. \n\n| Method | mIoU |\n| :------------------- | :-------- |\n| Cylinder3D[41] | 65.9 (copied from [41]) |\n| SPVCNN [30] | 60.7 |\n| SPVCNN+PolarMix | **66.2** |\n\n---\n\nW2-Q2: Comparison with other augmentation techniques, such as copy-paste, in Table 4?\n\n- Thanks for your suggestion. We conducted the suggested experiment and include the performance of PointPillar with Copy-Paste over validation set of nuScenes dataset. As the results in the table below, PolarMix achieves a clearly performance gain and surpasses Copy-Paste, which is consistent with our experimental results in the segmentation task (Tables 1,2 in the submitted manuscript).\n\n| Methods | mAP | NDS |\n| :------ | :---- | :---- |\n| PointPillar [18] | 41.8 | 54.9 |\n| +CopyPaste [13] (newly-included) | 42.9(+1.1) | 55.2(+0.3) |\n| +PolarMix | **43.7(+1.9)** | **55.7(+0.8)** | \n\n--- \n\nW2-Q3: Is the ablation study supportive?\n\n- As shown in the table below (Table 6 in manuscript), both approaches in PolarMix improve segmentation performances of SPVCNN clearly: The scene-level swapping improves segmentation performance with 1.9\\% and instance-level pasting improves 4.3\\%. These two approaches are complementary and the complete PolarMix achieves the best results at 54.8\\%(+5.9\\%).\n\n| Methods | mIoU |\n| :------ | :------ |\n| SPVCNN (baseline) | 48.9 |\n| w/ Scene-level swapping | 50.8(+1.9)|\n| w/ Intance-level pasting (simple-pasting) | 50.9(+2.0) |\n| w/ Intance-level pasting (rotate-pasting) | 53.2(+4.3) |\n| w/ PolarMix (complete) | **54.8(+5.9)** |\n\n---\n\nW3: Minor problems w.r.t. the elaboration?\n\n- Thanks for your meticulous effort in details. The $\\sigma_1$ and $\\sigma_2$ in L185 are typos and should be $\\delta_1$ and $\\delta_2$ in the algorithm 1. 
We will revise the manuscript for clearer representation.\n- PolarMix mitigates the domain shift by creating new intermediate domains of point clouds from two different domains. It improves pseudo-label accuracy for self-training.", " Thank you for your confirmation of the value of our proposed method as well as the impressive experimental results. Below please find our responses regarding your concerns.\n\nW1-Q1: Why PolarMix maintain high fidelity of augmented LiDAR data \\& our design insights?\n\n- We would clarify that the \"high-fidelity\" means that PolarMix allows augmenting LiDAR point clouds without impairing two typical LiDAR point properties (Lines 41-47) including: \n\t&emsp; 1: Objects in LiDAR scans are incomplete where only object sides facing the LiDAR sensor are scanned with points as illustrated in Fig. 1(a) in the submitted manuscript. \n &emsp; 2: Point density decreases with the increase of depth as illustrated in Fig. 1(b) in the submitted manuscript. \nPrevious data augmentation methods impair these two data properties severely which affects their effectiveness. E.g., CutMix[37] and Copy-paste[13] from 2D images work in the Cartesian coordinate system, Mix3D [21] mixes 3D scenes globally in an out-of-context manner. Extensive experiments in Section 4 of our submitted manuscript show that the PolarMix-augmented LiDAR data preserve the data fidelity and help train better segmentation models consistently.\n- In addition, multiple cars scattered around the center-vehicle may happen in real scenarios, e.g., in a traffic jam or accident. These situations are crucial for autonomous driving because they are closely related to dangerous situations. Please note that for data augmentation, we do not ensure that instances only appear in the \"frequently\" appeared locations. On the contrary, we enrich the diversity by allowing instances to appear at different locations. PolarMix generates new training samples with abundant instances in different locations thus enlarging the training distribution which enhances the robustness of recognition models. \n- We would clarify that some pasted instances overlapping with the background points is not a problem for point data augmentation. Recent studies in Mix3D [21] showed that such mixing would not downgrade but augment point-based models. Similar discontinuities can also be found in augmented images of CutMix[37] in 2D vision. \n\n--- \n\nW1-Q2: Mixing in depth and inclination directions?\n\n- Thank you for your suggestion! We also considered mixing in the depth and inclination directions in the stage of methodology design. After some preliminary studies, we found that mixing in these two dimensions either impairs LiDAR data fidelity or requires very complicated point rendering design to preserve LiDAR characteristics, e.g., for preserving partial visibility and density variation along the depth. As a comparison, cutting and mixing along the azimuth direction preserves point fidelity greatly and efficiently as it is well aligned with the point capturing process.\n\n--- \n\nW1-Q3: Does PolarMix work for range view networks?\n\n- We conducted the suggested experiment and evaluated PolarMix over SalsaNext [a], one of the most advanced projection-based methods. Since training SalsaNext with full SemanticKITTI training set takes more than one week, we report performances of 10\\% of training data for fast experiments. 
The mIoU of SalsaNext w/o and w/ PolarMix are 52.2 and 54.7, respectively, indicating the effectiveness of PolarMix in augmenting range view networks.\n- Our PolarMix works in the input space and it is model-agnostic. The range projection of the PolarMix-augmented takes the same process for raw data, e.g. spherical projection for SalsaNext.\n\n[a] Cortinhal T, Tzelepis G, Erdal Aksoy E. SalsaNext: Fast, uncertainty-aware semantic segmentation of LiDAR point clouds.\n\n--- \n\nW1-Q4: Why randomly swapping sectors of 45 degrees downgrades?\n\n- Swapping small angular ranges such as $45^\\circ$ may damage semantic layouts of LiDAR scenes, leading to the downgraded performances of segmentation models. The experiment in Table 1 of the supplementary material shows that wider ranges of sectors should be selected for swapping.", " Thanks for your appreciation of our proposed approach, impressive experimental results, as well as the clearness of presentation. Below please find our responses regarding your concerns.\n\nQ1: How the baseline approaches are selected?\n\n- Thanks for the insightful comment! The baseline in this study is very limited as data augmentation for LiDAR semantic segmentation is a relatively under-explored task (as stated in Lines 80-102 in Related Works). To address this constraint, we selected the highly-related mixing-based methods including Cut-Mix and Copy-paste in 2D vision and the pioneering work Mix3D for point cloud augmentation. We will further clarify the baseline issue in the experiment part in the updated manuscript.\n\n---\n\nQ2: Additional training time of MinkUNet after using PolarMix?\n\n- Thank you for the great suggestion! We would clarify that PolarMix introduces less than 1\\% of extra training time only as compared with the vanilla MinkNet. The superb efficiency is largely attributed to two major factors:\n - The swapping and rotate-pasting process in PolarMix can be achieved by simple dot products, slicing, and concatenation which are extremely efficient (Line 135-137).\n - Most LiDAR-based models sub-sample a fixed maximum number of points as input (e.g., 80k for SemanticKITTI in vanilla MinkNet and SPVCNN training). Hence, the network processes a similar amount of points (and takes little extra training time) although the augmentation introduces more points by cutting and mixing multiple copies of instances across scans.\n \n---\n\nQ3: Mixing more scans?\n\n- We conducted the suggested experiments by increasing the mixed LiDAR scans and benchmarking them with no mixing. The experiments were conducted with SPVCNN that is trained with sequence 00 of SemanticKITTI. As Table I shows, mixing two scans produces clearly the best performance. We examined the mixed data and found that mixing more scans introduces more hardly distinguishable objects. The experimental results are well in line with other mixing-based augmentation works [7, 32, 38, 21].\n\n Table I (with newly conducted experiments): Varying number of mixed scans by PolarMix. 'no mixing' represents the vanilla training without augmentation of PolarMix.\n \n| \\#Scans | no mixing (baseline)| 2 | 3 | 4 |\n| :----: | :----: | :----: | :----: | :----: |\n| mIoU | 48.9 | **54.8** | 52.2 | 51.3 |\n\n---\n\nQ4: More analysis of the performance gains versus the distribution of depth?\n\n- Thank you for the constructive suggestion! We evaluated the performance gains of PolarMix in recognizing points across different depths. 
Specifically, we split points of different ranges of depth and report segmentation performances of MinkNet over each split. The experimental results, as shown in Table J, are aligned with your thinking - the performance gains decrease with the increase of depth. We will include this experiment in the updated manuscript/appendix.\n\nTable J (with newly conducted experiments): Segmentation performances of MinkNet over points in different ranges of depth.\n\n| Depth (in meter) | [0,20) | [20, 40) | [40, 60) | [60, 80] |\n| :----- | :----- | :----- | :----- | :----- |\n| MinkUNet(baseline) | 61.0 | 48.6 | 29.6 | 47.7 |\n|+PolarMix | 66.8(+5.8) | 54.8(+6.2) | 34.0(+4.4) | 48.8(+1.1) |\n", " Q2-1: Fusing more scans?\n\n- We conducted the suggested experiments by increasing the mixed LiDAR scans and benchmarking them with no mixing. The experiments were conducted with SPVCNN that is trained with sequence 00 of SemanticKITTI. As Table G shows, mixing two scans produces clearly the best performance. We examined the mixed data and found that mixing more scans introduces more hardly distinguishable objects. The experimental results are well in line with other mixing-based augmentation works [7, 32, 38, 21].\n\n Table G (with newly conducted experiments): Varying number of mixed scans by PolarMix. 'no mixing' represents the vanilla training without augmentation of PolarMix.\n \n| \\#Scans | no mixing (baseline)| 2 | 3 | 4 |\n| :----: | :----: | :----: | :----: | :----: |\n| mIoU | 48.9 | **54.8** | 52.2 | 51.3 |\n\n---\n\nQ2-2(1): What if there are misalignment for mixing?\n\n- LiDAR data are captured in a local coordinate system and the origin of LiDAR scans is the LiDAR sensor, which means the misalignment of two scans is negligible.\n\n\nQ2-2(2): Why cropping point clouds along the azimuth axis?\n\n- As responded in Q1, we crop LiDAR points along the azimuth direction for maintaining the fidelity of the augmented point cloud data and preserving two unique and important LiDAR data properties (Lines 39-56): 1) objects in LiDAR scans are incomplete where only object sides facing the LiDAR sensor are scanned with points as illustrated in Fig. 1(a) in our submitted manuscript; 2) point density decreases with the increase of depth as illustrated in Fig. 1(b) in our submitted manuscript.\n\n---\n\nQ2-3: Lack of comparison against recent UDA methods?\n- We evaluated PolarMix over the latest and challenging UDA benchmarks including SynLiDAR$\\rightarrow$SemanticKITTI and SynLiDAR$\\rightarrow$SemanticPOSS. As Table 5 in our submitted manuscript shows, PolarMix outperforms the recent UDA method PCT (AAAI2022) by a large margin. We copied the results in Table 5 in our submitted manuscript for your quick reference, please find in Table H below.\n- Thanks for sharing the xMUDA. We note that xMUDA is a multi-modal UDA method that consists of cross-modal and uni-modal UDA. We compare PolarMix with the uni-modal xMUDA since PolarMix is designed for the single-modal LiDAR data. As Table H shows, PolarMix clearly surpasses xMUDA in both benchmarks, indicating the effectiveness of PolarMix in mitigating domain gap of LiDAR point clouds. We will update manuscript and include the comparison with xMUDA.\n\nTable H: Experiments on unsupervised domain adaptation with SynLiDAR (as source) and SemanticKITTI and SemanticPOSS (as target). 
PolarMix achieves clearly the best semantic segmentation across both unsupervised domain adaptation setups.\n\n| Methods | Publication | SynLiDAR $\\rightarrow$ SemanticKITTI | SynLiDAR $\\rightarrow$ SemanticPOSS |\n| :------ | :-------: | :-------: | :-------: |\n| Source Only | - | 20.4 | 20.1 |\n| ADDA [31] | CVPR2017 | 22.8 | 24.9 |\n| Ent-Min [33] | CVPR2019 | 25.5 | 25.5 |\n| ST [43] | CVPR2019 | 26.5 | 27.1 |\n| PCT [35] | AAAI2022 | 28.9 | 29.6 |\n| xMUDA [H] (newly included) | CVPR2020 | 28.5 | 28.9 |\n| PolarMix(Ours) | - | **31.0** | **30.4** |\n\n---\n\nQ3: Writing issues?\n- Thanks for your suggestion. MinkUNet is a sparse voxel-based convolutional network while SPVCNN is a hybrid network with sparse voxel-based convolutional layers and point-based neural layers. We will revise relevant text to make it clearer.\n- We agree that qualitative results are helpful for the understanding and we provided them in the supplementary material due to the finite spaces of the manuscript. We will include qualitative results in the updated manuscript.\n\n---\n\nQ4: Limitation of PolarMix?\n- PolarMix is specifically designed for outdoor LiDAR point cloud learning and is not applicable to the dense indoor point set datasets. We will make clear representations about it.", " Thanks for your acknowledgment of the value of our technical method and impressive experimental results across various tasks. Below please find our responses regarding your concerns.\n\nQ1-1: Why PolarMix is novel and brings significant performance improvements across several target tasks?\n\n- As described in Lines 48-56, PolarMix works excellently because it enriches the diversity of LiDAR point data yet ensures their fidelity concurrently. As a comparison, most existing methods such as Mix3D [21] can improve the data diversity as well but they impair the data fidelity which affects their effectiveness.\n- In addition, Lines 39-46 provide detailed analysis and discussion. We copy and summarize the relevant text below for your quick reference.\n - As compared with other augmentation methods (e.g. CutMix [37], Copy-paste [13], Mix3D [21]), PolarMix preserves unique properties of augmented LiDAR data by mixing points along azimuth direction, i.e. 1) partial visibility which means objects in LiDAR scans are incomplete where only object sides facing the LiDAR sensor are scanned with points as illustrated in Fig. 1(a); 2) point density decreases with the increase of depth as illustrated in Fig. 1(b). As a result, the augmented LiDAR scans of PolarMix are more realistic.\n- The two favorite properties have been experimentally verified in Section 4.1.2, where extensive experiments show that PolarMix outperforms the state-of-the-art point cloud augmentation methods consistently with clear margins.\n\n---\n\nQ1-2: More deep analysis for PolarMix?\n\n- Thanks for your constructive comments. We conducted the suggested new experimental analysis which will be included in the updated manuscript. We show that **PolarMix increases the recognition robustness in both spatial locations and scene layout**.\n- Firstly, we rotate instances in the testing LiDAR scans and report segmentation performances of MinkNet w/ or w/o using PolarMix, which evaluates how models recognize instances appearing in different spatial locations. \n - Table E below shows experimental results. It can be seen that the baseline performance drops significantly while rotating instances by different angles. 
This is largely because the baseline is very sensitive to the spatial location of instances that is often severely imbalanced in most existing datasets (due to LiDAR data collection and annotation constraints).\n - As a comparison, PolarMix is robust to the instance spatial location without much performance drop. The experimental results show that PolarMix effectively improves the generalization of the trained LiDAR model (with respect to the instance spatial location) by generating lots of training samples at different spatial locations.\n- We then swap sectors of two testing scans with different angles, which generates new testing LiDAR scans with different layouts of road scenes. Similarly, we report segmentation performances of MinkNet w/ or w/o using PolarMix.\n - The results are summarized in Table F below. We can observe that the baseline performance drops significantly while swapping sectors in the testing set with different angular ranges. However, the models trained with PolarMix are more robust with much less performance drop, indicating that PolarMix improves the robustness of LiDAR models with respect to the scene layout effectively.\n \nTable E (with newly conducted experiments): Segmentation with MinkUNet over the validation set of SemanticKITTI with rotated instances. PolarMix improves the robustness of the baseline clearly with respect to the angular variations of instances (i.e. spatial location variations).\n\n| method | $0^\\circ$ | $45^\\circ$ | $90^\\circ$ | $135^\\circ$ | $180^\\circ$ |\n| :----- | :-------: | :-------: | :-------: | :-------: | :-------: | \n| baseline | 58.9 | 58.0(-0.9) | 57.6(-1.3) | 57.9(-1.0) | 57.5 (-1.4) |\n| +PolarMix | 65.0 | 64.9(-0.1) | 64.9(-0.1) | 65.0(-0.0) | 64.8(-0.2) |\n\nTable F (with newly conducted experiments): Segmentation result of MinkUNet over validation set of SemanticKITTI. We swap sectors of testing LiDAR scans to diversify layouts of road scenes and report mIoU performances. PolarMix significantly increases the robustness of the segmentation model.\n\n| method | $0^\\circ$ | $45^\\circ$ | $90^\\circ$ | $135^\\circ$ | $180^\\circ$ |\n| :----- | :-------: | :-------: | :-------: | :-------: | :-------: | \n| baseline | 58.9 | 58.0(-0.9) | 56.4(-2.5) | 56.6(-2.3) | 57.6(-1.3) |\n| +PolarMix | 65.0 | 64.5(-0.5) | 64.5(-0.5) | 64.2(-0.8) | 64.4(-0.6) |", " Thank you for your appreciation of the novelty and simplicity of our proposed methods, the clearness in presentation, and thorough evaluation across various tasks. Below please find our responses regarding your concerns.\n\nQ: Will the authors release code after publication?\n\n- Yes, we are committed to open-source research and will release our codes upon the acceptance of this work.", " Thanks for your confirmation of the value of our proposed approach, impressive experimental results as well as clearness of presentation. Below please find our responses to your concerns.\n\nQ1: PolarMix on the latest baseline models?\n\n - We would clarify that SPVCNN [30] and Cylinder3D [41] tested in our submitted manuscript are two of most advanced open-source segmentation networks for LiDAR point clouds. SPVCNN with conventional global augmentation ('SPVCNN+CGA' in Table 1 in our submitted manuscript) achieves competitive segmentation performance (mIoU of 60.7\\% over validation set of SemanticKITTI) with very fast training speed (less than 1 day) while Cylinder3D achieves SOTA performance (mIoU of 65.9\\% as reported in Table 3 in [41]) with relatively longer training time. 
Both networks are widely used in the 3D point cloud community. In addition, PolarMix significantly boosts SPVCNN and achieves a mIoU of 66.2\\% as shown in table below (extracted from Table 1 in our submitted manuscript), indicating that it is indeed a SOTA augmentation method for LiDAR point cloud learning.\n\n| Method | mIoU |\n| :------------------- | :-------- |\n| Cylinder3D[41] | 65.9 (copied from [41]) |\n| SPVCNN [30] | 60.7 |\n| SPVCNN+PolarMix | **66.2** |\n\n---\n\nQ2: Is our contribution limited?\n\n - We would clarify that the proposed PolarMix is a new data augmentation method as compared with previous methods including CutMix [37], Copy-Paste [13] and Mix3D [21]: CutMix and Copy-Paste from 2D vision work in Cartesian coordinates system; Mix3D globally mixes 3D scenes in an out-of-context manner. All three methods disregard the LiDAR data properties and the augmented data thus lack of fidelity (Lines 39-46). \n - Differently, PolarMix mixes points within the polar coordinate system which ensures the high-fidelity of the augmented point cloud data (Lines 48-50). Specifically, PolarMix consists of two innovative augmentation designs including scene-level swapping and instance-level copy-and-paste, both operating in the azimuth direction that is well aligned with the unique scanning mechanism of LiDAR sensors (Line 37-38) and preserving specific properties of LiDAR scans including partial visibility and density variation along the depth (Lines 42-47 and Figure 1).\n\n---\n\nQ3: Refer to LiDAR-Aug (CVPR2021)?\n\n - Thank you for suggesting this paper. We cited LiDAR-Aug in the related works of the manuscript [12] (Lines 93-94) and will provide a more detailed review of this work. We note that the two methods are essentially different and incomparable. LiDAR-Aug uses additional synthetic CAD models (e.g. cars and pedestrians) to augment LiDAR scans while our PolarMix does not use any additional data sources. In addition, LiDAR-Aug does not release CAD resources and code which makes reproduction and comparison almost impossible.\n\n\n---\n\nQ4: Subdividing it along inclination direction?\n \n - Thank you for your suggestion! We also considered mixing in the inclination direction in the stage of methodology design. After some preliminary studies, we found that subdividing in the inclination dimension either impairs LiDAR data fidelity or requires very complicated point rendering design to preserve LiDAR characteristics, e.g., for preserving partial visibility and density variation along the depth. As a comparison, cut and mix along the azimuth direction preserves point fidelity greatly and efficiently as it is well aligned with the point capturing and LiDAR scanning mechanism.\n \n---\n\nQ5: Mixing more than two scans?\n \n - Thanks for the suggestion. We conducted the suggested experiments by increasing the mixed LiDAR scans and benchmarking them without using PolarMix. The experiments were conducted with SPVCNN that is trained with sequence 00 of SemanticKITTI. As Table A shows, mixing two scans produces clearly the best performance. We examined the mixed data and found that mixing more scans introduces more hardly distinguishable objects. The experimental results are well in line with other mixing-based augmentation works [7, 32, 38, 21].\n \n Table A (with newly conducted experiments): Varying number of mixed scans by PolarMix. 
'no mixing' represents the vanilla training without augmentation of PolarMix.\n \n| \\#Scans | no mixing (baseline)| 2 | 3 | 4 |\n| :----: | :----: | :----: | :----: | :----: |\n| mIoU | 48.9 | **54.8** | 52.2 | 51.3 |\n\n---\n\nQ6: Writing issues?\n- Sorry for the typo. $\\sigma_1$ and $\\sigma_2$ in Line 185 should be $\\delta_1$ and $\\delta_2$ as in the Algorithm 1. We will revise them in the updated manuscript.\n\n---\n\nQ7: Implementation details?\n- We randomly choose rotation angles from different angular ranges when generating multiple copies. There is no specific considerations for these parameters in Lines 183-185. ", " This paper presents a polar coordinate system-based data augmentation approach, PolarMix, for scanning LiDAR point cloud understanding. PolarMix generates augmented data by mixing two different scans with scene-swapping and instance rotate-pasting. Extensive experiments and ablation studies validate the performance gain by PolarMix for three tasks(semantic segmentation, object detection, domain adaption)on three datasets. Strengths:\n- The paper is well-written and easy to follow.\n- The augmentation strategy is beneficial not only for semantic segmentation tasks but also for object detection and domain adaption. I like the idea of mixing two domains’ data scan-wisely to bridge the domain gap.\n- Since PolarMix is a DA strategy, generally it is model-agnostic and applicable to any model.\n\nWeaknesses:\n- The baselines for comparison of w/ and w/o PolarMix are not the latest ones. I am wondering how is the performance gain by DA of PolarMix under some latest backbones with higher baseline performance, the improvement will still significant?\n- The contribution is a little limited, considering the relevant work. PolarMix is kind of the combination of CutMix, Copy-Paste and Mix3D from the perspective of idea. \n- At least one relevant work is not mentioned and compared for object detection task. [1]\n\n[1]J. Fang, X. Zuo, D. Zhou, S. Jin, S. Wang and L. Zhang, LiDAR-Aug: A General Rendering-based Augmentation Framework for 3D Object Detection, CVPR2021 Besides the detailed ablations studies in the work, I am also curious about the following questions:\n- This work tried cutting and swapping points along azimuth directions, how about further subdividing it along inclination direction.\n- Is that possible to mix more than two scans? For example, generate a new can by mixing each quarter from 4 different scans.\n- What do σ_1 and σ_2 indicate (L. 185)?\n- For the rotation angles of rotate-pasting, any reason about why choosing values as L.183-185 for three datasets? The authors didn’t discuss the limitations of their research and potential negative societal impacts.", " This paper presents a data augmentation method specifically designed for LiDAR point clouds. Two layers of data augmentation are introduced: scene level augmentation and instance level augmentation. 
The authors demonstrated the enhancement brought by this data augmentation method on various applications, and showed that the proposed data augmentation is superior than the state-of-the-art LiDAR data augmentation methods.\n + The idea is simple, in a good way\n+ The evaluation is thorough: the authors demonstrated three application, namely semantic segmentation, object detection, domain gap reduction\n+ The proposed method outperforms the state-of-the-art LiDAR data augmentation methods\n Will the authors release code after publication?\n This paper presented a neat idea and showed extensive experiments to justify its contribution. I do not see any strong limitations.", " This paper proposes an augmentation method for point clouds, especially captured in road environments. The proposed augmentation method is to crop, cut, and mix two 3D scans at both scene-level and instance-level. This concept can be extended to unsupervised domain adaptation (UDA) by fusing source domain point cloud (known labels) and target domain point cloud (unknown labels). Series of experiments demonstrate its superiority in comparison with other point cloud augmentation methods. 1. Strength\n- Simple and straightforward ways of augmenting point clouds. \n- Potential extension to UDA tasks.\n- Consistent performance improvements in various tasks.\n\n2. Weakness\n- Lack of analysis and intuition behind such a design.\nOverall, the authors present extensive experiments to demonstrate the superiority of the proposed data augmentation scheme. However, I wonder why such data augmentation is helpful for point-based recognition tasks. For instance, is PolarMix effective in making the networks more robust to point cloud density? or point cloud noise? The lack of such analysis makes me suspicious of the novelty of this work. Currently, this is the dominant worrying point of this paper.\n 1. Intuition\n\n1-1. I understand that this method effectively improves the quality of several target tasks. However, what is the fundamental intuition behind the proposed data augmentation? Why do such strategies bring performance improvement? While authors demonstrate lots of experiments that consistently increase the accuracy, I cannot expect why such an augmentation scheme is effective and novel. \n\n1-2. What kinds of wrong estimations are fixed after applying the proposed augmentation strategy, PolarMix? For instance, is the estimation of the far-away points more accurate? Or, is this method robust to the density of 3D scans? Even though I read the additional results in the supplementary material, I cannot find any deep analysis of this issue. \n\n2. Experiments\n\n2-1. What if we fuse more than 3 scans?\n\n2-2. What if there is a misalignment after data augmentation? In Fig2 of the manuscript, PolarMix crops point clouds in a certain range of azimuth. It means that the origins of the two-point clouds are aligned. I wonder why such assumption or alignment should be required for this augmentation? Why should we crop point clouds along the azimuth axis? Do you have any reason? (This is also related to Q.1-1)\n\n2-3. Lack of comparison against recent UDA methods in Table5 of the manuscript.\nIn recent days, there are papers about UDA for 3D semantic segmentation, such as xMUDA [H]. In my opinion, the limited demonstration in Table5 of the manuscript does not fully support the authors' claims about the UDA setup. Technically speaking, PolarMix can be applicable to UDA and I agree with the authors' claims. 
However, I cannot judge whether this augmentation scheme is much more effective for the UDA for the 3D semantic segmentation task in comparison with previous methods [H]. Accordingly, the authors overclaim their novelty in UDA for the 3D semantic segmentation due to their limited comparisons.\n\n3. Writing\n3-1. (L208) As far as my understanding, MinkUNet also consists of sparse convolutional layers as SPVCNN did. However, the sentence can mislead readers as MinkUNet processes dense voxel-based architectures in contrast to SPVCNN.\n\n3-2. I wonder why the authors did not present their qualitative results in the main manuscripts. Not just the quantitative results, qualitative results can help readers' understanding of this work. I recommend authors to revise the paper if it is accepted. \n(I found the results in the supplementary materials. However, for completeness, I still think that the qualitative results should be listed in the main manuscripts.)\n\n> If authors relieve my worries, I am willing to change my score.\n\nReference\n- [H] xMUDA: Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation, CVPR 2020\n 1. While authors only focus on the LiDAR scans that are captured in the road environments, there are other types of point cloud datasets, such as S3DIS [I] or ScanNet [J]. In my understanding, this method is not applicable to the indoor point set datasets. If so, authors should have included this problem as their limitation. \n\nReference\n- [I] 3d semantic parsing of large-scale indoor spaces, ICCV 2016\n- [J] Scannet: Richly-annotated 3d reconstructions of indoor scenes, CVPR 2017\n", " - This paper introduces an approach to augmenting cylindrical LiDAR point cloud to acquire boosted performance on 3D semantic segmentation and 3D detection. The proposed approach called PolarMix enables cut, edit, and mix point clouds along the scanning direction. The augmentation happens on the scene-level and instance-level so that the augmented data provides a variety of combinations of the augmented scenes. The proposed approach is superior to conventional global rotation and scale augmentation, CutMix [37], Copy-Paste [13], and Mix3D [21]. **Strengths**\n1. The paper comprehensively summarizes the related work in the point cloud augmentation field. The paper is self-contained, so the readers readily follow the problem and the recent advances.\n2. The paper is straightforward to understand. I enjoyed reading the paper. The idea of mixing the 3D scenes is already known, but mixing on the polar coordinate seems to be very effective on the cylindrical LiDAR point clouds. \n3. The approach shows compelling results on the SemanticKITTI [1], nuScenes-lidarseg [2], SemanticPOSS [22], SynLidar datasets. The proposed augmentation approach is applied with recent 3D semantic segmentation networks, such as MinkNet [8], SPVCNN [30], RandLA-Net [15], Cylinder3D [41]. For the task of 3D detection, PointPillar [18], Second [36], CenterNet [10] are applied. The gain is clear, and it outperforms other baselines.\n4. The approach shows the effectiveness of the unsupervised domain adaptation as well. Since the approach can mix labeled source data and unlabeled target data, the approach can be readily applied to the various combinations of the domains.\n5. The proposed augmentation approach improves data efficiency, as demonstrated in the experiment section. With the small amount of 3D scans, the PolarMix can produce a similar performance.\n6. The paper explains the proposed idea in detail. 
The supportive figures (such as Figures 1 and 2) help to understand the approach better.\n\n**Weakness**\n1. The approach is a simple extension of the idea of the mix of 3D scenes and rotating bounding boxes (limited to the azimuth angles). The idea is not entirely new, and the target domain is limited to the cylindrical LiDAR datasets, not the general 3D scenes. However, the cylindrical LiDAR domain is one of the exciting domains for the task of intelligent mobility systems, and the custom design of 3D detection for the cylindrical LiDAR data also forms a research field. Therefore, I think it is not a critical weakness.\n2. It is unclear how the baseline approaches for the semantic segmentation are selected. In addition to the CutMix [37], Copy-Paste [13], Mix3D [21], there are possible options to be applied. Similarly, approaches to augment the 3D detection task, such as GT-Aug [36, 12], CutMix, or approaches of [6, 11, 13] could be applied. The paper needs some clarification on how the baseline approaches were selected.\n3. It is recommended to indicate the computational overhead when the proposed approach is applied. For instance, when training MinkNet, compared with the vanilla MinkNet training, what percentage of the total time is added to use PolarMix? Depending on the additional computation burden, the baseline approaches could be reevaluated. 1. Did the authors try to mix more than two scenes? The proposed Algorithm only shows how to mix two scans, but I wonder if the idea could be generalized.\n2. It is interesting to see that the rotated instances help for the data augmentation. However, I presume that the instances in a highly cluttered or distant scene would not be that effective. Such instances would make unrealistic samples because they are partially observed in a particular viewing direction. Some analysis of the data effectiveness or performance gain versus the distribution of the distant objects would be interesting.\n3. Please check the weakness section and answer the questions there too.\n - The paper does not address the limitation of the proposed approach. The concern about the computational overhead is clarified and stated. If it has drawbacks in any facts, the limitation section needs to describe them.", " This paper proposed a data augmentation method named PolarMix for LiDAR point cloud perception. It includes two separate operations: a scene-level swapping that first cuts point cloud sectors of two LiDAR scans w.r.t. the point azimuth values and then switches the cut sectors to form a new sample for training; and an instance-level copy-paste which selects instance points for certain classes from one scan, rotates them along the LiDAR scanning direction and pastes the rotated points to another scan. Experimental results show that PolarMix yields improvements for both LiDAR semantic segmentation and 3D object detection. Strengths:\n1. This paper introduces a new data augmentation method for LiDAR point clouds.\n\n2. The proposed PolarMix is tested for both LiDAR semantic segmentation and 3D object detection and the results show that the proposed method can improve the baselines for both tasks.\n\nWeaknesses:\n1. This paper explains mostly the “what” and “how” but not much “why” for their specific augmentation operations. More design insights and analysis can improve this paper a lot.\n\n2. The experimental analysis is insufficient to support all conclusions stated in this paper.\n\n3. The elaboration of this paper is not good enough. 
Some assumptions are made without clear logical relations.\n Regarding weakness 1, there are some concerns w.r.t. the idea proposed in this work:\n1. In L56, the authors state that PolarMix can maintain high fidelity for the augmented samples. However, the augmented scan shown in Figure 1 (d) contains lots of instances like cars that are scattering around the center-vehicle with arbitrary directions, which might not happen in real-world scenarios. Also, the pasted instances seem to be overlapped with the background classes. As instances like cars, bicycles, and pedestrians tend to appear only on the road surface, how to ensure that they are pasted to a proper position with high fidelity?\n\n2. In L109, the LiDAR points in the polar coordinate system are defined by azimuth theta, depth r, and inclination phi. Why select only the azimuth to split point cloud sectors, but not depth or inclination or even the combination of all three?\n\n3. In L207, the authors state that the proposed method can work across different LiDAR point cloud representations. Can it be applied to range view networks? How does it ensure a proper projection after concatenating or pasting new points in the current LiDAR point cloud?\n\n4. In suppL19, randomly swapping sectors of 45 degrees could downgrade the segmentation performance. Any possible explanation on why this problem happens?\n\nRegarding weakness 2, there are some concerns w.r.t. the experiment part of this work:\n1. The main results for LiDAR semantic segmentation, e.g., Table 1, are from relatively old segmentation methods MinkNet (55.9% mIoU) and SPVCNN (58.0% mIoU), which do not yield competitive performance over state-of-the-art ones, such as Cylinder3D (65.9% mIoU). Although the authors involved RandLA-Net and Cylinder3D in later analysis, i.e., Table 3, they are only tested under the 10% data setting. Therefore, the performance gains of PolarMix from the relatively lower-score methods are not representative enough to demonstrate superiority.\n\n2. The results for 3D object detection, i.e., Table 4, only include comparisons with the baselines. Other augmentation techniques, such as copy-paste, are recommended to include for a more comprehensive comparison.\n\n3. The ablation studies in Section 4.4 is not comprehensive enough to support the effectiveness of each of the components in PolarMix. Besides, the scene-level swapping seems less effective compared to the rotate-pasting.\n\nRegarding weakness 3, there are some minor problems w.r.t. the elaboration:\n1. Omega and C should be properly defined after their first appearance, i.e., Eq. (1) in L116.\n\n2. The delta_1 and delta_2 symbols in Algorithm 1 are not defined.\n\n3. The sigma_1 and sigma_2 symbols in L185 are not defined.\n\n4. In L247, why the cut-and-mix strategy in PolarMix can mitigate the domain discrepancy? Since PolarMix does not include any domain alignment components like the domain discriminator, the performance gains seem more related to the augmentation effect but not the adaptation effect.\n This paper does not include a specific description of potential limitations. In L396, the author state that the effectiveness of the proposed method might be influenced by the parameter setting, which should be regarded as and included in ablation analysis. More general limitations are recommended to be discussed, such as the applicability of the proposed method for different types of LiDAR point clouds and networks." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 4, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5, 4 ]
[ "RfG9A7s1Y6q", "Sw7wN2jXODE", "IvWJFbiKQo", "xdoU4mGVBM68", "9a_hAvigzUb", "4SIJU6RtI_", "RfG9A7s1Y6q", "Hm4TOQrbev", "tIhr0vlkEIi", "Sw7wN2jXODE", "EPrBDQzp3qC", "IvWJFbiKQo", "nips_2022_wS23xAeKwSN", "nips_2022_wS23xAeKwSN", "nips_2022_wS23xAeKwSN", "nips_2022_wS23xAeKwSN", "nips_2022_wS23xAeKwSN" ]
nips_2022_sde_7ZzGXOE
Is Out-of-Distribution Detection Learnable?
Supervised learning aims to train a classifier under the assumption that training and test data are from the same distribution. To ease the above assumption, researchers have studied a more realistic setting: out-of-distribution (OOD) detection, where test data may come from classes that are unknown during training (i.e., OOD data). Due to the unavailability and diversity of OOD data, good generalization ability is crucial for effective OOD detection algorithms. To study the generalization of OOD detection, in this paper, we investigate the probably approximately correct (PAC) learning theory of OOD detection, which has been posed by researchers as an open problem. First, we find a necessary condition for the learnability of OOD detection. Then, using this condition, we prove several impossibility theorems for the learnability of OOD detection under some scenarios. Although the impossibility theorems are frustrating, we find that some conditions of these impossibility theorems may not hold in some practical scenarios. Based on this observation, we next give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios. Lastly, we also offer theoretical support for several representative OOD detection works based on our OOD theory.
Accept
This paper studies generalization and learnability questions in the realm of out-of-distribution (OOD) detection. Specifically, it applies PAC learning to the theory of OOD detection. The contributions include new conceptual definitions of agnostic PAC learnability of OOD detection. Then, the authors argue for studying prior-unknown spaces under certain necessary conditions. This leads to a number of novel results, both in theory and in terms of possible practical impact (e.g., when OOD detection will succeed vs. fail). The reviewers found the paper sound, insightful, clearly-written, and novel. This paper benefits the community because it is one of the few theoretical studies of OOD detection. For the final version, the reviewers have many comments regarding definitions, terminology, and some of the technical details. I encourage the authors to incorporate as much of this feedback as possible to make the paper easier to read for future audiences. For example, please
- add the full proof of how Eq. (2) relates to PAC-learnability,
- add and clarify the realizability assumption in the revision,
- use the description of Theorem 4 in appendix G.2 to replace Theorem 4 in main text.
The authors should also provide proof sketches for the main results (either in the main paper or the appendix). This paper contains many theoretical results, as well as ways to unpack them in the context of more practical scenarios. All of this would benefit from clear exposition. There are also a handful of typos to fix (in the notation/equations and in the exposition). Given the large number of small questions/issues, it is important to address these in the final version of the paper. The reviewers all vote positively toward acceptance of this paper, and therefore, I also recommend acceptance.
train
[ "76I8YaF2LK", "MeSTGZRiBuE", "vi1hXR6LBgx", "Qvxyy5SnnsZ", "_183zvTc8M", "u3eKVG3Zi6N", "2mD6XzVS0H", "OaHRN-hvs5", "5_9sfokjkK", "_FHF-gIUWjt", "U1IOGByRwM", "-RvrWLeYR4Y", "4Ep1FKIoPb_", "TWaPSS0iHc" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer KK6q\n\nMany thanks for your kind support!\n\nDo you have more suggestions to improve the quality of our paper? We are glad to discuss our paper with you.\n\nBest,\n\nAuthors of Paper485", " Thanks for the comprehensive response. I have to say sorry for giving an unfair score at first. Since I am not familiar in this area and time is quite limited in reviewing period, I wasn't able to go through all the proofs and give more constructive suggestions. After carefully reading the full paper together with the supplemental materials and other reviewers' comments, I would love to see this paper published and I increased my score. \nThanks again for your responses to my trivial questions.", " Thanks for your comments! We will answer them as follows: \n\n${\\bf Q5.}$ What is the ''+\" means in $D_{X}:=(1-\\pi^{\\rm out}) D_{X_I}+\\pi^{\\rm out} D_{X_O}$?\n\n${\\bf A5.}$ Thank you for your question. \n\nFor convenience, let $P=(1-\\pi^{\\rm out}) D_{X_{\\rm I}}$ and $Q=\\pi^{\\rm out} D_{X_{\\rm O}} $. \n\nIt is clear that $P$ and $Q$ are measures. Then $P+Q$ is also a measure, which is defined as follows: for any measurable set $A\\subset \\mathcal{X}$, we have\n$\n (P+Q)(A)=P(A)+Q(A).\n$\n\nFor example, when $P$ and $Q$ are discrete measures, then $P+Q$ is also discrete measure: for any $\\mathbf{x}\\in \\mathcal{X}$,\n$\n (P+Q)(\\mathbf{x})=P(\\mathbf{x})+Q(\\mathbf{x}).\n$\n\nWhen $P$ and $Q$ are continuous measures with density functions $f$ and $g$, then $P+Q$ is also continuous measure with density function $f+g$: for any measurable $A\\subset \\mathcal{X}$,\n\\begin{equation*}\n P(A) = \\int_A f(\\mathbf{x}) {\\rm d} \\mathbf{x},~~~Q(A) = \\int_A g(\\mathbf{x}) {\\rm d} \\mathbf{x},\n\\end{equation*}\nthen\n\\begin{equation*}\n (P+Q)(A) = \\int_A f(\\mathbf{x})+ g(\\mathbf{x}) {\\rm d} \\mathbf{x}.\n\\end{equation*}", " Thanks for your comments! We will answer them as follows:\n\n${\\bf Q1.}$ Some notations and expressions can be refined in Section 2. For example, $S$ and $D_{X_{\\rm I}Y_{\\rm I}}^n$ in eq.2 can be explained (minor).\n\n${\\bf A1.}$ Thank you for your helpful comments. In the revision (Appendix D.4), we add explanations to refine some notations and expressions in Section 2.\n\n In the example, \n\n$S=\\{(\\mathbf{x}^1,{y}^1),...,(\\mathbf{x}^n,{y}^n)\\}$ is training data drawn independent and identically distributed from $D_{X_{\\rm I}Y_{\\rm I}}$.\n\n$D_{X_{\\rm I}Y_{\\rm I}}^n$ denotes the probability over $n$-tuples induced\nby applying $D_{X_{\\rm I}Y_{\\rm I}}$ to pick each element of the tuple independently of the other\nmembers of the tuple.\n\nBecause these samples are i.i.d. drawn $n$ times, researchers often use ''$S\\sim D_{X_{\\rm I}Y_{\\rm I}}^n$\" to represent a sample set $S$ (of size $n$) whose each element is drawn i.i.d. from $D_{X_{\\rm I}Y_{\\rm I}}$.\n\n The notation $S\\sim D_{XY}^n$ is common used in learning theory and can be found in page 38 in machine learning book [21].\n\n${\\bf Q2.}$ Some typos. In section 2 Definition 1. \"if there exist an algorithm\" $->$ \"if there exists an algorithm\".\n\n${\\bf A2.}$ Thank you for your helpful suggestions. We will revise these typos in the revision.\n\n${\\bf Q3.}$ Some experiments can be added to show the correctness of the theorems.\n\n${\\bf A3.}$ Thank you for your comments. The mathematical proofs are logical experiments. Compared to empirical experiments, the proofs are more rigorous and comprehensive. 
In general, when we cannot obtain mathematical proofs, we often verify our results via empirical experiments. However, when we already have rigorous mathematical proofs, empirical experiments are not necessary.\nOther reviewers have also checked our proofs and believe that the proofs are solid and correct. Since we have provided rigorous mathematical proofs, it is unnecessary to conduct empirical verification (i.e., experiments) for our theoretical results.\n\n${\\bf Q4.}$ The practical impacts may not be large enough.\n\n${\\bf A4.}$ Thank you for your comments. \n\nWe still argue that our study is not of purely theoretical interest; it also has practical impact. \n\nIn this paper, we are the first to provide the agnostic PAC theory for OOD detection.\n\nFirst, when we design OOD detection algorithms, we normally only have finite ID datasets, corresponding to the finite-ID-distribution space. In this case, Theorem 8 provides necessary and sufficient conditions for the success of OOD detection. This theorem is very useful and can give practical guidance. \n\nSecond, our theory also provides theoretical support (Theorems 10 and 11) for several representative OOD detection works [7,8,23]. \n\nThird, our theory shows that OOD detection can be addressed in image-based distributions as long as ID images have clearly different semantic meanings from OOD images. \n\nFourth, we should not expect a universally working algorithm. It is necessary to design different algorithms in different scenarios.\n\nFifth, our theory reveals many necessary and sufficient conditions for the learnability of OOD detection, hence opening a door to studying the learnability of OOD detection.\n\nAdditionally, the other reviewers also agree with us and think that our theory has large practical impact.\n\nReviewer FqYr: The scenarios that the authors consider are not too technical but highly relevant to practical OOD detection methods. Hence, it gives useful insights for practitioners as well.\n\nReviewer KYDH: These assumptions are practical and mild, and can be satisfied by many practical cases, for example, FCNNs, CNNs and kernel space. Therefore, the theory can be tightly connected with practical applications. \n\nReviewer KYDH: I think the contribution are significantly important and this work can give a good guidance for the development of OOD detection. This paper has the potential to achieve a long term impact to OOD learning field.\n\nReviewer iXvy: From the practical part, several theorems are considered using networks or finite in-distribution domains, making the whole paper also fit the taste of practitioners. In many practical scenarios, we cannot expect OOD data is the ones we have already seen, which is exactly the problem this paper studies. Besides, the theorem regarding finite ID distributions is also practical. If I understand correctly, in this practical scenario, this paper gives a better result, which is very interesting to me and significant to the field (we often only have finite ID distributions in practice).", " Thanks for your comments! We will answer them as follows:\n\n${\\bf Q1.}$ It is better for the author to provide proof sketch and intuitions for important theorems.\n\n${\\bf A1.}$ This is a very good suggestion. We will add proof sketches for the main theorems (e.g., Theorems 5, 8, 9 and 10) in the final version.\n\n${\\bf Q2.}$ I suggest that the author should use the description of Theorem 4 in appendix G.2 to replace Theorem 4 in main text.\n\n${\\bf A2.}$ This is a very good suggestion. 
Your suggestion is correct. In the revision, we revise Theorem 4 according to your suggestions.\n\n${\\bf A3.}$ Typos/grammar:\n\n1) In line 305, $K$ should be $\\lambda$ ?\n\n2) In line 340, $D_{XY|Y}^{\\rm in}$ should be $D_{X_IY_I}$ ?\n\n3) In line 171. $D_{X_I}$ should be $D_{X_IY_I}$ ?\n\n\n${\\bf A3.}$ Thank you for your detailed checking. In the revision, we revise all typos according to your suggestions.\n\n${\\bf Q4.}$ After checking your proof, I think Condition 2 can be removed from Theorems 7 and 10. Although Condition 2 is weak and meaningful, I still think it is better to remove Condition 2. The idea about how to remove Condition 2 can be motivate from the proof of Theorem 9 (the second part).\n\n${\\bf A4.}$ Thank you for your constructive comments! Your idea is correct, when $K=1$. However, when $K>1$, Condition 2 can not be removed. Because the techniques used in Theorem 9 require that $\\inf_{h\\in \\mathcal{H}} R_D(h)=0$. When $K=1$, we can ensure that the approximate error $\\inf_{h\\in \\mathcal{H}} R_D^{\\alpha}(h)=0$ in Theorems 7 and can also find FCNN to ensure that $\\inf_{h\\in \\mathcal{H}} R_D^{\\alpha}(h)=0$ in Theorems 10. However, when $K>1$, we cannot guarantee this. Therefore, the techniques developed in Theorem 9 can only be used to remove the Condition 2 when $K=1$.\n\n", " Thanks for your comments! We will answer them as follows:\n\n${\\bf Q5.}$ The paper refer to distributions as \"domains.\" Is this a common way of saying it in the literature?\n\n${\\bf A5.}$ In the classical statistical learning theory papers, researchers directly use \"distribution\" since there is only one used distribution. When there are more than one distribution used in papers, some researchers tend to use domain to represent the distribution, e.g., transfer learning field, domain adaptation field . Because there are two main distributions used in the OOD detection: ID distribution and OOD distribution, we use \"domain\" to represent the distributions $D_{XY}$ in our paper. Additionally, we also note that paper [24] related to open set learning also use the word \"domain\" to represent the distributions $D_{XY}$.", " Thanks for your comments! We will answer them as follows:\n\n${\\bf Q4.}$ The conditions of Theorem 3, $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$, look similar to Realizability assumption and compatibility condition. However, the conditions of Theorem 3 seem to be prohibiting the learnability in Theorem 3 while Realizability and compatibility conditions are making the learning possible in Theorem 8 and Theorem 9. Should we consider such conditions as good ones or bad ones?\n\n${\\bf A4.}$ This is a very good question. \n\nFirst, the condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$ and $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$ is very different from Realizability assumption. \n\n1) Realizability assumption requires that there is $h^* \\in \\mathcal{H}$ such that $R_{D}(h^*)=0.$\n Hence, when the unknown class-prior probability $\\pi^{\\rm out}>0$, Realizability assumption implies the condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$.\n\n2) But the condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$ doesn't imply Realizability assumption. 
The proof of Theorem 3 has shown that when ID and OOD distributions have overlap, then when $\\pi^{\\rm out}>0$, we have $\\inf_{h\\in \\mathcal{H}}R_D(h)>0$. Hence, we cannot find hypothesis function $h^* \\in \\mathcal{H}$ such that $\n R_{D}(h^*)=0.$\n\n3) The difference between condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$ and Realizability assumption is that Realizability assumption requires that we can find a ${\\bf common}$ hypothesis function $h^*$ such that $R_D^{\\rm in}(h^*)=0$\nand $R_D^{\\rm out}(h^*)=0$, but the condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$ does not imply that we can find a ${\\bf common}$ function $h^*$ such that $R_D^{\\rm in}(h^*)=0$\nand $R_D^{\\rm out}(h^*)=0$.\n\nSecond, the condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$ is different from the compatibility condition. \n\n1) The compatibility condition doesn't imply $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$.\n\n2) The difference between condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$, $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$ and compatibility condition is that compatibility condition requires that we can find a ${\\bf common}$ hypothesis function $h_{\\epsilon}$ such that $R_D^{\\rm in}(h_{\\epsilon})$\nand $R_D^{\\rm out}(h_{\\epsilon})$ can approximate $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)$, respectively. But the condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$ does not imply that we can find a ${\\bf common}$ hypothesis function $h_{\\epsilon}$ such that $R_D^{\\rm in}(h_{\\epsilon})$\nand $R_D^{\\rm out}(h_{\\epsilon})$ can approximate $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)$, respectively.\n\nThird, the main aim to develop Theorem 3 is different from that of Theorems 8 and 9.\n\n1) Theorem 3 discusses how the overlap between ID and OOD affects the learnability of OOD detection. According to Theorem 3, we know that if the condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$ holds, then overlap between ID and OOD results in the failure of OOD detection. In fact, Theorem 3 is deeply related to Theorems 4 and 12 and is the necessary lemma of Theorems 4 and 12.\n\n2) Theorem 9 discusses when the Realizability assumption holds, OOD detection can be learnable in some cases. When the realizability assumption holds, the overlap will not happen. So there is no contradiction between Theorem 9 and Theorem 3.\n\n3) Theorem 8 discusses that the compatibility condition is necessary and sufficient condition for the learnability of OOD detection in the finite-ID-distribution space. There is no any relation between Theorem 8 and the condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$.\n\nFourth, it is difficult to say whether the condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$\nand $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$ is good or bad. \n\n1) This condition is practical. When $K=1$, FCNN-based hypothesis space, score-based hypothesis space and kernel-based hypothesis space satisfy this condition. 
Thus, in the one-class novelty detection\nand semantic anomaly detection cases, it may be inevitable to meet this condition. So, from the practical perspective, this condition is good. \n\n2) This condition implies that overlap between ID and OOD distributions can result in the failure of OOD detection. Thus, this condition restricts the scope of application of OOD detection. From this view, this condition is bad.", " ${\\bf Q1.}$ l.118, \"they are equivalent by Markov's inequality\": I can see that Eq. (2) implies the standard form of PAC-learnability by Markov's inequality, but I cannot see how we can confirm the converse. Could the authors provide a proof or a reference for that?\n\n${\\bf A1.}$ Thanks for your helpful comments. You are right. Eq. (2) implies the standard form of PAC-learnability by Markov's inequality, but the converse is proven by another techniques or inequalities. We give a proof in Appendix D.3. Additionally, Reviewer iXvy also proposed a similar question and believes that exercise 4.5 in [21] implies the answer. We give a brief proof as follows:\n\nWe need to prove the standard form of PAC-learnability implies the Eq. (2).\n\nPAC-learnability: for any $\\epsilon>0$ and $0<\\delta<1$, there exists a function $m(\\epsilon,\\delta)>0$ such that when the sample size $n>m(\\epsilon,\\delta)$, we have that with the probability at least $1-\\delta>0$,\n$\nR_D(\\mathbf{A}(S))-\\inf_{h\\in \\mathcal{H}} R_D(h) \\leq \\epsilon.\n$\n\nNote that the loss $\\ell$ defined in line 104-105 is bounded (because $\\mathcal{Y}_{\\rm all}$ is a finite set). We assume the bound of $\\ell$ is $M$, i.e., $|\\ell|\\leq M.$ Hence, according to the definition of PAC-learnability, when the sample size $n>m(\\epsilon,\\delta)$, we have that \n\n$\n E_S [ R_D(\\mathbf{A}(S))-\\inf_{h\\in \\mathcal{H}} R_D(h)] \\leq \\epsilon(1-\\delta)+2M\\delta < \\epsilon+2M\\delta.\n$\n\nIf we set $\\delta = \\epsilon$, then when the sample size $n>m(\\epsilon,\\epsilon)$, we have that \n\n$\n E_S [ R_D(\\mathbf{A}(S))-\\inf_{h\\in \\mathcal{H}} R_D(h)] < (2M+1)\\epsilon.\n$\n\nthis implies that\n\n$\n\\lim_{n \\rightarrow +\\infty} E_S [ R_D(\\mathbf{A}(S))-\\inf_{h\\in \\mathcal{H}} R_D(h)] =0.\n$\n\nwhich implies the Eq. (2). We have completed this proof. The key of this proof is that the loss $\\ell$ is bounded.\n\n${\\bf Q2.}$ ll.260-261, ''Since researchers can only collect finite ID datasets as the training data in the process of algorithm design, it is worthy to study the learnability of OOD detection in the finite-ID-distribution space\": I am not sure how to relate the fact to finite-ID-distribution space. Does \"finite ID datasets\" here mean datasets of finite samples or a finite variety of datasets? If it's for the latter sense, do the authors assume $|\\mathcal{X}|<+\\infty$ here?\n\n${\\bf A2.}$ Thank you for your comments. Here the \"finite ID datasets\" means the number of ID datasets is finite. For examples, in the classical OOD detection paper [23], the authors use the SVHN, CIFAR-10 and CIFAR-100 datasets as the ID-distribution data to conduct experiments. There are three ID datasets, so the domain space, which only contains SVHN, CIFAR-10 and CIFAR-100 as ID-distribution, can be regarded as the finite-ID-distribution space. Additionally, we needn't to assume that $|\\mathcal{X}|<+\\infty$. Theorem 8 is the key theorem related to finite-ID-distribution space, but we don't assume $|\\mathcal{X}|<+\\infty$ in Theorem 8. 
In Theorem 8, we only assume that $\\mathcal{X}$ is a bounded set, which means that there exists a constant $M$ such that for any $\\mathbf{x}\\in \\mathcal{X}$, $||\\mathbf{x}||<M$. The assumption that $\\mathcal{X}$ is a bounded set is very weak and can be satisfied in most cases.\n\nNote that\n$|\\mathcal{X}|<+\\infty$ means the number of elements in set $\\mathcal{X}$ is finite. However, the assumption that $\\mathcal{X}$ is a bounded set means that there exists a constant $M$ such that for any $\\mathbf{x}\\in \\mathcal{X}$, $||\\mathbf{x}||<M$. Hence, the two assumptions that $|\\mathcal{X}|<+\\infty$ and $\\mathcal{X}$ is a bounded set are very different.\n\n${\\bf Q3.}$ Could the authors define the realizability assumption explicitly?\n\n${\\bf A3.}$ Thank you for your helpful comments. We will demonstrate realizability assumption in the revision (line 278-279) and give the strict definition in Appendix D.2. The definition of realizability assumption is from definition 2.1 in [21]. \n\nRealizability Assumption: if for any domain $D_{XY}\\in \\mathscr{D}_{XY}$, there is a hypothesis function $h^*\\in \\mathcal{H}$ such that \n\n$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$ $R_{D}(h^*)=0.$", " ${\\bf Q5}.$ The density-based space is very important and interesting. Especially, the theorem 11 is one of the spotlights. Can you\ngive more explanations or applications regarding density-based space (theorems 9 and 11)?\n\n${\\bf A5}.$ Thank you for your comments. The density-based space can be widely used. I give two practical examples to explain how to use the density-based space. Example 1, if ID distribution and OOD distribution are mixture truncated normal distributions, then we can check that the generated domains belong to some density-based spaces. Example 2, for any domain space $\\mathscr{D}_{XY}$, which contains a density-based space such that the equivalence classes between the domain space and the density-based space are same, then we can check that Theorems 9 and 11 still hold for this domain space. \n\n${\\bf Q6}.$ The mathematical expression in Definition 1 about PAC learnability is different with the normal expression of PAC learnability. Although line 118 has told us that they are equivalent and I also realize that they are equivalent by paper [21,30] (exercise 4.5 in [21] can prove it?) , the paper will be improved and more clear if a brief proof for the equivalent descriptions is given in the final version.\n\n${\\bf A6}.$ Thanks for your comments and suggestions. You are right. The exercise 4.5 in [21] implies the answer. Reviewer FqYr also proposed a similar question. In the response for Q1 of Reviewer FqYr, we give a proof to show that the standard form of PAC-learnability implies the learnability. In the revision, we also provide a proof in the Appendix D.3 to show why Definition 1 about PAC learnability is equal to the normal expression of PAC learnability. \n", " Thanks for your comments! We will answer them as follows:\n\n$\\bf{Q1.}$ I have read some papers regarding PQ learning and feel that PQ learning is totally different from OOD detection. PQ learning focuses on scenarios where OOD data are somehow available, yet OOD detection focuses on the opposite scenarios. However, it is better to demonstrate their difference deeply. Does PQ learning have limitations when meeting different OOD data in the future? I am interested to see some discussions regarding this part.\n\n$\\bf{A1.}$ Thank you for your comment. This is a good comment. 
[49, 50] focus on PQ learning theory. In PQ learning, $P$ corresponds to ID distribution $D_{X_{\\rm I}}$ in OOD detection. $Q$ corresponds to marginal distribution $D_{X}$. $f$ is the labeling function. Using the same notations in [50], PQ learning aims to achieve the following estimation: for algorithm $\\mathbf{A}$,\n\n$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$$ \\mathbb{E}_S~[{\\rm rej}_P(\\mathbf{A}(S)) + {\\rm err}_Q(\\mathbf{A}(S);f)]<\\epsilon(n).$\n\nIn the separate space and $f$ is only defined over ${\\rm supp}~P$, our task aims to achieve the following estimation: for any $h\\in \\mathcal{H}$,\n\n$~~~~~~~~~~~~~~~~$$\\mathbb{E}_S~[(1-\\pi^{\\rm out}){\\rm rej}_P(\\mathbf{A}(S)) $$+ {\\rm err}_Q(\\mathbf{A}(S);f)]< $$[\\{(1-\\pi^{\\rm out}){\\rm rej}_P(h) + {\\rm err}_Q(h;f) \\} ]+ \\epsilon(n).$\n\nPQ learning aims to give PAC estimation or estimation under the realizability assumption. But, we study the agnostic PAC. Under some conditions, PQ learning can be regarded as the PAC theory for OOD detection in the semi-supervised (SS) or transductive learning (TL) cases. When the OOD distribution is different with Q, PQ learning has limitations when meeting different OOD data.\n\n\n\n\n$\\bf{Q2}.$ Similar to PQ learning, classification with reject option could be deeply compared to OOD detection instead of just\ncomparing both using plain words. I know they are very different and OOD detection theory is more difficult. But giving more\ndetailed comparison is better for this paper.\n\n$\\bf{A2.}$ Thank you for your comment. This is a good comment. Many papers [42, 43, 44, 45, 46, 47, 48] have discussed Classification with Reject Option (CwRO). There are two main differences between CwRO and OOD detection.\n\nThe first difference is that CwRO only focuses on the ID risk estimation: for any $h\\in \\mathcal{H}$,\n\n$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$$\\mathbb{E}_S~R_D^{\\rm in}(\\mathbf{A}(S))<R_D^{\\rm in}(h)$\n\nHowever, OOD detection theory not only focuses on the ID risk estimation, but also focuses on the OOD risk estimation: : for any $h\\in \\mathcal{H}$,\n\n$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$$\\mathbb{E}_S~R_D^{\\rm out}(\\mathbf{A}(S))<R_D^{\\rm out}(h)$\n\nThe second difference is that CwRO focuses on constructing special hypothesis spaces to reject the outlier, however, the hypothesis spaces used in our paper are more general and practical. For example, our hypothesis spaces are FCNN-based, Score-based and kernel-based.\n\n$\\bf{Q3.1}.$ In Figure.1, do we expect that the estimated lines (dash lines) get closer to the solid line? \n\n$\\bf{A3.1}.$ If we hope that OOD detection is learnable, then we except that the estimated lines (dash lines) get closer to the solid line.\n\n$\\bf{Q3.2}.$ If so, when overlap exists, why is the solid line not straight? Can you bring me to the specific part regarding this?\n\n$\\bf{A3.2}.$ When the condition $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm in}(h)=0$ and $\\inf_{h\\in \\mathcal{H}} R_D^{\\rm out}(h)=0$ holds, then we can ensure that when overlap exists, the solid line is not straight. This has been proven in Theorem 3. You can find he detailed proof of Theorem 3 in Appendix.\n\n$\\bf{Q3.3}.$ It seems that the solid line will be straight if there are no overlaps, which makes OOD detection learnable. Is that correct? \n\n$\\bf{A3.3}.$ For some special OOD domain space, this is correct. 
Theorems 2, 8 and 11 imply that under some mild conditions, if the solid line is a line, then OOD detection is learnable for the single-distribution space, finite-ID-distribution space and density-based space.\n\n$\\bf{Q4}.$ More explanation, like Figure 1, could be added for understanding the theorems better. Brief proofs might be also useful.\n\n$\\bf{Q4}.$ Thank you for your comments. This is a very good suggestion. We will add the proof sketch for main theorems (e.g.,\nTheorems 5, 8, 9 and 10) in the final version.\n\n\n", " This paper provides theory on PAC-learnability of out-of-distribution (OOD) detection.\nOOD detection is classification task but test data may come from unknown classes.\nIf test data come from classes known during training, we want to classify them into those classes, but otherwise, we need to detect they belong to unknown classes.\nThe authors provide a series of theorems about conditions for OOD detection in several interesting setups.\nTheir results imply that we should not hope for finding an OOD detection algorithm that works in general cases, but we can still design algorithms for special cases. # Strengths\n- The paper provides rigorous theory on an important machine learning task.\n- The paper is excellently-written and easy to follow despite its technical content although all the proofs are in the supplemental material.\n- The scenarios that the authors consider are not too technical but highly relevant to practical OOD detection methods. Hence, it gives useful insights for practitioners as well.\n\n# Weaknesses\n- Most results are negative ones showing impossibility of OOD detection in general cases, and the paper does not provide concrete algorithms. - l.118, \"they are equivalent by Markov's inequality\": I can see that Eq. (2) implies the standard form of PAC-learnability by Markov's inequality, but I cannot see how we can confirm the converse. Could the authors provide a proof or a reference for that?\n- ll.260-261, \"Since researchers can only collect finite ID datasets as the training data in the process of algorithm design, it is worthy to study the learnability of OOD detection in the finite-ID-distribution space\": I am not sure how to relate the fact to finite-ID-distribution space. Does \"finite ID datasets\" here mean datasets of finite samples or a finite variety of datasets? If it's for the latter sense, do the authors assume $\\vert \\mathcal{X} \\vert < \\infty$ here?\n- Could the authors define the realizability assumption explicitly?\n- The conditions of Theorem 3, $\\inf_{h\\in\\mathcal{H}} R_D^{\\mathrm{in}}(h) = 0$ and $\\inf_{h\\in\\mathcal{H}} R_D^{\\mathrm{out}}(h) = 0$, look similar to the realizability assumption and the compatibility condition. However, the conditions of Theorem 3 seem to be prohibiting the learnability in Theorem 3 while the realizability and the compatibility conditions are making the learning possible in Theorem 8 and Theorem 9. Should we consider such conditions as good ones or bad ones?\n- The paper refer to distributions as \"domains.\" Is this a common way of saying it in the literature? It is a little confusing to me, and I do not see good motivation for the choice of the word. 
The theory only handles a few special combinations of distributions and hypothesis spaces although I do not consider this as a very strong limitation because they cover many common practical situations.\n", " The out-of-distribution detection problem is defined as follows: after training on an ID joint distribution $D_{X_{ I}Y_{ I}}$ with random variables from $\\mathcal{X}$ and labels in $\\mathcal{Y}$, we need to learn a classifier which can detect a test sample as OOD if the sample is drawn from outside of $D_{X_{ I}Y_{ I}}$, while predicting the correct label if the test sample is drawn from ID distribution.\n\nThis paper mainly answers the agnostic PAC learnability of out-of-distribution detection in different scenarios, which is known as an open problem in out-of-distribution learning theory. \n\nThis paper firstly defines the basic concepts of agnostic PAC learnability of OOD detection, which are natural extensions of agnostic PAC learnability of supervised learning. Then, considering the imbalance issue of OOD detection, the author proposes the prior-unknown spaces and indicates that researchers should focus on agnostic PAC learnability of OOD detection in the prior-unknown spaces.\n\nBy discovering a necessary condition (Condition 1), the author shows that the condition cannot hold in the total space and separate space. Based on this observation, the paper proves that in most general setting (total space and separate space), OOD detection is not agnostic PAC learnable. \n\nNext, the author proves the necessary and sufficient conditions to show that the separate space can be learnable if and only if the hypothesis space contains almost all classifiers, while the paper proves that in the finite-ID-distribution space, Condition 3 is the necessary and sufficient condition for the learnability of OOD detection. The paper also proves that in the realizability assumption case, OOD detection is learnable in density-based space.\n\nLastly, the author considers OOD detection in some practical hypothesis space—FCNN-based and score-based. The paper shows that in the separate space, OOD is learnable in FCNN-based spaces or score-based spaces iff the feature space is finite. In Theorem 11, the paper shows that Condition 1, condition 3 and realizability assumption and learnability are equivalent. In Theorem 12, the author also reveals that overlap will lead to the failure of OOD detection.\n\nThis paper is important to understand when and how OOD can work in real applications, as this also gives insight and guidance to OOD detection algorithm designing.\n\n Strengths:\n1.\tThe issue is definitely relevant to the NeurIPS as well as ICML, ALT and COLT. When OOD detection can be learnable is an open issue in OOD learning. Due to missing necessary information from the OOD data, the learnability of OOD detection is very difficult. Despite plenty of applied work, there is still few theory to be established for this issue. To address this issue, it requires the author to dig and discovery unknown necessary conditions from scratch. This paper does make an effort to address this problem and make great progress. \n2.\tThis paper is sound. I am interested in this topic, but the paper is long. So I spend several days to check the proofs carefully. All of the results in this paper are supported by proofs. 
From what I have checked, all proofs are correct.\n3.\tThe paper answers negatively and positively the question of agnostic PAC learnability of OOD, and introduces sufficient assumptions to recover it (such as assumption 1). These assumptions are practical and mild, and can be satisfied by many practical cases, for example, FCNNs, CNNs and kernel space. Therefore, the theory can be tightly connected with practical applications.\n4.\tPlenty applied work has been proposed to address this OOD, but theoretical works discussing when OOD detection work is lacking. The paper theoretical shows when OOD can work in practical cases. I think the contribution are significantly important and this work can give a good guidance for the development of OOD detection. This paper has the potential to achieve a long term impact to OOD learning field.\n5.\tThe paper is written well enough to understand.\n\n\nWeaknesses:\n\n1.\tThe appendix is long and the proofs are complicated. Although I have check almost all important proofs and believe they are correct, I still spend three days to check them. It is better for the author to provide proof sketch and intuitions for important theorems. \n2.\tIt seems that the description of Theorem 4 in main text is slightly different from the description of Theorem 4 in appendix. I have checked it and found that the description Theorem 4 in appendix is more rigorous. Although you have explained why they are different (because of the space limitation) in appendix G.2, I still suggest that the author should use the description of Theorem 4 in appendix to replace Theorem 4 in main text, because the description in appendix is correct. \n3.\tTypos/grammar:\n1) In line 305, $K$ should be $\\lambda$.\n2) In line 340, $D_{XY|Y}^{ in}$ should be $D_{X_{I}Y_{I}}$.\n3) In line 171, $D_{X_{I}}$ should be $D_{X_{I}Y_{I}}$?\n4. After checking your proof, I think Condition 2 can be removed from Theorems 7 and 10. Although Condition 2 is weak and meaningful, I still think it is better to remove Condition 2. The idea about how to remove Condition 2 can be motivate from the proof of Theorem 9 (the second part).\n\n See the weakness 1,2,3,4. The paper focuses on theory for OOD detection and gives the first theoretical support to understand when OOD detection can work. There is no any potential negative social impact.", " This paper explores the theoretical foundation of learnability of out-of-distribution detection. Based on the PAC learning theory, the paper proved several impossibility theorems for the learnability of OOD detection under some scenarios, and finds some conditions that OOD detection is PAC-learnable. Also, the paper demonstrate the theory in real practice using FCNN and OOD scores as examples. Recently there are loads of papers proposed empirical methods for OOD detection, but the theory is rarely explored.This paper is the first to investigate the theory of OOD detection so thoroughly, which is meaningful to this field. Strengths:\n- The paper is clear and well-written. And the proofs are generally correct.\n- This paper is one of the few theoretical works focusing on OOD detection, which plays a significant role in this field.\n- The theory is intuitive and have some practical impacts. It can somewhat guide the design of OOD detection algorithms.\n\nWeakness:\n- Some notations and expressions can be refined in Section 2. For example, $S$ or $D_{XY}^n $ in eq.2 can be explained (minor). \n- Some typos. In section 2 Definition 1. 
\"if there exist an algorithm\" -> \"if there exists an algorithm\".\n- Some experiments can be added to show the correctness of the theorems.\n- The practical impacts may not be large enough. - What is the \"+\" means in $D_X := (1-\\pi^{out}) D_{X_1} + \\pi^{out} D_{X_O}$ ? (line 82) Yes.", " Recently, reliable AI plays important role in designing an intelligent machine learning system. How to let AI system tell “do not know” is critical for reliable AI systems, which is the focus of this paper. In this paper, the authors consider a practical scenario where out-of-distribution data (the system should not know) is unseen during the training process. In this scenario, the authors want to investigate if the OOD detection is learnable. \n\nThe theoretical part is easy to follow. I find that the theoretical contributions are completed and interesting. At first, this paper shows that OOD detection is not learnable in the most general case, which does make sense due to the unavailability of OOD data. Then, this paper points out a necessary condition (sometimes as a necessary and sufficient condition) of the learnability of OOD detection, which directly induces a lot of necessary and sufficient conditions of learnability of OOD detection. In my opinion, this is a significant contribution to the field. Finding necessary and sufficient conditions is always a core and the most important part when studying a problem.\n\nFrom the practical part, several theorems are considered using networks or finite in-distribution domains, making the whole paper also fit the taste of practitioners. In many practical scenarios, we cannot expect OOD data is the ones we have already seen, which is exactly the problem this paper studies. Besides, the theorem regarding finite ID distributions is also practical. If I understand correctly, in this practical scenario, this paper gives a better result, which is very interesting to me and significant to the field (we often only have finite ID distributions in practice).\n Pros:\n\n1. This paper is the first to characterize the learnability of OOD detection, which makes a significant contribution to the field. There are many OOD detection papers targeting the problem this paper considers. The problem is very difficult yet very important in practice. Previously, no theoretical works are proposed for this problem. In this paper, a completed theory is proposed for this problem, including when OOD detection will fail and when OOD detection will succeed. A lot of necessary and sufficient conditions of learnability of OOD detection are exciting to this field.\n\n2. For practitioners, this paper relieves some big concerns regarding existing OOD detection methods. Before this work, one could intuitively think that OOD detection is not learnable (which is true in the most general case, yet our common datasets are not such general). However, this paper gives a theoretical boundary between learnability and unlearnability of OOD detection by proving some necessary and sufficient conditions. Thus, we can know, on what kind of datasets, OOD detection is learnable.\nThis contribution is significant and meaningful. \n\n3. Fig. 1 is very helpful in understanding the key necessary condition of OOD detection, which seems that it can motivate a bunch of papers in this research direction.\n\n4. I can see that there are three research topics regarding that “let AI say don’t know”: 1) classification with reject option; 2) PQ learning; and 3) OOD detection. 
The first two have already had some theories but the last one does not have. This paper fills up this gap, making OOD detection method (which might be more practical than the other two) possible in theory.\n\n5. Although the proofs of this paper are not easy to follow, the logic and organizations of proofs are clear. I have read most proofs and have not found unrecoverable errors for important results. The proofs are soundness.\n\nCons:\n\n1. I have read some papers regarding PQ learning and feel that PQ learning is totally different from OOD detection. PQ learning focuses on scenarios where OOD data are somehow available, yet OOD detection focuses on the opposite scenarios. However, it is better to demonstrate their difference deeply. Does PQ learning have limitations when meeting different OOD data in the future? I am interested to see some discussions regarding this part.\n\n2. Similar to PQ learning, classification with reject option could be deeply compared to OOD detection instead of just comparing both using plain words. I know they are very different and OOD detection theory is more difficult. But giving more detailed comparation is better for this paper.\n\n3. I have some questions regarding Figure 1, which I hope that the authors can confirm with me. In my opinion, the solid line is the ground-truth line. Do we expect that the estimated lines (dash lines) get closer to the solid line? If so, when overlap exists, why is the solid line not straight? Can you bring me to the specific part regarding this? It seems that the solid line will be straight if there are no overlaps, which makes OOD detection learnable. Is that correct?\n\n4. More explanation, like Figure 1, could be added for understanding the theorems better. Brief proofs might be also useful.\n\n5. In line 26, there are too many separate citations. In my opinion, it is not necessary.\n\n6. Line 148 should not be a new paragraph.\n\n7. The density-based space is very important and interesting. Especially, the theorem 11 is one of the spotlights. Can you give more explanations or applications regarding density-based space (theorems 9 and 11)? \n\n8. The mathematic expression in Definition 1 about PAC learnability is different with the normal expression of PAC learnability. Although line 118 has told us that they are equivalent and I also realize that they are equivalent by paper [21,30] (exercise 4.5 in [21] can prove it?) , the paper will be improved and more clear if a brief proof for the equivalent descriptions is given in the final version.\n Please answer/revise your paper according to the questions proposed in weaknesses 1,2,3,4,7,8. It is a pure theoretical paper. So I think there is no negative social impacts.\n\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 1, 5 ]
[ "MeSTGZRiBuE", "Qvxyy5SnnsZ", "4Ep1FKIoPb_", "4Ep1FKIoPb_", "-RvrWLeYR4Y", "U1IOGByRwM", "U1IOGByRwM", "U1IOGByRwM", "TWaPSS0iHc", "TWaPSS0iHc", "nips_2022_sde_7ZzGXOE", "nips_2022_sde_7ZzGXOE", "nips_2022_sde_7ZzGXOE", "nips_2022_sde_7ZzGXOE" ]
nips_2022_1qXIyIxLbEu
Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis
Although recent point cloud analysis achieves impressive progress, the paradigm of representation learning from single modality gradually meets its bottleneck. In this work, we take a step towards more discriminative 3D point cloud representation using 2D images, which inherently contain richer appearance information, e.g., texture, color, and shade. Specifically, this paper introduces a simple but effective point cloud cross-modality training (PointCMT) strategy, which utilizes view-images, i.e., rendered or projected 2D images of the 3D object, to boost point cloud classification. In practice, to effectively acquire auxiliary knowledge from view-images, we develop a teacher-student framework and formulate the cross-modal learning as a knowledge distillation problem. Through novel feature and classifier enhancement criteria, PointCMT eliminates the distribution discrepancy between different modalities and avoid potential negative transfer effectively. Note that PointCMT efficiently improves the point-only representation without any architecture modification. Sufficient experiments verify significant gains on various datasets based on several backbones, i.e., equipped with PointCMT, PointNet++ and PointMLP achieve state-of-the-art performance on two benchmarks, i.e., 94.4% and 86.7% accuracy on ModelNet40 and ScanObjectNN, respectively.
Accept
The paper focuses on the problem of distilling semantic/representation knowledge from 2D images to help further enrich 3D point cloud representation. It received four detailed reviews and a healthy interaction between authors and reviewers ensued. In that back-and-forth, the reviewers clearly stated the weaknesses/issues they saw in the paper, which the authors resolved for the most part through their additional analysis, explanation/clarification, and experiments (e.g. on ShapeNetPart). As such, some reviewers raised their initial review score. Overall, the paper targets an interesting topic in 3D representation learning and it exceeds the bar of contribution and impact expected in NeurIPS papers. The authors are expected to include their additional experiments and discussions in the final version of the paper.
train
[ "EOACATLBXgN", "KkbhAvoHMe2", "EF672tOSnLU", "0QTivIw4ImB", "UUKY-hZMfwX", "CO6vuJkFW0c", "ax5SmI3zAOG", "8pOKe8Bx9h", "sLr-9ZA7t4C", "SEdO81Y4-Wc", "5AlaQiPRJLP", "WawDVXOhXj", "EpuV85Z7bd_", "cIITVr40Kcg", "deziy14nKk3", "UwzBykJRjbM" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank for the additional experiment, after reading the author's response and other reviewers' comment, now I raised my score to 5", " The response addressed my concerns. I will keep my Accept score.", " We thank the reviewer’s beneficial suggestions. It seems the deadline of the author-reviewer discussion period. Does our response solve your questions well? Looking forwarding to any feedback.", " We appreciate reviewer's feedbacks! \n\nPointCMT is sensitive to beta (classifier enhancement), but **robust** for alpha (feature enhancement) as previous works. The reason is that, during our classifier enhancement, **the output logits gained by image features are not supervised by the ground truths in the stage III**. Since the classifier is randomly initialized during the stage III, if a very large beta is given, the network will focus on aligning the two output logits but ignoring to regress the ground truths. This makes the network hard to optimize during the initial training phase. Inversely, since the teacher's logits are also supervised by the ground truths in previous works, they have more robust performances with different beta. \n\nThough this seems to be a limitation of our method, it can be easily avoided by giving ad-hoc values in a reasonable range, such as [0.1, 0.5]. For instance, we only tune our alpha and beta on ModelNet40 dataset. Nevertheless, when we train the network on the ScanObjectNN without further tuning, it still greatly improves performances on the ScanObjectNN (3.1% and 2.9%), which illustrates that the classifier enhancement works if beta in a reasonable range.\n\nIf you have any problem, please feel free to let us know.", " I appreciate the additional experiments. I have raised my score to 5. \n\nWith regards to ShapeNetPart, please continue working on it, and put the results in revision. \n\nAlso, please note that fair comparison with other loss functions is very important, extremely because Rebuttal Table 3 shows the results being **sensitive** to alpha and beta. It is *not fair that you finetune your parameters while leaving other methods using the default parameters* for ImageNet experiments. It is **very possible** that the proposed losses are NOT necessary and the use of Cross-Modal Point Generator is NOT a must. It is also possible that a more fine-grained tuning of previous loss functions (e.g. Yang. etal) can work well in cross-modal point cloud training. \n\n\nHere are my further concerns: why your methods are sensitive to alpha and beta, while Yang's method does not? Is this the limitation of your methods?", " Dear reviewer J8fY\n\nWe appreciate reviewer's encouraging feedbacks and beneficial suggestions. Below are results of two experiments.\n\n**ShapeNetPart**\n\nTo illustrate the superiority of our PointCMT, we set three experiments as shown in **Table A**. (a) PointNet++ trained from scratch; (b) PointNet++ with pre-trained encoder on ModelNet40; (c) PointNet++ with pre-trained encoder trained by PointCMT on ModelNet40.\nAs shown in Table, utilizing pre-trained encoder trained by PointCMT effectively improve the performance, especially for the more challenging metric of *Class avg IoU*. Here, *Inctance avg IoU* and *Class avg IoU* denote the IoU averaged by all instances and each class, respectively. These results would be included in the final version.\n\n**Table A. 
Results on ShapeNetPart with metrics of instance average IoU and class average IoU.**\n| Method | Inctance avg IoU | Class avg IoU |\n|--|--|--|\n| PointNet++ (official) | 85.1 | 81.9 |\n| Pre-trained PointNet++ w/o PointCMT | 85.3 | 82.0|\n| Pre-trained PointNet++ w/ PointCMT | **85.6** (+ 0.3) | **82.6** (+ 0.6) |\n\n**Fair comparison with Yang etal [57]**\n\n**(1) What are alpha and beta?** As understanding of the reviewer, the alpha and beta are weights for losses of feature and logits alignments. If the method only has a single alignment, we see the irrelevant hyperparameters as zero, *e.g.,* in Hinton et al., the alpha should be zeros.\n\n**(2) How to set alpha and beta?** The alpha and beta in our manuscript are chosen by the best hyperparameters in their official papers, *e.g.,* the best hyperparameters of alpha=1, beta=1 in Yang etal [57] on ImageNet. We admire that tuning different alpha and beta would improve the result. We thank reviewer's suggestion and the results in final version would be tuned with different hyperparameters.\n\n**(3) Results of different alpha and beta.** As the suggestion of the reviewer, we compare results using different alpha and beta in Yang etal [57], which is illustrated in the **Table B**. As shown in the table, tuning different alpha and beta only slightly change the results, especially for the more challenging ScanObjectNN. In our final version, we will tune hyperparameters of Hilton etal and Huang etal with another alpha and beta, and update the results.\n\n**Table B. Results of Yang etal [57] with different alpha and beta.**\n| Method (Alpha, Beta) | OA (ModelNet40) | OA (ScanObjectNN) | \n|--|--|--|\n| Yang (1, 1) | 93.7 | 81.1 | \n| Yang (5, 5) | 93.9 | 81.0 | \n| Yang (5, 1) | 93.5 | 80.7 | \n| Yang (1, 5) | 93.8 | 81.1 | \n| PointCMT | **94.4** | **83.3** | ", " Dear authors, thanks for the reply. It solves my concerns about theoretical arguments. \nHowever, you did not convince me about the experiments. I expect experiments in at least ShapeNetPart and fair comparisons with previous methods. Below are the detailed concerns:\n\n\n1. Table 5 clarification. If I understand correctly, you applied all previous distillation methods on both features and logits part. If so, please add these technical details in the revision and also provide the alphas and betas for the previous methods. \n2. Fair comparisons with previous methods. In Rebuttal Table 3, alphas and betas significantly impact the performance. What are the alphas and betas for previous methods? Are they the same as PointCMT. You did not reply my major concern #5: Is that possible that you outperform other distillation techniques simply because the parameters (alphas and betas) are not good for them in Table 5. Ablation studies for alphas and betas for previous methods are encouraged. Consider the short rebuttal time, you can work on just one previous method (e.g. Yang etal [57]). \n3. Experiments in ShapeNetPart are still missed. Only classification experiments in small datasets are not enough. All experiments in the paper were conducted in very small datasets (ModelNet, ScanObjectNN), where results can be very unstable and random. Even the papers you mentioned (Point-MAE, Point-BERT, etc), all at least showed experiments in part segmentation in ShapeNetPart. I expect the authors show the benefits of the proposed method in this benchmark as well. Your methods can be directly applied there since ShapNetPart contains only objects not scenes, so *why not just show the results*? 
\n\n\nI **am happy to increase my score to accept** only if I see (i) the **results in ShapeNetPart**, and (ii) **fair comparisons with previous methods** (e.g. play with alphas and betas).\n\n", " We thank the reviewer for the encouraging comments and will further improve this work.\n\n**For Weakness**\n\n* We admire reviewer’s comment that the complementary features from real images cannot be accessed through point clouds in the test time. However, it is not in conflict with our motivation of **improving the performance of point cloud model in the training stage**. **a)** Without additional view or projection images, 3D model sometime cannot learn discriminative features through only using point cloud as input. After conducting regularization through image features, the features of 3D model will be more diversified and robust. **b)** Though additional input is not available in the test time, they inherently improve optimization of the 3D model through fully end-to-end training, encoding the prior information in the parameters of the 3D model.\n\n* In L34, we emphasized the paired-images are usually unavailable during the **test time**. For instance, we can train point cloud models with PointCMT on existed public dataset with both images and point clouds. However, when we deploy on the real-world cases, such as an indoor robot that only collects point clouds as input, gaining the additional pair-images for the robot seems to be *potentially difficult*. In this case, PointCMT give us another choice that enhance point cloud analysis performance in the training phrase. Besides, PointCMT actually acts as an effective training technique that boosts the performance of the point cloud methods, which is also beneficial to multi-modal approaches if models for two modalities can be fused. More importantly, our PointCMT inherently boosts the robustness of single- or multi- sensor systems, especially for multi-sensor damage scenarios. \n\n\n**For Question**\n\n1. EMD loss is adopted since using CD loss make CMPG converge more slowly. \n2. We **cannot** reproduce the results of PointMLP on ModelNet40, as depicted in L249-251. However, their results on ScanObjectNN are easier to be reproduced.", " We thank the reviewer’s beneficial suggestions.\n\n**For Weakness**\n\n1. **a)** DGCNN and RSCNN with PointCMT cannot beat Simpleview partly because their baselines are too weak (only 92.9%), though PointCMT have already improved some of them by about 1%. Moreover, ModelNet40 dataset is a small synthesis dataset which is easily saturated in performance (reviewer J8fY). This is why we compare our methods on other two more challenging benchmarks on ScanObjectNN dataset, on which PointCMT improves the baseline by over 4%. **b)** The reason why color cannot improve the results is that OBJ_ONLY dataset only contains 2,902 objects, and image networks are easier to overfit when using the color information. The worse performance of the teacher makes the point cloud model improved little, which has already provided in L335-337.\n2. We provide concrete training cost and analysis in **Common Reply**.\n\n**For Question**\n\nThe model trained with PointCMT also keeps the high generalization ability, and the results are shown in **Table 4**.\n\n**Table 4. 
Results of generalization test with ModelNet40 to ScanObjectNN (M40 to SONN) and inverse.**\n| Method | M40 to SONN | SONN to M40 |\n|--|--|--|\n| PointNet++ (**official**) | 47.8 | 30.1 |\n| PointNet++ with PointCMT | **49.5** (+ 1.7) | **31.1** (+ 1.0) |", " **For Weakness (Majors)**\n\n1. We respectfully **disagree** with the reviewer’s comment. \n**a)** Since the permutation invariance nature of EMD loss, **the features of two networks are not identical when the EMD loss is equal to zero**. In contrast, we compare multiple traditional knowledge distillation methods that directly make teachers’ and students’ features identical, where our method significantly performs better. Moreover, we also illustrate that simply regularization in the feature space cannot improve the point cloud methods in section B.1 in the supplementary.\n**b)** We formulate the cross-modal learning problem as a knowledge distillation problem, in which the target of most methods is to make the features or outputs of the teacher and student identical. However, due to the discrepancies of data or network architecture, the above problem is generally hard to optimize, and cannot be solved by **a simple regularization term**. This is the reason why previous knowledge distillation methods are proposed and also the motivation of our PointCMT.\n**c)** In spite of the difficulty in optimization, we theoretically provide the lower bound of probability in our formulation and the proof is shown in supplementary material as well.\n\n2. Thanks for your suggestion, and we will make the illustration of Table 5 clearer. The Table aims to compare the full model of PointCMT (Feature Enhancement and Classifier Enhancement losses) with traditional knowledge distillation techniques using the same **baseline PointNet++**, as depicted in the caption. During the implementation of traditional KD methods, we strictly follow their official paper and apply the full architectures. As shown in table, the traditional KD cannot work well in the cross-modal scenario, i.e., Hinton’s method even makes performance of baseline PointNet++ decrease on ModelNet40 dataset.\n3. The ablation results of before and after using feature enhancement (FE) though CMPG and classifier enhancement (CE) have already shown in the Table 4 in our manuscript.\n4. The same of the 3.\n5. In our experiment, we tune our hyperparameters (alpha and beta) through fix the other to zero. The results are shown in **Table 3**. \n\n**Table 3. Ablated results with different Alpha and Beta on ModelNet40 dataset with PointNet++ baseline.**\n| Alpha (Beta=0)| 1 | 3 | 10 | 30|\n|--|--|--|--|--|\n| OA (%) | 93.4 | 93.5 | 93.5 | **93.8** |\n\n| Beta (Alpha=0) | 0.1 | 0.3 | 1| 3|\n|--|--|--|--|--|\n| OA (%) | 93.7 | **94.0** | 70.1 | 60.2 |\n\n6. As discussed in L116-117, though Liu etc [30] was also proposed for cross-modal knowledge transfer, it uses the contrastive learning manner for 3D pre-training that is not relative with our formulation of knowledge distillation.\n7. We respectfully **disagree** with the reviewer’s comment of **classification is the most trivial task in point cloud understanding and the most unimportant task**. In contrast, it is the most fundamental and important field for point cloud analysis, in which numerous downstream applications are inspired from it. There are several pioneer works only designed for classification [1][2]. 
However, designed for but not limited to classification as recent works [3][4], the pre-trained model of our PointCMT (e.g., PointNet++ on ModelNet40 and ScanObjectNN) can be used in other downstream applications (e.g., part segmentation on ShapeNetPart and semantic segmentation on ScanNet). Also, we admire reviewer’s perspective that some dataset such as ModelNet40 is easily saturated in performance. Nevertheless, we compare our methods on other two challenging benchmarks on ScanObjectNN dataset, where we improve our baseline PointNet++ from very low accuracy of 79% to 83% with a large improvement of 4%. \n\n[1] Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline, ICML2021\n\n[2] PointCLIP: Point Cloud Understanding by CLIP, CVPR2022\n\n[3] Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling, CVPR2022\n\n[4] Masked Autoencoders for Point Cloud Self-supervised Learning, ECCV2022\n\n**For Weakness (Minors)**\n\nWe thank the reviewer’s beneficial suggestions. We will make our final version clearer according to the advice. Belows are something we want to emphasize.\n1. We provide concrete training cost and analysis in **Common Reply**.\n2. All baseline and baseline with PointCMT **use same training strategies**, which is illustrated in L243-245, eliminating any tricky improvement such as augmentation.\n3. The improvement of PointNet++ is not more than others (e.g., 1.0 for PointNet++ and 0.9 for RS-CNN) on ModelNet40. For ScanObjectNN dataset, the less improvement of PointMLP is because it contains 10x parameters and already achieves SoTA performance.", " We thank the reviewer’s careful consideration and beneficial suggestions.\n\n**For Weakness**\n\n* We provide concrete training cost and analysis in **Common Reply**. Moreover, a potential way to reduce the cost of multi-stage training is to jointly train both image and point cloud analysis models in a fully end-to-end manner, which would be investigated in the future work.\n* The measurement of **mean +- std** on ScanObjectNN dataset is provided in **Table 2**, where the current SoTA PointMLP with PointCMT achieves the highest mean with lower std scores compared with their official results. These results would be included in the final version.\n* The results of the teacher network on two datasets have already been shown in **Table 3 (B.3 section) of Supplementary**. As shown in the table, the teacher network does not always perform better than point cloud methods. For instance, on ScanObjetNN dataset, when the teacher only takes projections as inputs, it merely achieves lower accuracy of 80.8% compared with SoTA point cloud methods PointMLP (85.7%). Nevertheless, it still adds an effective regularization upon the point cloud method and brings noticeable improvement (+1.0%). Therefore, PointCMT provides an alternative solution to enhance the point cloud shape analysis when the additional rendered images are not accessible.\n\n**Table 2. 
The results of PointCMT on ModelNet40 and ScanObjectNN with mean +- std measures.**\n| DataSet | Method | mAcc | OA |\n|--|--|--|--|\n| ModelNet40 | DGCNN w/ Point CMT | 90.5+-0.3 | 93.4+-0.1 |\n| ModelNet40 | RS-CNN w/ Point CMT | 89.9+-0.2 | 93.6+-0.2 |\n| ModelNet40 | PointNet++ w/ Point CMT | 91.0+-0.2 | 94.2+-0.2 |\n| ScanObjectNN | PointMLP (**official**) | 83.9+-0.5 | 85.4+-0.3 |\n| ScanObjectNN | PointNet++ w/ Point CMT | 79.9+-0.3 | 83.1+-0.2 |\n| ScanObjectNN | PointMLP w/ Point CMT |84.4+-0.4 | 86.4+-0.3|\n", " We sincerely thank the reviewers’ feedbacks. We will further polish our final version. Below is the response for the common questions.\n\n**Table 1. We demonstrate the training cost of each stage with the form of time for per sample (ms) and total epochs (h).**\n| Stage I (Image) | Stage II (CMPG) | Stage III (PointNet++) |\n|--|--|--|\n| 27.35ms /4.36h | 2.3ms / 0.46h | 5.32ms / 18.19h |\n\n\nAs shown in the **Table1**, the additional training stage of I (image encoder and image classifier) and II (CMPG) actually introduce little extra cost in the entire training phrase since the small epoch numbers for stage I and few parameters of CMPG. Moreover, once the stage I has been trained, we fixed pre-trained image network to generate objects’ features offline, which can be directly exploited in the Stage II and III without repeatedly forwarding the image network.\n\n**Novelty**\n\nWe want to re-emphasize our PointCMT is the first to formulate cross-modal learning on point cloud analysis as a knowledge distillation problem, and we theoretically show and prove it lower bound. Through a simple but effective design, PointCMT greatly boosts several baseline models on three datasets **without extra structural modification, computation burden and data**. ", " The paper proposes a knowledge distillation strategy to improve point cloud classification. It uses images of point clouds to train an image classification network. The representation from this network can be distilled into any existing point cloud network. The distillation requires multi-stage training including -- training the image classifier, training point generator from images and training the point-cloud network assisted by the image classifier. Experiments show the effectiveness of this scheme. Strengths:\n\n- The paper achieves impressive performance on both the ModelNet40 and ScanObjectNN datasets.\n\n- Gains due to knowledge distillation seem to consistent across various networks (Table 1, Table 2)\n\nWeakness:\n\n- The method archives performance improvements; however, the training pipeline seems to become more complicated with many training stages. This could be a potential limitation. It would be useful if the paper discusses this and potential ways to mitigate it.\n\n- It would be nice if the paper could provide some mean +- std measures. This could be done by running the same experiments multiple times (with random initialization) and reporting the mean and variance. This is particularly important as point-based benchmark methods can have significant variations across runs.\n\n- What is performance of the teacher networks on the two datasets? Also, is the teacher network potentially better because of the additional image to point cloud task? I asking this because as shown in the paper 3D projections could be used and these 3D projections can be created from the point cloud, which is available at test time. Refer to the weakness section for questions. Overall, I am (weakly) positive about the paper. 
I will update the score based on the rebuttal. NA", " This paper shows a 2D-to-3D distillation framework PointCMT that improves 3D classification through cross-modal training. Despite the performance improvement in two classification datasets, the reviewer has concerns about the effectiveness of PointCMT: (1) the theoretical argument of this approach; (2) some important ablation studies are missed; (3) more serious applications like segmentation should be considered. PointCMT shows cross-modal training that improves performance. Check questions for the weakness. \n #### Majors:\n\n1. PointCMT might not be theoretically sound. Feature loss (Eqn. 5) will be zero if global features of image and point cloud are equal, since the same CMPT is used for both inputs. Classifier loss (Eqn. 7) will be zero if global features of image and point cloud are equal, since the same classifier (Cls^{pts}) is used. Therefore, the whole regularization term in PointCMT is just to make two global features identical, which means the PointCMT can be replaced by a simple regularization term that makes two features closer to each other.\n2. Table 5 needs more clarification. It is not clear which component is ablated. Take row 1 (baseline) and row 2 (Hinton) for example, is it only the logits distillation part be ablated? In other words, are you using eqn. 6 as a replacement of eqn. 7. Or, is the feature distillation ablated such that you are studying the different possibilities of Eqn. 5? Or, both?\n3. An ablation study on with/without CMPG should be added if Table 5 is not for it.\n4. An ablation study on using Eqn. 6 instead of Eqn. 7 should be added if Table 5 is not for it.\n5. An ablation study on values of tradeoff parameters (alpha and beta) in L217 Eqn. should be added. 30 and 0.3 are not common values. How much variance can the parameters cause to the performance? Is that possible that you outperform other distillation techniques simply because the parameters are not good for them in Table 5.\n6. Overclaim. There is already a work in 2d-to-3d knowledge distillation. [1]\n7. Application is limited to classification only. Classification is the most trivial task in point cloud understanding and the most unimportant task. The classification benchmark datasets in this work are rather small-scale and networks are easily saturated in performance. More serious applications such as segmentation should be considered.\n\n[1] Yueh-Cheng Liu, etc. Learning from 2D: Contrastive Pixel-to-Point Knowledge Transfer for 3D Pretraining\n\n#### minors:\n\n1. Revise CMPG description. For L181, you should revise to something like: the cross-modal point generator (CMPG) is used to map the global feature representation acquired from images/points into the Euclidean space. Note that CMPG will be used to transform both images and point clouds after pretraining. Since it is the first time you define CMPG, the function of CMPG should be made clear.\n2. Better illustration for Fig. 2. Fig.2 (a) dash lines from global feature to feature Enhancement can be illustrated in other colors, \\eg blue, since it requires the CMPG module and thus should be highlighted differently for easier understanding. In (b), image features should be changed to global features from images or points.\n3. Highlight training cost instead of inference speed. It is weird to highlight the inference speed in L257, since PointCMT only influences training speed and has no effect on inference speed. 
The inference speed is totally dependent on the point cloud network used in PointCMT. The author should highlight training costs instead.\n4. The authors should explicitly highlight that for all experiments, they trained baseline with and without PointCMT using the same optimization techniques, data augmentation, and evaluation techniques. For example, SimpleView found these tricks matter a lot in performance.\n5. Any clue why PointCMT improves PointNet++ more than other networks? The authors did not include limitations and potential negative societal impacts. Limitations, e.g. only for classification, can be added. ", " This paper proposes a cross-modal training scheme called PointCMT which utilizes both 3D point cloud and synthetically rendered or view projected 2D image to boost point cloud classification performance. The cross-modal training is formulated as a knowledge distillation problem and can be easily combined with existing 3D point cloud based algorithms such as PointNet++ and PointMLP. Experiment shows that PointNet++ with PointCMT can achieve 1.0% and 4.4% accuracy improvements on ModelNet40 and ScanObjectNN benchmark dataset, respectively. Strength\n1. The idea of using view projected 2D image for 3D point cloud classification has been presented in SimpleView[14], but the scheme of cross-modal training using both 3D point cloud and view projected 2D image is novel and interesting.\n2. A cross-modal point generator is proposed to solve the cross-modal knowledge distillation problem when the feature distribution of 3D point cloud and 2D image is different and complementary\n3. The analysis and proof on the discrepancy between the discriminative image and point cloud features is good.\n\nWeakness:\n1. There are some concerns on the experiment: \n a) In table 1, it seems that even equipped with PointCMT, The PointNet++/RS-CNN/DGCNN seems no better than SimpleView [14] which achieves 93.9% overall accuracy with speed of 2208 samples/second, does this mean that cross-modal information fusion (3D point cloud + 2D view image) may not help on the ModelNet40 dataset and only 2D view image is enough? \n b) In table 6, the performance of \"with projection\" achieves 91.8% accuracy on OBJ_ONLY, which is about +4.3% improvement according to Table 2, but the performance of \"with Project with color\" only achieves 90.7% accuracy, dropped by 1.1%, does this mean the use of point color information hurts performance of cross-modal training? \n2. Since the cross-model training may increase the training time, it would be good to include both training cost and performance gain in the comparison experiment\n 1. How about the generalization capability of cross-modal training? for example, the performance of training on ModelNet40 and test on ScanObjectNN or vice versa, as similar experiment has been done in [14] There is no potential negative societal impact found", " The authors present a method to enhance point cloud classification models by exploiting training on images. Features extracted from images may be able to capture complementary details to what is typically learned by training on point clouds alone. A distillation procedure is presented so that such complementary features can be integrated in a model that only processes a point cloud at test time. Overall, the method is sound and generally well explained. 
A few mistakes in the use of English are present and could be fixed (for example page 2 line 55 \"Effectively\" --> \"Effectiveness\").\n\nStrenghts:\n- sensible approach to improve feature learning by leveraging complementary features that are more easily extracted from images\n- the distillation approach allows to still work in a single-modal setting at test time\n- good results showing a benefit from the proposed technique on state of the art architectures\n- good ablations to validate each proposed component (CPMG/Feature enhancement)\n\nWeaknesses:\n- distillation allows to train the point cloud encoder to extract those features that are easily extracted from images but generally escape classifiers trained on point clouds, despite those features being present in the input point cloud. However, because at test time only the point cloud is used, it is not possible to exploit truly complementary features that an image might carry and that are not present in the point cloud data. This would only be possible in a truly multi-modal setting, also at test time\n- the training of CPMG requires paired data with the image and point cloud of the same object. This is somewhat constrasting with the motivation of the work in page 1 line 34 highlighting the potential difficulty in having paired multimodal data - why have chosen the EMD instead of the Chamfer distance as a metric for your loss function? \n- it seems you reference the published results for PointMLP. Have you been able to reproduce them? Based on experience and online discussions (https://github.com/ma-xu/pointMLP-pytorch/issues/1), it seems they are quite tricky to reproduce I would like to see an extended discussion about the single-modal setting, in which only point cloud data are used at testing time (although multiple modalities may be available for training) considered in this paper against a fully multi-modal setting." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "EF672tOSnLU", "8pOKe8Bx9h", "deziy14nKk3", "UUKY-hZMfwX", "CO6vuJkFW0c", "ax5SmI3zAOG", "SEdO81Y4-Wc", "UwzBykJRjbM", "deziy14nKk3", "cIITVr40Kcg", "EpuV85Z7bd_", "nips_2022_1qXIyIxLbEu", "nips_2022_1qXIyIxLbEu", "nips_2022_1qXIyIxLbEu", "nips_2022_1qXIyIxLbEu", "nips_2022_1qXIyIxLbEu" ]
nips_2022__efamP7PSjg
Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs
3D-related inductive biases like translational invariance and rotational equivariance are indispensable to graph neural networks operating on 3D atomistic graphs such as molecules. Inspired by the success of Transformers in various domains, we study how to incorporate these inductive biases into Transformers. In this paper, we present Equiformer, a graph neural network leveraging the strength of Transformer architectures and incorporating SE(3)/E(3)-equivariant features based on irreducible representations (irreps). Irreps features encode equivariant information in channel dimensions without complicating graph structures. The simplicity enables us to directly incorporate them by replacing original operations with equivariant counterparts. Moreover, to better adapt Transformers to 3D graphs, we propose a novel equivariant graph attention, which considers both content and geometric information such as relative position contained in irreps features. To improve expressivity of the attention, we replace dot product attention with multi-layer perceptron attention and include non-linear message passing. We benchmark Equiformer on two quantum properties prediction datasets, QM9 and OC20. For QM9, among models trained with the same data partition, Equiformer achieves best results on 11 out of 12 regression tasks. For OC20, under the same setting of training with IS2RE data only, Equiformer improves upon state-of-the-art models.
Reject
This paper proposes Equiformer networks for predicting quantum properties based on 3D atomistic graphs. At the outset of the discussion period, the paper's scores were decidedly below borderline and the reviewers were concerned (i) that the methodological contribution of the paper was thin and (ii) about weaknesses in the experiments. Over the course of the discussion period, the authors engaged vigorously, providing additional experiments and moving several reviewers to increase their scores. However, at the resolution of the discussion period, despite the increases in score, the paper remains overall below borderline. In general, the reviewers were more convinced by the experiments but still had misgivings that the technical contribution was thin and were even unsure about what precisely the technical contribution was in light of the massive related literature.
train
[ "IpsdIJIgKn", "r2qN0QJDDM4", "Wf1q-dsBvW", "hrA3aYa3xHM", "sPYmkzBxBfQU", "fKdUisDTftr", "llZSxZZ8oeP", "QWf8Wg0GVaW", "AK4w1mboT9s", "MVFUFaQyjWA", "E0yy4S4UvvV", "KW1QLwSknzT", "q0uOPEKQwdN", "PQXwcgO3jN7", "RGrHCQCO5i-", "u9wO3OXD1cM", "WH-_ZGZth1-", "zg-Vvns4vVA", "4A-SQhT14Aa", "1hPoA85H4C", "fiaIYC_qd_b", "5ajWccxDjL", "Zzah3p80u0x", "DiVlrae79l", "om_ojFrDpUD" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the response.\nSorry for having the tricks.", " We thank the reviewer for the response and for acknowledging the thorough experimental evaluation.\nWe will improve the presentation of this paper and highlight the differences between existing works and this work.", " \nI thank the authors for their responses, and I look forward to the discussion with the other reviewers. \n\nP.S. I understand the pressure to try to achieve an higher score, but I am not a huge fan of tricks like repeating and highlighting in bold positive feedback. ", " Thank the authors for providing the detailed response. With these clarifications and additional experiments, I am now open to increase the score, barring any other concern in the reviewer-meta reviewer discussion period. As for now, I am mainly impressed by the exhaustive and thorough experimental evaluations. I still recommend the authors improve the presentation of the paper, by considering highlighting the differences between your approach and the existing works.", " We thank the reviewer for the response and update.\n\nFor the question, here is the detailed answer.\nWe transform scalar features $f_{ij}^{(0)}$ in Figure 1 into attention weights with typical Leaky ReLU, one linear layer and softmax. \nThe scalar features $f_{ij}^{(0)}$ is, however, obtained by taking tensor products of irreps features ($x_{ij}$ and $SH(\\vec{r_{ij}})$) and transforming with one typical linear layer as shown in Equation 3 in revision.", " Greatly thank the authors for the detailed responses to my individual concerns and those general ones. The revised version is clearer. I am willing to raise the score to 5. \n\nHowever, I confirm that the technical contribution is still weak. I totally understand that MLP attention + non-linear message is the most desirable combination as supported by this paper. Extending current methods to the form proposed in this paper is not that surprising (the authors are unnecessarily suggested to further provide further explanations on this point). Hence, I can only suggest a borderline paper (tending to marginal acceptance) given the respectable experimental evaluations (QM-9, OC-20, MD-17) conducted by the authors. \n\nOne more minor question: Is the attention only computed based the scalar features other than Irreps features? \n", " We thank reviewers for valuable comments on the presentation of the work and have updated our paper.\n\nThe differences are in blue and are summarized below:\n1. We move the results in appendix (Table 7 and Table 8 in appendix) to the main text (Table 4 and Table 5 in revision).\n2. We add more details (the content is the same as mentioned in our response) on related works and move the section of related works right after introduction. Due to the limited space, parts of related works are still in appendix, but we will move them to the main text if given more space.\n3. We add a section of limitations in appendix (Section F).\n4. We simplify the section of background and remove the parts of Graph Neural Networks and E(3)-Equivariant Neural Networks.\n5. We move Figure 2 to the main text as suggested by reviewer qLqa and make some parts more clear based on reviewers' comments.\n6. We add a subsection of discussion on how some components affect computational complexity (Section C.4).\n7. 
We move appendix from supplementary material to the main text (revision).\n\n\nBesides, we believe we have addressed reviewers' comments.\nPlease let us know if you have other questions or comments.\n\n", " We update our MD17 results by tuning the ratio of energy loss to force loss for each molecule and summarize the results below.\nWe note that __Equiformer achieves better results for all molecules__.\n\n| Molecule | | | PaiNN | NequIP | TorchMD-Net | Equiformer |\n|:---------------:|:------:|---|:-----:|:------:|:-----------:|:----------:|\n| Aspirin | energy | | 0.167 | | 0.123 | **0.122** |\n| | force | | 0.338 | 0.348 | 0.253 | **0.167** |\n| Benzene | energy | | | | 0.058 | **0.051** |\n| | force | | | 0.187 | 0.196 | **0.151** |\n| Ethanol | energy | | 0.064 | | 0.052 | **0.051** |\n| | force | | 0.224 | 0.208 | 0.109 | **0.071** |\n| Malondialdehyde | energy | | 0.091 | | 0.077 | **0.075** |\n| | force | | 0.319 | 0.337 | 0.169 | **0.133** |\n| Naphthalene | energy | | 0.116 | | **0.085** | **0.085** |\n| | force | | 0.077 | 0.097 | 0.061 | **0.048** |\n| Salicylic Acid | energy | | 0.116 | | 0.093 | **0.092** |\n| | force | | 0.195 | 0.238 | 0.129 | **0.122** |\n| Toluene | energy | | 0.095 | | **0.074** | **0.074** |\n| | force | | 0.094 | 0.101 | 0.067 | **0.056** |\n| Uracil | energy | | 0.106 | | **0.095** | 0.096 |\n| | force | | 0.139 | 0.173 | 0.095 | **0.086** |", " \n> 5. [Weakness 3 and Question Q3] Comparison of running time and numbers of parameters with the most competitive baselines __TorchMD-Net__ on __QM9__ and __MD17__.\n\nWe directly use the code from TorchMD-Net to estimate the training time and summarize the results below.\n| | QM9 | | MD17 | |\n|-------------|----------------------|----------------------------|----------------------|----------------------------|\n| | Number of parameters | Training time | Number of parameters | Training time |\n| TorchMD-Net | 6.9M | 92 GPU-hours (3000 epochs) | 1.3M | 10 GPU-hours (3000 epochs) |\n| Equiformer | 3.53M | 60 GPU-hours (300 epochs) | 3.53M | 23 GPU-hours (1500 epochs) |\n\nEquiformer takes more training time per epoch, and this is because:\n1. Equiformer uses more expressive non-linear messages, which compared to linear messages used in other equivariant Transformers, doubles the number of tensor products and therefore almost doubles the training time.\n2. Equiformer incorporates tensors of higher degrees L (e.g., L=2), which improves performance but slows down the training.\n\nHowever, we note that:\n1. On QM9, Equiformer achieves 6 better, 1 equal and 5 worse results and takes 35% less training time.\n2. On MD17, Equiformer achieves overall better results for all molecules.\n\nAdditionally, we would like to emphasize:\n1. The proposed MLP attention is faster than dot product attention in this context (Line 325 - Line 326).\n\n2. __Equiformer achieves better performance but slower training time. This does not simply imply other works can spend more computation and get better performance.__ For example, if the performance gain lies in using higher degrees L (e.g., L = 2), TorchMD-Net cannot improve performance as it can use L up to 1. Moreover, non-linear messages slow down training, but linear messages cannot be as expressive as non-linear ones regardless of how much computation is involved or how many channels (larger networks) are used. Non-linear MLP attention and linear dot product attention are of the same case.\n\n3. 
__For some tasks, using larger networks (e.g., greater depths) does not translate to better performance.__ As mentioned by Noisy Nodes [1], for OC20 IS2RE when IS2RS is not adopted, using more layers results in the same error. We confirm with this by training Equiformer with 6 and 8 Transformer blocks and observe no difference between the two.\n\n4. __We do not tune the training efficiency for individual cases but instead focus on generally better architectures.__ For example, as shown in Table 5, using non-linear message does not improve the performance of $C_\\nu$. Thus, for this case, we can use a weaker model and save training time.\n\n\nReference:\n\n[1] Godwin et al. Simple GNN Regularisation for 3D Molecular Property Prediction & Beyond. ICLR 2022.\n\n\n> 6. Limitations.\n\nWe thank the reviewer for pointing out this and for mentioning the computational complexity. Please see our general response for details.\n", " > 3. [Weakness 3 and Question Q3] Discussion about modeling complexity and computational overhead. \n\n\nPlease see our general response for training time (computational overhead). \n\nWe also discussed why MLP attention is faster than dot product attention in our case (Line 325 - Line 326 in the main text). For the modeling complexity and computational overhead, we will add discussion on how and why MLP attention and non-linear message passing affect training time.\n\n> 4. [Weakness 3 and Question Q3] Comparison of running time and numbers of parameters with the most competitive baselines __SEGNN__ on __QM9__ and __OC20__.\n\nWe report the training time and numbers of parameters and summarize the results below.\n| | QM9 | | OC20 | |\n|-------------------------------------------------|----------------------|---------------|----------------------|---------------|\n| | Number of parameters | Training time | Number of parameters | Training time |\n| SEGNN | 1.03M | 81 GPU-hours | 4.21M | 79 GPU-hours |\n| Equiformer (MLP attention + non-linear message) | 3.53M | 55 GPU-hours | 9.12M | 87 GPU-hours |\n| Equiformer (MLP attention + linear message) | 3.01M | 33 GPU-hours | 7.84M | 61 GPU-hours |\n\nSEGNN is written with the same e3nn library and the comparison is fair.\n__Equiformer with MLP attention and non-linear message is faster than SEGNN on QM9 and has similar training time on OC20.__ \n\nAlthough we use more channels and more parameters, the training time is comparable. The reasons are:\n\n1. We use more efficient depth-wise tensor products (DTP), where one output channel depends on only one input channel. SEGNN uses more compute-intensive fully connected tensor products (FCTP), where one output channel depends on all input channels. \n\n2. SEGNN uses 4 FCTPs in each message passing block while Equiformer uses only 2 DTP in each block. \n\nMoreover, we note that Equiformer with MLP attention and linear messages also improves upon SEGNN (compare index 2 in Table 5 with SEGNN in Table 1 and index 2 in Table 6 with SEGNN in Table 2) and is more compute-efficient. Therefore, using only MLP attention in Equiformer improves both performance and training efficiency of using non-linear messages in SEGNN on QM9 and OC20.\n", " > 2. [Weakness 2 and Question Q2] Not that significant enhancement on QM9 and comparison to PaiNN and TorchMD-Net.\n\n1. 
As mentioned in our work (Line 266 - Line 267), we mainly compare models trained with the same data split (e.g., the number of training/validation/testing examples and the index of examples).\nPaiNN and TorchMD-Net use different data splits and both use 110k examples for training while our work uses 100k training examples. Therefore, directly comparing the results in Table 1 can be unfair due to different training sizes.\n\n2. We additionally train Equiformer with the same data split as TorchMD-Net and summarize the results as below:\n| | PaiNN | TorchMD-Net | Equiformer |\n|-----------------|:---------:|:-----------:|:----------:|\n| $\\mu$ | 0.012 | **0.011** | **0.011** |\n| $\\alpha$ | **0.045** | 0.059 | 0.046 |\n| $\\epsilon_{HOMO}$ | 27.6 | 20.3 | **15** |\n| $\\epsilon_{LUMO}$ | 20.4 | 17.5 | **14** |\n| $\\delta \\epsilon$ | 45.7 | 36.1 | **30** |\n| $R^2$ | 0.066 | **0.033** | 0.251 |\n| $ZPVE$ | 1.28 | 1.84 | **1.26** |\n| $U_0$ | **5.85** | 6.15 | 10 |\n| $U$ | **5.83** | 6.38 | 11 |\n| $H$ | **5.98** | 6.16 | 10 |\n| $G$ | **7.35** | 7.62 | 11 |\n| $C_\\nu$ | 0.024 | 0.026 | **0.023** |\n\n3. We note that __Equiformer achieves the best results on 6 out of 12 regression tasks__ and that for the task of $R^2$, PaiNN and TorchMD-Net use specialized architecture that takes into account the prior knowledge of the task and the comparison of this task is less fair. For the task of $U_0$, $U$, $H$ and $G$, we surmise that the training epochs are not enough for Equiformer to converge (the number of epochs is 300, which is 10X less than that of TorchMD-Net). Further tuning hyper-parameters can achieve better results, and we will investigate this. Compared to PaiNN, Equiformer achieves better results on 6 out of 12 regression tasks. Compared to TorchMD-Net, Equiformer achieves better results on 6 tasks, equal results on 1 task and worse results on 5 tasks. Overall, __Equiformer is still competitive to PaiNN and TorchMD-Net on QM9.__ \n\n4. Moreover, the improvement of the proposed attention depends on datasets. For QM9, we already showed that the improvement of replacing the typical dot production attention used in previous works of equivariant Transformers with theoretically stronger MLP attention is not very significant and discussed this in the main text (Table 5 and Line 321 - Line 324). For OC20, however, MLP attention clearly improves dot product attention (Table 6). This would suggest that on OC20, Equiformer with MLP attention could improve upon TorchMD-Net with dot product attention.\n\n5. Besides, we also report our results on MD17 and compare Equiformer with PaiNN [5] and TorchMD-Net [3]. __The comparison on MD17 dataset shows that Equiformer clearly improves upon both PaiNN and TorchMD-Net.__ Please see results in the general response.\n\n6. We additionally __compare Equiformer with PaiNN on OC20 IS2RE testing set__ and use the results of PaiNN reported by OC20 team. The results of energy MAE (eV) are summarized below, and __the improvement of Equiformer becomes significant when the dataset contains more atoms and more diverse atom types__.\n\n| Method | ID | OOD Ads | OOD Cat | OOD Both | Average |\n|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|\n| PaiNN | 0.575 | 0.783 | 0.604 | 0.743 | 0.676 |\n| Equiformer | **0.5037** | **0.6881** | **0.5213** | **0.6301** | **0.5858** |\n\n\n__In summary, Equiformer achieves overall better results on QM9, MD17 and OC20__.\n\n1. 
Comparison to PaiNN:\n* (a) QM9: comparable performance with 6 better and 6 worse results.\n* (b) MD17: Equiformer achieves better results for all 7 molecules.\n* (c) OC20: Equiformer achieves better results for all sub-splits.\n\n\n2. Comparison to TorchMD-Net:\n* (a) QM9: comparable performance with 6 better, 1 equal and 5 worse results.\n* (b) MD17: (For the first 4 molecules, Equiformer achieves better results. For the last 4 molecules, Equiformer achieves slightly higher energy errors but significantly lower force errors.) After tuning the ratio of force loss and energy loss, Equiformer achieves better results for all molecules.\n* (c) OC20: Both MLP attention and non-linear messages in Equiformer improve upon dot product attention and linear messages, which is used by TorchMD-Net (Table 6).\n\n3. Additional comparison to SEGNN:\n* (a) QM9: Equiformer achieves better results for all 12 tasks (Table 1).\n* (b) OC20: Equiformer achieves better results for all sub-splits (Table 2 and Table 3). Note that SEGNN achieves the second best results on OC20 and better results than PaiNN.\n\n\n", " We thank the reviewer 5yNV for the efforts and for acknowledging the __clear presentation of background and the proposed architecture and nice ablation studies to investigate the proposed attention__. We address the comments below.\n\n> 1. [Weakness 1 and Question Q1] Details on comparisons to equivariant networks based on irreducible representations (irreps) and equivariant Transformers. \n\nPlease see our general response for detailed comparisons to previous works.\n\nThe contributions are:\n\n1. We find the combination of MLP attention and non-linear message passing improves upon the original dot product attention (Line 12 - Line 14).\n\n2. We propose the equivariant architecture of MLP attention and non-linear messages (equivariant graph attention). Since features in equivariant networks contain not only scalars but also geometric tensors, the equivariant graph attention requires non-trivial and careful modifications in order to be general and capable of supporting tensors of any degree L.\n\n3. We show that this equivariant graph attention works well for QM9 and OC20. Particularly, when trained on OC20 IS2RE and IS2RS, the proposed network can improve upon competitive works from the industry. Equiformer achieves lower testing errors and takes 2.33X less training time compared to GNS + Noisy Nodes (ICLR 2022) and 15.5X less training time compared to Graphormer (champion of OC20 challenge in 2021) (Line 392 - Line 395, Table 7 and Table 8 and Line 626 - Line 629 in the appendix).", " > 5. Section 3.1 (line 154-184) is less informative than Figure 2, while occupying more space and harder to understand. Figure 2 is more important than the verbose background introduction in Section 2 (Section 2.3 is good, while other parts could be more compact or omitted). \n\nThanks for the valuable comments on the structure of the presentation.\n\nWe would like to clarify that Section 3.1 provides all mathematical or implementation details for all operations in Equiformer.\n\nHowever, we agree with the reviewer that Figure 2 is informative and will add this to the main text and shorten Section 2.\n\n> 6. Given line 174 has already said $C_L$ is the number of channels for type-$L$ vectors, line 174-175 says “input $x$ containing $C_0 + \\sum_{L=1}^{L_{max}} C_L$ type-0 vectors” is confusing. It might be better to replace this paragraph with Figure 2(c).\n\nThanks for pointing out this. 
\n\nWe would like to clarify that in Line 174, “$x$ containing $C_L$ type-L vectors with $0 < L <= L_{max}$” does not consider $L = 0$ and thus not type-$0$ vectors and should not be confusing as we already mention $L$ here is greater than $0$ and less than or equal to $L_{max}$. We will make this statement more clear and link the paragraph to Figure 2(c).\n\n\n> 7. All interactions between different type vectors and different elements within vector depend on DTP with SH(r_{ij}), while no experiment shows Self (Depth-wise) Tensor Product or Cross (Depth-wise) Tensor Product cannot improve the performance.\n\nThanks for suggesting a potentially new operation.\n\nPlease note that in our work, __we do not mention either self depth-wise tensor products or cross depth-wise tensor products.__ \nIf we understand correctly, the self depth-wise tensor product corresponds to taking tensor products of features at the same node.\nIf this is what self tensor product means, we do not incorporate this type of operations in our network, and therefore we cannot conduct experiments.\nHowever, incorporating this operation can be an interesting future work.\n\n\n> 8. Comparison with Dot Product Attention could be more complete, e.g., key can be obtained from $x_j$ with a linear layer and attention bias can be obtained from the scalar part of $f_{ij}$ with a MLP before Reshape (splitting heads) or learnable radial function(mentioned in line 235-239) encoding $|| r_{ij} ||$, etc.\n\nIt is our understanding that __we already consider similar cases in our comparison__.\n\nFor dot product attention, the key is generated by a linear layer, a depth-wise tensor product and one final linear layer. If we omit the tensor product, there will be less information exchanged across different degrees, and this can potentially lead to worse performance and make the comparison less fair.\n\nFor “attention bias obtained from the scalar part of $f_{ij}$ with MLP before Reshape”, this is exactly what we do to obtain MLP attention weights. The only difference is that here the reviewer suggests using the scalar part of both $f_{ij} ^ {(L)}$ and $f_{ij} ^ {(0)}$ in Figure 1 instead of only $f_{ij} ^ {(0)}$. However, note that $f_{ij} ^ {(L)}$ and $f_{ij} ^ {(0)}$ are obtained by applying linear layers (not tensor products) to the same feature and thus they contain similar information intuitively. Therefore, the comment suggests combining the proposed MLP attention with dot product attention and the original comparison is still fair. However, the combination can be an interesting future direction. \n\nFor learnable radial functions, their effect is already included in the depth-wise tensor products (DTP) as the weights of DTP are parametrized by radial functions. \n\n__We believe that our comparison is already complete and fair as we only change MLP operation to dot product operation and leave other components the same.__\n\n\n> 9. The authors didn’t describe the limitations of their work.\n\nPlease see our general response for clarification and additional limitations.\n", " > 2. No experiment or explanation shows the effectiveness/efficiency of DTP.\n\nThanks a lot for pointing out this. 
\n\nHere is the explanation and motivation of using depth-wise tensor product (DTP).\n\nIf we use a fully connected tensor product (FCTP), where each output channel depends on all input channels, the number of weights will be proportional to $C_{in} \\times C_{out}$, with $C_{in}$ being the number of input channels and $C_{out}$ the number of output channels. Note that in our network, the weights of tensor products are generated by scalar functions e.g., radial functions (Line 198 - Line 201) and the memory complexity is proportional to the number of weights.\nIf we use FCTP, the memory complexity is $C_{in} \\times C_{out}$. If we use DTP instead, the complexity is only $C_{in}$. Thus, using DTP can save memory significantly by $C_{out}$ times and in our experiments, using FCTP instead of DTP can result in out of memory error.\nThus, in our network, we can only choose to use DTP. \n\n\n> 3. Equiformer is only applied to scalar prediction tasks (except auxiliary task), which only need SE(3) invariance or trivial equivariance (i.e. identity representation). The experiments cannot show the significance of the equivariant nature of Equiformer.\n\nPlease see our general response for results on MD17.\n\nBesides, Nequip [1] and works by Rackers et al. [2] and Frey et al. [3] have shown that even in cases where the task is to predict invariants, including equivariant features leads to more accurate and generalizable models.\n\nReference:\n\n[1] Batzner et al. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature Communications 2022. \n\n[2] Rackers et al. Cracking the quantum scaling limit with machine learned electron densities. https://arxiv.org/pdf/2201.03726.pdf\n\n[3] Frey et al. Neural scaling of deep chemical models. https://chemrxiv.org/engage/api-gateway/chemrxiv/assets/orp/resource/item/627bddd544bdd532395fb4b5/original/neural-scaling-of-deep-chemical-models.pdf\n\n> 4. Equiformer architecture is very complicated, while there isn’t a clear motivation or intuition or explanation or thorough experimentation to support it. If the embedding module and the parameterization of DTP weights are considered, Equiformer could be more complicated.\n\nThank you for pointing out this.\n\nWe would like to clarify that **all the components have their motivation, intuition or reason as below**.\n1. Equiformer just follows the typical pre-norm architecture of Transformers. For example, they place layer normalization before attention or feed-forward networks and have skip connections. The only difference is that we use equivariant versions of operations as features can include scalars, vectors and other geometric tensors.\n\n2. Each component in equivariant graph attention (Figure 1) has its intuition as follows.\n\n* (a) First, we combine features $x_i$ and $x_j$ in source and target nodes with linear layers and tensor products to obtain $f_{ij}$. $f_{ij}$ contains the information in the two nodes and considers geometric information by using tensor products.\n\n* (b) Second, since attention weights $a_{ij}$ should be invariant, we only consider the scalar parts of $f_{ij}$ and use MLP to compute MLP attention weights.\n\n* (c) Third, we want the feature sent from source node to target node to be non-linear, and therefore we apply gate activation to value vector $v_{ij}$. The reason that there is an additional DTP between $f_{ij}$ and $v_{ij}$ is that after gate activation, we want to mix non-linear features in different degrees. \n\n3. 
For embedding, the node embedding is the same as other works, and the edge-degree embedding is used to encode the information of degrees at the beginning.\n\n4. The parametrization of DTP is to make sure that when weights of DTP are generated from radial functions, the memory complexity is still manageable. Please see our response above for more details.\n\nIf possible, can you please specify which part is confusing or not clear?\n \n\n\n", " We thank the reviewer qLqa for the efforts and for acknowledging that __the proposed architecture is novel__ and __achieves competitive results on QM9 and OC20__. We address the comments below.\n\n> 1. Thus it might be necessary to compare MLP attention to Dot Product attention when Non-Linear Message is used.\n\nWe conduct ablation study and compare MLP attention and dot product attention when non-linear message is used. We summarize the new results (index 4) below and compare with the original results in Table 5 and Table 6. We can see that __MLP attention is more efficient and achieves equal or better results than dot product when non-linear message is used__.\n\n* QM9\n\n| index | Non-linear message | MLP attention | Dot product attention | $\\alpha$ | $\\Delta \\epsilon$ | $\\epsilon_{HOMO}$ | $\\epsilon_{LUMO}$ | $ \\mu$ | $C_\\nu$ |\n|:-----:|:-------------------:|:--------------:|:----------------------:|:------:|:----------------:|:-------------:|:--------------:|:----:|:----:|\n| 1 | Y | Y | | .056 | 33 | 17 | 16 | .014 | .025 |\n| 2 | | Y | | .061 | 34 | 18 | 17 | .015 | .025 |\n| 3 | | | Y | .060 | 34 | 18 | 18 | .015 | .026 |\n| 4 | Y | | Y | .056 | 33 | 17 | 16 | .014 | .025 |\n\nFor QM9, when non-linear message is used, MLP attention performs on par with dot product attention. This is somewhat expected as non-linear message enables non-linear edge features and that dot product attention is already able to capture attention patterns of relatively small dataset. However, we note that __when non-linear message is used, MLP attention is faster than dot product attention by about 39%__. \n\n* OC20\n\nOC20 energy MAE (eV)\n\n| index | Non-linear message | MLP attention | Dot product attention | ID | OOD Ads | OOD Cat | OOD Both | Average |\n|:-----:|:-------------------:|:--------------:|:----------------------:|:------:|:-------:|:-------:|----------|---------|\n| 1 | Y | Y | | 0.5088 | 0.6271 | 0.5051 | 0.5545 | 0.5489 |\n| 2 | | Y | | 0.5168 | 0.6308 | 0.5088 | 0.5657 | 0.5555 |\n| 3 | | | Y | 0.5386 | 0.6382 | 0.5297 | 0.5692 | 0.5689 |\n| 4 | Y | | Y | 0.5197 | 0.6289 | 0.5149 | 0.5520 | 0.5534 |\n\n\nOC20 EwT (%)\n\n| index | Non-linear message | MLP attention | Dot product attention | ID | OOD Ads | OOD Cat | OOD Both | Average |\n|:-----:|:-------------------:|:--------------:|:----------------------:|:----:|:-------:|:-------:|----------|---------|\n| 1 | Y | Y | | 4.88 | 2.93 | 4.92 | 2.98 | 3.93 |\n| 2 | | Y | | 4.59 | 2.82 | 4.79 | 3.02 | 3.81 |\n| 3 | | | Y | 4.37 | 2.60 | 4.36 | 2.86 | 3.55 |\n| 4 | Y | | Y | 4.45 | 2.85 | 4.43 | 2.94 | 3.67 |\n\nFor OC20, we can still observe that MLP attention performs better than dot product attention. The only exception is that on OOD Both, dot product attention performs slightly better. However, for average results, __MLP attention improves dot product attention when non-linear message is used__. 
We note that __dot product attention + non-linear message (index 4) takes about 56 hours and therefore is about 27% slower than MLP attention + non-linear message (index 1) (44 hours) and 86% slower than MLP attention (index 2) (30 hours)__.", " > 4. SEGNN is a recent paper that can be regarded as a generalization version of many equivariant methods including tensor-field, LieConv, NequIP, EGNN, SE3 transformer, etc. Clearly, this paper has followed the notations and presentations in SEGNN (see for example the denotation in Eq. (6)). When compared to SEGNN, the equivariant attention in this paper actually degenerates to a specific case of SEGNN, for example, the attentions are given by Eq. (14) in SEGNN paper when $\\alpha(f_i, f_j)$ becomes $\\alpha(f_{ij})$.\n\nWe would like to mention that the authors of Tensor-Field Networks [1] and 3D Steerable CNNs [2], and Clebsch-Gordan Nets [3] proposed the framework of SE(3)-equivariant networks based on irreducible representations of SO(3) in 2018. A subset of these authors have developed the e3nn library [4], which NequIP, SEGNN and this work are based on.\n\nEquivariant graph attention is a more generalized version of message passing used in SEGNN. By setting the attention weights to be equal, it becomes SEGNN. However, for other cases, it is not SEGNN. Therefore, Equiformer can approximate SEGNN (attention weights being equal), but SEGNN (static weights) cannot approximate Equiformer (dynamic attention weights). \n\nMoreover, as shown in our experiment, Equiformer with both MLP attention and non-linear message passing can improve upon SEGNN (Table 1, Table 2, and Table 3). \n\nReference:\n\n[1] Thomas et al. Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds. 2018. \n\n[2] Weiler et al. 3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data. NeurIPS 2018.\n\n[3] Kondor et al. Clebsch-Gordan Nets: a Fully Fourier Space Spherical Convolutional Neural Network. NeurIPS 2018.\n\n[4] https://github.com/e3nn/e3nn\n\n\n> 5. It is suggested to move the related work right after the introduction part, and provide more comparisons between different equivariant transformer methods.\n\nThanks for the comment. We provide detailed comparisons to previous works in general response and will make the modification to the main text.\n\n> 6. Line 107-118: does the type of inversion actually involved in the current implementation?\n\nYes, E(3) equivariance can be easily incorporated. We compared E(3) equivariance and SE(3) equivariance in Sec. D.1 and E.1 in the appendix and will make this more clear in the main text.\n\n> 7. Line 179-184: how to determine the pair (L, L’)? By random?\n\nL and L’ are determined based on the selection rules (i.e., Clebsch-Gordan coefficients) discussed in Line 127 to Line 129 (for SE(3)) and in Line 468 to Line 472 in the appendix (for E(3)). Additionally, L’ is chosen to be no greater than $L_{max}$ (Line 129 - Line 130).\n\nFor example, given two inputs, the first input X containing two type-1 vectors (L = 1) and the second input Y containing one type-1 vector, the depthwise tensor product will result in two type-0 vectors, two type-1 vectors and two type-2 vectors. Thus, L’ can be either 0, 1, or 2. 
For each degree L’, the first output vector (first channel) depends only on the first vector in X (first channel) instead of all vectors (all channels) in X, and the second output vector (second channel) depends only on the second vector in X (second channel).\n\nFigure 2 and Figure 3 in the appendix illustrate the input and output features of depth-wise tensor products. The first input contains 2 type-0 vectors, 2 type-1 vectors and 1 type-2 vector, and the second contains 1 type-0 and 1 type-1 vectors.\n\n> 8. Line 66-67: Not all GNNs are based on message passing based models.\n\nThanks for pointing out this. \nWe will mention “generally GNNs update features through message-passing layers” to take into account this fact.\n\nCan you please provide an example of performant GNNs not based on message passing?\n\n> 9. Line 254-258: the comparisons here are too shot to justify the contributions of this work.\n\nThanks for pointing this out. We provide detailed comparisons to previous works in general response.\n\n> 10. Limitations and negative social impact.\n\nPlease see our general response for clarification and additional limitations.\n\nAs for negative social impact, we described one potential negative impact in Line 344 - Line 348, which is that the ability to accurately approximate quantum properties can be used for adversarial purposes like identifying, designing and creating hazardous chemicals.\n\n\n", " We thank the reviewer QY2z for the efforts and for acknowledging that __the evaluation supports the benefit of the proposed method and that it is valuable to achieve good results on OC20, a new but more desirable dataset for performance comparison, in addition to QM9 (which is actually a well-explored dataset and could be overfitted by many recent methods).__ We address the comments below.\n\n\n> 1. It has been a common practice to explore transformers for graph data, such as Grover and Graphormer. And, even when considering equivariance, SE3-transformer and the works by [53,35] have investigated how to conduct attentions between equivariant features. While this paper does utilize certain slightly different components (such as LN, DTP), it is hard to see something essentially new in this submission compared to current papers.\n\nAlthough attention has been applied to graph data, __the main question is whether those adaptations have room for improvement in the context of equivariant graph networks.__\n\nCompared to SE(3)-Transformer and works [53, 35], we improve upon it with the proposed novel and __more expressive__ and __general equivariant graph attention__ (MLP attention and non-linear message passing). __MLP attention and non-linear message passing are more expressive and improve upon dot product attention used in previous works on equivariant Transformers__ (Table 5 and Table 6). In contrast to works [53, 35], __we use a more general architecture capable of including higher degrees of tensors__ (L > 1). __Higher degrees of tensors can improve performance__ as shown by NequIP and SEGNN. To the best of our knowledge, we are among the first to propose this more expressive and general attention in the context of equivariant Transformers, and this is what differentiates this work from previous works.\n\n__We also compare this work with Graphormer__, which is mentioned by the reviewer (Table 4 in the main text and Table 7 and Table 8 in the appendix), and show that we can __achieve lower errors even with less training time__ (Line 626 - Line 629 in the appendix). 
\n\nBesides, the use of LN is to simply follow the practice of original Transformers and the use of depth-wise tensor product (DTP) is to make tensor products more efficient. \n\n\n> 2. The authors claim the benefit of using MLP attention against the traditional dot-product counterpart. Yet, this replacement of attention computation just seems a trick other than a valuable contribution, particularly given that the MLP attention mechanism have already explored before in the graph learning field, such as GATv2 as cited by the authors. \n\nWe respectfully disagree with the statements. \n\nFirst, the proposed equivariant graph attention __consists of both MLP attention and non-linear message passing instead of merely MLP attention.__ Each of them is well justified in Table 5 and Table 6.\n\nSecond, __MLP attention__ is explored in GATv2 but __its equivariant architecture has not yet been proposed and explored.__ __The combination of MLP attention and non-linear messages has not been explored either.__ Since the features in equivariant networks contain not only scalars but also vectors and other geometric tensors, it needs other operations and therefore careful modifications to use MLP attention. Moreover, our adaptation of MLP attention is general and can support tensors of higher degrees. In contrast, the dot product attention in works [53, 35] cannot support tensors of degrees higher than 1.\n\nThird, in our experiment, we find the case where MLP attention can clearly improve upon dot product attention (Table 6), suggesting that __the importance of MLP attention in the context of 3D atomistic graphs has not yet been well explored.__\n\n\n> 3. Yet, it seems Equiformer obtains better performance in terms of MAE, but not regarding EwT. Does this mean Equiformer is over-parametric and fits the data well in some cases, but not that good on average? Experimentally, it is still unknown why the proposed attention can help promote the performance, and why it is better than other equivariant transformers such as SE3-transformer and [53,35]. \n\nWe want to clarify that __Equiformer achieves the best mean absolute error (MAE) averaged over all sub-splits and second best energy within threshold (EwT) in Table 2 and Table 3.__ \n\nSince Equiformer achieves the __lowest mean error__, it is __good on average__. \n\nHowever, the metric of EwT measures the percentage of predictions close enough to ground truth. Improving average errors would not always mean reducing errors of certain examples to below a threshold and would not always improve EwT metric. We have discussed the EwT metric in the appendix (Line 637 - Line 644).\n\nDetailed comparisons on equivariant networks based on irreducible representations (irreps) and equivariant Transformers can be found in our general response.", " We thank the reviewer YZ7s for the efforts and for acknowledging that __the paper is clear and well written, the assumptions are clearly stated, the algorithm is clearly described and that the work addresses an interesting topic.__ We address the comments below.\n\n> 1. It would have been interesting to report the number of parameters for the various experiments as well as the training time, since it is often the case that transformers are quite heavy. \n\nPlease see our general response for training time and numbers of parameters.\n\n> 2. Also, all the experiments have “scalar” tasks, and it would be very interesting how the model would perform for vectorial predictions. 
MD17 is nowadays a standard benchmark for equivariant models.\n\nPlease see our general response for results on MD17. Equiformer achieves better results than PaiNN and TorchMD-Net, which is also a previous work on equivariant Transformer.\n\n> 3. I would suggest the authors to restructure the manuscript such that all the relevant work information is present in the main text.\n\nThank you for the suggestion on the related work section. We provide detailed comparisons to previous works in general response and will incorporate all the discussion in related work in the main text.\n\n> 4. I wish the authors would discuss the limitation of their approach, for example whether the model becomes too parameter intensive for larger graphs.\n\nThank you for the comment. Please see our general response for clarification and additional limitations.\n\nAs for scaling to larger graphs, since the memory complexity of Equiformer is dominated by pairs of nodes, Equiformer can theoretically scale better than models based on triplet or quadruplets representations like DimeNet [1] and GemNet-Q [2]. Besides, as mentioned by the work of Allegro [3], we can restrict the features exchanged within a local neighborhood in order to scale to larger graphs. The method proposed in Allegro is complementary but orthogonal to the proposed equivariant graph attention. \n\nReference:\n\n[1] Gasteiger et al. Directional Message Passing for Molecular Graphs. ICLR 2020.\n\n[2] Gasteiger et al. GemNet: Universal Directional Graph Neural Networks for Molecules. NeurIPS 2021.\n\n[3] Musaelian et al. Learning Local Equivariant Representations for Large-Scale Atomistic Dynamics. 2022.\n\n", " > 5. Detailed comparison to equivariant networks based on irreducible representations (irreps) and equivariant Transformers.\n\nAlthough __we have mentioned the differences between this work and other equivariant models based on irreps and other equivariant Transformers__ (Line 250 - Line 258 in the main text), we provide more details on the comparisons and some analysis here and will update the main text to be more clear:\n\n#### __Equivariant networks based on irreducible representations (irreps)__:\n\n1. __Tensor Field Networks (TFN)__ [1] and __NequIP__ [2] use only linear messages without attention (Line 252 - Line 253). NequIP additionally uses node-wise gate activation derived from 3D Steerable CNNs [3], and the gate activation is the same as this work. \n\n2. __SEGNN__ [4] follows the practice of irreps and proposes to use non-linear messages (Line 256 - Line 257). The non-linear messages use gate activation mentioned above, and the difference from (a) is the additional usage of activation on edge features in addition to node features. As shown in their work, non-linear messages improve upon linear messages. \n\n3. __Equiformer__ combines non-linear messages with non-linear MLP attention (Line 257 - Line 258) and the combination is better than either pure non-linear messages (compared to SEGNN in Table 1, Table 2 and Table 3) or pure MLP or dot product attention (Table 5 and Table 6). We note that MLP attention provides input-dependent attention weights and therefore using MLP attention along with non-linear messages can be more expressive than pure non-linear messages. \n\n4. __Summary:__ The ranking of expressivity is that:\nMLP attention + non-linear message (Equiformer) > non-linear message (SEGNN) > linear message (TFN and NequIP). 
This explains why Equiformer can be advantageous over previous models based on irreps.\n\n#### __Equivariant Transformers__:\n\n1. __SE(3)-Transformer__ [5] uses dot product attention with linear messages (Line 253 - Line 254) and the attention can support tensors of any degree L (e.g., L = 2). \n\n2. __TorchMD-Net__ [6] and __EQGAT__ [7] also use dot product attention with linear messages. However, they design a more specialized architecture and the network can only use L = 0 and L = 1 tensors (Line 254 - Line 256). \n\n3. __Equiformer__ uses non-linear MLP attention with non-linear messages (Line 12 - Line 14). As shown in Table 5 and Table 6 and discussed in Line 208 - Line 209 and Line 214 - Line 219, MLP attention improves upon dot product attention and non-linear messages improve upon linear messages. Therefore, __the proposed attention is more expressive than the attention used in previous equivariant Transformers.__ Moreover, __the proposed attention is general and can support tensors of any degree L, and using higher degrees L__ (since we use irreps) __can lead to better performance as shown in NequIP and SEGNN.__\n \n4. __Summary:__ The proposed attention (MLP attention and non-linear messages) is more expressive than the attention (dot product attention and linear messages) used in all previous equivariant Transformers. The proposed attention is more general than TorchMD-Net and EQGAT and can support higher degrees L. They are the differences and advantages of Equiformer. \n\nReference:\n\n[1] Thomas et al. Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds. 2018.\n\n[2] Batzner et al. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature Communications 2022. \n\n[3] Weiler et al. 3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data. NeurIPS 2018.\n\n[4] Brandstetter et al. Geometric and Physical Quantities improve E(3) Equivariant Message Passing. ICLR 2022.\n\n[5] Fuchs et al. SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks. NeurIPS 2020.\n\n[6] Thölke et al. Equivariant Transformers for Neural Network based Molecular Potentials. ICLR 2022.\n\n[7] Le et al. Equivariant graph attention networks for molecular property prediction. 2022.\n", " We address additional common comments here.\n\n> 4. Limitations.\n\nAlthough we describe one limitation in the experiment section, which is that MLP attention does not always improve upon dot product attention obviously (Line 322 - Line 324), we describe all potential limitations below and will include them in the appendix:\n\n1. __Equiformer is based on irreducible representations (irreps) and therefore can inherit the limitations common to all equivariant networks based on irreps and the library e3nn [1].__ For example, using higher degrees L can result in larger features and using tensor products can be computationally intensive. Part of the reasons that tensor products can be computationally expensive are that the kernels have not been heavily optimized and customized as other operations in common libraries like PyTorch. But this is the issue related to software, not the design of networks. While tensor products of irreps naively do not scale well, if all possible interactions and paths (Line 132) are considered, some paths in tensor products can also be \"pruned\" for computational efficiency. 
We leave these potential efficiency gains to future work and in this work focus on general equivariant attention if all possible paths (up to $L_{max}$) in tensor products are allowed.\n\n2. As we describe in the experiment section, __the limitation of the proposed attention is that the improvement can depend on tasks and datasets.__ For QM9, MLP attention improves not significantly upon dot product attention (Table 5). We surmise that this is because QM9 contains less atoms and less diverse atom types and therefore linear attention is enough. For OC20, MLP attention clearly improves upon dot product attention (Table 6). Non-linear messages improve upon linear ones for the two datasets.\n\n3. __Equivariant graph attention requires more computation than typical graph convolution.__ It includes softmax operation and thus requires one additional sum aggregation compared to typical message passing. For non-linear message passing, it increases the number of tensor products from one to two and requires more computation. Along with 2., we note that if there is a constraint on training budget, using stronger attention (MLP attention and non-linear messages) would not always be optimal because for some tasks or datasets, the improvement is not obvious and using stronger attention can slow down training. For example, for the task of $C_{\\nu}$ on QM9, using linear or non-linear messages results in the same performance (index 1 and index 2 in Table 6). However, non-linear messages increase the training time of one epoch from 6.6 minutes to 11 minutes (Line 587 - Line 591 in the appendix).\n\n4. The proposed MLP attention has complexity proportional to the products of numbers of channels and numbers of edges. In the context of 3D atomistic graphs, the complexity is the same as that of messages and graph convolutions. However, in other domains like computer vision, the memory complexity of convolution is proportional to the number of pixels (nodes), not that of edges. Therefore, it would require further modification in order to use the proposed attention in other domains.\n\nHowever, the attention used in Equiformer is restricted to local neighborhoods (e.g., within a pre-defined cutoff radius (see Table 10 and Table 12 in the appendix)). Therefore, the memory complexity of the proposed attention is the same as typical graph convolutions and is proportional to the number of edges, not the square of the number of nodes. Besides, we also show that using MLP attention is more computationally efficient than dot product attention in our network and achieves equal or better results.\n\n[1] https://github.com/e3nn/e3nn\n\n", " We address some common comments by reviewers below.\n\n> 1. Training time.\n\n__The training time of all models including those in ablation study was reported in the appendix in the supplementary material__ (Line 587 ~ Line 591 for QM9 and Line 619 ~ Line 629 for OC20). \n\nParticularly, we emphasize that for OC20 IS2RE + IS2RS, __Equiformer achieves less errors on the testing set__ (Table 7 and Table 8 in the appendix) and __takes 2.33X less training time compared to GNS + Noisy Nodes__ [1] and __15.5X less training time compared to Graphormer__ [2, 3] (champion of OC20 challenge in 2021). Note that as shown by Noisy Nodes [1], __under this setting, greater depths and more computation do translate to better performance and that Equiformer achieves better results with less computation.__\n\nReference:\n\n[1] Godwin et al. 
Simple GNN Regularisation for 3D Molecular Property Prediction & Beyond. ICLR 2022.\n\n[2] Ying et al. Do Transformers Really Perform Badly for Graph Representation? NeurIPS 2021.\n\n[3] Shi et al. Benchmarking Graphormer on Large-Scale Molecular Modeling Datasets. 2022.\n\n\n> 2. The number of parameters.\n\n\n* QM9:\n1. Equiformer (Table 1 and index 1 in Table 5): 3.53M.\n\n2. Equiformer with only MLP attention (index 2 in Table 5): 3.01M.\n\n3. Equiformer with only dot product attention (index 3 in Table 5): 3.35M.\n\n4. E(3)-Equiformer (Table 9 in the appendix): 3.28M.\n\t\n* OC20:\n1. Equiformer (Table 2 and Table 3 and index 1 in Table 6): 9.12M.\n\n2. Equiformer for IS2RE + IS2RS (Table 4 in the main text and Table 7 and Table 8 in the appendix): 26.8M.\n\n3. Equiformer with only MLP attention (index 2 in Table 6): 7.84M.\n\n4. Equiformer with only dot product attention (index 3 in Table 6): 8.72M.\n\n5. E(3)-Equiformer (Table 11 in the appendix): 8.77M.\n\n\n> 3. MD17 results.\n\nWe conduct experiments on MD17 following the setting of TorchMD-Net [1] and mainly compare with PaiNN [2] and TorchMD-Net. The numbers are mean absolute error of energy (kcal/mol) and force (kcal/mol/Angstrom) predictions. The numbers of related works are taken from TorchMD-Net. As shown in the table, __Equiformer achieves overall better results on the MD17 dataset.__ Compared to PaiNN [2], Equiformer achieves better results on energy and force predictions of all molecules. Compared to TorchMD-Net [1], Equiformer achieves better energy predictions and significantly better force predictions for the first four molecules. For the last four molecules, Equiformer achieves higher energy errors but significantly lower force errors. We note that this is because of the ratio of force loss to energy loss and that further tuning the ratio could ensure lower energy and force errors.\n\nReference:\n\n[1] Thölke et al. Equivariant Transformers for Neural Network based Molecular Potentials. ICLR 2022.\n\n[2] Schutt et al. Equivariant message passing for the prediction of tensorial properties and molecular spectra. ICML 2021.\n\n| Molecule | | | PaiNN | NequIP | TorchMD-Net | Equiformer |\n|:---------------:|:------:|---|:-----:|:------:|:-----------:|:----------:|\n| Aspirin | energy | | 0.167 | | 0.123 | **0.122** |\n| | force | | 0.338 | 0.348 | 0.253 | **0.167** |\n| Benzene | energy | | | | 0.058 | **0.051** |\n| | force | | | 0.187 | 0.196 | **0.151** |\n| Ethanol | energy | | 0.064 | | 0.052 | **0.051** |\n| | force | | 0.224 | 0.208 | 0.109 | **0.071** |\n| Malondialdehyde | energy | | 0.091 | | 0.077 | **0.075** |\n| | force | | 0.319 | 0.337 | 0.169 | **0.133** |\n| Naphthalene | energy | | 0.116 | | **0.085** | 0.089 |\n| | force | | 0.077 | 0.097 | 0.061 | **0.044** |\n| Salicylic Acid | energy | | 0.116 | | **0.093** | 0.101 |\n| | force | | 0.195 | 0.238 | 0.129 | **0.096** |\n| Toluene | energy | | 0.095 | | **0.074** | 0.084 |\n| | force | | 0.094 | 0.101 | 0.067 | **0.049** |\n| Uracil | energy | | 0.106 | | **0.095** | 0.099 |\n| | force | | 0.139 | 0.173 | 0.095 | **0.077** |\n\n", " The paper proposes a transformer architecture which is equivariant to the Euclidean group SE(3). 
While other equivariant transformer architectures exist, the main novelty of this paper consists in the ability to incorporate higher-dimensional irreps, thus not limiting itself to the l=0,1 representations (scalars, vectors).\n\nThe authors also propose a novel attention mechanism, based on the tensor product, which involves irreps of any dimension. The authors test their model on the QM9 and OC20 datasets and show that it achieves state-of-the-art performance in most tasks.\n The paper is clear and well written. The assumptions are clearly stated and the algorithm is clearly described. The paper addresses an interesting topic, that is, how to extend the inductive bias of equivariance to transformers. \n\nThe main weakness of the paper perhaps consists in the fact that the experimental section could have been more extensive and detailed. For instance, it would have been interesting to report the number of parameters for the various experiments as well as the training time, since it is often the case that transformers are quite heavy. Also, all the experiments have “scalar” tasks, and it would be very interesting to see how the model would perform for vectorial predictions. \n It would have also been interesting to validate the model on some molecular dynamics datasets. For example, MD17 is nowadays a standard benchmark for equivariant models. \n\nAlso, I think the related work section should belong in its entirety in the main text. I would suggest that the authors restructure the manuscript such that all the relevant information is present in the main text.\n \nI wish the authors would discuss the limitations of their approach, for example whether the model becomes too parameter-intensive for larger graphs. \n\nOverall, I think the current paper is a nice piece of work, and with a more extensive experimental section, also involving vectorial/tensorial tasks (like force prediction tasks in MD17), it could qualify to be accepted in a venue like NeurIPS.", " This paper proposes an E(3)/SE(3) equivariant transformer on 3D molecular graphs. The central point is to apply a non-linear MLP attention mechanism based on the type-0 irreps features. Then the attention weights are multiplied with other irreps features with type >0. The evaluations are carried out on QM9 and OC20, which somewhat support the benefit of the proposed method. Strengths:\n\n1.\tThis paper is overall compactly written and easy to follow.\n\n2.\tBesides QM9 (which is actually a well-explored dataset and could be overfitted by many recent methods), the authors evaluate their method on OC20, a new but more desirable dataset for performance comparison. It is valuable to see that the proposed method achieves good results on IS2RE. Necessary ablation studies are also performed. \n\n\nWeaknesses:\n\n1.\tThe biggest concern is that the methodological novelty is weak. It has been a common practice to explore transformers for graph data, such as Grover and Graphormer. And, even when considering equivariance, SE3-transformer and the works by [53,35] have investigated how to conduct attention between equivariant features. While this paper does utilize certain slightly different components (such as LN, DTP), it is hard to see something essentially new in this submission compared to current papers. The authors claim the benefit of using MLP attention against the traditional dot-product counterpart. 
Yet, this replacement of attention computation just seems a trick rather than a valuable contribution, particularly given that the MLP attention mechanism has already been explored before in the graph learning field, such as in GATv2 as cited by the authors. \n\n2.\tIt is nice to see the evaluations on OC20. Yet, it seems Equiformer obtains better performance in terms of MAE, but not regarding EwT. Does this mean Equiformer is over-parametric and fits the data well in some cases, but not that good on average? Experimentally, it is still unknown why the proposed attention can help promote the performance, and why it is better than other equivariant transformers such as SE3-transformer and [53,35]. \n\n3.\tSEGNN is a recent paper that can be regarded as a generalized version of many equivariant methods including tensor-field, LieConv, NequIP, EGNN, SE3 transformer, etc. Clearly, this paper has followed the notations and presentations in SEGNN (see for example the notation in Eq. (6)). When compared to SEGNN, the equivariant attention in this paper actually degenerates to a specific case of SEGNN; for example, the attentions are given by Eq. (14) in the SEGNN paper when \\alpha(f_i, f_j) becomes \\alpha(f_ij). \n 1.\tIt is suggested to move the related work right after the introduction part, and provide more comparisons between different equivariant transformer methods.\n\n2.\tLine 107-118: is the type of inversion actually involved in the current implementation?\n\n3.\tLine 179-184: how is the pair (L, L’) determined? At random?\n\n4.\tLine 66-67: Not all GNNs are based on message passing.\n\n5.\tLine 254-258: the comparisons here are too short to justify the contributions of this work.\n The limitations and potential negative societal impact are not explicitly discussed. ", " This paper proposes a novel Equiformer model for DFT-level scalar quantum property prediction based on the 3D conformation of molecules or molecule-catalyst systems.\nEquiformer has a permutation-equivariant and SE(3)/E(3)-equivariant architecture, a novel transformer-like architecture operating on irreducible representations (irreps) features.\nThis paper proposes a novel Depth-wise Tensor Product method, creatively adopts MLP attention for irreps and borrows the successful Non-Linear Message from SEGNN as the Value in the attention mechanism.\nExperiments are conducted on QM9 and OC20-IS2RE, exhibiting the effectiveness of the Non-Linear Message, MLP attention and overall architecture, and achieving SOTA performance in both tasks.\n Strengths:\n+ Proposes a novel Depth-wise Tensor Product\n+ Adopts MLP attention for irreps and shows its effectiveness\n+ Clear Figure 2 in the appendix describes how each component works.\n+ The proposed S×SE(3) Equivariant Equiformer achieves SOTA performance on QM9 and OC20-IS2RE\n\nWeaknesses:\n- The Attention Score component could cooperate with the Attention Value component, but ablation studies are only conducted on MLP attention + Non-Linear Message, MLP attention + Linear Message and Dot Product attention + Linear Message. It seems that this is enough to show the effectiveness of each component, but the core contribution of this paper is the combination of MLP attention and Non-Linear Message. Thus it might be necessary to compare MLP attention to Dot Product attention when Non-Linear Message is used.\n- No experiment or explanation shows the effectiveness/efficiency of DTP.\n- Equiformer is only applied to scalar prediction tasks (except the auxiliary task), which only need SE(3) invariance or trivial equivariance (i.e. 
identity representation). The experiments cannot show the significance of the equivariant nature of Equiformer.\n- Equiformer architecture is very complicated, while there isn’t a clear motivation or intuition or explanation or thorough experimentation to support it. If the embedding module and the parameterization of DTP weights are considered, Equiformer could be more complicated.\n- Section 3.1(line 154-184) is less informative than Figure 2, while occupies more space and harder to understand. Figure 2 is more important than the verbose background introduction in Section 2(Section 2.3 is good, while other parts could be more compact or omitted). And given line 114 has already said $C_L$ is the number of channels for type-L vectors, line 174-175 says “input x containing $(C_0+\\sum_{L=1}^{L_{max}}C_L)$ type-0 vectors” is confusing. It might be better to replace this paragraph with Figure 2(c).\n + All interactions between different type vectors and different elements within vector depend on DTP with $SH(r_{ij})$, while no experiment shows Self (Depth-wise) Tensor Product or Cross (Depth-wise) Tensor Product cannot improve the performance. \n+ Comparison with Dot Product Attention could be more complete, e.g., key can be obtained from $x_j$ with a linear layer and attention bias can be obtained from the scalar part of $f_{ij}$ with a MLP before Reshape (splitting heads) or learnable radial function(mentioned in line 235-239) encoding $||r_{ij}||$, etc.\n The authors didn’t describe the limitations of their work.", " The paper proposes an irreps-based Transformer architecture for atomistic property prediction, named Equiformer. The model leverages irreps to interact between features with different degrees, very much similar to previous literature like TFN, SE(3)-Transformer, or SEGNN. Equiformer further incorporates a GAT-like attention mechanism to enhance the capacity of the model. Equiformer has been experimentally evaluated on two datasets QM9 and the OC20 IS2RE task, obtaining competitive performance against other equivariant graph neural networks. ## Strengths\n1. Clear presentation with the necessary background on group theory and the irreps-based equivariant construction approach.\n2. Detailed depiction of the model architecture and the proposed attention mechanism.\n3. Nice ablation studies on both QM9 and OC20 to investigate the role of the proposed message-passing and attention module.\n\n## Weaknesses\n1. The technical contribution is somewhat limited given the rich literature on irreps-based equivariant GNNs like TFN [1] and SE(3)-Transformer [2]. There are also equivariant Transformers in this domain, e.g., SE(3)-Transformer [2], TorchMD-Net (ET) [3], and EQGAT [4]. Though some of them have been discussed in the related work, there is still a lack of insightful discussions on why the proposed Equiformer is superior over the so many existing ``Equivariant Transformer'' networks, where the differences lie, and what the key contributing parts are.\n2. The enhancement on QM9 is observed to be limited. Some important state-of-the-art models for atomistic property prediction are missing Table 1 (MAE result on QM9 test set), including PaiNN [5] and TorchMD-Net [3]. In particular, TorchMD-Net is also a Transformer-like architecture, which, I believe, is relevant to the paper here and should be compared in Table 1.\n3. Lacking discussions about modeling complexity and computational overhead. 
Generally, the irreps-based methods have already been observed to be quite computationally expensive, compared to other approaches for achieving equivariance, such as the scalar-based networks (e.g., EGNN [6], GMN [7], to name a few). Further incorporating attention will bring extra complexity to the model that is already very heavy to train and inference. There are no comparisons on the running time and number of parameters consumed compared with other baselines.\n\nRefs:\n\n[1] Thomas et al. Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds. 2018.\n\n[2] Fuchs et al. SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks. NeurIPS 2020.\n\n[3] Thölke et al. Equivariant Transformers for Neural Network based Molecular Potentials. ICLR 2022.\n\n[4] Le et al. Equivariant graph attention networks for molecular property prediction. 2022.\n\n[5] Schutt et al. Equivariant message passing for the prediction of tensorial properties and molecular spectra. ICML 2021.\n\n[6] Satorras et al. E(n)-equivariant graph neural networks. ICML 2021.\n\n[7] Huang et al. Equivariant Graph Mechanics Networks with Constraints. ICLR 2022.\n The questions below are proposed according to the weaknesses specified in the previous section.\n\n**Q1:**\nAre the authors able to systematically discuss the insights on why Equiformer is advantageous over the existing irreps-based methods and equivariant transformer architectures? Potential perspectives would include theoretical analysis (e.g., on the expressivity or universality) or other systematical comparisons and discussions towards this aspect.\n\n**Q2:**\nIt is strongly recommended to report the existing state-of-the-art results on QM9, like PaiNN and TorchMD-Net (ET). The authors can find these references given in the previous section. By considering these results, the performance of Equiformer on QM9 seems to be not that competitive.\n\n**Q3.**\nIt would be great to involve running-time comparison/parameter complexity of both Equiformer and the most competitive baseline on both QM9 and OC20. From the current paper, it is very hard to tell where the benefit comes from. One potential concern is related to fairness that Equiformer might be leveraging much higher computational complexity with slower training/inference speed as well as a larger network size. The authors claimed in the checklist that they have discussed the limitations. However, to the best of my effort, I am not able to find them in the paper. The authors are strongly recommended to discuss some limitations, potentially, as specified in Q3, from the high computational complexity of irreps and attention module." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Wf1q-dsBvW", "hrA3aYa3xHM", "zg-Vvns4vVA", "AK4w1mboT9s", "fKdUisDTftr", "u9wO3OXD1cM", "nips_2022__efamP7PSjg", "fiaIYC_qd_b", "MVFUFaQyjWA", "E0yy4S4UvvV", "KW1QLwSknzT", "om_ojFrDpUD", "PQXwcgO3jN7", "RGrHCQCO5i-", "DiVlrae79l", "WH-_ZGZth1-", "Zzah3p80u0x", "5ajWccxDjL", "nips_2022__efamP7PSjg", "nips_2022__efamP7PSjg", "nips_2022__efamP7PSjg", "nips_2022__efamP7PSjg", "nips_2022__efamP7PSjg", "nips_2022__efamP7PSjg", "nips_2022__efamP7PSjg" ]
nips_2022_h10xdBrOxNI
Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork
Deep neural networks (DNNs) are vulnerable to backdoor attacks. Previous works have shown it extremely challenging to unlearn the undesired backdoor behavior from the network, since the entire network can be affected by the backdoor samples. In this paper, we propose a brand-new backdoor defense strategy, which makes it much easier to remove the harmful influence of backdoor samples from the model. Our defense strategy, \emph{Trap and Replace}, consists of two stages. In the first stage, we bait and trap the backdoors in a small and easy-to-replace subnetwork. Specifically, we add an auxiliary image reconstruction head on top of the stem network shared with a light-weighted classification head. The intuition is that the auxiliary image reconstruction task encourages the stem network to keep sufficient low-level visual features that are hard to learn but semantically correct, instead of overfitting to the easy-to-learn but semantically incorrect backdoor correlations. As a result, when trained on backdoored datasets, the backdoors are easily baited towards the unprotected classification head, since it is much more vulnerable than the shared stem, leaving the stem network hardly poisoned. In the second stage, we replace the poisoned light-weighted classification head with an untainted one, by re-training it from scratch only on a small holdout dataset with clean samples, while fixing the stem network. As a result, both the stem and the classification head in the final network are hardly affected by backdoor training samples. We evaluate our method against ten different backdoor attacks. Our method outperforms previous state-of-the-art methods by up to $20.57\%$, $9.80\%$, and $13.72\%$ attack success rate and on-average $3.14\%$, $1.80\%$, and $1.21\%$ clean classification accuracy on CIFAR10, GTSRB, and ImageNet-12, respectively. Code is available at https://github.com/VITA-Group/Trap-and-Replace-Backdoor-Defense.
Accept
The recommendation is based on the reviewers' comments, the area chair's personal evaluation, and the post-rebuttal discussion. This paper proposes a new training method to defend against backdoor attacks. While all reviewers see merits in this paper, concerns about (1) the practicality of a defense that uses clean data samples and (2) fair comparisons to existing defenses were raised and discussed. During the author-reviewer discussion phase, the reviewer had detailed interactions with the authors to clarify different use cases and practical scenarios of the proposed defense, as well as the fairness of the evaluation, so both major concerns are adequately addressed. Another reviewer also champions acceptance in the internal discussion. All in all, I am recommending acceptance. My confidence is lower than for other submissions simply because this paper has the lowest average rating score of all the papers I recommend for acceptance.
test
[ "N3GpycaCuB", "ZH1AXcMiAFP", "qw1flNvt2xK", "d6L-N1JTWt0", "UXXSix73dIY", "lIm0CrIrAJ", "lFoZX-akhv", "AgAtGmCWZCw", "KnV5Uy04wkn", "dZ1rkH6OVZB", "dqCQTiY9cLy", "9hWtHlz6jI", "WtXWsvAJhSi" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed comments and insightful opinions! We think these constructive discussions are beneficial, to not only the authors, but also the entire community.\n\nWe agree that our experiments are conducted for the I.I.D. setting (i.e., \"case 3\") you mentioned. Below we would like to humbly defense for the popularity of \"case 3\".\n\nAs mentioned in the label-consistent backdoor attack (LCBA) paper [20], \"one particular vulnerability stems from the fact that state-of-the-art ML models are trained on large datasets, which, unfortunately, are expensive to collect and curate. It is thus common practice to use training examples sourced from *a variety of, often untrusted, sources*.\"\n\nOur assumption is: Amount all the data sources, there are a small portion of well-trusted ones, which have largely the same data distribution as the untrusted sources. \n\nFor example, an autonomous driving company employs multiple agents to collect self-driving video data. The data collected by different agents are largely from the same distribution since they use the same hardware configurations (e.g., the same type of autonomous vehicle with the same type of camera) provided (and required) by the company. On one hand, there may be some malicious data collection agents (i.e., the attackers) trying to inject backdoor samples to the collected training data. On the other hand, there are also a small portion of well-trusted agents proving the small clean holdout set. \n\nNote that previous works [3,4,5] also used similar assumptions: The provider of the clean holdout set is well-trusted and not cooperating with the attacker, and that the clean holdout set (e.g., clean CIFAR10 images) has the same distribution of the poisoned training set (e.g., poisoned CIFAR10 training set). \n\nOther common I.I.D. scenarios can be found in medical image analysis. For example, when a medical institution or company collects chest X-rays to train a computer-aided diagnostic (CAD) model for tuberculosis diagnostic, samples collected by different (either untrusted or well-trusted) agents are largely from the same distribution since they are collected using the same type of X-ray machine required and provided by the company.\n\nThe requirements and available resources for different practical problems can be diverse. Thus, we believe it important to provide solutions for different practical scenarios. \n\nWe agree that it is important to make it clear to the readers about the problem settings used by different methods, so that they can make the best choice for their own applications. We will clearly describe the above-mentioned application scenarios in the final version. We also agree the transfer learning setting you mentioned is an important future work, which we will point out in the final version. \n\nWe would like to thank you again for appreciating our technical method, and we hope our response can solve your concern on the limitations of the application scenario. \n\nThank you!\n\n[20] Alexander Turner, Dimitris Tsipras, and Aleksander Madry. Label-consistent backdoor attacks.", " Sorry for the late reply. I really appreciate the authors detailed comments. I think the discussion about the practical scenario is one of the most important parts of this work, as it uses a different setting as the previous defense methods. Here are my thoughts about the mentioned scenario 2:\n\nI'm trying to think about the more specific settings in scenario 2. 
In my opinion, there are three different cases:\n\n__1)__ The company collects untargeted data to try to train a pretrained model for the downstream target task. And the holdout clean dataset will be used for fine-tuning. In this case, the training pipeline is a typical transfer learning, where the source data has different distribution (including input distribution and label distribution) as the target distribution. Note that in this case it is much harder to perform targeted attack.\n\nThe company collects data with specific labels which is the same as the target task. This can be either __2)__ a transfer learning task (domain adaptation, where the input distribution is different between the source and target task) or __3)__ a vanilla i.i.d. training task (the company filters the untrusted dataset to construct the clean holdout dataset). \n\nIn summary, most of tasks in scenario 2 are about transfer learning, which is not discussed in this paper. The experimental results in the paper only show that T&R performs well in 3). These are my preliminary thoughts, and the authors are welcome to pointed out anything that may missed.\n\nI will keep my score for now, but it doesn't mean that I don't like the authors' method. Because I still think the practical application may be limited. But if the other reviewers think it doesn't matter, I'm fine with it.", " Dear Reviewer CZwS, \n\nThank you for reviewing our paper. We have tried to answer your insightful questions carefully. We would appreciate it if you could share your thoughts on it.\n\nThank you!", " Thank you for your response. We agree with you that the application scenario of our method is less flexible than that of [3,4,5]. This is one limitation of our method. We will clearly discuss this limitation of our method in the final version. We sincerely thank you for pointing this out!\n\nThere are three application scenarios:\n\nScenario 1: The company gets a pretrained model from an untrusted source (e.g., the Internet), which is potentially backdoored. The company has a small clean holdout set to sanitize the backdoored model, but don't have access to the original poisoned dataset. \n\nScenario 2: The company collects raw data from an untrusted source (e.g., uploaded by untrusted users or from the Internet), and then trains the model on her own using the collected dataset, which potentially contains backdoor samples. The company has a small clean holdout set to sanitize the backdoored model, as in Scenario 1. \n\nScenario 3: The company collects raw data from an untrusted source (e.g., uploaded by untrusted users or from the Internet), and then trains the model on her own using the collected dataset, which potentially contains backdoor samples. The company does not have a small clean holdout set to sanitize the backdoored model. This is a harder version than Scenario 2, since it doesn't require the company to have a small clean holdout set. \n\nPrevious methods FP [3], NAD [4], I-BAU [5] are applicable in Scenario 1 & 2. \n\nOur method is applicable in Scenario 2. \n\nPrevious methods ABL [12] and DP [35] are applicable in Scenario 2 & 3. \n\nIn Scenario 2, all methods are applicable, but our method achieves the best performance, outperforming all previous methods by a considerable margin.\n\nScenario 2 is **very common** in the real world, compared with Scenario 1: In many cases, the companies would train the model on their own, instead of directly using the (potentially backdoored) models released by a third-party. 
For example, the company may use a model with some specific model size or architecture adapted for their hardware (e.g., mobile devices) with unique requirements, which won't be met by the third-party model. Or maybe the company has a large amount of (potentially poisoned) internal data, which can lead to better performance than the third-party models trained on a dataset which is smaller and has distributional shifts. Or maybe the company has its own advanced techniques to train a model for the specific task, which can lead to better performance than the general training techniques available to the third-party model trainer. \n\nScenario 2 does have one more requirement than scenario 3: It requires a small clean holdout set. We think this is reasonable, since previous methods [3,4,5] also have this requirement (in both scenario 1 and 2). In practice, to make sure the model achieves good performance before its deployment, the company usually need to collect some clean samples for evaluation purpose. A small clean holdout set can be separated from the clean validation set. \n\nIn summary: \n\n1) We agree the application scenario of our method is less flexible than previous methods. We will clearly discuss this limitation in our final version. We sincerely thank the reviewer for pointing this out.\n\n2) Although relatively more limited than those of previous methods, the application scenario of our method is still very common and practical in the real world. \n\n3) In the application scenario where our method is applicable, our method achieves considerably better performance than previous works. \n\nWe hope our explanation can make you consider our work more favorably.\n\nThank you!", " Thanks for your detailed comments. The response has addressed part of my concerns. I appreciate the authors' work, which is intuitively correct and well performed. However, my concern about the threat model still remains. \n\nFirstly, it is right that the existence of the poisoned dataset is the presupposition for all backdoor defense methods (against poisoning-based backdoor attack), but the poisoned dataset may not be accessible by the defender. One of the typical scenarios is that when the poisoned models are directly provided by the adversary, and no poisoned datasets are available. The mentioned methods [3,4,5] (FP, NAD, I-BAU) do require poisoned models, but not poisoned datasets. The main difference is that FP, NAD and I-BAU can still be applied when poisoned datasets are not accessible, while the proposed T&R cannot. The author of [5] also mentioned that their method does not require the poisoned data: \"Note that DP requires access to the poisoned data; hence, its attack model is different from the attack model of the other baselines and our method.\" (Line 2~3 in page 7). This led to an embarrassing situation: when compared with FP, NAD and I-BAU etc., T&R requires additional poisoned data, when compared with methods that only require poisoned data like ABL [12], DP [35], T&P need to access an extra clean holdout dataset. From this perspective, the comparison with other methods in the paper might not be fair.", " Thank you for your insightful comments and questions. \n\nQ1. The assumption that a small clean holdout set is available to the defender is not practical. \n\nAs you have pointed out, this is a common assumption used by our and many previous methods [3,4,5]. We agree that this assumption can cause practical limitations of our method. 
However, we still would like to humbly defense for this assumption. \n\nTo make sure the model achieves good performance before its deployment, the owner of the model (e.g., a company) usually need to collect some clean samples for evaluation purpose. A small clean holdout set can be separated from the clean validation set. Some practical ways to obtain those clean samples include using the company’s own trusted internal data or buying from trusted third parties. \n\n\nQ2. More explanations on the intuition.\n\nThe image reconstruction task in stage 1 protects the stem network from overfitting to the easy-to-learn but semantically incorrect backdoor correlations. In contrast, no defense mechanisms are applied on the light-weighted classification head in stage 1. As a result, the classification head is more vulnerable than the stem network and more prone to learn the backdoor correlation. In other words, we “bait” the backdoor attack to the more vulnerable classification head. \n\nWe also added a new Figure 4 in appendix to visualize the feature scatters learned with and without the image reconstruction head. It provides more intuitive explanations on why the stem network is protected from learning the backdoor correlations. Please check the updated appendix file for detailed results. \n\nQ3. Can you show some reconstruction results? Does the reconstruction network reconstruct the trigger as well?\n\nWe added new visualization results in Figure 3 in Appendix D. Please check the updated appendix file for detailed results. \n\nThe answer to the second question varies depending on the attack method. For example, for $\\ell_2$-Invisible attack, the trigger is blurred out and totally unrecognizable in the reconstructed images. However, for Trojan-WM attack, the reconstructed images still keep a vague pattern of the trigger. \n\nNote that whether the reconstructed images keep the visual pattern of the backdoor trigger **has nothing to do with** the effectiveness of our defense method. It doesn’t matter whether the output features of the stem network encode the visual features of the backdoor trigger, as long as the **correlation** between the backdoor trigger and the target class is not learned. The newly added feature scatters in Figure 4 indicates that our method successfully prevents the stem network from learning the backdoor correlation. \n\nQ4. Why our method has higher clean accuracy than standard training on ImageNet-12 under SIG attack?\n\nStandard training on the clean ImageNet-12: clean accuracy=79.02%.\n\nStandard training on SIG poisoned ImageNet-12: clean accuracy=71.67%.\n\nOur method on SIG poisoned ImageNet-12: clean accuracy=77.33%.\n\nSIG attack on ImageNet degrades the image quality, making the clean accuracy drop by a considerable margin compared with training on clean ImageNet-12. Our method is finetuned on a clean holdout set, which may explain its superior clean accuracy than standard training on SIG poisoned ImageNet-12. \n\nQ5. Can our method generalize to ViT?\n\nWe agree this is an important question. Due to time limit of the rebuttal phase, we can’t get conclusions before the deadline. We will continue investigating in this problem and hopefully show results in the final version. \n\nQ6. Does this work aim to provide a defense mechanism in the self-supervised or semi-supervised scenario?\n\nOur method mainly focuses on the supervised learning scenario. 
We are also interested in how to generalize our method to the self-supervised or semi-supervised setting, which will be our future work. \n\nQ7. Multitask learning results.\n\nFollowing your suggestion, we use our method to defense backdoor attack in multitask learning. Specifically, we use two separate classification heads for CIFAR10 and GTSRB classification, and one image reconstruction head. All three head networks share the same stem network. $\\ell_2$-Invisible attack is added on both CIFAR10 and GTSRB training set. The model is jointly trained on the union of the poisoned CIFAR10 and poisoned GTSRB datasets. The results are listed below:\n\nNo defense:\n\nCIFAR10 head: ASR=100%, ACC=87.04%;\n\nGTSRB head: ASR=100%, ACC=94.91%.\n\nOur method:\n\nCIFAR10 head: ASR=1.80%, ACC=82.41%;\n\nGTSRB head: ASR=0.05%, ACC=93.80%.\n\nOur method can simultaneously defend multiple backdoor attacks in this multitask learning setting. \n", " Thank you for your careful reading and insightful comments. We carefully address your concerns below and hope they can make you consider our work more favorably. \n\nQ1. “Most of the previous methods only need access to either the poisoned dataset or a small clean dataset, but the proposed method requires access to both, which limits its practical value.”\n\nWe understand your concern that only practical assumptions should be used when designing methods. However, we humbly disagree with your claim that our method uses more assumptions than most previous backdoor defense methods. Please allow us to explain in more details bellow. \n\nOn one hand, the existence of the poisoned dataset is the presupposition for all backdoor defense methods. There has to be a poisoned dataset at the very first place before we need to conduct any backdoor defense. On the other hand, many previous defense methods [3,4,5] also require a small clean holdout set, just like in our method. \n\nFor example, in Algorithm 1 of the I-BAU paper [5], the poisoned model and the clean holdout set are both listed as the inputs. The poisoned model is obtained by training on the poisoned dataset. The same requirements are also used in FP [3] and NAD [4]. In other words, our method follows the assumptions used in [3,4,5], where two input datasets are required: a large poisoned training set (or equivalently a pretrained poisoned model in [3,4,5]) and a small clean holdout set. \n\nThe difference between our method and [3,4,5] is not in which datasets are required, but in when the defense takes place. [3,4,5] use a post-hoc defense strategy: They first train the model on the poisoned dataset without any defense mechanism. The defense is only conducted after that first stage as a post-hoc sanitization process. However, their limitation is that once the backdoor features are learned in the first stage, it is hard to be unlearned in the second stage. Our method solves this problem by applying the defense mechanism at the very beginning of the training process (i.e., in the first training stage). Specifically, our method baits and traps the backdoors in a small and easy-to-replace subnetwork (i.e., the 1st stage of our method), making it much easier to remove the learned backdoor correlations from the network (i.e., the 2nd stage of our method).\n\nQ2. According to Table 1, the degradation on clean accuracy is considerably large.\n\nCompared with previous defense methods, our method has much smaller degradation on clean accuracy. 
For example, in the last row of Table 1, our method has 3.14% higher average clean accuracy and 5.04% lower average attack success rate than I-BAU on CIFAR10. \n\nQ3. Why we reported a larger clean accuracy drop in ANP on SIG attack than the original paper?\n\nThanks for your careful reading. We used a stronger attack setting for SIG compared with the original ANP paper. Specifically, we poison 100% of the target-class training samples when using SIG attack in our paper. In contrast, the ANP paper poisoned only 80% of the target-class training sample. In other words, the poisoning ratio of SIG attack in our paper is higher than that used in the ANP paper. Using our stronger attack setting, SIG achieves 99.93% attack success rate (ASR) on CIFAR10 when no defense method is applied (in the 1st column and the 3rd to the last row in Table 1). In contrast, the weaker attack setting used in ANP paper leads to a lower 94.26% ASR (numbers cited from the Appendix A of the ANP paper). The backdoor injected by stronger attacks is harder to get unlearned. This explains why ANP suffers higher clean accuracy drop when applied under the stronger attack setting used in our paper. \n", " Thank you for appreciating our work. We are glad to respond to your constructive questions and comments. \n\nQ1. Why using the fully connected layer as classification head fails to defend the backdoor (as shown in the first row in Table 5)?\n\nIntuitively, it is easier to trap the backdoor attacks in a larger subnetwork (e.g., a larger classification head). If the classification head $f_c$ has only one fully connected layer, then it is easy for the backdoor to scape to shallower layers. \n\nQ2. How many layers should be chosen as the classification head in different model architectures?\n\nIn Table 5, we showed that the best results are achieved when the classification head is the last three layers (one fully connected layer and two convolutional layers) in WRN16-1. Following your suggestion, we conducted the same ablation study on a different model architecture – WRN28-2. The results on CIFAR10 with $\\ell_2$-Invisible attack are listed below:\n\nNumber of layers in $f_c$=1: ASR=100%, CA=92.03%;\n\nNumber of layers in $f_c$=3: ASR=1.25%, CA=89.16%;\n\nNumber of layers in $f_c$=5: ASR=0.74%, CA=75.08%.\n\nWe suggest using 3-layer classification head in this case for the best trade-off between ASR and CA.\n\nQ3. Definition of reconstruction head. \n\nThank you for your suggestion. We will add its definition in the introduction in the final version. \n", " Thank you for your careful reading and acknowledging the novelty and effectiveness of our method. We are glad to answer your insightful questions. \n\nQ1. Analyses the difference between features learned with and without the reconstruction head. \n\nFollowing your suggestion, we added new visualization results in Figure 4 in Appendix D. In summary, the visualization results show a two-fold conclusion. First, when trained without the image reconstruction task, the stem network ignores the semantic features and overfits to the easy-to-learn but semantically incorrect backdoor correlations on backdoored samples. Second, when trained with the image reconstruction task, the stem network successfully preserves the semantically correct features on the poisoned samples. Please check the updated appendix file for detailed results. \n\nQ2. 
What is the relation between image reconstruction quality and backdoor attack success rate (ASR)?\n\nTo investigate their relation, we design two sets of experiments. \n\n2.1. The first is to vary the value of $\\lambda_1$ in Eq (1). The larger $\\lambda_1$, the better reconstruction results. As you have noticed, this is what we did in Table 8 in appendix. Following your suggestion, we added qualitative results of image reconstruction under different $\\lambda_1$ value in Figure 3 in appendix. We also added another row in Table 8 to show the mean square error (MSE) of image reconstruction. Please check the updated appendix file for detailed results. \n\nSame conclusions can be drawn from the new visualization results as those from the original Table 8: A proper amount of image reconstruction quality is required to get good performance. On one hand, if $\\lambda_1$ is too small (e.g., $\\lambda_1$<1), then the supervision from the image reconstruction task is too weak, and thus the reconstruction quality is bad and ASR is high. On the other hand, if $\\lambda_1$ is too large (e.g., $\\lambda_1$=20), then the clean accuracy (CA) decreases because the stem network is biased towards learning image reconstruction features while ignoring the classification features.\n\n2.2. Following your suggestion, we designed a second experiment by varying the capacity of the decoder (i.e., the image reconstruction head). Specifically, we use decoders with different channel numbers. For example, $1\\times$ is the original decoder we used, and $2\\times$ is the decoder with twice the channel numbers in all decoder layers (and thus with larger model capacity). The results are listed below:\n\n$1\\times$ channel-width decoder: ASR=0.74%, CA=84.01%, MSE=0.0085\n\n$2\\times$ channel-width decoder: ASR=0.71%, CA=83.49%, MSE=0.0082\n\nAs we can see, increasing the capacity of decoder doesn’t significantly benefit the reconstruction quality (in terms of MSE) or the ASR. \n\nQ3. Similar to previous methods, a clean holdout dataset is required, which is a limitation of the proposed method.\n\nWe agree this is a limitation of our method, which also commonly exists in other backdoor defense methods such as [3, 4, 5] as you pointed out. Note that on CIFAR10 and GTSRB, the size of our holdout set is 5% instead of 10%.\n\nQ4. Setting the learning rate for samples from clean holdout set to 10x the learning rate for the untrusted dataset, with no image reconstruction task used. \n\nThank you for suggesting this new experiment. The results on CIFAR10 dataset with l2-invisible attack are: ASR=98.44%, ACC=86.60%. In other words, it can’t defend backdoor attack, showing the necessity of our backdoor trapper (i.e., the image reconstruction task). \n\nQ5. More ablation study results on the size of clean holdout set. \n\nFollowing your suggestion, we have added another two columns in Table 9 in appendix, which show the results when the holdout set sizes are 1% and 0.5% (i.e., 500 and 250 images on CIFAR10), respectively. In summary, decreasing the size of clean holdout set only affects the final clean accuracy but not the attack success rate. This is very intuitive since the defense of the stem network is done in the first training stage of our method and has nothing to do with the holdout set. The size of the holdout set only affects the quality of the clean classification head learned in the second stage. Please check the updated appendix file for detailed results. 
\n", " The proposed work introduces a defense which uses an auxiliary reconstruction task along with the classification to ‘trap’ the backdoor within the classification head. In the second stage, a new classification head is trained from scratch to completely remove the effect of the backdoor. The intuition is that the auxiliary task ensures that low-level features of the image is preserved within the backbone or the stem network, hence reducing the effectiveness of the backdoor. Results are shown on CIFAR-10, GTSRB and ImageNet-12 against various attacks. **Strengths:**\n\n— The defense method introduced is novel and effective. It is also easy to use without requiring too much hyper-parameter tuning compared to previous methods.\n\n— Authors conduct extensive ablation study to illustrate the effectiveness of the method. An Adaptive attack is also considered, against which the defense still remains effective.\n\n— Results are shown on a variety of backdoor attacks, indicating that it can be used to defend against multiple attacks.\n\n**Weaknesses:**\n\n— Although the authors do provide an intuition that low-level features are preserved better with the auxiliary task, it is not clear why the proposed method should be effective. A feature analysis on the difference with and without the reconstruction task can be helpful in identifying why the defense is effective to such a degree.\n\n— The reconstruction task proposed by the authors is not very well understood. An experiment where the quality of reconstructed images vs ASR can improve understanding. Although authors do provide a variant of this experiment in Table 8 of the appendix, a qualitative analysis would be better in this scenario. Another experiment would be to vary the size or capacity of the ‘decoder’ and observe the variation in ASR.\n\n— Similar to previous methods, a clean heldout dataset is required which is typically 10%. This can be difficult to obtain for larger datasets. — It seems that replacing the classification head without the auxiliary task results in significant drop in ASR. Is it possible to consider an experiment where the learning rate for samples from clean holdout set is 10x the learning rate for the untrusted dataset? There is no auxiliary task in this scenario. Such an experiment would show whether the effectiveness of the defense is purely due to the correct decision boundaries learned by the classification head or if it is due to the trapping mechanism.\n\n— A more detailed ablation study on the size of clean holdout set is necessary. From, Table 9 of the appendix, it seems there is no difference between the 2.5% and 5% setting. It would be interesting to know what is the least amount of samples required to achieve similar results. \n\n\nAfter Rebuttal:\nThanks to the authors for providing the rebuttal. Having gone through the other reviews, I will keep my score unchanged and strongly encourage the authors to include some of the new experiments about the relationship between reconstruction and ASR in the main paper. Yes", " This paper proposes a new defense strategy *Trap and Replace* to protect deep neural networks from the backdoor in the poisoned dataset. This paper presumes that the backdoor pattern is easy to learn, so it trains a standard image classification model consisting of a stem network and a classification head and an extra image reconstruction model, which consists of the stem network and reconstruction head at the same time to encourage stem model to learn the low-level visual features. 
Then freeze the stem work and initialize the classification head by setting those parameters to random values. And train the new model in a small but clean dataset. The experimental results show that Trap and Replace outperforms SOTA defense strategies in most cases.\n Strength:\n1. Introducing a reconstruction task to force the stem model to learn visual features is a novel idea. It is self-supervised training that does not need extra labeled samples.\n2. Training classification head-on clean dataset can mitigate the attack success rate and keep accuracy at a high level.\n3. The paper's summary of relative works is well organized and comprehensive.\n4. The experiments are well designed and clearly shown in the tables.\n5. Ablation study showing that both Trap and Replace are necessary is convincing.\n\n\t \nWeakness:\n1. Why using the fully connected layer as classification head fails to defend the backdoor should be discussed.\n2. How many layers should be chosen as the classification head in others models that are not included in this paper should be considered. N/A Comments:\n1. In previous works, the classification head is considered the fully connected layer. You can explain the definition of classification head and reconstruction head in the introduction session.\n2. In line 156, there is a repetitive word \"assumptions\".", " The authors propose to trap the backdoor by a lightweight classification head on top of a low level feature extractor and replace it with a clean classifier to remove the backdoor. Extensive experimental results on different dataset against various attacks show the effectiveness of the proposed method. Strengths:\n\na. The idea is interesting, and the motivation is reasonable.\n\nb. The authors provide rich ablation experiments to evaluate the proposed method.\n\nWeaknesses:\n\na. Most of the previous methods only need access to either the poisoned dataset or a small clean dataset, but the proposed method requires access to both, which limits its practical value.\n\nb. According to Table 1, the degradation on clean accuracy is considerably large. This may because the final classifier is only trained on limited data (the holdout dataset). a. I am curious about the performance of the other methods. For example, in [11], ANP use 500 images (i.e., 1% of the training data) to purify the model against SIG on CIFAR-10, and the degradation of the clean accuracy is only 0.24% (93.64% to 93.40%). In this paper, the authors use 5% of the training data (2500 images), but the performance of ANP is far worse (89.27% to 85.67%). It is strange that the baseline methods perform such badly. No. I think the largest limitation is the threat model. As far as I know, this is the first method that require both a poisoned dataset and a clean holdout set. If so, I suggest the author to discuss the practical scenario of such threat model.", " The authors of this work proposed a defense method against backdoor attacks. The defense method consists of two stages. In the first stage, they trapped the backdoors in a subnetwork. In the second stage, they replace the poisoned subnetwork and retrain the network with clean samples. Consequently, this method outperforms previous state-of-the-art methods. Strengths:\n\n1. The writing of this paper is very clear and easy to follow.\n2. The experimental results show that the method outperforms the other six baseline methods in various attack methods except in some cases (Blend, Trojan SQ, Trojan WM).\n3. 
The ablation study clearly shows the significance of the two stages.\n\nWeaknesses:\n\n1. In my opinion, the threat model is pretty unrealistic (with some holdout clean samples) however you can follow this assumption from previous works. \n2. The auxiliary image reconstruction task encourages the stem network to keep sufficient low-level visual features that are hard-to-learn but semantically correct, protecting the stem network from overfitting to the easy-to-learn but semantically incorrect backdoor correlations. I cannot figure out the relation between this intuition and how the effect of poisoned data can be trapped.\n\nTypos:\n1. Line 156 has two “assumptions”.\n2. Line 238 “Tojan” --> “Trojan”.\n 1. I think the reconstruction feature is pretty different from the classification feature which may cause both tasks to be difficult to train in more complex datasets. Can you show some reconstruction results? Does the reconstruction network reconstruct the trigger as well?\n2. It is confusing that in the SIG attack on ImageNet, the clean accuracy becomes higher in Table 3. Why your finetuning is so powerful in this case?\n3. Is your framework only suitable for convolutional neural networks? How about vision transformers?\n4. As you mentioned in the footnote on page 2, semi-supervised learning and self-supervised learning are not naive solutions, since they are vulnerable to backdoor attacks. Is your work aim to provide a defense mechanism in this scenario?\n5. As a suggestion and a question, I think that the poisoned data is aim to fool the classifier hence finetuning will work in this case. In multi-task learning, you can try to create poisoned data that aims to fool any downstream tasks. As finetuning might work for each task, how about putting various purpose attacks at the same time?\n Yes. The authors state that there are no potential negative societal impacts of this work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "ZH1AXcMiAFP", "d6L-N1JTWt0", "UXXSix73dIY", "UXXSix73dIY", "lFoZX-akhv", "WtXWsvAJhSi", "9hWtHlz6jI", "dqCQTiY9cLy", "dZ1rkH6OVZB", "nips_2022_h10xdBrOxNI", "nips_2022_h10xdBrOxNI", "nips_2022_h10xdBrOxNI", "nips_2022_h10xdBrOxNI" ]
nips_2022_UaXD4Al3mdb
Masked Autoencoders As Spatiotemporal Learners
This paper studies a conceptually simple extension of Masked Autoencoders (MAE) to spatiotemporal representation learning from videos. We randomly mask out spacetime patches in videos and learn an autoencoder to reconstruct them in pixels. Interestingly, we show that our MAE method can learn strong representations with almost no inductive bias on spacetime (only except for patch and positional embeddings), and spacetime-agnostic random masking performs the best. We observe that the optimal masking ratio is as high as 90% (vs. 75% on images), supporting the hypothesis that this ratio is related to information redundancy of the data. A high masking ratio leads to a large speedup, e.g., > 4x in wall-clock time or even more. We report competitive results on several challenging video datasets using vanilla Vision Transformers. We observe that MAE can outperform supervised pre-training by large margins. We further report encouraging results of training on real-world, uncurated Instagram data. Our study suggests that the general framework of masked autoencoding (BERT, MAE, etc.) can be a unified methodology for representation learning with minimal domain knowledge.
Accept
This paper presents an interesting simple representation learning approach by extending masked autoencoders to videos. The reviewers have unanimously recognized the simplicity of approach, clarity of writing, and extensivity of experiments. Although there are some minor concerns about the novelty of the proposed method, the findings in this paper are of interest to the community. Given these, we are happy to recommend acceptance for this submission.
train
[ "_3WInM-97GL", "zOCRC4EV8uO", "vRHj2Qm932", "YWKEG0wss11", "2UTDxbBUpLi", "XIL9JIVzB4", "2hs8k3zfEo3", "onDpLZY2bGK", "crEUbe_7YSE", "Q1kBXI-gHTU", "K7umXF4QJhD", "rygbNz6H_JG", "Ae10ynOoywz", "OZ8ixolEw55" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the feedback and are glad that most of the concerns are addressed. ", " > Interesting. Does it work by removing the temporal tokenization (e.g. keep k_t=1), and predicting the time-slices based on the sampling stride. This removes one variable and could generalize better when evaluated on more diverse datasets.\n\nThis is an interesting hypothesis, and could further improve the encouraging video pre-training for image recognition results in Section 5.5 (since deflation is no longer needed). We have not tried this idea yet, but will follow the reviewers' suggestion and add changing the tokenization in this way to the paper. We thank the reviewer for the feedback. ", " Thank you for your explanation and insight into your own experiments. We also think the training procedure would need to be changed. \n\nRegarding your other question: \n\n> One more question. I wonder did you try to mix two videos together? For example, by gluing two videos and then sampling a mask that has a higher density of visible tokens at the beginning and at the end of this glued video?\n\nThis is an interesting idea. We have not explored something like that but will do, thank you for suggesting it. We have noticed a possibly related work that does a similar form of mixing in the image domain https://arxiv.org/pdf/2205.13137.pdf", " Most of my concerns are addressed. I would like to raise my rating from weakly accept to accept.", " I thank the authors for the detailed reply and additional experiments. The results seem interesting.\n\nAbout Gibbs sampling. I was interested in whether one can sample plausible video with your model given some noise as an input. For example, complete noise in RGB space or maybe some structure noise, e.g. patches from a real video but rearranged in random order. I assume some structure should appear if one samples with Gibbs sampling with your model. I've tried it with the original MAE pipeline and discovered that there is some structure indeed but the images are not plausible. Now I believe one needs to change the training procedure to enable this Giggs-sampling behavior. So I withdraw my question.\n\nOne more question. I wonder did you try to mix two videos together? For example, by gluing two videos and then sampling a mask that has a higher density of visible tokens at the beginning and at the end of this glued video?\n", " Interesting. Does it work by removing the temporal tokenization (e.g. keep k_t=1), and predicting the time-slices based on the sampling stride. This removes one variable and could generalize better when evaluated on more diverse datasets. \n\nLastly, I'm looking forward to seeing the power of MAE in larger data scales!\n", " Thank you. Apologies for not including our response earlier. Please see below and let us know if more information is required. \n\n> Line 130-131: \"In practice, we find it sufficient to predict a single time slice of the patch (16x16)\", which time index?\n\nWe predict the 1st time-slice for each patch of size (2x16x16). So for 16 frames, we make 8 predictions of size (1x16x16). Predicting all 16 time-slices leads to slightly lower accuracy (-0.4%).", " Thanks for the response! I have updated my scores accordingly. Can author[s] share some knowledge about Q2?", " We thank the reviewer for the feedback and positive comments. Below are our responses. 
\n\n> The ablation experiments in Table 2 are only performed on a single dataset (K400), which may not provide a precise or comprehensive evidence of how these ablated items may affect the model performance. \n\nWe agree that ablation studies on multiple datasets would be better; however, please note that ablations on a single (large) dataset is a standard approach in most papers (that are commonly accepted by top venues) and it is rare to find video classification publications that perform ablations on multiple datasets. Especially since K400 is a large-scale dataset and therefore each ablation is computationally expensive (e.g., the K400 training set has ~72x more frames than ImageNet has images). Please note that we trained representations on 4 datasets: K400 / K600 / K700 / Instagram and evaluated them on 3 dataset: K400, AVA and SSv2 with comparable performance to state-of-the-art on all. We nevertheless agree with the reviewer about properties in SSv2 and have already started to perform a set of main SSv2 ablations that will be included in the final version. \n\n> Overall the novelty is low, as it is almost a direct extension of image MAE. While there are few interesting conclusions/findings, e.g., (a) video modeling requires a higher mask ratio. (b) random masking works better than structual masking, and (c) decoder in video MAE needs to be more powerful, etc. But these are somewhat expected. I do value the experimental results that validate these findings, but the overall contribution is limited.\n\nWe fully agree that our work's strength is not in technical novelty. We appreciate that all reviewers are open-minded and have recognized our work's value in many other aspects.\n\nWe believe that the scope of novelty is broad and is beyond technical algorithms. It can also include extending existing paradigms to new problems, designing new experiments/ablations, drawing new observations/insights, and verifying expected hypothesis in new problems or tasks. From the reviewer comments, we have seen that many of these values of our work are recognized. We thank the reviewer for seeing value in our work from that perspective. \n\n>L192-194, Fig 6 (right). How do you vary the #encoded tokens when the mask ratio is fixed? By training longer?\n\nThis is correct, training is over different number of epochs. The datapoints in Fig. 6 right and left are identical, only the x-axis changes. We will make this more clear. \n\n>L129 mentions it is sufficient to predict a single time slice (16x16) instead of the full space-time patch (x16x16). Is there any experimental results that compares the two strategies?\n\nYes, predicting the full spacetime patch (2$\\times$16$\\times$16) has 84.0% accuracy, instead of a single slice (1$\\times$16$\\times$16) that has 84.4% accuracy. The experimental setting is identical to our baseline in Table 2 of the main paper. \n", " We thank the reviewer for the feedback and positive comments. Below are our responses. \n\n>Line 162: \"it takes K temporal clips (by default K=7 on Kinetics)\", any intuition why choose 7-view here?\n\nThe videos have a duration of 10 seconds with a sampling rate of 30 fps (i.e. 300 frames). Since we sample clips of 16 frames with a sampling stride of 4 (covering a window of 15*4=60 frames), taking 7 clips, uniformly over time, will cover the full video with some overlap between the clips, which works well empirically; e.g. the ViT-L model in Table 7 has 84.5% with 5-clip, 84.8% with 7-clip and 84.8% with 10-clip testing. 
\n\n> In Table 4, VideoMAE pretrained with IG-uncurated doesn't show big improvement over K400 when both are fine-tuned on in-domain K400. This raises a natural question in my mind: in MAE, does the accuracy continue to significantly improve with a bigger scale of data?\n\nWe also think that IG-uncurated performing on-par with the in-domain K400 data could be related to domain gap, and think the result is still encouraging as the data is purely random Instagram videos. For scaling pre-training data, the experiments on AVA in Table 3 show a clearer trend: From K400->K600->K700->IG-uncurated the gains are: +1.4 / +2.0 / +3.1 mAP. Here, IG-uncurated outperforms K400 significantly (+3.1 mAP) on AVA. Since both Kinetics and AVA are human action recogntition datasets, there is more domain overlap between Kinetics and AVA. than random IG videos. So the experiment shows that larger data (K400/K600/K700) improves accuracy, even it is not domain-specific (IG-uncruated). In future work, we hope to explore even larger data scales, beyond 1M videos. \n\n> A similar question regarding resolution is raised in Table 7,8,9: If VideoMAE with a larger resolution continues to show further gains?\n\nYes, this is an interesting question, increasing the resolution can further improve accuracy: \nWe fine-tunied ViT-H, 16$\\times$224$\\times$224 resolution in Table 7, for 30 epochs with a resolution of 32$\\times$312$\\times$312 (maximum resolution that fits into memory), and the accuracy increases from 85.1% to 86.1%. \nFurthermore, we have fine-tuned K400 pre-trained ViT-L, 16x224x224 resolution in Table 7, with a resolution of 40$\\times$312$\\times$312, and the accuracy increases from 84.8% to 85.8%. The experiments are summarized below and we will add these to the final version of the paper. \n\n| architecture | input size | K400 accuracy |\n| ----------- | ----------- | ----------- |\n| ViT-L | 16$\\times$224$^2$ | 84.8% |\n| ViT-L | 40$\\times$312$\\times$312 | **85.8**% |\n| ViT-H | 16$\\times$224$^2$ | 85.1% |\n| ViT-H | 32$\\times$312$\\times$312 | **86.1**% |\n\nTable B: Higher resolution results on Kinetics-400. _Cf_. Table 7 of the main paper. \n\n>Random shuffle, slicing and reindexing causes randomly accessing the memory, which recomputes the cache. In practice, depending on the actual hardware, it may slow down the process (i.e. 90% cannot lead to 10x speed-up). Can authors share any knowledge about it?\n\nWe do not observe noticeable overhead caused by random shuffling. It can be simply implemented by gather (in PyTorch) or einsum with one-hot indexes (in PyTorch/TensorFlow/JAX), and einsum (which self-attention is based on) is highly optimized in speed.\n\nThe <10$\\times$ speed-up ratio in 90% masking is not caused by random shuffling. A major overhead is the presence of the decoder which is on all tokens. So even the theoretical speedup is <10$\\times$ (7.7$\\times$). In practice, the actual speedup is smaller than 7.7$\\times$, because smaller computations (e.g., fewer tokens in our case) are less parallelism-friendly for hardware.", " We thank the reviewer for the feedback and positive comments. Below are our responses. \n\n>It would be very interesting to see qualitative results for future predictions (when every token is masked in several last frames).\n\nThis is an interesting visualization suggestion. We will include such qualitative predictions in the final version.\n\n>Did you try to use masking while finetuning for a supervised task? 
(to obtain speed-up in computation on this stage as well). Maybe one can anneal mask ration in finetuning stage to obtain the best metrics with minimum computational resources.\n\nApplying masking during fine-tuning is a great idea. We have explored this with a masking ratio of 75% that is annealed to 0% with a cosine schedule during fine-tuning. The result is 83.8% (instead of 84.4% for full fine-tuning without masking). If we start fine-tuning with a masking ratio of 50% and anneal it to 0%, the accuracy is 84.1%. The experiments are summarized in Table A below. We think this is an encouraging result and with more tuning might be even more competitive to the original fine-tuning, but at lower cost. Thanks for this suggestion! We will include the experiment in the paper.\n\n| starting mask ratio | K400 accuracy | speedup |\n| ----------- | ----------- | ----------- |\n| 0% (baseline) | 84.4% | 1.0x |\n| 50% | 84.1% | 1.2x |\n| 75% | 83.8% | 1.3x |\n\nTable A: Cosine annealing of masking ratio during fine-tuning. The starting masking ratio is varied between 0% (baseline without masking), 50% and 75%. The annealing is towards 0% at the end of fine-tuning. The model is ViT-L, with an input size of 16$\\times$224$\\times$224 and a spacetime patch size of 2$\\times$16$\\times$16. The pre-training length is 800 epochs. _Cf_. Table 2 of the main paper. \n\n> Is it possible to do Gibbs sampling with your pipeline? If yes -- did you try it and what is the quality of the samples\n\nThanks for your suggestion. We would be happy to add experimental results if this idea could be specified further. Here are our thoughts on how we proceeded with implementing this idea: It is possible to iteratively sample and generate videos using MAE. Let $y$: MAE generated video, $x$: visible patch encodings, $m$: masked tokens. Given $x_0, m_0$, we recursively sample images with MAE's decoder by $y_{i+1}=MAE(x_i,m_i)$ and $x_{i+1}=x_i, m_{i+1}=m_{i,out}$, where $m_{i,out}$ is the output of the decoder at iteration $i$. We inspect the samples after applying MAE recursively and observe that the visual quality does not improve but rather degrades slightly after each iteration. We think the reason is that MAE is not trained for the recursive video generation approach specified above. ", " The paper extends the recently proposed MAE pipeline for images to the video domain. The method shows competitive performance on video datasets despite having minimum domain knowledge. Strengths\n\n- Simple pipeline with competitive performance\n- Great ablation study\n- interesting possibility for computational speed-up (as the encoder is only applied on the sparse set of visible patches)\n- good performance on the uncurated dataset\n- good comparison with other video pretraining methods\n\nWeaknesses\n- I don’t see any\n\nAlthough the novelty seems limited to me as it is a straightforward generalization of MAE method to the video domain, in other aspects the paper seems seamless to me, the work has practical value to the community, and the experiments section is great. 1) It would be very interesting to see qualitative results for future predictions (when every token is masked in several last frames).\n2) Did you try to use masking while finetuning for a supervised task? (to obtain speed-up in computation on this stage as well). Maybe one can anneal mask ration in finetuning stage to obtain the best metrics with minimum computational resources.\n3) Is it possible to do Gibbs sampling with your pipeline? 
If yes -- did you try it and what is the quality of the samples? Yes", " This paper proposes to extend MAE for videos. Extensive ablations show that MAE can be successfully extended to videos and the optimal masking ratio is higher than images. It achieves competitive results on popular video classification and detection datasets based on both large uncurated and well-aligned upstream datasets. 1. Simple and effective idea: simple random masking with pixel-level reconstruction works well for videos.\n\n2. Clean design: Standard ViTs with a lightweight decoder is enough to achieve competitive results on video classification and detection.\n\n3. Comprehensive ablation studies on upstream/downstream data, decoder design choices, masking ratio/strategy, and reconstruction targets. 1. Line 162: \"it takes K temporal clips (by default K=7 on Kinetics)\", any intuition why choose 7-view here?\n\n2. Line 130-131: \"In practice, we find it sufficient to predict a single time slice of the patch (16x16)\", which time index?\n\n3. In Table 4, VideoMAE pretrained with IG-uncurated doesn't show big improvement over K400 when both are fine-tuned on in-domain K400. This raises a natural question in my mind: in MAE, does the accuracy continue to significantly improve with a bigger scale of data?\n\n4. A similar question regarding resolution is raised in Table 7,8,9: If VideoMAE with a larger resolution continues to show further gains?\n\n5. Random shuffle, slicing and reindexing causes randomly accessing the memory, which recomputes the cache. In practice, depending on the actual hardware, it may slow down the process (i.e. 90% cannot lead to 10x speed-up). Can authors share any knowledge about it? Yes. The authors have adequately addressed the limitations and potential negative societal impact of their work", " This paper studies a simple extension of image MAE to video domain. The experiments are conducted on a set of standard video datasets, i.e., Kinetics, AVA, SSv2, etc., showing the proposed approach achieve appealing results. This paper also presents some interesting findings: (a) due to high information redundancy, video MAE requires higher mask ratio comparing to its image counterpart; (b) random masking works surprisingly well; (c) architectural findings such as video MAE requires a deeper decoder and a larger decoder hidden size. On the one hand, the ablation experiments are comprehensive which covers many aspects of the model design. On the other hand, these experiments are only conducted on a single dataset, K400, which is not convincing. As the paper is mainly an analysis paper, instead of proposing a novel approach, the ablation studies are insufficient. Strengths\n\n- Simple, yet effective method for video self-supervised learning.\n- Comprehensive experimental comparison shows that the proposed approach has strong performance compare to prior work.\n- The paper is well-organized and is easy to follow. Most of the claims are supported with solid experimental results.\n\nWeaknesses:\n\n- The ablation experiments in Table 2 are only performed on a single dataset (K400), which may not provide a precise or comprehensive evidence of how these ablated items may affect the model performance. In TimeSformer [4] Table 1, it is shown that the conclusions aren’t exactly the same when looking at results from K400 and SSv2, as K400 is more static while SSv2 requires more temporal modeling. 
I strongly encourage the authors to also conduct ablation experiments on SSv2 to have more persuasive conclusions.\n- Overall the novelty is low, as it is almost a direct extension of image MAE. There are a few interesting conclusions/findings, e.g., (a) video modeling requires a higher mask ratio, (b) random masking works better than structural masking, and (c) the decoder in video MAE needs to be more powerful, but these are somewhat expected. I do value the experimental results that validate these findings, but the overall contribution is limited. - L192-194, Fig 6 (right). How do you vary the #encoded tokens when the mask ratio is fixed? By training longer?\n- L129 mentions it is sufficient to predict a single time slice (16x16) instead of the full space-time patch ($t$x16x16). Are there any experimental results that compare the two strategies? Yes." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "YWKEG0wss11", "XIL9JIVzB4", "2UTDxbBUpLi", "crEUbe_7YSE", "K7umXF4QJhD", "2hs8k3zfEo3", "onDpLZY2bGK", "Q1kBXI-gHTU", "OZ8ixolEw55", "Ae10ynOoywz", "rygbNz6H_JG", "nips_2022_UaXD4Al3mdb", "nips_2022_UaXD4Al3mdb", "nips_2022_UaXD4Al3mdb" ]
nips_2022_V91cZ9i_sV3
TOIST: Task Oriented Instance Segmentation Transformer with Noun-Pronoun Distillation
Current referring expression comprehension algorithms can effectively detect or segment objects indicated by nouns, but how to understand verb reference is still under-explored. As such, we study the challenging problem of task oriented detection, which aims to find objects that best afford an action indicated by verbs like sit comfortably on. Towards a finer localization that better serves downstream applications like robot interaction, we extend the problem into task oriented instance segmentation. A unique requirement of this task is to select preferred candidates among possible alternatives. Thus we resort to the transformer architecture which naturally models pair-wise query relationships with attention, leading to the TOIST method. In order to leverage pre-trained noun referring expression comprehension models and the fact that we can access privileged noun ground truth during training, a novel noun-pronoun distillation framework is proposed. Noun prototypes are generated in an unsupervised manner and contextual pronoun features are trained to select prototypes. As such, the network remains noun-agnostic during inference. We evaluate TOIST on the large-scale task oriented dataset COCO-Tasks and achieve +10.7% higher $\rm{mAP^{box}}$ than the best-reported results. The proposed noun-pronoun distillation can boost $\rm{mAP^{box}}$ and $\rm{mAP^{mask}}$ by +2.6% and +3.6%. Codes and models are publicly available.
Accept
The paper proposes a Task-Oriented Instance Segmentation Transformer (TOIST) approach for finding objects that best afford a verb-indicated action, to handle the affordance recognition task. TOIST uses teacher-student knowledge distillation: it leverages the referring expression comprehension algorithm as the teacher module for guiding the student module to learn the noun-pronoun transformation. Experiments on COCO-Tasks show the gains of the proposed approach. All the reviewers accepted the paper; however, there were multiple suggestions that would be good for the authors to address. Reviewer aXr4 recommended simplifying aspects of the approach further and using a more recent baseline. Reviewer u6fT suggested adding some more ablation experiments and asked for clarifications in the loss function formulation. Reviewer mBEm suggested using a different baseline, and had concerns about the proposed method using extra training data compared to the baselines. Based on the feedback provided by the reviewers, we recommend this paper for publication at NeurIPS 2022. We thank the authors for addressing some of the reviewers' comments in their original reviews and the subsequent author feedback period. The authors seem to have reported new results and addressed the concerns/feedback from the reviewers in the rebuttal period -- it would be good to include these additional results and discussions as much as possible in the updated paper/supplemental materials.
train
[ "jWp9MU300JF", "O2Is6TpkmUy", "aI-2tWYwIPz", "lanLTb3UhJN", "fA6HK_22u61", "XNhYD-N1Yh73", "IqeV4YEttY3", "ilgz5ZsVJBb", "vBvi2Y3zY5r", "5js2NmH6XOL", "3Oy28XfpTHa", "ilZ3rhhFsnr", "Fs5UsScafX3", "PDgf8lpXW4", "htj_uPRyp7G", "jqXqWHtwgFj" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for R#mBEm's feedback.\n\nFirstly, TOIST and MDETR+GGNN employ the same image encoder and text encoder. And the performance of MDETR+GGNN could derive from the pre-training process. The experimental results are shown below.\n\nTable 5: Comparison of different methods with the same backbone.\n| Method | $\\rm{mAP}^{box}$ | $\\rm{mAP}^{mask}$ |\n|:--------------------------------:|:-----------:|:------------:|\n| MDETR+GGNN w/o pretraining | 9.6 | 8.6 |\n| MDETR+GGNN | 36.8 | 30.3 |\n| TOIST | *41.3* | *35.2* |\n| TOIST w/ distillation | **43.9** | **38.8** |\n\nThe results also show that though MDETR+GGNN can also benefit from the pretraining process, our method (with the same pretraining) still outperforms it by +7.1% $\\rm{mAP}^{box}$ and +8.5% $\\rm{mAP}^{mask}$. This demonstrates that our TOIST architecture is a standalone technical contribution towards task oriented instance segmentation and pretraining is necessary but insufficient to get the performance level of TOIST.\n\nSecondly, the results in Table.1 and Table.2 show that our noun-pronoun distillation training framework is a standalone technical contribution no matter whether the pretraining is used. This method makes it possible to leverage the abundant information in the well-studied noun referring expression understanding to advance the research on verb referring expression understanding.\n\nFinally, pretraining is a widely used process to improve performance on downstream tasks. We think the technical contributions and the methodological contributions of our proposed method should not be ignored just because the existence of pretraining, especially when the quantitative results (Table.5) have shown that our method still outperforms other methods in a fair comparison (with the same backbone and the same pretraining).\n", " Thanks for the authors' response.\n* As shown in Table 1 vs. Table 2, it is known that the TOIST requires the pre-trained models, trained with the extra data, to boost the model performance. \n* In Table 2, do TOIST and MDETR+GGNN employ the same image encoder and text encoder? Could the performance also derive from the extra training data used in the pre-trained student and teacher TOIST models?", " Dear reviewer,\n\nPlease let us know if our responses have addressed the issues raised in your review. We hope that our corrections, clarifications, and additional results address the concerns you've raised. We are happy to address any further concerns.", " Dear reviewer,\n\nPlease let us know if our responses have addressed the issues raised in your review. We hope that our corrections, clarifications, and additional results address the concerns you've raised. We are happy to address any further concerns.", " Dear reviewer,\n\nPlease let us know if our responses have addressed the issues raised in your review. We hope that our corrections, clarifications, and additional results address the concerns you've raised. We are happy to address any further concerns.", " \n### Question.7\n> $\\mathcal{L}\\_{\\rm{match}}$ was a loss term in DETR model to encourage matching the class and bounding boxes of ground truth and prediction. However it is not included in loss functions here (equations 3 and 9). Why is $\\mathcal{L}\\_{\\rm{match}}$ not used in loss function? In line 247, authors mention KL divergence is also a part of $\\mathcal{L}\\_{\\rm{match}}$. 
However, the original DETR paper doesnt mention that.\n\nFirstly, in DETR, $\\mathcal{L}\\_{\\rm{match}}$ is used to find an optimal bipartite matching between predicted and ground truth objects. It is not a loss term for backpropagation.\n\nSecondly, in our method, we calculate the bipartite matching $\\hat \\sigma\\_0$ with:\n\n\n\n$\\hat{\\sigma\\_0}=\\mathop{\\arg \\min }\\limits\\_{\\sigma\\_0 \\in \\mathfrak{S}\\_{{n\\_{\\rm{pred}}}}} \\sum\\_{i}^{{n\\_{\\rm{pred}}}}\n\\mathbb{1}\\_{\\{p^{\\rm{span}}\\_{i,n\\_{\\rm{max}}} = 0\\}}\n\\left[\\mathcal{L}\\_{\\rm{l1}}(b\\_{i}, \\hat{b}\\_{\\sigma\\_0(i)})+\n\\mathcal{L}\\_{\\rm{giou}}(b\\_{i}, \\hat{b}\\_{\\sigma\\_0(i)})+\n\\mathcal{L}\\_{\\rm{token-m}}({{\\mathbf{p}}^{\\rm{span}}\\_{i}}, {\\hat{\\mathbf{g}}\\_{\\sigma\\_0(i)}})\\right].$\n\n\n\nHere, \n\n\n\n$\\mathcal{L}\\_{\\rm{l1}}(b\\_{i}, \\hat{b}\\_{\\sigma\\_0(i)}) = \\left\\|b\\_{i}-\\hat{b}\\_{\\sigma\\_0(i)}\\right\\|\\_{1},$\n\n$\\mathcal{L}\\_{\\text {giou}}(b\\_{i}, \\hat{b}\\_{\\sigma\\_0(i)})=1-\\left(\\frac{|b\\_{i} \\cap \\hat{b}\\_{\\sigma\\_0(i)}|}{|b\\_{i} \\cup \\hat{b}\\_{\\sigma\\_0(i)}|}-\\frac{|B(b\\_{i}, \\hat{b}\\_{\\sigma\\_0(i)}) \\backslash b\\_{i} \\cup \\hat{b}\\_{\\sigma\\_0(i)}|}{|B(b\\_{i}, \\hat{b}\\_{\\sigma\\_0(i)})|}\\right),$\n\n$\\mathcal{L}\\_{\\rm{token-m}}({{\\mathbf{p}}^{\\rm{span}}\\_{i}}, {\\hat{\\mathbf{g}}\\_{\\sigma\\_0(i)}}) = -\\sum\\_{j}^{n\\_{\\rm{max}}} p^{\\rm{span}}\\_{i,j} \\frac{\\exp \\left(\\hat g\\_{j}^{\\sigma\\_0(i)}\\right)}{\\sum\\_{l=1}^{n\\_{\\rm{max}}} \\exp \\left(\\hat g\\_{l}^{\\sigma\\_0(i)}\\right)}.$\n\n\n\nMore details can be found in section 1.2 of the supplementary.\n\nThirdly, $\\mathcal{L}\\_{\\rm{match}}$ mentioned in line 245-247 is proposed to find a bipartite matching between $n\\_{\\rm{pred}}$ object predictions of the teacher model and $n\\_{\\rm{pred}}$ object predictions of the student model, which is not the same as $\\mathcal{L}\\_{\\rm{match}}$ in DETR. And in our method, we leverage KL-Divergence for preference distillation. \n\n\n\n### Question.8\n> Instead of minimizing the distance between $l\\_{\\rm{pron}}^{\\rm{tr}}$ and $l\\_{c\\_s}^j$ in equation 4, why can't one minimize the distance between $l\\_{\\rm{pron}}^{\\rm{tr}}$ and $l\\_{\\rm{noun}}^{\\rm{tr}}$ for knowledge distillation?\n\nWe have tried this simplified method but it does not work well. The quantitative results are demonstrated below:\n\n\nTable 3: Comparison of different distillation methods.\n\n| Method | $\\rm{mAP}^{box}$ | $\\rm{mAP}^{mask}$ |\n|:--------------------------------:|:-----------:|:------------:|\n| TOIST | 41.3 | 35.2 |\n| distill from $l\\_{c\\_s}^j$ to $l\\_{\\rm{pron}}^{\\rm{tr}}$ | **43.9(+2.6)** | **38.8(+3.6)** |\n| distill from $l\\_{\\rm{noun}}^{\\rm{tr}}$ to $l\\_{\\rm{pron}}^{\\rm{tr}}$ | *41.9(+0.6)* | *36.0(+0.8)* |\n\n\n\nWe will present the result in the new version of the paper.\n\n\n### References\n[1] Kamath A, Singh M, LeCun Y, et al. MDETR-modulated detection for end-to-end multi-modal understanding[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 1780-1790.\n\n", " \n### Question.4\n> I do not understand why distillation has to be done the way it is in \"clustering distillation\". What about loading the same data for Teacher and Student, and simply distill l\\_{noun} into l\\_{pronoun} (say in the same way as Equation 4)? 
This gets rid of introducing \"number of tasks\" and \"memory bank\", which greatly simplifies the proposed method.\n\nWe have tried this simplified method that directly distill $l\\_{\\rm{pron}}^{\\rm{tr}}$ into $l\\_{\\rm{noun}}^{\\rm{tr}}$, but it does not work well. The quantitative results are demonstrated below:\n\nTable 6: Comparison of different distillation methods.\n\n| Method | $\\rm{mAP}^{box}$ | $\\rm{mAP}^{mask}$ |\n|:--------------------------------:|:-----------:|:------------:|\n| TOIST | 41.3 | 35.2 |\n| distill from $l\\_{c\\_s}^j$ to $l\\_{\\rm{pron}}^{\\rm{tr}}$ | **43.9(+2.6)** | **38.8(+3.6)** |\n| distill from $l\\_{\\rm{noun}}^{\\rm{tr}}$ to $l\\_{\\rm{pron}}^{\\rm{tr}}$ | *41.9(+0.6)* | *36.0(+0.8)* |\n\nTherefore, we propose the clustering distillation method. And the overhead introduced by this method is only for maintaining the memory bank, doing the cluster selection and calculating the cluster loss. The process is clear and the extra cost is significantly smaller than the model backbone, while improving the performance markedly.\n\nWe will present the result in the new version of the paper.\n\n\n### References\n[1] Kamath A, Singh M, LeCun Y, et al. MDETR-modulated detection for end-to-end multi-modal understanding[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 1780-1790.\n\n[2] Sawatzky J, Souri Y, Grund C, et al. What object should i use?-task driven object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 7605-7614.\n\n[3] Liu Y, Ott M, Goyal N, et al. Roberta: A robustly optimized bert pretraining approach[J]. arXiv preprint arXiv:1907.11692, 2019.\n\n[4] Wolf T, Debut L, Sanh V, et al. Transformers: State-of-the-art natural language processing[C]//Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations. 2020: 38-45.\n\n", " \n### Question.1\n> L169 mentioned that the text encoder is \"pre-trained\". What data is this text encoder pre-trained on? This is the reason why the pronoun in Table 4 makes a difference, right? What would the performance be if this part is trained from scratch? Does the distillation still work?\n\nThe text encoder is a RoBERTa-base [3], which is pre-trained on five datasets: BookCorpus, English Wikipedia, CC-News, OpenWebText, and Stories. The implementation and weights are taken from HuggingFace [4].\n\nTo verify the effectiveness of our distillation, we have trained our model from scratch on the COCO-Tasks dataset.\nThe final results are:\n\n\n\nTable 4: The results of TOIST without pre-training.\n| Method | $\\rm{mAP}^{box}$ | $\\rm{mAP}^{mask}$ |\n|:--------------------------------:|:-----------:|:------------:|\n| pronoun input | 3.65 | 5.74 |\n| noun input | 11.19 | 12.67 |\n| noun-pronoun distillation | 7.43(+3.78) | 11.28(+5.54) |\n\n\n\nThe results without pre-training demonstrates that the proposed distillation can still work well even without a pre-trained text encoder.\n\nWe will add the results into the supplementary.\n\n### Question.2\n> The \"clustering distillation\" component requires the notion of \"number of tasks\", and a clustering algorithm is done for each task. However, I don't think the concept of \"task\" is well defined in this paper (e.g. in Section 3). Does \"task\" equal to verb, like \"dig hole\" is one task, and \"sit comfortably on\" is another? If so, how many training examples are in COCO-Tasks, and how many tasks? 
Dividing the former by the latter can give the reader a rough sense of how many training examples per task, and that will also inform how the number of clusters, K, ought to be chosen.\n\nYes, in this paper, every 'task' corresponds to a verb phrase like 'dig hole' or 'sit comfortably on'.\nAs mentioned in line 263-264, there are 14 tasks contained in the COCO-Tasks dataset, and for each task, there are 3600 train images and 900 test images.\nMore details about the dataset can be found in section 2 of the supplementary. \nFurthermore, we have included some statistics about the categories of the ground truth objects in each task in the Figure 1-2 in the supplementary, which show the diversity of the data distribution. \nWe also present exhaustive class-by-class quantitative results of our proposed method in Figure 3-8 in the supplementary. The corresponding analysis is marked in red, which demonstrates the effectiveness of our method\n\n\n### Question.3\n> Following the question above, Section 5.5 ablated the cluster number K. What about $n\\_{\\rm{task}}$? Does the distillation still work when $n\\_{\\rm{task}}$ = 1, i.e. throwing away the notion of \"task\"?\n\nWe thank R#aXr4 for this suggestion. We have included a new ablation study about the $n\\_{\\rm{task}}$. We show the object detection results on 5 tasks below:\n\nTable 5: Ablations for the task number $n\\_{\\rm{task}}$ on object detection.\n| Method | step on something | sit comfortably | place flowers | get potatoes out of fire | water plant |\n|:------------------------:|:-----------------:|:---------------:|:-------------:|:------------------------:|:-----------:|\n| TOIST w/o dis | 44.0 | 39.5 | 46.7 | 43.1 | 53.6 |\n| dis $n\\_{\\rm{task}}$ = 14 | 46.2(+2.2) | 39.6(+0.1) | 49.9(+3.2) | *47.1(+4.0)* | 54.5(+0.9) |\n| dis $n\\_{\\rm{task}}$ = 5 | *46.4(+2.4)* | *40.7(+1.2)* | **51.3(+4.6)** | 46.8(+3.7) | *54.6(+1.0)* |\n| dis $n\\_{\\rm{task}}$ = 1 | **47.0(+3.0)** | **42.1(+2.6)** | *50.8(+4.1)* | **47.4(+4.3)** | **55.2(+1.6)** |\n\nIn this table, the first line corresponds to the plain TOIST without distillation, and the other lines show the results of distillation with different $n\\_{\\rm{task}}$.\nThe results demonstrate that our proposed distillation still works for different $n\\_{\\rm{task}}$, even if $n\\_{\\rm{task}}$ = 1.\nAnd overall, smaller $n\\_{\\rm{task}}$ leads to better performance. We attribute this to the reduced problem complexity due to the less interaction between different tasks, which makes it easier to improve the ability of the model to understand verbs through noun-pronoun distillation.\n\nWe will add the results into the supplementary.\n\n", " We thank R#aXr4 for professional feedbacks. Here we address raised concerns one by one.\n\n### Weakness.1\n> The distillation version of TOIST may be a bit overly complicated, and I do not understand why it has to be designed the way it is.\n\nWe have tried the simplified method that directly distill $l\\_{\\rm{pron}}^{\\rm{tr}}$ into $l\\_{\\rm{noun}}^{\\rm{tr}}$ instead of clustering distillation, but it does not work well. 
Please see the response to question.4 for the quantitative results.\n\n### Weakness.2\n> The baseline [48] is more than 3 years old, and as someone who is not extremely familiar with COCO-Tasks, it is concerning that there is no follow-up works in 3 years, posing questions about the baseline and the dataset in general.\n\nIn order to demonstrate that TOIST is not only stronger than methods developed 3 years ago, we present a new baseline 'MDETR+GGNN'. It takes a strong noun reference understanding model published in ICCV 2021 [1] as the detector.\n\nTo leverage the knowledge in noun referring expression comprehension, we use the official pre-trained model of MDETR and then fine-tune it on the COCO-Task dataset. We use the class names of the ground truth objects in each image as the text input to detect these objects. Then we use the GGNN model [2] to infer which objects are preferred for a task. The results are shown below:\n\n\n\nTable 1: Comparison of the proposed method to 'MDETR+GGNN' baseline on the COCO-Tasks dataset.\n| Method | $\\rm{mAP}^{box}$ | $\\rm{mAP}^{mask}$ |\n|:--------------------------------:|:-----------:|:------------:|\n| MDETR+GGNN | 36.8 | 30.3 |\n| TOIST | *41.3(+4.5)* | *35.2(+4.9)* |\n| TOIST w/ distillation | **43.9(+7.1)** | **38.8(+8.5)** |\n\nThe detection and segmentation results for each class are:\n\nTable 2: The object detection results of 'MDETR+GGNN' and the proposed method for each task.\n| Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | mean |\n|:---------------------:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\n| MDETR+GGNN | *44.3* | 36.5 | 45.2 | 28.6 | 44.0 | **27.6** | 35.9 | 20.7 | **34.7** | *46.3* | 27.8 | 41.5 | 46.5 | 36.2 | 36.8 |\n| TOIST | 44.0 | *39.5* | *46.7* | *43.1* | *53.6* | 23.5 | *52.8* | *21.3* | 32.0 | *46.3* | *33.1* | *41.7* | *48.1* | *52.9* | *41.3* |\n| TOIST w/ distillation | **46.2** | **39.6** | **49.9** | **47.1** | **54.5** | *26.7* | **57.3** | **23.1** | *33.1* | **49.9** | **35.4** | **44.7** | **52.1** | **54.9** | **43.9** |\n\nTable 3: The instance segmentation results of 'MDETR+GGNN' and the proposed method for each task.\n| Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | mean |\n|:---------------------:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\n| MDETR+GGNN | 36.9 | 31.3 | 43.6 | 17.1 | 42.9 | *20.1* | 19.9 | *18.7* | *24.5* | *45.5* | 23.1 | *30.9* | 46.2 | 24.0 | 30.3 |\n| TOIST | *37.0* | *34.4* | *44.7* | *34.2* | *51.3* | 18.6 | *40.5* | 17.1 | 23.4 | 43.8 | *29.3* | 29.9 | *46.6* | *42.4* | *35.2* |\n| TOIST w/ distillation | **40.8** | **36.5** | **48.9** | **37.8** | **53.4** | **22.1** | **44.4** | **20.3** | **26.9** | **48.1** | **31.8** | **34.8** | **51.5** | **46.3** | **38.8** |\n\nNote that this baseline is also tested with privileged noun ground truth, but our distillation method only use the priviledged knowledge during training. Nevertheless, our proposed method has a significant performance improvement over this strong baseline.\n\nWe will add the results into the supplementary.\n\nAs for the dataset, we have included some statistics in the Figure 1-8 in the supplementary (we mark the corresponding analysis in the supplementary in red). 
These statistics demonstrate the diversity and complexity of the COCO-Tasks dataset.\n\n\n", " \n\n### Weakness.3\n> Compared with the state-of-the-art methods, it is not fair to compare the methods extracting features with different backbones. It is interesting whether the 'TOIST w/ distillation' in table 2 can still surpass the baseline with the same backbone, i.e., 'mdetr+GGNN'?\n\nWe thank R#mBEm for this suggestion. We have included the new baseline 'MDETR+GGNN'.\n\nTo leverage the knowledge in noun referring expression comprehension, we use the official pre-trained model of MDETR and then fine-tune it on the COCO-Task dataset. We use the class names of the ground truth objects in each image as the text input to detect these objects. Then we use the GGNN model [2] to infer which objects are preferred for a task. The results are shown below:\n\nTable 2: Comparison of the proposed method to 'MDETR+GGNN' baseline on the COCO-Tasks dataset.\n| Method | $\\rm{mAP}^{box}$ | $\\rm{mAP}^{mask}$ |\n|:--------------------------------:|:-----------:|:------------:|\n| MDETR+GGNN | 36.8 | 30.3 |\n| TOIST | *41.3(+4.5)* | *35.2(+4.9)* |\n| TOIST w/ distillation | **43.9(+7.1)** | **38.8(+8.5)** |\n\nThe detection and segmentation results for each class are:\n\nTable 3: The object detection results of 'MDETR+GGNN' and the proposed method for each task.\n| Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | mean |\n|:---------------------:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\n| MDETR+GGNN | *44.3* | 36.5 | 45.2 | 28.6 | 44.0 | **27.6** | 35.9 | 20.7 | **34.7** | *46.3* | 27.8 | 41.5 | 46.5 | 36.2 | 36.8 |\n| TOIST | 44.0 | *39.5* | *46.7* | *43.1* | *53.6* | 23.5 | *52.8* | *21.3* | 32.0 | *46.3* | *33.1* | *41.7* | *48.1* | *52.9* | *41.3* |\n| TOIST w/ distillation | **46.2** | **39.6** | **49.9** | **47.1** | **54.5** | *26.7* | **57.3** | **23.1** | *33.1* | **49.9** | **35.4** | **44.7** | **52.1** | **54.9** | **43.9** |\n\nTable 4: The instance segmentation results of 'MDETR+GGNN' and the proposed method for each task.\n| Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | mean |\n|:---------------------:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\n| MDETR+GGNN | 36.9 | 31.3 | 43.6 | 17.1 | 42.9 | *20.1* | 19.9 | *18.7* | *24.5* | *45.5* | 23.1 | *30.9* | 46.2 | 24.0 | 30.3 |\n| TOIST | *37.0* | *34.4* | *44.7* | *34.2* | *51.3* | 18.6 | *40.5* | 17.1 | 23.4 | 43.8 | *29.3* | 29.9 | *46.6* | *42.4* | *35.2* |\n| TOIST w/ distillation | **40.8** | **36.5** | **48.9** | **37.8** | **53.4** | **22.1** | **44.4** | **20.3** | **26.9** | **48.1** | **31.8** | **34.8** | **51.5** | **46.3** | **38.8** |\n\nNote that this baseline is also tested with privileged noun ground truth, but our distillation method only use the privileged knowledge during training. Nevertheless, our proposed method still has a significant performance improvement over this strong baseline.\n\nWe will add the results into the supplementary.\n\n### Question.1\n> The tackled affordance recognition task is not a well-explored research topic; hence, the compared baseline method [48] is not advanced. In order to demonstrate the performance gain of the proposed TOIST, it is better to train the model without using the extra training data and comparing it with the baseline of an advanced backbone network, for example, 'mdetr+GGNN.' 
Please see [Weaknesses] for reference.\n\nPlease see the responses to weakness.2 and weakness.3.\n\n", " We thank R#mBEm for professional feedbacks. Here we address raised concerns one by one.\n\n### Weakness.1\n> The contribution to upgrading the task-oriented detection into task-oriented instance segmentation upon the 'existing' transformer model is weak.\n\nWe would like to respond to this contribution concern from three perspectives:\n\n(1) In order to investigate whether our noun-pronoun distillation training framework is a standalone technical contribution without pre-trained models, we present experiments without using pre-trained models. The quantitative results can be found in the Weakness.2 part.\n\n(2) In order to investigate whether our TOIST architecture is a standalone technical contribution by marginalizing the benefits brought by pre-trained models, we present another baseline 'MDETR+GGNN'. The quantitative results can be found in the Weakness.3 part.\n\n(3) Finally, except for technical contributions, we think this study has a methodological contribution: we propose the new scheme of using verb-pronoun for task oriented detection, and using pre-trained transformer models is its implementation. We reformulate the problem into a verb reference understanding one so that the noun-pronoun distillation becomes possible.\n\n### Weakness.2\n> Though the existing method [48] is a two-stage model, yet the proposed TOIST needs to separately fine-turn the pre-trained student and teacher TOIST models with a final knowledge distilling. It seems that the pre-trained models employ the extra data for training, and hence the extra training data and the extra training procedure make the advantage of the claimed one-stage model somewhat weak.\n\nTo verify the effectiveness of our distillation, we have trained our model from scratch on the COCO-Tasks dataset.\nThe final results are:\n\nTable 1: The results of TOIST without pre-training.\n| Method | $\\rm{mAP}^{box}$ | $\\rm{mAP}^{mask}$ |\n|:--------------------------------:|:-----------:|:------------:|\n| pronoun input | 3.65 | 5.74 |\n| noun input | 11.19 | 12.67 |\n| noun-pronoun distillation | 7.43(+3.78) | 11.28(+5.54) |\n\nThe results demonstrates that the proposed distillation can still work well even without pre-training.\n\nWe will add the results into the supplementary.\n", " \n### Question.2\n> It would be good to see an analysis of what verbs are associated with what objects (comparing ground truth and model predictions). Something like a distribution plot that the verb \"sit on\" was associated with \"chair\" in 10 out of 20 times, it was associated with \"table\" in 5 out of 20 times etc. That will indicate if the model fails for any verbs more frequently than others.\n\nWe thank R#u6fT for this suggestion. We have included some statistics in the Figure 1-8 in the supplementary for analysis (we mark the corresponding analysis in section 4 of the supplementary in red). This analysis sheds more light on noun-pronoun distillation. As demonstrated, we reach the conclusion that the proposed distillation method makes TOIST more capable of filtering out objects that do not afford the tasks. And the effect of the distillation on different categories is influenced by the proportion of categories in the tasks. When a few classes take a large portion of selected objects in a certain task, the effect of the distillation on these classes is good, while that on others is poor. 
If the number of categories in the whole task is distributed more evenly, the distillation can boost performance for most categories.\n\n\n### Question.3\n> How is the score ${\\hat s}\\_{i}$ is used in loss function in equation 3? Is it used in localization loss terms or segmentation loss terms? Or is it used in some other way?\n\n${\\hat s}\\_{i}$ is used in loss terms in an indirect way through the predicted logits ${\\hat{\\mathbf{g}}\\_{i}}$.\n\nThe preference score ${\\hat s}\\_{i}$ is defined as\n${\\hat s}\\_{i} = 1 - \\frac{\\exp \\left(\\hat g\\_{n\\_{\\rm{max}}}^i\\right)}{\\sum\\_{j=1}^{n\\_{\\rm{max}}} \\exp \\left(\\hat g\\_{j}^i\\right)}$, in which the predicted logits ${\\hat{\\mathbf{g}}\\_{i}} = [\\hat g\\_1^i, \\ldots, \\hat g\\_{n\\_{\\rm{max}}}^i]$ is constrained by the soft-token prediction loss $\\mathcal{L}\\_{\\rm{token}}$. And $\\mathcal{L}\\_{\\rm{token}}$ is defined as:\n$\\mathcal{L}\\_{\\rm{token}}({{\\mathbf{p}}^{\\rm{span}}\\_{i}}, {\\hat{\\mathbf{g}}\\_{\\sigma\\_0(i)}}) = -\\sum\\_{j}^{n\\_{\\rm{max}}} p^{\\rm{span}}\\_{i,j} \\log\\frac{\\exp \\left(\\hat g\\_{j}^{\\sigma\\_0(i)}\\right)}{\\sum\\_{l=1}^{n\\_{\\rm{max}}} \\exp \\left(\\hat g\\_{l}^{\\sigma\\_0(i)}\\right)}$. More details about loss functions can be found in section 1.2 of the supplementary.\n\n### Question.4\n> How is default value of $n\\_{\\rm{max}}$ = 256 decided? What is the value of $n\\_{\\rm{pred}}$? I am assuming it should be greater than the total number of objects in CoCo dataset. Is that the case?\n\nWe follow MDETR [1] to set the default value of $n\\_{\\rm{max}}$ to be 256.\nThe value of $n\\_{\\rm{pred}}$ is 100, which is greater than the maximum number of objects in all images of the COCO-Tasks dataset.\n\n\n### Question.5\n> Ablations for including vs not including the loss terms $\\mathcal{L}\\_{\\rm{token}}$ and $\\mathcal{L}\\_{\\rm{align}}$?\n\nPlease see the response to weakness.2.\n\n### Question.6\n> It is not clear what $\\mathfrak{S}\\_{{n\\_{\\rm{pred}}}}$ means in line 245. Please define what it stands for.\n\n$\\mathfrak{S}\\_{{n\\_{\\rm{pred}}}}$ is the set of all permutations of $n\\_{\\rm{pred}}$ elements.\n\nFor instance, all permutations of the set $S = \\{1,2,3\\}$ can be written as:\n$\\sigma\\_1=\\left(\\begin{array}{lll} 1 & 2 & 3 \\\\\\\\ 1 & 2 & 3 \\end{array}\\right)$, $\\sigma\\_2=\\left(\\begin{array}{lll} 1 & 2 & 3 \\\\\\\\ 1 & 3 & 2 \\end{array}\\right)$, $\\sigma\\_3=\\left(\\begin{array}{lll} 1 & 2 & 3 \\\\\\\\ 2 & 1 & 3 \\end{array}\\right)$, $\\sigma\\_4=\\left(\\begin{array}{lll} 1 & 2 & 3 \\\\\\\\ 2 & 3 & 1 \\end{array}\\right)$, $\\sigma\\_5=\\left(\\begin{array}{lll} 1 & 2 & 3 \\\\\\\\ 3 & 1 & 2 \\end{array}\\right)$, $\\sigma\\_6=\\left(\\begin{array}{lll} 1 & 2 & 3 \\\\\\\\ 3 & 2 & 1 \\end{array}\\right)$.\nHere, $\\sigma\\_4$ satisfies $\\sigma\\_4(1)=2$, $\\sigma\\_4(2)=3$ and $\\sigma\\_4(3)=1$. The same goes for others. And then $\\mathfrak{S}\\_{3} = \\\\{\\sigma\\_1, \\sigma\\_2, \\dots,\\sigma\\_6\\\\}$.\n", " We thank R#u6fT for professional feedbacks. Here we address raised concerns one by one.\n\n\n### Weakness.1\n> The memory bank is a queue and is updated in a FIFO fashion. However, this might lead to removal of a noun feature not adequately represented in the rest of the list. Wouldn't it make more sense to update the queue by removing elements in a smarter way to reduce occurrence of similar features? 
For example, for any new object feature remove one of the past object features whose representation is closest to it.\n\nWe thank R#u6fT for this suggestion. We have included a new ablation study that demonstrates the impact of this new memory updating scheme, as shown below:\n\nTable 1: Comparison of different updating methods for the memory bank in the distillation.\n\n| Method | $\\rm{mAP}^{box}$ | $\\rm{mAP}^{mask}$ |\n|:--------------------------------:|:-----------:|:------------:|\n| TOIST w/o distillation | 41.3 | 35.2 |\n| FIFO | 43.9(+2.6) | 38.8(+3.6) |\n| remove closest one | **44.1(+2.8)** | **39.0(+3.8)** |\n\nThis improvement brings +2.8% $\\rm{mAP}^{box}$ and +3.8% $\\rm{mAP}^{mask}$ under the noun-pronoun distillation setting, achieving an updated SOTA performance.\n\nWe will present the results in the new version of the paper.\n\n### Weakness.2\n> Some ablation studies regarding why some loss terms are useful are missing.\n\nWe thank R#u6fT for this suggestion. We provided ablation studies in which we remove $\\mathcal{L}\\_{\\rm{align}}$ or $\\mathcal{L}\\_{\\rm{token}}$ or both. The quantitative results are demonstrated below:\n\nTable 2: Ablations for the soft-token prediction loss and the contrastive alignment loss.\n\n| Method | $\\rm{mAP}^{box}$ | $\\rm{mAP}^{mask}$ |\n|:--------------------------------------:|:-----------:|:------------:|\n| TOIST | **41.3** | **35.2** |\n| TOIST w/o $\\mathcal{L}\\_{\\rm{token}}$ | 40.1(-1.2) | 34.8(-0.4) |\n| TOIST w/o $\\mathcal{L}\\_{\\rm{align}}$ | *41.1(-0.2)* | *35.1(-0.1)* |\n| TOIST w/o $\\mathcal{L}\\_{\\rm{token}}$ and $\\mathcal{L}\\_{\\rm{align}}$ | 23.4(-17.9) | 20.7(-14.5) |\n\nIt shows that removing $\\mathcal{L}\\_{\\rm{token}}$ brings a performance drop of -1.2% $\\rm{mAP}^{box}$ and -0.4 $\\rm{mAP}^{mask}$, because the association between the matched object predictions and the task descriptions is weakened.\nRemoving $\\mathcal{L}\\_{\\rm{align}}$ brings a performance drop of -0.2% $\\rm{mAP}^{box}$ and -0.1% $\\rm{mAP}^{mask}$, because the features of an object and its corresponding text features cannot be explicitly constrained to be closer.\nInterestingly, removing both of them brings a significant performance drop of -17.9% $\\rm{mAP}^{box}$ and -14.5% $\\rm{mAP}^{mask}$, implying the two loss terms enhance the effect of each other to make TOIST understand verb reference better.\n\nWe will add the results into the supplementary.\n\n### Question.1\n> How many unique pronouns are used in the captions to train the student model? How many objects in Coco Task dataset? Is the coco task dataset captions modified in any way (like replacing objects with pronouns) to train the student model?\n\nBefore we answer questions, we use an example to illustrate how we generate text inputs:\n\nLet us consider a task whose caption is 'dig hole'.\nWe use 'dig hole with pronoun' as the text input of our plain TOIST or the student TOIST, where the 'pronoun' (like 'something') is the same for all the data. We use 'dig hole with noun' for the teacher TOIST, where the 'noun' is the name of the ground truth object (like 'skateboard') that changes with the input image.\n\n\nThen we answer the questions:\n\nFirstly, we use only one unique pronoun for all verb phrases to train the student model. 
This unique pronoun can be 'something', 'it', 'them' or 'abcd', as demonstrated in Table.4.\n\nSecondly, the COCO-Tasks dataset contains a total of 65797 objects spanning 49 categories.\n\nThirdly, the COCO-Tasks dataset provides captions for verb phrases separately, and we concatenate the phrase with the selected pronoun to train the student model. So there is no need to 'replacing objects with pronouns'.\n\n", " Authors propose a novel way to do task oriented object detection. Dataset used is COCO Task. Author modify the backbone used in DETR model to feed the transformer encoder with textual features along with image features so that better contextualized representations are obtained. The loss function is optimized to perform accurate bounding box localization along with instance segmentation. The paper leverages knowledge from a model trained with verb-noun captions (using the ground truth nouns) to train a student model with verb-pronoun caption. This way the model still remains noun-agnostic during inference time. But it can detect the noun from verb-pronoun if it was trained properly. Strength:\n 1. Authors propose a novel way for task oriented detection by introducing verb pronoun captions and leveraging knowledge distillation to learn from noun ground truth.\n 2. The paper presents state of art results on the said task.\n 3. Ablation provided to show utility of the distillation components which is one of the novelties of the paper.\n\nWeakness: \n1. The memory bank is a queue and is updated in a FIFO fashion. However, this might lead to removal of a noun feature not adequately represented in the rest of the list. Wouldn't it make more sense to update the queue by removing elements in a smarter way to reduce occurrence of similar features? For example, for any new object feature remove one of the past object features whose representation is closest to it.\n2. Some ablation studies regarding why some loss terms are useful are missing.\n 1. How many unique pronouns are used in the captions to train the student model? How many objects in Coco Task dataset? Is the coco task dataset captions modified in any way (like replacing objects with pronouns) to train the student model?\n2. It would be good to see an analysis of what verbs are associated with what objects (comparing ground truth and model predictions). Something like a distribution plot that the verb \"sit on\" was associated with \"chair\" in 10 out of 20 times, it was associated with \"table\" in 5 out of 20 times etc. That will indicate if the model fails for any verbs more frequently than others.\n3. How is the score s_i is used in loss function in equation 3? Is it used in localization loss terms or segmentation loss terms? Or is it used in some other way?\n4. How is default value of n_max = 256 decided? What is the value of npred? I am assuming it should be greater than the total number of objects in CoCo dataset. Is that the case?\n5. Ablations for including vs not including the loss terms L_token and L_align?\n6. It is not clear what G_npred means in line 245. Please define what it stands for.\n7. L_match was a loss term in DETR model to encourage matching the class ad bounding boxes of ground truth and prediction. \nHowever it is not included in loss functions here (equations 3 and 9). Why is L_match not used in loss function? In line 247, authors mention KL divergence is also a part of L_match. However, the original DETR paper doesnt mention that.\n8. 
Instead of minimizing the distance between l_pron_tr and l_cs_j in equation 4, why can't one minimize the distance between l_pron_tr and l_noun_tr for knowledge distillation? Limitations are briefly discussed in \"Conclusion\" section.", " In order to handle the affordance recognition task, this paper proposes a Task-Oriented Instance Segmentation Transformer (TOIST) to find objects that best afford an action indicated by verbs. The TOIST is a teacher-student knowledge distillation model, and such a model leverages the referring expression comprehension algorithm as the teacher module for guiding the student module to learn the noun-pronoun transformation. The experiments show the positive effect of the knowledge distillation mechanism. [Strengths] \n+ The idea of utilizing the referring expression comprehension algorithm as the teacher module is interesting.\n+ The manuscript is well organized and has several interesting analyses.\n\n[Weaknesses] \n- The contribution to upgrading the task-oriented detection into task-oriented instance segmentation upon the `existing’ transformer model is weak.\n- Though the existing method [48] is a two-stage model, yet the proposed TOIST needs to separately fine-turn the pre-trained student and teacher TOIST models with a final knowledge distilling. It seems that the pre-trained models employ the extra data for training, and hence the extra training data and the extra training procedure make the advantage of the claimed one-stage model somewhat weak.\n- Compared with the state-of-the-art methods, it is not fair to compare the methods extracting features with different backbones. It is interesting whether the ‘TOIST w/ distillation’ in table 2 can still surpass the baseline with the same backbone, i.e., ‘mdetr+GGNN’?\n The tackled affordance recognition task is not a well-explored research topic; hence, the compared baseline method [48] is not advanced. In order to demonstrate the performance gain of the proposed TOIST, it is better to train the model without using the extra training data and comparing it with the baseline of an advanced backbone network, for example, ‘mdetr+GGNN.’ Please see [Weaknesses] for reference. The authors described the limitations and potential negative societal impact of their work.", " This paper aims at task oriented detection. Instead of specifying the type of object to detect, this problem requires detection of the objects that best fits the task description. The authors proposed a largely Transformer based model, TOIST, that outperformed the previous state-of-the-art. They then proposed two distillation techniques to distill the type of object into the student model, and performance is further boosted. Experiments are performed on COCO-Tasks. \n\nPOST-REBUTTAL UPDATE:\n\nI have read the authors' rebuttal. The authors added a lot of experiments to justify their design and showcase their big improvement over the previous baseline. However I don't think my non-result related questions are well-addressed, such as \"posing questions about the baseline and the dataset in general\". Overall I decided to slightly increase my score from 4 to 5. Regardless of the final result, I suggest the authors to simplify the proposed approach if possible, e.g. throwing away the notion of \"task\", which seems to deliver the best performance according to the new experiment. 
Strengths:\n- The base model, TOIST, is fairly well motivated and well described.\n- The performance gain over the previous state-of-the-art seems significant.\n- The idea of distilling the object type into \"something\", i.e. distilling noun into pronoun, is interesting and novel in my opinion.\n\nWeaknesses:\n- The distillation version of TOIST may be a bit overly complicated, and I do not understand why it has to be designed the way it is.\n- The baseline [48] is more than 3 years old, and as someone who is not extremely familiar with COCO-Tasks, it is concerning that there is no follow-up works in 3 years, posing questions about the baseline and the dataset in general.\n\nOverall, I think originality is fairly good; quality, clarity, significance is medium. 1. L169 mentioned that the text encoder is \"pre-trained\". What data is this text encoder pre-trained on? This is the reason why the pronoun in Table 4 makes a difference, right? What would the performance be if this part is trained from scratch? Does the distillation still work?\n\n2. The \"clustering distillation\" component requires the notion of \"number of tasks\", and a clustering algorithm is done for each task. However, I don't think the concept of \"task\" is well defined in this paper (e.g. in Section 3). Does \"task\" equal to verb, like \"dig hole\" is one task, and \"sit comfortably on\" is another? If so, how many training examples are in COCO-Tasks, and how many tasks? Dividing the former by the latter can give the reader a rough sense of how many training examples per task, and that will also inform how the number of clusters, K, ought to be chosen. \n\n3. Following the question above, Section 5.5 ablated the cluster number K. What about n_{task}? Does the distillation still work when n_{task} = 1, i.e. throwing away the notion of \"task\"?\n\n4. I do not understand why distillation has to be done the way it is in \"clustering distillation\". What about loading the same data for Teacher and Student, and simply distill l_{noun} into l_{pronoun} (say in the same way as Equation 4)? This gets rid of introducing \"number of tasks\" and \"memory bank\", which greatly simplifies the proposed method. The authors used 3 lines in Conclusion to talk about limitations. I feel it can be expanded to talk about some of the angles in my Questions section above." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "O2Is6TpkmUy", "5js2NmH6XOL", "jqXqWHtwgFj", "htj_uPRyp7G", "PDgf8lpXW4", "PDgf8lpXW4", "jqXqWHtwgFj", "jqXqWHtwgFj", "jqXqWHtwgFj", "htj_uPRyp7G", "htj_uPRyp7G", "PDgf8lpXW4", "PDgf8lpXW4", "nips_2022_V91cZ9i_sV3", "nips_2022_V91cZ9i_sV3", "nips_2022_V91cZ9i_sV3" ]
nips_2022_QfI_usBXNCM
Cross-Image Context for Single Image Inpainting
Visual context is of crucial importance for image inpainting. The contextual information captures the appearance and semantic correlation between the image regions, helping to propagate the information of the complete regions for reasoning about the content of the corrupted regions. Many inpainting methods compute the visual context based on the regions within the single image. In this paper, we propose the Cross-Image Context Memory (CICM) for learning and using the cross-image context to recover the corrupted regions. CICM consists of multiple sets of the cross-image representations learned from the image regions with different visual patterns. The regional representations are learned across different images, thus providing richer context that benefits the inpainting task. The experimental results demonstrate the effectiveness and generalization of CICM, which achieves state-of-the-art performances on various datasets for single image inpainting.
Accept
The paper discusses how to use external information for inpainting. Reviewers appreciated the idea but raised concerns regarding limited novelty, use of the proposed method for inpainting, baselines being evaluated incorrectly, and missing ablations. The rebuttal was able to address most of the concerns and reviewers remained positive. AC concurs and doesn't find reasons to overturn a unanimous majority recommendation.
train
[ "3cChXTPwLTf", "fORAdyxYPf", "sogNU0IDIVx", "r0C6XRdGd9u", "0EORZ7yIbfO", "yduObXg9QjX", "4NxQ89GRH13", "23PdKDoPI-", "ZcKljQmMJfQ", "DGAbWT1Hc-m", "WmdxJFo8zCH", "XFPUgEQbFj7", "thgao51_GLk", "nyVyW5uJ_Fb", "M4r6m263It", "n9R0tuK4p-L", "kKTX5jqBzJM", "IGCEEXu0h5J", "Rk8Pv9so5K", "_sTOGyFZGhO", "w3Tk9zkOuG", "ADj0bvZt3Zt" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 3YLw,\n\nThanks for your review again. As the deadline for the authors' response is approaching, we sincerely request your comment on our primary response. This will definitely give us a valuable chance to address the questions unsolved.\n\nBest,\n\nAuthors of Paper ID 465", " Dear Reviewer 8FQc,\n\nThanks for your review again. As the deadline for the authors' response is approaching, we sincerely request your comment on our primary response. This will definitely give us a valuable chance to address the questions unsolved.\n\nBest,\n\nAuthors of Paper ID 465", " Dear Reviewer Kgu7,\n\nThanks for your review again. As the deadline for the authors' response is approaching, we sincerely request your comment on our primary response. This will definitely give us a valuable chance to address the questions unsolved.\n\nBest,\n\nAuthors of Paper ID 465", " Dear Reviewer 6dJj,\n\nThanks for your review again. As the deadline for the authors' response is approaching, we sincerely request your comment on our primary response. This will definitely give us a valuable chance to address the questions unsolved.\n\nBest,\n\nAuthors of Paper ID 465", " Dear Reviewer GGCD,\n\nThank you again for your review. We are pleased to see that the questions raised by you are solved.\n\nBest,\n\nAuthors of Paper ID 465", " I believe that the proposed work describes a very interesting algorithm based on CICM, yet unseen from existing works. As addressed by the authors, the CICM is fast to search into and effective to produce higher quality generation by utilizing compact cross-image information unavailable within the input image. The quality gains are quite clear without much sacrificing the efficiency. My final recommendation is still to strongly support to accept the paper.", " Dear Reviewer 3YLw,\n\nWe thank you again for your valuable comments, which significantly help us to polish our paper. We are looking to discussing with you the questions that are addressed unsatisfactorily.\n\nBest,\n\nAuthors of Paper ID 465", " Dear Reviewer GGCD,\n\nWe thank you again for your valuable comments, which significantly help us to polish our paper. We are looking to discussing with you the questions that are addressed unsatisfactorily.\n\nBest,\n\nAuthors of Paper ID 465", " Dear Reviewer 8FQc,\n\nWe thank you again for your valuable comments, which significantly help us to polish our paper. We are looking to discussing with you the questions that are addressed unsatisfactorily.\n\nBest,\n\nAuthors of Paper ID 465", " Dear Reviewer 6dJj,\n\nWe thank you again for your valuable comments, which significantly help us to polish our paper. We are looking to discussing with you the questions that are addressed unsatisfactorily.\n\nBest,\n\nAuthors of Paper ID 465", " Dear Reviewer Kgu7,\n\nWe thank you again for your valuable comments, which significantly help us to polish our paper. We are looking to discussing with you the questions that are addressed unsatisfactorily.\n\nBest,\n\nAuthors of Paper ID 465", " **1. The resolution of the regional features and the number of layers adopting the CICM are also vital settings that affect the image inpainting performance.**\n\nWe are sorry for missing these results. We have experimented with changing the resolution of regional features and the number of layers adopting the CICM on Places2 dataset by using a UNet with 15 convolutional layers as the backbone. 
We provide these results in the two tables below.\n\n||||||||Table 1|||||||||\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|||PSNR↑|||SSIM↑|||L1↓|||LPIPS↓|||FID↓||\n||0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|\n|2×2|28.92|21.04|18.89|0.913|0.794|0.618|1.104|3.589|6.817|0.0883|0.2210|0.3412|19.01|52.38|86.39|\n|4×4|29.11|21.48|19.07|0.917|0.798|0.620|1.098|3.532|6.541|0.0845|0.2137|0.3349|18.31|48.38|83.11|\n|8×8|29.21|21.73|19.21|0.921|0.805|0.626|1.079|3.478|6.375|0.0829|0.2045|0.3284|17.21|45.17|78.49|\n|16×16|29.14|21.66|19.08|0.920|0.804|0.622|1.083|3.482|6.488|0.0841|0.2075|0.3331|17.95|47.28|82.69|\n\n||||||||Table 2|||||||||\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|||PSNR↑|||SSIM↑|||L1↓|||LPIPS↓|||FID↓||\n||0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|\n|1 layer|29.21|21.73|19.21|0.921|0.805|0.626|1.079|3.478|6.375|0.0829|0.2045|0.3284|17.21|45.17|78.49|\n|2 layers|29.74|22.00|19.72|0.927|0.809|0.651|0.993|3.385|5.834|0.0859|0.1972|0.3175|16.98|43.28|67.39|\n|3 layers|30.02|22.57|20.21|0.934|0.816|0.677|0.921|3.321|5.516|0.0885|0.1880|0.3086|16.62|39.02|57.62|\n|4 layers|30.03|22.68|20.55|0.935|0.815|0.689|0.902|3.315|5.370|0.0881|0.1853|0.3011|16.48|37.59|55.37|\n\nIn Table 1, we select the resolution of regional features from the set {2x2, 4x4, 8x8, and 16x16} and report the performances. Here, we use the backbone UNet to output the convolutional map with the lowest resolution. By subdividing the convolutional map into a set of 8x8 regional features, we achieve the best results in Table 1.\n\nIn Table 2, we equip CICMs to different convolutional layers (1,2,3,4) and report the results. Here, 1 means the deepest layer that outputs the convolutional feature map with the lowest resolution. In each case, we always equip the deeper layers with CICMs. By adding CICMs to more layers, we achieve better results. We keep using CICM at 4 layers in Tables 4 and 5 in the paper.\n\t\n**2. In line 74, symbols of H and W are used to denote the height and width of the image and the feature, but they are different. In line 117, what is the value of the momentum factor? In line 168, “four groups” is “three groups”.**\n\nThanks for your correction. In line 74, we let the height and width of the image and the feature be the same, to simplify the notations in this paper. In line 117, the momentum factor is 0.5. In line 169, we have corrected “four groups” to “three groups”.\n\n**3. Five losses are used to train the model. Is there any balance for their contributions? Since there are different terms in each loss, why are the magnitude of these loss values the same as in Fig. 5 in the supplementary?**\n\nThanks. Different balance coefficients are used for the losses in Eq. 9 (L_inpaint 1.0, L_adv 0.1, L_ratio 1.0, L_inter 20, and L_intra 0.5). In Figure 5 of the supplementary file, we have multiplied the balance coefficients by the corresponding losses, which thus have similar magnitudes in the figure.\n\n**4. How are the anchor features and feature sets initialized?**\n\nIn our implementation, we use a warm-up strategy to pre-train the backbone network for 50K iterations. The encoder of the pre-trained backbone is used to compute the regional features of different images. We conduct k-means clustering on the regional features, computing the cluster centers as the initial anchor features. 
The regional features, which are nearest to the initial anchor features, are selected as the initial cross-image features in different sets of CICM. We will add this detail to the supplementary file.", " **1. How is the CICM initialized?**\n\nIn our implementation, we use a warm-up strategy to pre-train the backbone network for 50K iterations. The encoder of the pre-trained backbone is used to compute the regional features of different images. We conduct k-means clustering on the regional features, computing the cluster centers as the initial anchor features. The regional features, which are nearest to the initial anchor features, are selected as the initial cross-image features in different sets of CICM. We will add this detail to the supplementary file.\n\n**2. What is the backbone network of the proposed model in Tables 4 and 5?**\n\nIn Tables 4 and 5, we use a UNet with 15 convolutional layers as the backbone. We have clarified this in the revised paper (see lines 254-255).\n\n**3. Would the quality enhancement by applying the CICM scheme be outstanding enough compared to the complexity increase (i.e., trade-off)?**\n\nThanks for your comment. The complexity increase of CICM has been shown in Table 5 of the supplementary file (at most 2M Parameters, 3GB Memory, 2G FLOPs). At these costs, we achieve very consistent performance gains by using CICM (up to 1.845 PSNR, 0.0355 SSIM, 0.672 L1, 0.0491 LPIPS, and 25.20 FID on Places2; 1.857 PSNR, 0.0309 SSIM, 0.544 L1, 0.0221 LPIPS and 9.645 FID on CelebA). Please also see Table 3 of the paper for the performance gains. We have discussed the trade-off between performance and efficiency in Section 3.2 of the supplementary file. One may use this discussion to consider the trade-off between performance and efficiency.\n\n**4. The search into CICM is a bottleneck of the model performance.**\n\nPlease note that the search into CICM is fast. First, we use the anchor features to select a feature set (see Eq. 4). Next, we use all of the cross-image features in the selected set to augment the regional feature, where the augmentation can be implemented as the matrix multiplication and accelerated by GPU. In our implementation, the search into CICM only occupies about 2% of the testing time.", " **1. The core contribution seems not strong enough. What is the unique design to make CICM suitable for inpainting?**\n\nOur method of learning the cross-image context in CICM is non-trivial. The existing methods use the single-image context or class-specific information to recover the corrupted images, which however lacks visual information for computing the context. CICM stores the cross-image features learned from different images. It takes the advantage of more useful regions across images, providing richer context for recovering a region. CICM allows the inpainting to benefit from not only a kind of specific context. As discussed in Section 5.2 “Extensive Evaluation on Semantic Inpainting” of the paper, CICM stores the cross-image context learned from RGB images and segmentation results, which improve the results on the semantic inpainting task.\n\nOur contribution not only lies in the methodology ground but also in the extensive thinking, evaluation, and discussion of the CICM. CICM is a separable component alongside the inpainting network. Is it possible to train CICM with a network and apply it to another network? Is it also possible to train CICM on a dataset and apply it to the inpainting on another dataset? 
These questions are highly correlated to the generalization of CICM, but they are answered by few literatures. In Section 3.3 of the supplementary file, we answer these questions by evaluating CICM in the cross-model and -dataset scenarios. The cross-model CICM achieves better results than the methods without CICM. It demonstrates the capacity of CICM for transferring the learned context between different models. Yet, CICM degrades the performances in the cross-dataset scenario. We have provided our explanation of the degradation, pointing out a direction for improving the generalization of cross-image context.\n\n**2. The visual quality improvement is not obvious. The results are not as attractive as LaMa and RePaint.**\n\nMore visualization results are provided in the supplementary file. Please zoom in on the results for a better visual quality.\n\nCICM is a component with generalized cross-image context for assisting different inpainting networks, rather than a stand-alone network that surpasses all of the existing networks. The generalization of CICM has been discussed in Section 5.2 “Combination with Different Inpainting Networks”. We compare the latest networks (RFR. JPG, and MISF) with/without CICM. CICM consistently improve these networks on Places2 and CelebA datasets.\n\nWe also compare LaMa and RePaint with/without CICM. We use the pre-trained parameters and fine-tune the networks with CICM. Again, CICM helps LaMa and RePaint to achieve better results. Please see the results below.\n||||||||Places2|||||||||\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|||PSNR↑|||SSIM↑|||L1↓|||LPIPS↓|||FID↓||\n||0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|\n|LaMa|31.64|24.87|21.13|0.952|0.846|0.701|0.742|2.375|4.925|0.0424|0.1275|0.2227|16.32|33.48|63.87|\n|LaMa-CICM|31.72|25.94|22.67|0.956|0.859|0.719|0.711|2.242|4.072|0.0398|0.1113|0.1884|14.22|29.94|55.49|\n|RePaint|31.75|24.97|21.35|0.953|0.848|0.708|0.725|2.223|4.873|0.0411|0.1241|0.2153|14.49|29.84|58.82|\n|RePaint-CICM|31.88|26.21|22.93|0.959|0.862|0.722|0.686|2.098|4.002|0.0386|0.1089|0.1780|11.57|25.39|51.16|\n\n||||||||CelebA|||||||||\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|||PSNR↑|||SSIM↑|||L1↓|||LPIPS↓|||FID↓||\n||0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|\n|LaMa|34.57|26.79|21.97|0.971|0.893|0.772|0.479|1.582|3.774|0.0313|0.0924|0.1846|5.539|21.15|53.19|\n|LaMa-CICM|34.69|27.93|23.22|0.975|0.904|0.798|0.441|1.335|3.177|0.0289|0.0804|0.1602|4.127|17.92|41.87|\n|RePaint|34.57|26.88|22.15|0.972|0.898|0.778|0.472|1.563|3.573|0.0302|0.0912|0.1802|4.370|20.03|48.20|\n|RePaint-CICM|34.72|28.02|23.69|0.979|0.907|0.802|0.428|1.307|2.963|0.0275|0.0785|0.1577|3.237|16.63|35.54|\n\n**3. There are many sentences that are not meaningful but very complicated.**\n\nThanks. We have rephrased the complicated in the revised paper.\n\n**4. Visualize the features learned in CICM.**\n\nIn Figure 6 of the supplementary file, we have visualized the distribution of the cross-image features in different/identical set(s) of CICM. We use t-SNE to map these features into a 2D latent space for visualization. Different/identical set(s) of the cross-image features appear closely/far. It means that they have a large diversity/consistence. The diversity in different feature sets provides richer context for inpainting. 
The consistence in the identical set reduces the unreasonable cases, where discrepant contents are predicted for the similar regions. We will add the visualized feature maps.\n\n**5. Missing reference in Figure 4.**\n\nThanks. We have added the references to Figure 4 in the revised paper.\n", " **1. Use FID as a metric.**\n\nWe have added the results of different methods in terms of FID to the revised paper.\n\n**2. The baseline method is evaluated incorrectly.**\n\nThanks for pointing out this error. We re-train and re-evaluate RFR, JPG, and MISF on CelebA. In Table 4 of the revised paper, we update the performances of RFR, JPG, and MISF on CelebA. CICM still achieves better results.\n\n**3. MISF numbers are different in Tables 3 and 4.**\n\nDifferent methods (e.g., RFR, JPG, and MISF), which are presented in original papers, compute the convolution feature maps with different resolutions for inpainting. To control the impact of resolution on the performance in the ablation study, all methods in Table 3 use the feature map with the lowest resolution (see lines 247-249 of the paper). In Table 4, we keep their original settings for state-of-the-art comparison.\n\n**4. Comparison with ComodGAN.**\n\nCICM is a component with the generalized cross-image context for assisting different inpainting networks, rather than a stand-alone model that surpasses other methods. The generalization power of CICM has been justified in Section 5.2 “Combination with Different Inpainting Networks”. We compare the performances of the latest networks (RFR. JPG, and MISF) with/without CICM. With CICM, these networks achieve consistent improvements on Places2 and CelebA datasets. We also compare ComodGAN with/without CICM in the table below. With CICM, ComodGAN achieves a better performance.\n\n||||||||Places2|||||||||\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|||PSNR↑|||SSIM↑|||L1↓|||LPIPS↓|||FID↓||\n|methods|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|\n|Comod|31.24|24.46|20.26|0.952|0.843|0.696|0.768|2.411|5.015|0.0428|0.1308|0.2275|17.53|34.57|65.11|\n|Comod-CICM|31.36|25.42|21.87|0.960|0.857|0.712|0.730|2.281|4.152|0.0404|0.1154|0.1913|16.30|31.17|58.21|\n\n||||||||CelebA|||||||||\n|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|||PSNR↑|||SSIM↑|||L1↓|||LPIPS↓|||FID↓||\n|methods|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|0-20%|20-40%|40-60%|\n|Comod|34.23|26.67|21.48|0.968|0.889|0.770|0.485|1.614|3.815|0.0317|0.0943|0.1878|5.738|21.74|58.26|\n|Comod-CICM|34.34|27.68|22.78|0.973|0.898|0.794|0.454|1.476|3.326|0.0293|0.0824|0.1653|4.558|18.37|48.32|\n\n**5. How is the encoder trained? How are the cross-image features updated? Every update of the network parameter makes the features outdated.**\n\nWe train the encoder end-to-end with the losses in Eq. 9. First, we update the network parameters in backward propagation. Next, we rely on the updated parameters to compute the new regional features, which are used to update the cross-image features in CICM.\n\nOur strategy of updating the cross-image features can reduce the outdated cross-image features. Given a regional feature computed by the latest network parameters, we update all cross-image features in the corresponding set, where the intra-set similarities are enhanced. 
Compared to the well-updated feature sets, the outdated feature sets lead to a larger loss of intra-set similarities, thus driving new regional features to be injected into the outdated feature sets. In Table 1, we have experimented with/without using the intra-similarity for training. The results have demonstrated the effectiveness of the intra-similarity.\n\n**6. Eqs. 2, 3, and 4 seem to be online k-means clustering.**\n\nOnline k-means focuses on how to continuously update the cluster centers with new samples. Eqs. 2-4 share a similar spirit with online k-means at this point but represent a specific process of utilizing and updating the cross-image features for inpainting.\n\n$[Feature$ $utilizing]$\n\nThe k-means clustering only stores the centers for different clusters. These centers can be regarded as the anchor features of CICM, which are associated with different sets of cross-image features. But the centers alone lack visual information for inpainting, as evidenced by the experimental results in Table 2, “anchor only”. In contrast, we use CICM to store the cross-image features, which are computed and used to update the anchor features (see Eqs. 2 and 3), to provide a richer context for augmenting the regional features (see Eq. 4).\n\n$[Feature$ $updating]$\n\nOnline k-means relies on the new samples to update the cluster centers. It assumes that all samples contain contemporary information. In our work, we use Eq. 2 to use the new regional features to update the cross-image features in different sets. Moreover, we focus on reducing the outdated cross-image features, by using the inter- and intra-set similarities to drive the feature updating. It refreshes the complex cross-image features, which are learned from the images with diverse contents.\n\n**7. How are the corruption ratios estimated?**\n\nThe ratios are estimated by the network.", " **1. The method is a little common. What are the advantages of external memory?**\n\nThanks for your useful comment. It helps us to clarify the advantage of our method.\n\n$[The$ $advantage$ $of$ $using$ $the$ $external$ $memory$ $for$ $image$ $inpainting]$\n\nImage inpainting relies on the context of the relevant regions for recovering the corrupted regions. The existing methods propagate the context of the surrounding regions to recover the corrupted regions in the same image. They learn the single-image context but yield unsatisfactory performances when the corrupted images lack information. The external memory stores the cross-image context learned from different images. Thus, it takes the advantage of more useful regions across images, providing a richer context for recovering a region.\n\n$[The$ $advantage$ $of$ $using$ $CICM$ $for$ $image$ $inpainting]$\n\nOur method of learning the cross-image context in the external CICM is non-trivial. The existing methods use the single-image context or class-specific information to recover the corrupted images. In contrast, we construct CICM, where the cross-image context is learned from rich image data. Moreover, CICM allows the inpainting to benefit from not only a kind of specific context. 
As evidenced in Section 5.2 “Extensive Evaluation on Semantic Inpainting” of the paper, CICM can store various kinds of cross-image context, which are learned from RGB images and semantic segmentation results.\n\n$[Clarification$ $of$ $the$ $major$ $contribution$ $of$ $this$ $paper]$\n\nOur major contribution not only lies in the methodology ground but also in the extensive thinking, evaluation, and discussion of CICM. CICM is a separable component alongside the inpainting network. Is it possible to train CICM with an network and apply it to another network? Is it also possible to train CICM on a dataset and evaluate it on another dataset? These questions are answered by very few works. In Section 3.3 of the supplementary file, we answer the above questions by evaluating CICM in the cross-model and -dataset scenarios. The cross-model CICM achieves better results than the methods without CICM. It demonstrates the capacity of CICM for transferring the context between different models. Yet, CICM degrades the performances in the cross-dataset scenario, demonstrating its limitation. We have provided our explanation of the performance degradation, pointing out a direction for improving the generalization of CICM in the future.\n\n**2. The framework (Figure 2) is a little hard to follow. (c) and (d) provide the details but they are not necessary.**\n\nWe have revised Figure 2(c-d), by trimming the redundant arrows and re-organizing the legends. Please see the revised paper.\n\n**3. The aim of maximizing/minimizing the inter-/intra-set similarities between the cross-image features is not clear. Why is it useful for inpainting?**\n\nWe are sorry for reversing the terms of inter- and intra-set similarities in Eq. 9. Actually, we minimize the inter-set similarity, for enhancing the diversity of the cross-image features in different sets. We maximize the intra-set similarity to encourage the cross-image features in the identical set to contain consistent context. We have updated Eq. 9 and its description in the revised paper (lines 151-155). We have updated Figure 5 in the supplementary file, where the inter- and intra-set similarities are reversed. We have double-checked our implementation, making sure the losses are implemented correctly. The diversity in different feature sets provides more chances to find the useful context from CICM for inpainting. The consistency in the identical set reduces the unreasonable cases, where discrepant contents are predicted for similar regions. Their effectiveness has been evaluated in Table 1 of the paper.\n\n**4. Visualization evidence of different/identical set(s).**\n\nIn Figure 6 of the supplementary file, we have visualized the distribution of the cross-image features in different/identical set(s). We use t-SNE to map these features into a 2D latent space for visualization. Different/identical set(s) of the cross-image features appear closely/far. It means that they have a large diversity/consistency.\n\n\n**5. Figure 1 cannot show that cross-image feature sets are useful for inpainting.**\n\nWe have replaced the example in Figure 1 in the revised paper. In the new example, the cars are required to be recovered. However, a large portion of the image is corrupted, thus lacking visual information about cars. By using the cross-image features in CICM (see the feature set 1), we find the relevant context of cars for recovering the image.\n\n**6. 
External memory brings a lot of computational costs.**\n\nWe have discussed this limitation in Section 3.2 of the supplementary file. We compare the network parameters, GPU memory, and FLOPs of the networks with/without CICM, along with the performances. One may use these results to consider the trade-off between performance and efficiency.", " We express our deep gratitude to all reviewers for their valuable comments, which significantly help us to better clarify our technical contributions and evaluate the effectiveness of our method. Below, we provide our point-to-point responses to the questions raised by all reviewers.", " This paper proposes a Cross-Image Context Memory (CICM) for learning and using the cross-image context to recover the corrupted regions. It tries to provide richer context that benefits the inpainting task. The experimental results demonstrate the effectiveness and generalization of CICM. Strengths:\n1. The idea of using external memory is reasonable. \n2. The paper is clearly written. \n\nWeaknesses:\n1. The method is a little common. \n2. The advantages of external memory need more discussion. \n3. The framework (Figure 2) is a little hard to follow. (c) and (d) provide the details but they are not necessary. \n4. The aim of maximizing/minimizing the inter-/intra-set similarities between the cross-image features is not clear. Why is it useful for impainting? 1. Please show some visualization evidence of different/identical set(s).\n2. Figure 1 can not show that cross-image feature sets are useful for inpainting. Please provide more clear examples. 1. Using external memory can improve the performance, also it will bring a lot of computational costs. Please make some analysis. \n", " The paper proposes a way to utilize external information for inpainting. To do this, the proposed method maintains a database of clustered features collected from the dataset. Then, for each region's encoded feature, a matching cluster is identified, and the features of that cluster is augmented to the encoded feature in a soft, differentiable way. The proposed method can be an independent add-on to existing inpainting methods, and it consistently improves performance on quantitative metrics that measure similarity with the ground truth, such as PSNR, SSIM, L1, or LPIPS. Strength\n\nI find it interesting to utilize external features for better image inpainting. \nThe ablations are thoughtful, including single-image vs cross-image context, and different ways to create the bank of features. \nThe paper achieves consistently good results than the baseline methods. In particular, it can be plugged into multiple existing methods and improve on all of them. \n\nWeakness\n\nThe exposition is a bit difficult to follow. In general, the paper focuses on how the method is formulated, rather than why. Regarding this, I have a few questions. Please see the Questions section. \n\nAll evaluation metrics measure how much the output matches the ground truth, but it may not directly correlated with realism. For example, since image inpainting is a multi-modal problem with diverse possible outputs, to minimize the L1 loss, the output needs to be at the median of all possible values. In the end, L1 score gives advantage to the outputs that are smooth and low saturation. For example, a pix2pix model trained only on L1 objective (https://phillipi.github.io/pix2pix/images/index_facades2_loss_variations.html) looks unrealistic, even though it does achieve good L1 loss. 
To separately evaluate realism, metrics like FID are used (ComodGAN, Zhao et al, ICLR2021). There is possibility that a baseline method was evaluated incorrectly. It seems that the paper preprocessed the CelebA dataset with facial landmark alignment and cropping (Figure 8 of Supp Mat). However, MISF seems to not have preprocessed the data (their results are on much larger crop around the face, and the faces are not necessarily upright). Therefore, taking the MISF model trained on the raw CelebA data, and evaluating it along with a model trained specifically on aligned images is unfair. Likely because of this, the MISF results of the paper are often blurry (Fig4(a) and Fig8 of Supp Mat), while the results in the original MISF paper not not blurry. It is also wrong to copy the MISF numbers from the original paper (i.e. MISF line with 34.494, 26.635, 21.553, ... in Table 4), and comparing it with the CICM results, because they were in fact run on different datasets (unaligned vs aligned&cropped faces). If MISF were trained and test on the same preprocessing, it may get better results. \n\nWhy is MISF number different between Table 3 and 4? In Table 3, CelebA PSNR is 34.302, 26.387, 21.289, ... In Table 4, it is 34.494, 26.635, 21.553. \n\nHow would compare the quality of this result with ComodGAN (Zhao et al., ICLR2021)? Quantitative comparison is not provided, but ComodGAN seems very nice in qualitative results. \n\nHow is the convolutional encoder (which predicts F_m) trained? If it is trained end-to-end with the losses of Eq9, how are the features C of the cross-image feature sets are updated accordingly? Every update of the network parameter would make the feature Cs outdated. \n\nI am still not sure why Eq2, 3 and 4 are formulated in the particular form presented. There seems to be connection to online k-means clustering. \n\nIn Eq7, how is the corruption ratio estimated? Is it predicted by the network? The authors does address the limitations. ", " The paper proposes to use the cross-image context which consists of features learned from different visual patterns to recover the corrupted regions. The proposed method archives state-of-the-art performance on single-image inpainting on multiple datasets. Pros: \nThe proposed CICM achieves stable improvements on all the baseline methods and achieves state-of-art inpainting performance.\nThe authors conduct extensive experiments for the ablation study to validate the design of context generalization and context augmentation. The design of these modules is validated. \n\nCons: \nThe core contribution seems not strong enough for me. As the external memory bank-based method has been adopted in many tasks, what is the unique design to make it suitable for inpainting? On the other hand, the visual quality improvement is also not obvious enough. Even with the proposed CICM, the inpainting results are not as attractive as LaMa and RePaint.\n \nThe writing needs to be improved. There are many sentences that are not meaningful but very complicated (e.g., lines 36 to 39). \n\nHow is the external memory bank-based method compared to gan prior-based methods such as LAMA?\nIs it possible to visualize the features learned in CICM? It is interesting to see how the CICM is learned during the training. \n \nSuggestions:\nMissing reference in Figure 4. \n Please address my concerns shown in the weaknesses. 
N/A", " This paper proposes an image inpainting algorithm that learns visual context features across different images and saves them in an external memory (CICM). These features are used to augment regional features of the corrupted input image, which may result in better completion quality than relying on features inside the single image. The proposed approach outperforms existing models on the public datasets. * Strengths:\n - The proposed approach based on cross-image context memory is novel. It saves higher level richer visual context thus is unlike previous memory based methods such as TMAD [42] and SRM [43].\n - The proposed approach is highly effective, outperforming recent existing works (Tables 4 and 5).\n - The proposed approach is flexible and general, capable of extending several existing frameworks with consistent enhancements (Table 3).\n - Ablation studies on internal component of the proposed approach have been thoroughly made thus proved their importance (Tables 1 and 2).\n - Several extensions and variants of the proposed approach have been explored in the supplementary materials, all showing meaningful results.\n\n* Weaknesses:\n - It is unclear how the CICM is initialized at the beginning of the training.\n - It is unclear which architecture is used as their default backbone network.\n - The use of an external memory bank increases the ram usage, parameters and FLOPs quite a bit.\n - Depending on the scale of CICM and device property, the searching processes may become a bottleneck of the inference speed.\n\n\n----- Comments after reading the rebuttal -----\n\nI believe that the proposed work describes a very interesting algorithm based on CICM, yet unseen from existing works. As addressed by the authors, the CICM is fast to search into and effective to produce higher quality generation by utilizing compact cross-image information unavailable within the input image. The quality gains are quite clear without much sacrificing the efficiency. My final recommendation is still to strongly support to accept the paper. How is the CICM initialized?\n\nWhat is the backbone network of the proposed model in Tables 4 and 5?\n\nWould the quality enhancement by applying CICM scheme be outstanding enough compared to the complexity increase (i.e., trade-off)?\n\nThe search into the CICM is not a bottleneck of the model performance, which is not addressed? Yes, they addressed the limitations in Sections 3 and 4 of the supplementary materials, which include failure cases, memory increase, cross model and cross dataset scenarios, and societal impacts. These discussions seem to have been thoroughly made.", " This paper proposes the Cross-Image Context Memory (CICM) for learning and using the cross-image context to recover corrupted images. CICM consists of multiple sets of cross-image features learned from the image regions with different visual patterns. The regional features are learned across different images, thus providing richer context that benefits the inpainting task. The experimental results demonstrate the effectiveness and generalization of CICM, which achieves state-of-the-art performances on various datasets for single image inpainting. 
Strengths \n1)\tThe utilization of the cross-image context to assist in image inpainting is reasonable, and the proposed cross-image context memory is somewhat novel and can also be generalized to existing inpainting models.\n2)\tThe experiments are sufficient, and the internal study nicely shows the effectiveness of the CICM.\n3)\tThe presentation is clear, and the references are adequate.\n\nWeaknesses\n1)\tBesides the number of feature sets and the size of each set, the resolution of the regional features and the number of layers adopting the CICM are also vital settings that affect the image inpainting performance, but there seems to be no explanation or analysis of them.\n\nMinor:\n1)\tIn line 74, the symbols H and W are used to denote the height and width of both the image and the feature, but they are actually different.\n2)\tIn line 117, what is the value of the momentum factor?\n3)\tIn line 168, “four groups” should be “three groups”.\n 1)\tFive losses are used to train the model. Is there any balance for their contributions? Since there are different terms in each loss, why are the magnitudes of these loss values the same as in Fig. 5 in the supplementary?\n2)\tHow are the anchor features and feature sets initialized?\n N/a" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4, 4 ]
[ "ADj0bvZt3Zt", "_sTOGyFZGhO", "IGCEEXu0h5J", "Rk8Pv9so5K", "yduObXg9QjX", "w3Tk9zkOuG", "ADj0bvZt3Zt", "w3Tk9zkOuG", "_sTOGyFZGhO", "Rk8Pv9so5K", "IGCEEXu0h5J", "ADj0bvZt3Zt", "w3Tk9zkOuG", "_sTOGyFZGhO", "Rk8Pv9so5K", "IGCEEXu0h5J", "nips_2022_QfI_usBXNCM", "nips_2022_QfI_usBXNCM", "nips_2022_QfI_usBXNCM", "nips_2022_QfI_usBXNCM", "nips_2022_QfI_usBXNCM", "nips_2022_QfI_usBXNCM" ]
nips_2022_-T5seeOMnM5
Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks
Unrestricted color attacks, which manipulate semantically meaningful color of an image, have shown their stealthiness and success in fooling both human eyes and deep neural networks. However, current works usually sacrifice the flexibility of the uncontrolled setting to ensure the naturalness of adversarial examples. As a result, the black-box attack performance of these methods is limited. To boost transferability of adversarial examples without damaging image quality, we propose a novel Natural Color Fool (NCF) which is guided by realistic color distributions sampled from a publicly available dataset and optimized by our neighborhood search and initialization reset. By conducting extensive experiments and visualizations, we convincingly demonstrate the effectiveness of our proposed method. Notably, on average, results show that our NCF can outperform state-of-the-art approaches by 15.0%$\sim$32.9% for fooling normally trained models and 10.0%$\sim$25.3% for evading defense methods. Our code is available at https://github.com/VL-Group/Natural-Color-Fool.
Accept
The proposed approach exploits the color distribution of semantic classes, thus improving the flexibility of the current unrestricted color attack. This method generates novel transferable adversarial attacks. The authors conducted extensive experiments on a wide variety of network architectures. A significant improvement of the attack success rate is achieved with the proposed method on both undefended and defended models.
train
[ "34h4RvRCvGx", "PRvhq7AAMjU", "HwMVOXPDUmQ", "XFETUyxz_h0", "ZVl0yksel6o", "_iIaOOuYtLJ", "fcR3GH2yWQ", "tXZFUPXyW-z", "FqDvb3nrt8f", "1kqrnwlSY9c", "y5YgYoPbv-0", "B7zFhfwP4-k", "rwiYWht9sx", "Fo-EqTJUb0" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Authors have convincingly addressed my concerns and I am willing to increase the score.", " Thanks for addressing my concerns. I am willing to increase the score. ", " Thank you a lot for the reply, which gives us the opportunity to address your concerns more clearly.\n\n_**Q4 What makes your attacks achieve high transferability, color-wise perturbation, or fewer attacking iterations.**_\n\n**A4:** Both color-wise perturbation and more attacking iterations can help to improve transferability. For the former, color-wise perturbation is similar to patch-wise perturbation [h] which has been demonstrated to improve transferability. For the latter, [i] has shown that existing transfer methods with more iterations yield better results. Table d also indicates that $N=100$ performs better than $N=15$.\n\n\nTable d: The effect of the iterations ($N$) of neighborhood search on the attack success rates (\\%) of NCF. We fix the maximum perturbation of the transfer matrix and increase the number of iterations (\"*\" denotes white-box attack). \n| $N$ | Res-18* | VGG-19 | Mobile-v2 | Inc_v3 | Dense-121 | Res-50 | ViT-S | XCiT-N12 | DeiT-S |\n| --- | --------- | -------- | --------- | -------- | --------- | -------- | -------- | -------- | -------- |\n| 15 | 92.9* | 72.1 | 72.7 | 48.3 | 55.3 | 66.7 | 53.0 | 55.3 | 32.8 |\n| 100 | **95.1*** | **74.2** | **75.5** | **50.9** | **57.7** | **69.0** | **55.0** | **56.7** | **34.8** |\n\n\n_**Q5 Could the NCF and baseline methods improve if more substitute models are used?**_\n\n**A5:** Yes, as demonstrated in the following Table e, using more substitute models can improve the performance of NCF and baseline methods. \n\nTable e: Comparison of ensemble attack and single model attack.\n\n| Attacks | Models | Dense-121 | Res-50 | ViT-S | XCiT-N12 | DeiT-S |\n| ---------- | ------------ | --------- | -------- | -------- | -------- | -------- |\n| SAE | Res-18 | 36.5 | 37.0 | 44.5 | 37.4 | 22.2 |\n| | VGG-19 | 39.3 | 39.0 | 48.3 | 37.6 | 24.3 |\n| | Mobile-v2 | 38.1 | 39.3 | 46.6 | 37.7 | 23.3 |\n| | Ensemble | **44.2** | **47.0** | **53.7** | **42.1** | **26.2** |\n| ReColorAdv | Res-18 | 37.2 | 38.1 | 21.4 | 36.7 | 17.3 |\n| | VGG-19 | 33.8 | 31.7 | 20.4 | 33.4 | 16.6 |\n| | Mobile-v2 | 32.4 | 34.4 | 20.7 | 36.7 | 20.0 |\n| | Ensemble | **47.2** | **50.5** | **25.9** | **43.3** | **24.2** |\n| cAdv | Res-18 | 43.0 | 41.2 | 34.4 | 44.9 | 30.4 |\n| | VGG-19 | 43.4 | 40.7 | 38.8 | 43.9 | 32.9 |\n| | Mobile-v2 | 44.3 | 39.1 | 36.0 | 44.1 | 30.8 |\n| | Ensemble | **59.9** | **59.0** | **45.7** | **56.1** | **41.6** |\n| ColorFool | Res-18 | 19.8 | 22.9 | 35.5 | 22.3 | 9.2 |\n| | VGG-19 | 23.5 | 26.6 | 42.2 | 25.6 | 9.6 |\n| | Mobile-v2 | 23.3 | 24.5 | 39.7 | 22.8 | 9.4 |\n| | Ensemble | **32.2** | **36.8** | **49.5** | **30.0** | **14.1** |\n| ACE | Res-18 | 19.9 | 18.3 | 21.6 | 22.4 | 9.1 |\n| | VGG-19 | 21.6 | 18.0 | 20.7 | 21.6 | 9.5 |\n| | Mobile-v2 | 20.0 | 19.0 | 20.3 | 22.6 | 9.3 |\n| | Ensemble | **29.2** | **27.9** | **25.4** | **27.7** | **10.7** |\n| NCF (Ours) | Res-18 | 55.3 | 66.7 | 53.0 | 55.3 | 32.8 |\n| | VGG-19 | 53.6 | 64.3 | 56.5 | 53.5 | 30.7 |\n| | Mobile-v2 | 54.4 | 66.2 | 55.4 | 56.4 | 32.6 |\n| | Ensemble | **63.5** | **71.6** | **59.7** | **61.7** | **37.0** |\n\n\n[h] Lianli Gao, Qilong Zhang, Jingkuan Song, Xianglong Liu, and Heng Tao Shen. Patch-wise attack for fooling deep neural network. In ECCV 2020.\n\n[i] Zhengyu Zhao, Zhuoran Liu, Martha Larson. On Success and Simplicity: A Second Look at Transferable Targeted Attacks. 
In NeurIPS 2021.", " **Why NCF is better on transferability and worse on white-box attacks.**\n\nI am confused about what makes your attacks achieve high transferability, color-wise perturbation, or fewer attacking iterations. Therefore, I wonder what the black box transferability is after you use $N=100$. \n\n**Could the NCF and baseline methods improve if more substitute models are used?**\n\nThanks for running the experiments.\n\n", " _**Q4 The main idea seems to be similar to [42]. [42] is a different application, but technically what are their differences?**_\n\n**A4**: No, our main idea is much bigger than [42]. Concretely, 1) the challenge of unrestricted color attack is how to guarantee natural images while large perturbations. We point out that existing solutions lack flexibility and can only perturb in the neighborhood. 2) To overcome this problem, we propose constructing a more flexible color attack space without adjacency in the global space. For this purpose, we introduce [42] to automatically construct the color distribution library. However, the resulting library constructed by [42] is redundant. To simplify it, we select 20 distributions (rather than distribution sets) for each semantic class to represent its color space. 3) Based on our color distribution library, we propose to transfer color distributions for attacks. We further improve the black-box transferability of NCF by proposing IR and NS strategies (see Table 4). Note that without the rest of the NCF, [42] cannot generate effective adversarial examples (see the resulf of NCF-IR-NS-* in Table b).\n\n_**Q5 Authors have addressed some limitations and potential negative social impact of their work, what about the feasibility of the attack in the physical world?**_\n\n\n**A5**: We evaluated the attack performance of NCF on Google Cloud Vision API to demonstrate the feasibility of our method in the physical world.\nFirst we selected 100 images from the Image-Net-compatible Dataset that were correctly classified on the Google Cloud Vision API. Then NCF was used to generate adversarial examples via the substitute model Res-18. Finally, the resulting adversarial examples are fed into Google Cloud Vision API to perform attack. Notably, NCF can achieve a 42\\% attack success rate in this realistic scenario. This shows that NCF is also threatening in real-world applications.", " Thank you for your feedback. We will answer your questions one by one below.\n\n_**Q1 Why the proposed method can improve flexibility?**_\n\n**A1**: Please refer to the response to the generic comment.\n\n_**Q2 What is the impact of different segmentation and clustering algorithms? Also, the clustering is based on the ADE20k dataset. Can this guarantee performance?**_\n\n\n**A2**: Intuitively, the semantic segmentation model, the clustering algorithm and the dataset all have some impact on NCF, but these are not the focus of our paper. So we did not conduct detailed experiments. In this rebuttal, we briefly analyze each part in the following.\n\nFirstly, we compare the performance of our NCF under different semantic segmentation models pre-trained on ADE20K (including Swin-T [a], OCRNet [b] and Deeplabv3+ [c]). As indicated in Table a, segmentation models have impact on the attack success rates of resulting adversarial examples. Among these models, Swin-T is usually the best choice for our NCF. Therefore, in our paper, we choose it to segment inputs. 
Note that even if the segmentation model affects our method, the lowest black-box attack success rate of NCF is still much higher than the existing methods.\n\nSecondly, intuitively, the better the clustering algorithm, the more obvious the difference in style between clusters, and the larger the search range for semantic classes, then the greater the likelihood of searching for adversarial examples.\n\nFinally, ADE20K contains 150 classes, while other popular semantic segmentation datasets like MS COCO has fewer classes (only 80 semantic classes). Intuitively, the more classes in the dataset, the richer the color distributions library constructed, and the more natural the NCF generated adversarial examples. \n\nTable a: The influence of different segmentation models on attack success rates. (“*” denotes the white-box attack)\n\n\n| Segm | Res-18* | VGG-19 | Mobile-v2 | Inc-v3 | Dense-121 | Res-50 | ViT-S |\n|:----------:|:------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|\n| Swin-T | **92.9\\*** | **72.1** | **72.7** | **48.3** | **55.3** | **66.7** | 53.0 |\n| OCRNet | 89.9* | 69.1 | 67.1 | 44.2 | 50.6 | 61.1 | **56.5** |\n| Deeplabv3+ | 91.0* | 68.0 | 68.6 | 45.3 | 49.2 | 62.0 | 54.0 |\n\n\n[a] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021\n\n[b] Yuhui Yuan, Xilin Chen, and Jingdong Wang. Object-contextual representations for semantic segmentation. 2020\n\n[c] Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Zhi Zhang, Haibin Lin, Yue Sun, Tong He, Jonas Muller, R. Manmatha, Mu Li, and Alexander Smola. Resnest: Split-attention networks. arXiv preprint arXiv:2004.08955, 2020\n\n_**Q3 Why random colors can successfully attack the classification model with such a high success rate?**_\n\n**A3**: NCF-IR-NS (in Table 4) does not mean selecting random colors to attack. Specifically, it first generates a set of adversarial examples with different color distributions and then selects the best example from them based on the loss of the white-box model to attack. Therefore, NCF-IR-NS is close to a white-box attack. \n\nTo support our claim, we evaluate the performance of random color attack (NCF-IR-NS-*), i.e., randomly select colors for each semantic class and use the resulting adversarial examples to attack. As demonstrated in Table b, the performance of NCF-IR-NS-\\* is much lower than NCF-IR-NS. For example, NCF-IR-NS-\\* only achieves a 27.3\\% (degraded from 51.6\\%) success rate on Inc-v3. Thus, directly using random colors to generate adversarial examples is ineffective.\n\nTable b: The attack success rate of using white-box information and not using it. NCF-IR-NS using Inc-v3 as the substitute model. (“*” denotes the white-box attack)\n\n| Methods | Inc-v3* | Res-18 | VGG-19 | Mobile-v2 | Dense-121 | Res-50 | ViT-S | XCiT-N12 | DeiT-S |\n|-------------|---------|--------|--------|-----------|-----------|--------|-------|----------|--------|\n| Clean | 19.2 | 16.1 | 11.4 | 12.8 | 7.9 | 7.5 | 13.3 | 13.7 | 5.8 |\n| NCF-IR-NS |**51.6***|**43.8**|**42.2**| **42.4** | **28.0** |**33.0**|**38.3**| **32.0**|**14.8**|\n| NCF-IR-NS-* | 27.3 | 34.8 | 30.9 | 31.1 | 20.5 | 24.2 | 32.7 | 25.0 | 11.6 |\n", " \n_**Q5 The role of Tranfer Matrix T is not discussed or elaborated.**_\n\n**A5**: The role of the transfer matrix $T$ can be explained by Eq. 4 and Figure 1. 
Formally, with $T$, we can convert the color distribution of the original image $\\pmb{x}$ to any specific distribution. For example, in Figure 1, $\\pmb{x_H'}$ is mapped via $\\pmb{x}$, $\\pmb{x_H}$ and $T$.\n\n\n_**Q6 Despite the significant improvement, it is not clear how this proposed method boost the transferability of the adversarial examples.**_\n\n**A6**: Compared with existing methods, NCF is more flexible and thus its attack space is larger. Consequently, this helps to search for better adversarial examples. Furthermore, we introduce the Initialization Reset (IR) technique, which helps to jump out of local optimal points. Therefore, the transferability of NCF is better than the existing methods in most cases.\n\n_**Q7 About line 129. Why not select a single color distribution as a template from each set? What does the natural representation mean here?**_\n\n**A7**: The reviewer seems to misunderstood line 129. For each semantic class, we have 20 clusters (distribution sets) and the style varies from set to set. Since the color distributions in each set are similar, we just select a single color distribution from each set as the template, i.e., using one color distribution to represent each distribution set. \n\nAs for \"natural representation\", this means that this is the real distribution sampled from the dataset. Please note that if we average all color distributions of a specific distribution set, resulting color distribution may not be present in the dataset, i.e., fake and unnatural.\n\n_**Q8 About line 141. What does the perception is more uniform mean here? How does that help to create adv examples?**_\n\n**A8**: Perceptual uniformity means that the pixel space variation is similar to the perception of the human eye, i.e., when the color space value change is large, the human eye perception change should also be large. Conversely, the human eye perceives small changes.\n\nThere is an underlying assumption in the generation of the adversarial examples: \"adversarial examples with small perturbations are less perceptible to the human eye and have higher image quality\". In a space where perception is more uniform (e.g., CIELab), it is easier to control the invisibility of the adversarial perturbation [d,e] when the pixel change. However, this assumption does not always hold if in a perceptually less uniform space (e.g., RGB). In this case, even the perturbation of the adversarial example is smaller, it still may be more abrupt to the human eye.\n\n[d] Cassidy Laidlaw and Soheil Feizi. Functional adversarial attacks. In NeurIPS, 2019.\n\n[e] Zhengyu Zhao, Zhuoran Liu, and Martha A. Larson. Towards large yet imperceptible adversarial image perturbations with perceptual color distance. In CVPR, 2020.", " Thank you for your positive feedback and insightful comments. Please see our detailed response below.\n\n_**Q1 What does the \"flexibility\" mean?**_\n\n**A1**:Please refer to the response to the generic comment.\n\n_**Q2 It is not clear regarding the choice of 20 distribution sets. Can we control the number of distribution sets for each class? What if you select only few number of distribution set?**_\n\n**A2**: Yes, we can control the number of distribution sets for each class when building the color distribution library. But if we select only few number of distribution set for each class, the attack space for our method will be reduced and thus limiting the performance of our NCF. It is because that each set is represented by a single color distribution (i.e. 
one style) for simplicity (see lines 129-132). If the number of distribution set for each class is 1, the result will be the same no matter how many times initialization reset (IR) is executed. However, as shown in Table 4, IR plays a very important role in attack performance. Therefore, in this paper, we use 20 (instead of \"few number\") different distribution sets.\n\n_**Q3 It is not clear how to form the target distribution H. How do you formulate H?**_\n\n**A3**: $\\pmb{H}$ denotes the overall color distribution consisting of the color distributions of all semantic classes in an image. Specifically, we randomly choose a color distribution (from the color distribution library) for each semantic class and weight the sum according to the area ratio $w$ of the relevant semantic classes to obtain the target color distribution $\\pmb{H}$: \n\n$$\\pmb{H}=\\sum_{\\tilde{y}=1}^{|\\tilde{Y}|} w_{\\tilde{y}} \\cdot \\pmb{c}_{\\tilde{y}},$$\n\nwhere $\\tilde{Y}$ denotes the semantic classes contained in the image, $w_{\\tilde{y}}$ denotes the area ratio of semantic class $\\tilde{y}$ in an image, and $c_{\\tilde{y}}$ denotes the target color distribution chosen for semantic class $\\tilde{y}$. Essentially, $\\pmb{H}$ and $c_{\\tilde{y}}$ are matrices of size $100\\times256\\times256$. If $c_{\\tilde{y}}[L_i,A_i,B_i]=w\\neq 0$, it means that in the current style, the semantic class $\\tilde{y}$ contains $w*100$% of the pixels with the value $(L_i,A_i,B_i)$.\n\n\n_**Q4 There is no discussion on how to generate $\\pmb{x_H}$ from $\\pmb{H}$ and what does $\\pmb{x_H}$ constitute of?**_\n\n**A4**: $\\pmb{x_H}$ is an intermediate variable used for color transfer, which is reconstructed by $\\pmb{H}$ but without spatial information. Specifically, $\\pmb{x_H}$ is generated based on the image size and the color ratio recorded in $\\pmb{H}$. It aims to make the color distribution of $\\pmb{x_H}$ equal to the target color distribution $\\pmb{H}$. The following is the pseudo-code for generating $\\pmb{x_H}$ from $\\pmb{H}$:\n\n```python\ndef convert(H, img_h, img_w):\n \"\"\"\n Args:\n H: target color distribution\n img_h: height of the image to be attacked\n image_w: width of the image to be attacked\n\n \"\"\"\n img_area = img_h*img_w # Image size \n x_H = np.zeros(img_area, 3) # Initialization \n pos = np.nonzero(H) # All index positions in H that are not zero\n \n start = 0\n for i in len(pos): \n (L, A, B) = pos[i] # Extracts the color\n num = img_area*H[L, A, B] # The number of pixels of the color (L,A,B) in x_H\n \n x_H[start: start+num] = (L, A, B)\n start = start + num\n \n x_H = x_H.reshape(img_h, img_w, 3)\n\n return x_H\n ```", " _**Q1 What does the “flexibility” mean?**_\n\n**A1**: Please refer to the response to the generic comment.\n\n_**Q2 Why NCF is better on transferability and worse on white-box attacks.**_\n\n**A2**: NCF aims to generate coordinated, natural-looking adversarial examples. Instead of perturbing each pixel value individually, pixels of similar color are usually adjusted uniformly, i.e., color-wise not pixel-wise perturbation. Besides, to ensure the efficiency of NCF, our perturbation is optimized in only $N=15$ iterations. if we increase $N$ to 100, white-box success rate on res18 can be further improved by about 3\\% (i.e., 92.9\\% -> 95.1\\%), which outperforms SAE and ColorFool (Please note ReColoradv, cAdv and ACE need more iterations, e.g.,the maximum iteration for ACE is 500). Therefore, the reported attack success rates on white-box models is limited. 
As for the black-box attack, we argue it is because our color-wise perturbation does not over-fit the white-box model and thus achieving higher black-box transferability (like FGSM vs. I-FGSM). \n\nPlease note that our white-box results do not contradict the flexibility of our approach. Take ColorFool as an example. It needs to **manually** split an image into two parts and adds controlled noises on the human-sensitive part, which largely depends on the authors’ intuition (but it varies from person to person). By contrast, our NCF can **automatically** select an adversarial color distribution for each semantic class. In this case, \"automatic\" reflects the flexibility of NCF as opposed to the \"manual\" nature of ColorFool.\n\n_**Q3 Could the NCF and baseline methods improve if more substitute models are used? This paper [f] is also worth to be discussed.**_\n\n**A3**: Thanks for providing [f], which is an interesting paper and we have discussed it in our revision. As for ensemble model attack (fusing the logits of multiple models like [g]), here we report the result of NCF. As indicated in Table c, the attack success rate of NCF can be further improved when crafting via an ensemble of models.\n\nTable c: Comparison of ensemble attack and single model attack. We report attack success rates (\\%) of NCF and the leftmost model column denotes the substitute model, where Ensemble means an ensmeble of Res-18, VGG-19 and Mobile-v2.\n\n| Models | Dense-121 | Res-50 | ViT-S | XCiT-N12 | DeiT-S |\n|-----------|-----------|--------|-------|----------|--------|\n| Clean | 7.9 | 7.5 | 13.3 | 13.7 | 5.8 |\n| Res-18 | 55.3 | 66.7 | 53.0 | 55.3 | 32.8 |\n| VGG-19 | 53.6 | 64.3 | 56.5 | 53.5 | 30.7 |\n| Mobile-V2 | 54.4 | 66.2 | 55.4 | 56.4 | 32.6 |\n| Ensemble | **63.5** | **71.6** | **59.7** | **61.7** | **37.0** |\n\n[f] Alina Elena Baia, Alfredo Milani, and Valentina Poggioni. One for many: an Instagram inspired black-box adversarial attack. 2021.\n\n[g] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In CVPR, 2018", " We have concluded all the comments from all the reviewers and responded to the generic comments as follows:\n\n_**Q1 What does \"flexibility\" mean? Moreover, why is our approach more flexible than existing methods?**_\n\n**A1**: The \"flexibility\" in this paper is a relative concept. Existing methods usually have many limitations when modifying the color of an image (as described in Sec 3.2). For example, ReColorAdv requires constraining the perturbation to a relatively small range, which cannot take full advantage of the “unrestricted” setting; cAdv enforces the color belonging to the low-entropy cluster to remain unchanged, which inevitably reduces the attack space; ColorFool manually splits an image into two parts and adds controlled noises on the human-sensitive part, which largely depends on the authors’ intuition (but it varies from person to person). In contrast, NCF does not have these limitations, which automatically makes full use of the “unrestricted” setting.\nTherefore NCF is more flexible than existing methods.", " We appreciate all reviewers (UgYm, C6Jv and Hmyg) for their insightful comments and are glad to get one \"borderline accept\" (reviewer UgYm), one \"weak accept\" (reviewer C6Jv) and one \"borderline reject\" (reviewer Hmyg) at this time. 
Encouragingly, all reviewers highlight our extensive experiments on a wide variety of models and the improvement of the attack success rate on the black-box models. UgYm and C6Jv praise for our well-written and easy-to-follow paper. C6Jv highlights our novelty. \n\nWe carefully revise the manuscript according to the comments of all the reviewers. For convenience, we highlighted the revised text in color except for the revision of grammars. Here we briefly summarize the updates we have made to the revision:\n\n* cite and discuss the papers the reviewers provided.\n\n* add experiments for the influence of segmentation Models in Appendix H.\n\n* add experiments for the effect of ensemble attack in Appendix I.\n\n* discuss the difference between NCF-IR-NS and random color attack in Appendix J. ", " This paper works on Unrestricted color attacks, which manipulate semantically meaningful color of an image.\nCurrent works usually sacrifice the flexibility of the uncontrolled setting to ensure the naturalness of adversarial examples. As a result, the black-box attack performance of these methods is limited. To boost transferability of adversarial examples without damaging image quality, they propose a Natural Color Fool (NCF) which is guided by realistic color distributions sampled from a publicly available dataset and optimized by our neighborhood search and initialization reset. Extensive experiments and visualizations demonstrate the effectiveness of their proposed method. The writing and exposition are clear and of high quality. The authors generally address this problem in the style of CV instead of ML.\n\n1. By exploiting the color distribution of semantic classes, the proposed Natural Color Fool (NCF) improves the flexibility of the current unrestricted color attack.\n2. Some experimental improvement is obtained.\n\nProblems:\n1. The proposed method is mostly empirical without theoretical proof. For example, why the proposed method can improve flexibility?\n 1. The proposed method is largely based on a segmentation network and a clustering strategy. What is the impact of different segmentation and clustering algorithms? Also, the clustering is based on the ADE20k dataset. Can this guarantee performance?\n2. In the main method, it is unclear to why a random color can successfully attack the classification model with such a high success rate?\n3. The main idea seems to be similar to [42]. [42] is a different application, but technically what are their differences? Authors have addressed some limitations and potential negative social impact of their work, what about the feasibility of the attack in the physical world?", " The paper proposes a color based unrestricted adversarial black box attack on image classification deep neural networks by transferring the adversarial examples using a substitute network. Authors propose to generate a natural color distribution library based on the publicly available ADE20K dataset. They create a library of distinct color distributions for 150 semantic classes. They generate the adversarial examples by randomly picking several color distributions for each semantic class from the library and find the image that fools the substitute network. In addition, they perform neighborhood search on a 3x3 Transfer Matrix T that performs the color mapping to further boost the attack success rate. Furthermore, they reset the Transfer Matrix T like random restarts in PGD attack. 
Extensive results show that the proposed method maintains the image quality and boosts the attack transferability significantly compared to the existing methods. Major boost of the transferability comes from optimizing the matrix T to generate adversarial examples. Strengths:\n1)\tWell written paper. Most of the parts are easy to understand.\n2)\tProposes a novel method to generate transferrable adversarial attack.\n3)\tMethod explanation is easy to follow.\n4)\tConducted extensive experiments on wide variety of network architectures.\n5)\tShown a significant improvement of the attack success rate with the proposed method on both undefended and defended models (L_p based defenses and input processing defenses).\n\nWeakness:\n1)\tIn the beginning of the paper, authors often mention that previous works lack the flexibility compared to their work. It is not clear what does it mean and thus makes it harder to understand their explanation. \n2)\tIt is not clear regarding the choice of 20 distribution sets. Can we control the number of distribution sets for each class? What if you select only few number of distribution set? \n3)\tThe role of Tranfer Matrix T is not discussed or elaborated.\n4)\tIt is not clear how to form the target distribution H. How do you formulate H? \n5)\tThere is no discussion on how to generate x_H from H and what does x_H constitute of? \n6)\tDespite the significant improvement, it is not clear how this proposed method boost the transferability of the adversarial examples.\n In addition to the points mentioned in weakness, I have two additional questions:\n\nAt line 129, it is mentioned that “Since the color distributions in each distribution set are similar and averaging all color distributions of a specific distribution set may not yield a natural representation, we randomly select a color distribution to represent the overall color characteristics for simplicity”. If their distributions are similar, why not select a single color distribution as a template from each set? What does the natural representation mean here?\n\nIn line 141, authors mention that “we craft adversarial perturbations in CIELab color space where the perception is more uniform than RGB color space”. What does the perception is more uniform mean here? How does that help to create adv examples?\n As per my understanding, authors briefly addressed the limitations and negative impact in their work.", " This paper introduces a new unrestricted attack (NCF) that boosts the black box performance. It is done by color converting, neighborhood search, and initialization reset. The experiment includes both CNN and Vit architectures and compares a host of baseline methods. Overall, NCF achieves state-of-the-art performance. \nStrengths:\n1. The paper specifically targeted on black-box setting and achieved much higher adversarial transferability than baseline methods. \n2. The authors consider a lot of recent ViT models and lots of defense methods to evaluate. \n\nWeaknesses: \n1. The authors state that \"current unrestricted color attacks lack flexibility, which results in limited transferability of the adversarial examples\". First, I am not exactly sure what flexibility means here. Then, I observe (from table 1) that other attacks usually outperform NCF in the white-box setting. If NCF is more flexible, shouldn't it also have a higher white-box attack success rate? Authors should explain why NCF is better on transferability and worse on white-box attacks. \n\n2. 
I noticed that all black-box attacks are generated from one substitute model, whereas in some real-world settings, attackers may have multiple substitute models. Could the NCF and baseline methods improve if more substitute models are used? This paper [1] is also worth to be discussed. \n\n[1] Baia et al. One for Many: an Instagram-inspired black-box adversarial attack\n\n\n========================================================================\nAfter rebuttal: \nI think the authors carefully address my concerns, and I am willing to increase my score. \n\n See previous section NA" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "fcR3GH2yWQ", "HwMVOXPDUmQ", "XFETUyxz_h0", "FqDvb3nrt8f", "_iIaOOuYtLJ", "B7zFhfwP4-k", "tXZFUPXyW-z", "rwiYWht9sx", "Fo-EqTJUb0", "y5YgYoPbv-0", "nips_2022_-T5seeOMnM5", "nips_2022_-T5seeOMnM5", "nips_2022_-T5seeOMnM5", "nips_2022_-T5seeOMnM5" ]
nips_2022_lXUp6skJ7r
Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation
In this paper, we consider the problem of domain generalization in semantic segmentation, which aims to learn a robust model using only labeled synthetic (source) data. The model is expected to perform well on unseen real (target) domains. Our study finds that the image style variation can largely influence the model's performance and the style features can be well represented by the channel-wise mean and standard deviation of images. Inspired by this, we propose a novel adversarial style augmentation (AdvStyle) approach, which can dynamically generate hard stylized images during training and thus can effectively prevent the model from overfitting on the source domain. Specifically, AdvStyle regards the style feature as a learnable parameter and updates it by adversarial training. The learned adversarial style feature is used to construct an adversarial image for robust model training. AdvStyle is easy to implement and can be readily applied to different models. Experiments on two synthetic-to-real semantic segmentation benchmarks demonstrate that AdvStyle can significantly improve the model performance on unseen real domains and show that we can achieve the state of the art. Moreover, AdvStyle can be employed to domain generalized image classification and produces a clear improvement on the considered datasets.
Accept
A simple and practical way to do better at domain generalization for semantic segmentation. AdvStyle can generate hard stylized images during training and prevent the model from overfitting on the source domain. Given that it works well, is relatively simple to implement, and is conceptually sound, I think it will appeal to a large portion of the NeurIPS audience that works on domain generalization.
test
[ "FSNE6R-XGqZ", "A8QcUInNalG", "0atXthd0h2v", "DlaPVj6PPj", "b8OTPiql6II", "A5c9qzI86Bx", "PbdBbJQW1iJ", "L3FZNv2-6Mh", "dPo5SnFNnWX", "I7SU4XUPODN", "gdzGXPIFILJ", "gRcSA0hKXc", "5qILJQ3D6eH", "acXP9BAaRk" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer NDMW,\n\nThanks for your kind reply. We are delighted that you appreciate our response. We definitely will add all these additional experiments into the final version. We also fully agree with you that the multi-resolution spatial pyramid strategy can bring further improvement and would like to leave it as our extension in future work. We would highly appreciate it if you consider increasing your confidence and upgrading the score based on our proper feedback.\n\nBest,\n\nAuthors of Paper #444", " Thanks for the authors’ effort in the feedback. All my concerns have been answered. I would suggest adding these additional experiments into the final version. In addition, if the 4 patches strategy can improve performance from 37.39 to 37.76, I would expect the multi-resolution spatial pyramid strategy can further boost the performance. ", " Dear Reviewer Z9RR,\n\nWe sincerely appreciate your review work, which significantly helps us improve our paper. We have carefully addressed each of your concerns in the response. Please let us know if you have any further questions or concerns. We are happy to clarify them.\n\nMany thanks for your comments again.", " Dear Reviewer B93V,\n\nWe sincerely appreciate your review work, which significantly helps us improve our paper. We have carefully addressed each of your concerns in the response. Please let us know if you have any further questions or concerns. We are happy to clarify them.\n\nMany thanks for your comments again.", " Dear Reviewer NDMW,\n\nWe sincerely appreciate your review work, which significantly helps us improve our paper. We have carefully addressed each of your concerns in the response. Please let us know if you have any further questions or concerns. We are happy to clarify them.\n\nMany thanks for your comments again.", " Thanks for your valuable comments. Please find our detailed response below.\n\n>**Comment 1:** Interpretability of AdvStyle.\n>\n>**Response 1:** Thanks for these constructive comments. We would like to explain the effectiveness of our AdvStyle in two aspects.\n>>\n>First, as stated by [34], the domain generalization task can be formulated as solving the worst-case problem, and the solution of the worst-case problem guarantees good performance against data distributions that are different from the source domain. In other words, training the model with data that have different distributions from the source data but with realistic/reasonable variations can improve the performance on unseen domains. In this paper, we follow this statement and design our AdvStyle for learning generalized models. From the visualizations (in Figure 2 of the main paper and more figures in the Appendix), we can observe that our AdvStyle can generate realistic variations that are different from the source domain. This property largely guarantees our AdvStyle to meet the important condition of solving the worst-case problem. Therefore, it is reasonable that training the samples generated by AdvStyle can improve the performance on unseen domains.\n>>\n> Second, to better understand the benefit of AdvStyle, we have provided a quantitative analysis of the distribution of different datasets in the Appendix (see Section 7 and Table 4). For your convenience, we provide the details below.\n>>\n> Specifically, we computed the histograms of pixel values of four datasets (GTAV, CityScapes, BDD-100K, Mapillary) and the AdvStyle-augmented dataset of GTAV, which is generated by four epochs. The bin size is set to 8. 
For each dataset, the histograms of RGB channels are normalized by L1-norm and re-scaled (×) by #bins, and then are concatenated as the histogram feature. We estimate the distribution distance between two datasets by computing the KL-distance between their histogram features. Results are reported in the table below. We can observe that the AdvStyle-augmented dataset has a smaller distance to real datasets, indicating that AdvStyle can generate data closer to the real distributions. This further explains why training the model with AdvStyle-generated samples can improve the performance on these real datasets.\n> \n>\n>| Source | CityScapes ↓| BDD ↓| Mapillary ↓| Mean ↓ |\n>| -------- | -------- | -------- | -------- | -------- |\n>|GTAV |0.5867 |0.3421| 0.3211| 0.4166|\n>|Adv-GTAV |**0.5587**| **0.3217** |**0.3058**| **0.3954**|\n>\n>At last, we would like to point out that our AdvStyle did not explicitly guide the adversarial direction to the real-world distribution, which inevitably will produce unreal or improper variations. We admit this is one drawback of our method and hope to address it in the future study.\n\n----\n\n>**Comment 2:** More augmentation effects are needed, such as lighting.\n> \n>**Response 2:** Good suggestion. During rebuttal, we conducted experiments by applying our AdvStyle in the LAB space. Specifically, we first convert the RGB-sample to the counterpart LAB-sample and obtain the learnable mean and standard deviation. Then, we reconvert the LAB-sample to RGB-sample for adversarial learning and model optimization. This manner enables us to implement AdvStyle in the LAB space as well as to use the ImageNet-pretrained parameters. As shown in the Table below, LAB-based AdvStyle also significantly improves the performance on unseen domains but achieves lower results than RGB-based AdvStyle on two of the three benchmarks. Nevertheless, we may expect that LAB-based Advstyle will achieve better results in lighting matters context, such as your suggested one: the generalization from normal-light training data to low-light testing data.\n>\n>| Method | CityScapes | BDD | Mapillary | Mean |\n>| -------- | -------- | -------- |-------- | -------- |\n>| Baseline | 28.95 |25.14| 28.18| 27.42|\n>| AdvStyle (LAB) | 37.09 | 32.89 | **37.13** | 35.70 |\n>| AdvStyle (RBG) | **39.62** |**35.54** |37.00 |**37.39**|\n>\n>In future work, we would like to use/design a module/network that jointly accommodates different types of style variations (e.g., color, lighting, and texture) and employ our AdvStyle strategy on it to learn more robust models.\n\n----", " \n>**Comment 3:** Apply AdvStyle on SOTA classification baselines.\n>\n>**Response 3:** Good suggestion. Following your comment, we apply AdvStyle on another two state-of-the-art classification methods (ME-ADA and L2D) on two classification benchmarks. As shown in the Table below, AdvStyle can yield an improvement of 8.0% in average accuracy on Digits over ME-ADA and 6.2% in average accuracy on PACS over L2D, respectively. In addition, combining AdvStyle with ME-ADA or L2D outperforms the ERM+AdvStyle on both datasets. These results verify the universality of our method. 
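For reference, a minimal sketch of how the AdvStyle step is wrapped around an existing baseline objective (ERM, ME-ADA, or L2D) is given below. Here `loss_fn` is a placeholder for the respective method's loss, and the raw-gradient one-step update with the style learning rate is only an illustration; the exact update form follows Eq. (3) of the paper.

```python
import torch

def advstyle_augment(model, images, labels, loss_fn, gamma=3.0):
    # images: (B, 3, H, W) source-domain batch; loss_fn(model, x, y) -> scalar task loss.
    # The channel-wise mean/std of each image are treated as learnable style parameters,
    # moved one step along the gradient that increases the task loss, and then used to
    # re-stylize the image (schematic, not the released implementation).
    mu = images.mean(dim=(2, 3), keepdim=True)
    sigma = images.std(dim=(2, 3), keepdim=True) + 1e-6
    content = ((images - mu) / sigma).detach()            # style-normalized content

    adv_mu = mu.detach().clone().requires_grad_(True)     # learnable style mean
    adv_sigma = sigma.detach().clone().requires_grad_(True)  # learnable style std

    loss = loss_fn(model, content * adv_sigma + adv_mu, labels)
    g_mu, g_sigma = torch.autograd.grad(loss, [adv_mu, adv_sigma])

    with torch.no_grad():                                  # one-step adversarial update
        adv_mu = adv_mu + gamma * g_mu
        adv_sigma = adv_sigma + gamma * g_sigma
        adv_images = content * adv_sigma + adv_mu
    return adv_images.detach()

# Training then minimizes the baseline loss on both the original and the adversarial batch:
# total = loss_fn(model, images, labels) + loss_fn(model, advstyle_augment(model, images, labels, loss_fn), labels)
```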
We will add these results in the revision.\n>\n>\n>| Method | SVHN | MNIST-M | SYN| USPS| Avg.|\n>| -------- | -------- | -------- | -------- | -------- | -------- |\n>| ERM | 27.8 | 52.7 | 39.7 | 76.9 | 49.3 |\n>|ME-ADA | 42.6 | 63.3 | 50.4 | 81.0 | 59.3 |\n>|ERM+AdvStyle | 50.4 | 73.4 | 58.7 | **81.6** | 66.0 |\n>|ME-ADA+AdvStyle | **55.5**\t| **74.1**\t| **59.3**\t| 80.1\t| **67.3** |\n>\n>\n>| Method | Art. | Car. | Ske. | Pho.| Avg.|\n>| -------- | -------- | -------- | -------- | -------- | -------- |\n>| ERM | 67.4 | 74.4 | 51.4 | 42.6 | 58.9 |\n>|L2D | 74.3 |77.5 | 54.4 | 45.9 |63.0|\n>|ERM+AdvStyle | 75.8 | 76.6 | 58.1 | 51.1 | 65.4 |\n>|L2D+AdvStyle | **80.6** |\t**78.4** | **58.3**\t| **59.7**|**69.2**|\n\n----\n\n>**Comment 4:** More comparison with current SOTA augmentation method.\n>\n>**Response 4:** First, we have compared AdvStyle with different style augmentation methods (RandStyle, MixStyle [43] and CrossStyle [30]) in Table 4 of the main paper. We provide this experiment in the Table below for your convenience, where the models are trained on the GTAV dataset for single-source DG. AdvStyle consistently outperforms other methods on the three datasets and outperforms MixStyle by 2.79% and CrossStyle by 2.81% in mean mIoU, respectively.\n>\n>| Method | CityScapes | BDD | Mapillary | Mean |\n>| -------- | -------- | -------- |-------- | -------- |\n>| RandStyle | 33.40| 34.14 |31.67| 33.07|\n>| MixStyle | 35.53| 32.41 |35.87| 34.60|\n>| CrossStyle | 37.26| 32.40 |34.09| 34.58|\n>| AdvStyle (Ours) | **39.62** |**35.54** |**37.00** |**37.39**|\n>\n> Second, in Table 2 of the supplementary material, AdvStyle consistently outperforms MixStyle on the three datasets. The quantitative improvement is 1.27% in the mean mIoU. We admit that the improvement is not as significant as in the Table above. The reason is that multi-source data provide more styles within the source domains so that MixStyle can be benefited from more diverse styles than single-source DG. Nevertheless, our AdvStyle outperforms MixStyle on both single- and multi-source DG, indicating the effectiveness of our AdvStyle under different contexts. Finally, we would like to indicate that the number of 1.27% is not a marginal improvement due to the difficulty of the DG task. \n\n----\n\n>**Comment 5:** The update of style features is similar to the strategy of obtaining adversarial samples (adversarial perturbations). How to control the parameter of $\\gamma$ in Eq. 3?\n>\n>**Response 5:** Your understanding is right. The update strategy of our AdvStyle is similar to adversarial perturbation, but their implementations are different due to the different motivations. \n>>\n>Adversarial learning requires generating imperceptible adversarial perturbations by several steps, so the update of adversarial samples is controlled by perturbation scope. Instead, AdvStyle aims at generating diverse stylized samples for the current model. For this purpose, we do not need to force strict constraints on the adversarial gradient, since the semantic content of most pixels will be maintained and generating some unrealistic styles is also acceptable. Consequently, we can easily control the AdvStyle by the adversarial style learning rate $\\gamma$, and the adversarial samples can be generated by one step.\n>>\n>We empirically select the adversarial style learning rate $\\gamma$ as 3 in Figure 1 of the supplementary material.\n\n----\n\n>**Comment 6:** Why use both learning rate for $\\mu$ and $\\sigma$?\n>\n>**Response 6:** Good question. 
We have tried to use different learning rates for $\\mu$ and $\\sigma$ but find that the best choices for them are both 3. We thus use one symbol to represent their learning rates in our paper.\n\n----", " Thanks for your valuable comments. Please find our detailed response below.\n\n\n>**Comment 1:** A significant concern with the submission is that it poses its method as domain generalization (DG) rather than image augmentation (IA) and the subsequent analysis and choice of baselines. I believe this confounds the comparison to prior art and comparable methods. \n>\n>**Response 1:** We first would like to clarify that in this paper we focus on solving the problem of domain generalization in the view of data augmentation and that we have stated that our method is a type of data augmentation method. As far as we know, data augmentation techniques are widely used to solve many problems, such as supervised learning (Mixup [A], Random Erasing [B]), semi-supervised learning (MixMatch[C]), domain adaptation (AugGAN [D]), and domain generalization focused on this paper (MixStyle[E]). However, the community commonly does not compare the methods of different tasks, since they are specially designed for the specific tasks, so as our method.\n>>\n>In addition, in our paper, we have already compared with the state-of-the-art data augmentation methods for domain generalization, including MixStyle, CrossStyle, DRPC, and FSDR. Note that, we have indicated that these four methods are data augmentation methods. We have shown the improvements of our method over them. Note that, all of these compared data augmentation methods in DG do not compare with the data augmentation methods (e.g., Random Erasing [B] that is widely used for supervised learning) of other tasks, due to the different purposes.\n>>\n> In sum, we believe that this paper does not confound the comparison to previous state-of-the-art methods, since we have compared with state-of-the-art methods for domain generalization (including both network-designing and augmentation-designing methods) in a fair way. In addition, we believe this paper makes an interesting and reasonable step to investigate how data augmentation can help to learn generalized models, rather than leads a conflict between the tasks of domain generalization and data augmentation.\n>\n>[A] mixup: Beyond Empirical Risk Minimization. ICLR 2018.\n>\n>[B] Random Erasing Data Augmentation. AAAI 2020.\n>\n>[C] MixMatch: a holistic approach to semi-supervised learning. NeurIPS 2019.\n>\n>[D] AugGAN: Cross Domain Adaptation with\nGAN-based Data Augmentation. ECCV 2018.\n>\n>[E] Domain Generalization with MixStyle. ICLR 2021.\n\n----\n\n>**Comment 2:** If I take the DG perspective I'd have to ask: Will this work for other DG tasks that do not make strong assumptions about the signal (image formation process, meaning of mean/variance)? I can find no evidence for this in the paper and it appears that the main premise is a strong domain assumption (the image formation process) and selection of data to conform with this assumption (i.e. the datasets are chosen such that they differ demonstrably in the chosen and explicitly modeled statistic). Alternatively, the message of the submission could be that for a set of domains, I can a priori examine some low-dimensional formative process that I then can exploit to generate better samples (in terms of generalization across the prior domains). This could be reformulated from the perspective of image augmentation (IA). 
\n>\n>**Response 2:** We would like to clarify that this paper considers the practical tasks along with commonly used data in the computer vision community. In real-world applications, style variations between different domains/datasets are widespread, e.g., autonomous driving with ever-changing conditions (weather), detecting persons at different day-night times, and recognizing objects in different styles (nature or cartoon). In addition, contexts with style variations are widely considered in domain generalization, including image classification (Office-31, Office-Home) and semantic segmentation (GTAV, Cityscapes). Therefore, this paper does not strongly select data to conform to the assumption of our method. Instead, we (1) consider the widely studied and practical task/context in the computer vision community, i.e., domain generalized urban-scene segmentation, and (2) specially design an augmentation method for solving this problem by explicitly considering the particularly crucial style factor. Moreover, we also show the benefit of our method for image classification tasks where the style variations are also significant between different domains.\n----", " \n>**Comment 3:** If I take on the perspective of IA, I'd have to ask if the cited prior art, baselines and benchmarks are chosen appropriately. For prior art, there would be significant references missing. For instance, [G] and [H] below demonstrate generic, in-training adversarial augmentation.\n>\n>**Response 3:** As explained in our reply to Comment 1, data augmentation methods are used for many tasks, and their purposes are specially designed for the corresponding tasks/problems. For example, [G] is designed to learn a model that is robust to different corrupted and perturbated samples (e.g., gaussian noise). Instead, in this paper, we aim to improve the model's generalization ability on unseen domains where image style variations are significant. Moreover, both [G] and [H] are largely relied on geometric transformations, such as translating, shearing, and rotating, which can be easily applied to global-based classification tasks but will have significant difficulties in pixel-wise semantic segmentation tasks. In other words, different augmentations have particular functions in solving different tasks. Therefore, we believe it is not essential and sometimes not practical to compare with all of the existing augmentation methods. Instead, comparing with methods focusing on the same task is more important. Recall that, in this paper, we consider the problem of domain generalized urban-scene segmentation. We have conducted experiments on popular benchmarks and have compared our method with state-of-the-art methods, which include methods based on data augmentation.\n>\n>[G] AugMax: Adversarial Composition of Random Augmentations for Robust Training. NeurIPS 2021.\n>\n>[H] Adversarial AutoAugment. ICLR 2020.\n\n----\n\n>**Comment 4:** Conceivably, per-channel image mean and variance could be included in these approaches as well without fundamental changes to the algorithms. How would the presented method compare in this case? Could it be extended to other explicitly modeled image formation processes (luminance, in-plane shift, etc.)? As an example: Let's say my domain shift were in-plane rotations (e.g. relatively static car-based camera vs. smart phone from pedestrian view). The presented method would not be expected to demonstrate much benefit, as the domain assumption of shift in image mean and variance are violated. 
One could probably model the in-plane rotation with a similar adversarial approach to sample generation, but this essentially requires explicit knowledge of the domain shit in a low-dimensional, parametric, forward way.\n>\n>**Response 4:** We acknowledge the reviewer for proposing these interesting suggestions and directions. We fully agree with you that our method, i.e., the proposed adversarial learning augmentation method, can be extended to other explicitly modeled image formation processes. For example, learning adversarial rotations for the domain generalization contexts where rotation variations are critical and should be carefully concerned. However, as explained in our reply to Comments 1&2&3&4, our goal is to learn a robust model under the autonomous driving context where the style variations are significant between domains. Due to being out of the scope of this paper, we would like to investigate the effectiveness of our method in the tasks with other image variations (such as in-plane rotations) in future work.\n\n----\n\n>**Comment 5:** As it is presented, the empirical results are appealing as the method is relatively straightforward and improves over the presented baselines, but the level of contribution is unclear as the choice of baselines lack a thorough sample of and comparison to state of the art image augmentation methods.\n>\n>**Response 5:** As explained in the above replies, (1) this paper focuses on designing a data augmentation method for the domain generalized urban-scene segmentation, and (2) existing augmentations methods are specially designed for different tasks. We thus only compare with the baselines and state-of-the-art methods for the studied domain generalized urban-scene segmentation task. In addition, we have already compared our method with the state-of-the-art data augmentation methods for domain generalization, including MixStyle, CrossStyle, DRPC, and FSDR. The results show the superiority of our method over them. Notice that, all of the compared data augmentations (MixStyle, CrossStyle, DRPC, and FSDR) choose not to compare with the other data augmentation methods of other tasks, due to the different purposes. Taking the above explanations, we hope the reviewer can find (1) the motivation of our proposed augmentation method for the domain generalized urban-scene segmentation and (2) the fair comparison with the state-of-the-art methods, especially the data augmentation methods in domain generalization.\n\n----", " >**Minor Comment A:** While inspecting tables 3 and 4, I noticed that the setup is described the same (Source/backbone), but the baselines differ. For instance, CityScapes changes from 21.64 to 28.95. It took a couple of reads to understand that the baseline in table includes Color Jittering and Gaussian Blur. Mentioning this fact in the exposure may make readings smoother.\n>\n>**Response A:** Thanks for this suggestion. We will add the corresponding description in the captions of Tables 3 and 4 in the revision.\n\n----\n\n>**Minor Comment B:** Table 3: Would it make sense to add a row combined \"Ours\" with \"AdvPixel\"? It may be interesting to see if these are complementary.\n>\n>**Response B:** Good question. We have already included this result in the Table. 3 of the appendix. For your convenience, we provide this experiment in the table below. We can find that AdvPixel can serve to enhance AdvStyle. 
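For clarity, a schematic of the pixel-level adversary used in this combination is sketched below; the step size and the absence of an explicit perturbation constraint are illustrative assumptions, since the exact AdvPixel setting is not restated here. The style-level counterpart was sketched earlier.

```python
import torch

def adv_pixel(model, images, labels, loss_fn, step=1.0 / 255):
    # Schematic per-pixel adversary (AdvPixel): every pixel is moved along the sign
    # of the task-loss gradient. This is a stand-in, not the exact formulation.
    x = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model, x, labels)
    grad, = torch.autograd.grad(loss, [x])
    return (x + step * grad.sign()).detach()

# Combined training adds both adversarial views on top of the clean batch:
# adv_s = advstyle_augment(model, images, labels, loss_fn)   # style-level (sketched earlier)
# adv_p = adv_pixel(model, images, labels, loss_fn)          # pixel-level
# total = sum(loss_fn(model, x, labels) for x in (images, adv_s, adv_p))
```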
The performance yields an improvement of 0.81% in mean mIoU.\n>\n>| Method | CityScapes | BDD | Mapillary | Mean |\n>| -------- | -------- | -------- |-------- | -------- |\n>| AdvPixel | 35.42| 33.28 |33.23| 33.97|\n>| AdvStyle (Ours) | 39.62 |35.54 |**37.00** |37.39|\n>| AdvPixel + AdvStyle | **40.65** | **37.16** | 36.77 | **38.20**|\n\n----", " Thanks for your valuable comments. Please find our detailed response below.\n\n>**Comment 1:** If the style features are just mean & variance of R-G-B channels?\n>\n>**Response 1:** Yes, the style features are the channel-wise mean and standard deviation of the input RGB images, and each of them is a 3-dim feature. \n>\n\n>**Comment 2:** Why not use other color spaces or even texture spaces? \n>\n>**Response 2:** Good question.\n>\n> *As for the color space*, we conduct experiments by applying our AdvStyle in the LAB space. Specifically, we first convert the RGB-sample to the counterpart LAB-sample and obtain the learnable mean and standard deviation. Then, we reconvert the LAB-sample to RGB-sample for adversarial learning and model optimization. This manner enables us to implement AdvStyle in the LAB space as well as to use the ImageNet-pretrained parameters. As shown in the Table below, LAB-based AdvStyle also significantly improves the performance on unseen domains but achieves lower results than RGB-based AdvStyle on two of the three benchmarks. On the other hand, converting between RGB and LAB will increase the training time due to the extra computation costs. Thus, considering the effectiveness and efficiency, we keep applying our AdvStyle to the RGB space.\n>\n>| Method | CityScapes | BDD | Mapillary | Mean |\n>| -------- | -------- | -------- |-------- | -------- |\n>| Baseline | 28.95 |25.14| 28.18| 27.42|\n>| AdvStyle (LAB) | 37.09 | 32.89 | **37.13** | 35.70 |\n>| AdvStyle (RBG) | **39.62** |**35.54** |37.00 |**37.39**|\n>\n>*As for the texture space*, one option can be directly augmenting the values of image pixels. We have compared the AdvPixel (applying adversarial learning on each pixel) with our AdvStyle in Table 3 of the main paper and Table 3 of the appendix. For your convenience, we provide these results in the table below. Our AdvStyle can outperform AdvPixel by a large margin, while combining AdvStyle and AdvPixel can achieve better performance. \n>\n>| Method | CityScapes | BDD | Mapillary | Mean |\n>| -------- | -------- | -------- |-------- | -------- |\n>| AdvPixel | 35.42| 33.28 |33.23| 33.97|\n>| AdvStyle (Ours) | 39.62 |35.54 |**37.00** |37.39|\n>| AdvPixel + AdvStyle | **40.65** | **37.16** | 36.77 | **38.20**|\n>\n> In future work, we would like to use/design a module/network that jointly accommodates different types of style variations (e.g., color, lighting, and texture) and employ our AdvStyle strategy on it to learn more robust models.\n\n----\n\n>**Comment 3:** If the target domain data are not used at all during training? \n>\n>**Response 3:** Yes, the model is trained only on the source domain (GTAV, SYNTHIA, or both of them), and the target domain data (CityScapes, BDD100K and Mapillary) are totally unseen during training. During testing, the trained model is directly applied to the target testing data without further training or fine-tuning.\n\n----\n\n>**Comment 4:** If a local augmentation can further improve the performance? \n>\n>**Response 4:** Good suggestion. Following your comment, we apply AdvStyle on local patches instead of the whole image. 
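Below is a schematic of the per-patch style parameters used in this variant (a 2×2 split into four patches, each contributing a learnable 6-dim mean/std vector); the exact split and training procedure are described in the sentences that follow, and the shapes and names here are only illustrative.

```python
import torch

def patch_style_params(images):
    # images: (B, 3, H, W). Returns four (mu, sigma) pairs, i.e. four learnable
    # 6-dim style vectors per image, one for each quadrant of the image.
    B, C, H, W = images.shape
    params = []
    for patch in (images[:, :, :H // 2, :W // 2],   # top left
                  images[:, :, :H // 2, W // 2:],   # top right
                  images[:, :, H // 2:, :W // 2],   # bottom left
                  images[:, :, H // 2:, W // 2:]):  # bottom right
        mu = patch.mean(dim=(2, 3)).detach().requires_grad_(True)     # (B, 3)
        sigma = patch.std(dim=(2, 3)).detach().requires_grad_(True)   # (B, 3)
        params.append((mu, sigma))
    return params
```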
Specifically, we split each image into 4 patches evenly (top left, top right, bottom left, and bottom right), and regard the channel-wise mean and standard deviation of each patch as learnable parameters (four 6-dim features). Then the model is trained the same as AdvStyle. As shown in the Table below, as you might expect, *AdvStyle + Patches* can further improve the performance on BDD and Mapillary. However, the mean improvement over all domains is marginal. Considering the effectiveness and efficiency, we apply AdvStyle to the whole image in our method.\n>\n>| Method | CityScapes | BDD | Mapillary | Mean |\n>| -------- | -------- | -------- |-------- | -------- |\n>| AdvStyle | **39.62** |35.54 | 37.00 |37.39|\n>| AdvStyle + Patches | 39.50 | **36.37** | **37.42** | **37.76** |\n\n----", " This work aims to address the domain generalization problem in semantic segmentation. A simple and effective adversarial augmentation technic is proposed which changes global statistics at the image level. Through adversarial learning, the adversarial style features seem to capture the characteristics of other datasets successfully and improve the performance significantly. Strengths\n- The main idea of AdvStyle is simple and easy to implement. I believe this can benefit many computer vision tasks, especially the dense prediction tasks.\n\n- The illustration of how style changes affect the segmentation performance in Figure 1 is good. It can be clearly observed that the mIoU decreases a lot when only changing scene colors. \n\n- The t-SNE visualization in Figure 5 provides good proof for the AdvStyle idea: the adversarial style features can capture some characteristics of other datasets. \n\n- Sufficient experiments have been done, including comparisons with common data-augmentation technics, SOTA DG methods and extra experiments on classification tasks.\n\nWeaknesses\n- In Figure 1, It is claimed that image-level mean-variance is used as the style feature. While it is not clear how to calculate and apply such style features. 
In Line 70, it is claimed that “6-dim feature for each example”. So, I wonder if the style features are just mean & variance of R-G-B channels. If so, why not use other color spaces or even texture spaces? I think this is an important ablation study that is missing in the main paper.\n\n I want to double-check if the target domain data are not used at all during training. So the results in Table 1 are obtained by training model on the GTAV training data only, and then applied to other real-world validation data. Right?

\n\nSince the channel-wise augmentation is applied to the global region, I wonder if a local augmentation can further improve the performance.\n The limitations and potential negative societal impact have been well described.", " The paper addresses the general problem of learning a model in one domain and testing it in a second domain, where there is some domain shift between the domains. Specifically, it demonstrates a method that improves testing performance on real, unknown data when learning on synthetic, rendered data in semantic segmentation for autonomous driving. The motivating observation is that in urban semantic segmentation for autonomous driving a concise domain shift is measurable in the per-channel mean and variance of the image data. In other words, the color statistics between different data sets (and acquisition conditions) differ. The authors propose to explicitly target adversarial changes of the mean and variance of the first domain images during training to improve generalization performance in a second, unknown, domain.\n\nThe method works in two phases for each batch iteration, where the mean and variance of the input image can be modified during the forward pass: adversarial updating of the mean/variance perturbation based on the current, frozen model prediction, and then regular updating of model weights with both adversarial as well as unperturbed samples.\n\nThe paper continues to evaluate the performance of the proposed method on one synthetic versus three real datasets in semantic segmentation for autonomous driving (GTAV versus CityScapes, BDD, Mapillary), as well as several choices for backbones and batch normalization schemes. The empirical performance results demonstrate significant improvements over the chosen baselines. Furthermore, the paper extends the results to image classification on two relevant benchmarks, \"Digits\" and and \"PACS\" The paper is written well and the method is laid out with sufficient clarity and detail. The datasets chosen for comparison are relevant and realistic (in terms of potential real-world applications of the method). I appreciate the motivation: We observe measurable population differences between various datasets and that they can be captured in a compact and semantically meaningful vector already (image mean and variance). How can we use this prior knowledge to improve task performance in semantic segmentation? Based on this motivation, the authors conclude to model the observed perturbation explicitly as it happens to be a natural part of compact image formation models already (in the sense that normalization with these statistics is well known and commonly used). Once it is modeled, the authors show that it is a comparatively simple step to adversarially predict a new set of samples that are harder to predict at the current training state. An appealing property of the method is that the perturbation space is fairly low-dimensional (6-vector) and thus more easily characterized compared to high-dimensional perturbations for instance on the level of individual pixels.\n\nA significant concern with the submission is that it poses its method as domain generalization (DG) rather than image augmentation (IA) and the subsequent analysis and choice of baselines. I believe this confounds the comparison to prior art and comparable methods. If I take the DG perspective I'd have to ask: Will this work for other DG tasks that do not make strong assumptions about the signal (image formation process, meaning of mean/variance)? 
I can find no evidence for this in the paper and it appears that the main premise is a strong domain assumption (the image formation process) and selection of data to conform with this assumption (i.e. the datasets are chosen such that they differ demonstrably in the chosen and explicitly modeled statistic). Alternatively, the message of the submission could be that for a set of domains, I can a priori examine some low-dimensional formative process that I then can exploit to generate better samples (in terms of generalization across the prior domains). This could be reformulated from the perspective of image augmentation (IA). If I take on the perspective of IA, I'd have to ask if the cited prior art, baselines and benchmarks are chosen appropriately. For prior art, there would be significant references missing. For instance, [1] and [2] below demonstrate generic, in-training adversarial augmentation. Conceivably, per-channel image mean and variance could be included in these approaches as well without fundamental changes to the algorithms. How would the presented method compare in this case? Could it be extended to other explicitly modeled image formation processes (luminance, in-plane shift, etc.)? \nAs an example: Let's say my domain shift were in-plane rotations (e.g. relatively static car-based camera vs. smart phone from pedestrian view). The presented method would not be expected to demonstrate much benefit, as the domain assumption of shift in image mean and variance are violated. One could probably model the in-plane rotation with a similar adversarial approach to sample generation, but this essentially requires explicit knowledge of the domain shit in a low-dimensional, parametric, forward way. \n\nAs it is presented, the empirical results are appealing as the method is relatively straightforward and improves over the presented baselines, but the level of contribution is unclear as the choice of baselines lack a thorough sample of and comparison to state of the art image augmentation methods.\n\n```\n[1] @inproceedings{\nwang2021augmax,\ntitle={AugMax: Adversarial Composition of Random Augmentations for Robust Training},\nauthor={Haotao Wang and Chaowei Xiao and Jean Kossaifi and Zhiding Yu and Anima Anandkumar and Zhangyang Wang},\nbooktitle={Advances in Neural Information Processing Systems},\neditor={A. Beygelzimer and Y. Dauphin and P. Liang and J. Wortman Vaughan},\nyear={2021},\nurl={https://openreview.net/forum?id=P5MtdcVdFZ4}\n}\n\n[2] @inproceedings{\nzhang2020adversarial,\ntitle={Adversarial AutoAugment},\nauthor={Xinyu Zhang and Qiang Wang and Jian Zhang and Zhao Zhong},\nbooktitle={International Conference on Learning Representations},\nyear={2020},\nurl={https://openreview.net/forum?id=ByxdUySKvS}\n}\n``` I would like to hear the author's thoughts on my concerns on image augmentation as a fair baseline above.\n\nSome minor comments that do not affect the rating:\n- While inspecting tables 3 and 4, I noticed that the setup is described the same (Source/backbone), but the baselines differ. For instance, CityScapes changes from 21.64 to 28.95. It took a couple of reads to understand that the baseline in table includes Color Jittering and Gaussian Blur. Mentioning this fact in the exposure may make readings smoother.\n- Table 3: Would it make sense to add a row combined \"Ours\" with \"AdvPixel\"? 
It may be interesting to see if these are complementary.\n The limitations of the work are fairly clear from the submission.", " This paper propose a new adversarial style augmentation strategy (AdvStyle) for domain generalization in the semantic segmentation task. The approach of AdvStyle can generate hard stylized images during training, preventing the model from overfitting on the source domain. The generation is completed via learning adversarial style feature. Experiments on two semantic segmentation benchmarks demonstrate the effectiveness of AdvStyle. Strengths:\nThe proposed approach can obviously improve the performance of baselines' performance on unseen domains, including segmentation and classification tasks.\n\nWeakness:\n1. The proposed AdvStyle is indeed an approach to extend the appearance diversity of the training images, and this approach can not ensure the performance on one unseen domain, i.e., this method lacks the interpretability.\nAnd AdvStyle can improve the generalization ability via color augmentation, but more augmentation effects are needed, such as lighting for the generalization of a model from normal-light training data to low-light testing data.\n\n2. Why not apply the AdvStyle on SOTA classification baselines? like ME-ADA in Table 6 and L2D in Table 7? I wonder whether the performance improvement is highly related with the baseline' performance.\n\n3. The superiority compared with current SOTA augmentation method is not obvious. Like the MixStyle reported in Table 2 of supp. More comparison is needed and such comparisons should be placed in the main paper, not supp.\n\n\n\n 1. The proposed method updates the style feature via the direction of the gradient as shown in Eq. (3). This is very similar to the strategy of obtaining adversarial samples (adversarial perturbations). Thus, I wonder how to control the parameter of $\\gamma$ in Eq.3? Why use both learning rate for $\\mu$ and $\\sigma$? I agree with authors' stated limitations in the supp, which is the training time. The author can consider how to reduce the training cost.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "A8QcUInNalG", "gdzGXPIFILJ", "acXP9BAaRk", "5qILJQ3D6eH", "gRcSA0hKXc", "acXP9BAaRk", "acXP9BAaRk", "5qILJQ3D6eH", "5qILJQ3D6eH", "5qILJQ3D6eH", "gRcSA0hKXc", "nips_2022_lXUp6skJ7r", "nips_2022_lXUp6skJ7r", "nips_2022_lXUp6skJ7r" ]
nips_2022__yEcbgIT68e
HSurf-Net: Normal Estimation for 3D Point Clouds by Learning Hyper Surfaces
We propose a novel normal estimation method called HSurf-Net, which can accurately predict normals from point clouds with noise and density variations. Previous methods focus on learning point weights to fit neighborhoods into a geometric surface approximated by a polynomial function with a predefined order, based on which normals are estimated. However, fitting surfaces explicitly from raw point clouds suffers from overfitting or underfitting issues caused by inappropriate polynomial orders and outliers, which significantly limits the performance of existing methods. To address these issues, we introduce hyper surface fitting to implicitly learn hyper surfaces, which are represented by multi-layer perceptron (MLP) layers that take point features as input and output surface patterns in a high dimensional feature space. We introduce a novel space transformation module, which consists of a sequence of local aggregation layers and global shift layers, to learn an optimal feature space, and a relative position encoding module to effectively convert point clouds into the learned feature space. Our model learns hyper surfaces from the noise-less features and directly predicts normal vectors. We jointly optimize the MLP weights and module parameters in a data-driven manner to make the model adaptively find the most suitable surface pattern for various points. Experimental results show that our HSurf-Net achieves the state-of-the-art performance on the synthetic shape dataset, the real-world indoor and outdoor scene datasets. The code, data and pretrained models are publicly available.
Accept
This paper proposes an approach to fit implicit surfaces for surface normal estimation. Reviewers unanimously agree on its novelty and performance. AC hence recommends acceptance.
train
[ "oij664rsvQ", "xz8TPT1Mmq", "o5IMbK_7eyY", "YyNK6gq_lV", "jewHO9_HSG", "oY4qrJu4DQw", "bHr9OEliqy", "XmWJKSzw9NI", "17OSHflLoq_", "hg2AXbfYUhY", "Afp9ZqtBk_0", "oE0n6gkWtJn", "fp9uZrBsh5W", "inJBplKsrkk", "zYiC2E_Ncws", "LIuBDxA9Ek", "8CIYA-yc_1Q", "B2jf6KM7boa" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Additional experiments are always welcome, but just to clarify: I would condition acceptance on the changes I described above regardless of the outcome of such an experiment. That an explicit fitting with the approach described in Eq. 5 performs worse is not an indication that the network actually approximates Eq. 5. It would rather suggest the opposite: that the network has learned to do something different than Eq. 5, thus claiming that the network is approximating Eq. 5 is misleading.", " Thanks for the comments.\n \nWe will conduct more experiments to compare the performance obtained by our approximation with concatenation and the performance obtained with exact powers in our revision. If the former option produces better performance, we would follow your suggestion to make it very clear that our explanation is our interpretation of what the network could do.\n \nBest,\n \nAuthors\n\n", " Thanks for the answer on Eq. 5, however, I can still not see that powers or products are explicitly formed somewhere. Concatenation is not the same as multiplication. Additionally, something like $\\mathbf{c}_5$ ($\\mathbf{c}$ with subscript 5) is not the necessarily same as $c$ to the power of 5.\n\nI agree that a network could approximate the surface fitting described in Eq. 5, since networks can approximate any function (given enough capacity), but there is currently no reason to believe that it will actually approximate Eq. 5 and not another function.\n\nI would condition acceptance on either removing this misleading description, or on clearly stating in the text that this is the author's interpretation of what the network could do, but that there is currently no clear evidence that this is actually what the network is implementing.", " Thank you very much for your reply and comments. 
We will update the exposition in the revised version based on your comments.\n\n**Q1: The similarity to PointNet and PointNet++.**\n\nA1:\nAs the feature encoder of our method, the Space Transformation module takes a 3D point cloud patch $P=\\lbrace p_i|i=1,...,N \\rbrace$ as the input and outputs $N/4$ point features, where the patch $P$ is centralized at the query point $p$ and the $N/4$ points are the nearest neighborhoods of the query point.\nFor the patch $P$, we first extract the per-point feature using a chain of Dense Block [1] and maxpooling based on the local coordinate frame of each point $p_i$.\nAt this stage, we do not adopt the PointNet architecture as in PCPNet.\nThen we sequentially extract features from different scale neighborhoods of the query point $p$.\nAt this stage, we do not use the commonly used auto-encoder architecture with skip-connection and the point set abstraction in PointNet++.\nFurthermore, we also do not use the farthest point sampling, the per-point kNN searching and grouping operation in PointNet++.\nWe get the point subsets with different scales from the input point cloud by consecutively selecting the $\\lbrace N, N/2, N/4\\rbrace$ neighborhoods of query point $p$.\nThis step is very efficient as the neighborhoods of $p$ are stored in the array with increasing distance.\nWe simply use the MLP and maxpooling to elevate the per-point feature dimension at scale $N$ and pass its per-point feature and global feature to the next smaller scale $N/2$, and so on.\nFinally, the $N/4$ point features are fed to the following hyper surface fitting module.\nAs described above, our Space Transformation module is different from PointNet and PointNet++.\n\n[1] Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks. CVPR 2017.\n\n**Q2: The link to the source code.**\n\nA2:\nAs you suggested, we will provide a link to the source code in the final version of the paper.\n\n\n**Q3: The limitation should be a part of the main paper.**\n\nA3:\nWe plan to shorten or change Sections 3.1 and 3.2 to make some space for discussion of limitations.", " 1. Regarding the similarity to PointNet and PointNet++ - the differences are still not highlighted enough for me. If it is an MLP operating on KNN points and aggregated over several scales this is very similar to PN++ (they do it on global shapes and here on local patches centred around the query point which PCPNet did with PN and not PN++). \n2. You have clarified the point on positional encoding and addressed my concerns. \n3. If the code is to be released, I expect the link to be provided in the final version of the paper. \n4. In my opinion, limitations should be a part of the main paper. 
", " Dear Reviewer Md6E,\n\nFollowing your questions, we provided additional explanations about the novelty of our approach and the difference between the Space Transformation module and PointNet++.\nIn light of this, we would like to know whether you believe we have addressed your concerns.\n\nThank you for your time,\n\nThe Authors", " Dear Reviewer o6yD,\n\nWe analyzed the relationship between the network overparameterization and the ‘c’ dimension and discussed the difference between the proposed method and AdaFit.\nIn light of this, we would like to know whether you believe we have addressed your concerns, and if so we hope that you would be willing to increase your score.\n\nThank you for your time,\n\nThe Authors", " Dear Reviewer KzzY,\n\nWe revised the paper based on your comments and provided a discussion about utilizing the noise-free feature to perform other point processing.\nIn light of this, we would like to know whether you believe we have addressed your concerns.\n\nThank you for your time,\n\nThe Authors", " Thank you very much for your reply and comments. We will update the exposition in the revised version based on your comments.\n\n**Q1: Misleading presentation.**\n\nA1: For Eq.(5) in the paper, we provide a more detailed derivation process to clarify its relationship to the designed networks.\nIn order to expand the polynomial surface fitting in 3D dimensional space into the high dimensional feature space using a neural network with parameter $\\Theta$, we define the multiplication of real numbers with order $g^\\omega \\odot c^\\upsilon$ in the polynomial function as $\\mathbf{g} \\boxdot \\mathbf{c}$, i.e., $g^\\omega \\odot c^\\upsilon:=\\mathbf{g} \\boxdot \\mathbf{c}$, and the orders $\\omega,\\upsilon \\in [0,1,...,\\tau]$. 
For the symbol $\\boxdot$, we experimentally choose the commonly used feature concat operation (corresponding experiments and discussions will be added in the supplementary materials), hence\n$$\n\\begin{aligned}\n \\mathcal{N}\\_{\\theta,\\tau}(G,C) &= \\sum_{k=0}^{\\tau} \\sum_{j=0}^{k} \\theta_{k-j,j} ~ g^{k-j} c^j \\\\\\\\\n &= \\theta_{0,0} + \\theta_{1,0}g + \\theta_{0,1}c + \\cdots + \\theta_{1,\\tau-1}gc^{\\tau-1} + \\theta_{0,\\tau}c^{\\tau} \\\\\\\\\n &= [\\theta_{0,0}, \\theta_{1,0}, \\theta_{0,1}, \\cdots, \\theta_{1,\\tau-1}, \\theta_{0,\\tau}]\n \\begin{bmatrix}\n 1 \\\\\\\\\n g \\\\\\\\\n c \\\\\\\\\n \\vdots \\\\\\\\\n gc^{\\tau-1} \\\\\\\\\n c^{\\tau} \\\\\\\\\n \\end{bmatrix} \\\\\\\\\n &= [\\theta_{0,0}, \\theta_{1,0}, \\theta_{0,1}, \\cdots, \\theta_{1,\\tau-1}, \\theta_{0,\\tau}]\n \\left(\\begin{bmatrix}\n g^0 \\\\\\\\\n g^1 \\\\\\\\\n g^0 \\\\\\\\\n \\vdots \\\\\\\\\n g^1 \\\\\\\\\n g^0 \\\\\\\\\n \\end{bmatrix} \\odot\n \\begin{bmatrix}\n c^0 \\\\\\\\\n c^0 \\\\\\\\\n c^1 \\\\\\\\\n \\vdots \\\\\\\\\n c^{\\tau-1} \\\\\\\\\n c^{\\tau} \\\\\\\\\n \\end{bmatrix} \\right) \\\\\\\\\n &:= \\Theta\n \\left(\\begin{bmatrix}\n \\mathbf{g}_1 \\\\\\\\\n \\mathbf{g}_2 \\\\\\\\\n \\mathbf{g}_3 \\\\\\\\\n \\vdots \\\\\\\\\n \\mathbf{g}\\_{\\_{N\\_{\\tau}-1}} \\\\\\\\\n \\mathbf{g}\\_{\\_{N\\_{\\tau}}} \\\\\\\\\n \\end{bmatrix} \\boxdot\n \\begin{bmatrix}\n \\mathbf{c}_1 \\\\\\\\\n \\mathbf{c}_2 \\\\\\\\\n \\mathbf{c}_3 \\\\\\\\\n \\vdots \\\\\\\\\n \\mathbf{c}\\_{\\_{N\\_{\\tau}-1}} \\\\\\\\\n \\mathbf{c}\\_{\\_{N\\_{\\tau}}} \\\\\\\\\n \\end{bmatrix} \\right) \\\\\\\\\n &= \\Theta (G \\boxdot C).\n\\end{aligned}\n$$\n\nThen, the final bivariate function used in our hyper surface fitting is $\\mathcal{N}_{\\theta,\\tau}(G,C)=\\Theta (G \\boxdot C)$, where $G$ and $C$ are high dimensional features of the 3D point clouds extracted by two different modules, which are introduced in Sec.3.3 and Sec.3.4 of the paper, respectively.\n\n**Q2: Definition of G and C.**\n\nA2: We have changed the original description in the text below Eq.(5) to: \"where $G\\in \\mathbb{R}^c$ and $C \\in \\mathbb{R}^c$ are high dimensional features of the 3D point clouds extracted by two different modules, which are introduced in the following sections\" and removed the concept of basis vectors.\n\n**Q3: Max-pooling in Eq. (9).**\n\nA3:In the traditional polynomial surface fitting, we have [1]\n$$\nJ_{\\alpha,n}(x,y) = \\alpha_{0,0} + (\\alpha_{1,0}x + \\alpha_{0,1}y) + \\frac{1}{2}(\\alpha_{2,0}x^2 + 2\\alpha_{1,1}xy + \\alpha_{0,2}y^2) + \\frac{1}{6}(\\alpha_{3,0}x^3 + 3\\alpha_{2,1}x^2y + \\cdots) + \\cdots ~.\n$$\n\nThe origin, that is the point of the fitted surface where the estimation is performed, is $(0, 0, \\alpha_{0,0})$.\nThe coefficients of the principal terms $(\\alpha_{1,0}x + \\alpha_{0,1}y)$ in the above polynomial equation are used to calculate the normal of the fitted polynomial surface at the origin\n$$\n\\mathbf{n}\\_{p} = h(\\alpha) = (-\\alpha_{1,0}, -\\alpha_{0,1}, 1) / \\sqrt{1 + \\alpha_{1,0}^2 + \\alpha_{0,1}^2}.\n$$\n\nThe rest of the terms after the principal terms in the polynomial equation are not used in the estimation of the normal.\nBased on this, we use the max-pooling over all features from the hyper surface fitting to choose the most prominent feature.\nThen, we use a liner layer $\\mathcal{H}$ to predict the normal of the query point $p$ (i.e. 
origin)\n$$\n\\mathbf{n}\\_p = || \\mathcal{H}(\\mathop{\\rm MAX} \\lbrace w_i ~ \\mathcal{N}_{\\theta,\\tau}(G_i,C_i) | i=1,...,N \\rbrace) ||,\n$$\n\nwhere $\\mathcal{N}_{\\theta,\\tau}(G_i,C_i)$ means the per-point feature result obtained from the hyper surface fitting process.\n\n[1] Frédéric Cazals, Marc Pouget. Jet_fitting_3: A Generic C++ Package for Estimating the Differential Properties on Sampled Surfaces via Polynomial Fitting. ACM Transactions on Mathematical Software, 2008.", " Thanks for the clarifications, I will also try to clarify some of the points I mentioned in my review, since I feel that the authors misunderstood some of them:\n\n**Misleading presentation**:\nJust to clarify what I mean with the misleading presentation: I agree that the exposition claims that a hypersurface in high-dimensional space is being fitted, rather than a surface in 3D space. However, from my current understanding, I don't think that the polynomial hypersurface fitting described in Eq. 5 is necessarily what the MLP is approximating. It could also be approximating something completely different. Eq. 5 has terms like G^{k-j} C^{k} in the definition of \\mathcal{N}. Are these terms explicitly created anywhere in the architecture? Please correct me if I am wrong, but I do not see any powers of G or C being formed or their products. If the argument is that the MLP is used to approximate them, this may or may not be true - there does not seem to be any explicit supervision for the MLP to approximate these terms (again, please correct me if I am wrong). Another option to convince me that these terms are formed somewhere is to show it empirically - for example that some intermediate layers of the MLP empirically approximate these terms.\n\nHowever, if it can't be shown that terms like G^{k-j} C^{k} are formed somewhere in the network, then Eq. 5 is a bit misleading and I would remove it as an interpretation of what the network is approximating. It might be approximating this, but it might also be approximating something completely different. It would be good if the authors reply what specifically they plan to change in the paper to address this (or alternatively explain what I got wrong about this).\n\n**Definition of G and C**:\nI was referring to the definition of the *basis vectors* that are mentioned in the text below Eq. 5, not the definition of G and C itself. It might just require a re-formulation or a short mention somewhere in the text to clarify what is meant by these basis vectors.\n\n**Max-pooling in Eq. 9**\nMy main concern here is that all steps in Section 3.2 are motivated as approximating hypersurface fitting, however Eq. 9 is not motivated by hypersurface fitting (since the analogy between what the network is doing and hypersurface fitting breaks here), so a different motivation for the max-pooling needs to be given in the text. Why is specifically max-pooling used over what is supposedly the 'height' of the hypersurface to define the normal? This should be discussed in the text.\n", " We would like to thank the reviewer for the insightful comments.\n\n**Q1: The difference between the Space Transformation module and PointNet++.**\n\nA1: Our Space Transformation module (Fig. 3) is different from PointNet++. The way our method extracts features is different from that of PointNet++. 
Specifically, in order to extract the features for learning hyper surface and further estimating point normal without explicitly fitting polynomial surface, we design Local Aggregation Layer and Global Shift Layer to realize point set abstraction in our Space Transformation module, rather than directly using the PointNet++. The main differences are as follows:\n\n(a) In the Local Aggregation Layer, we group the local neighborhood features at each point by the 3D spatial distance based kNN search. Then we refine each grouped feature via a chain of Dense Block units, rather than using PointNet in PointNet++. In addition, there is no point cloud sampling during this process, so the number of points keeps unchanged.\n\n(b) In the Global Shift Layer, we provide each point with global information by fusing global features extracted from multiple neighborhood size scales of the query point p. To get the global feature, a maxpooling operation is successively performed over neighboring points of the query point p with scale size $N_s=\\lbrace N, N/2, N/4\\rbrace$, where $N$ is the number of points in the input patch. Because our method only estimates the query point (i.e., center point) normal in a patch, we simply select the $N_s$ nearest neighbors of the query point as a subset rather than using the Farthest Point Sampling in PointNet++.\n\n**Q2: Novelty: a newer version of PCPNet with PointNet++ and positional encoding.**\n\nA2: (a) As described in the answer to the previous question, our backbone architecture (Space Transformation module) is not PointNet++. Our Relative Positional Encoding is completely different from the traditional positional encoding which is based on sine and cosine functions. The previous positional encoding encodes the spatial position information of each point according to its coordinate. We design a parameterized and learnable encoding scheme to encode the relative position information, which reveals the local geometric structure of the point cloud. In Table 2(c) of the paper, we showed better results than the traditional position encoding. Thus, these two modules in the proposed method are novel.\n\n(b) Different from PCPNet, the most important contribution of this paper is that we propose a novel Hyper Surface representation in a high dimensional feature space and design a Hyper Surface Fitting module to optimize this surface representation for point cloud normal estimation. We analyze the problems existing in traditional polynomial surface fitting methods. We use the learned hyper surface to get the point normal in a direct regression way, instead of explicitly constructing the geometry surface. This allows our method to effectively avoid the problems caused by polynomial fitting in the normal estimation task, which achieves a significant improvement over the state-of-the-art on several datasets. Extensive ablation experiments also validate the effect of each component that contributes to the performance.\n\n**Q3: Insight: visualize each point's contribution to the hyper surface fitting.**\n\nA3: As you suggested, we have visualized and discussed each point's contribution to the hyper surface fitting in the appendix section of the revised paper.\n\n**Q4: Will the code be released?**\n\nA4: Yes, the source code and processed data will be released after acceptance.\n\n**Q5: No limitation and potential societal impact. Failure case on the clean point cloud.**\n\nA5: In Sec. 
3 in the supplementary material, we have already elaborated on the limitation and broader impact of the proposed approach. In all our experiments, except for preserving some sharp corners (see Fig. 1 in the supplementary material), our method can obtain reasonable results on the clean point cloud without failure.", " We thank the reviewer for the detailed comments. We have revised the paper based on your comments and some other details are not listed here. We will make the code publicly available to help readers understand the details of the algorithm more clearly.\n\n**Q1: The motivation and presentation are misleading.**\n\nA1: Yes, our proposed method is a regression-based method and the surface is not fitted explicitly. Eq.(5) is designed as a sequence of skip-connected Residual Blocks in the Hyper Surface Fitting module. The hyper surface $\\mathcal{N}$ is embed in the network of this module, and its output $F=\\mathcal{N}(G,C)$, $F=\\lbrace f_1,f_2,…,f_M\\rbrace$ (see Fig.2).\nAs claimed in our paper, our method estimates point cloud normal by implicitly learning hyper surface rather than explicitly fitting polynomial surface. Our hyper surface is represented by MLP layers and the learnable parameters of the layers interpret the surface structure in a high dimensional feature space. The advantage is that the hyper surface can adaptively fit more complex point patterns in a robust way. The process of hyper surface fitting is equivalent to learning to determine the network parameters.\nWe clearly describe the problems existing in traditional polynomial surface fitting methods in Lines 39-48, which lead to our motivation. Our method deeply draws on the idea of the traditional polynomial surface fitting. However, the biggest difference is that our method extends the 3D geometry-based surface fitting into a high dimensional feature space. We follow the polynomial surface representation (Sec.3.1) to design and describe our formulas, modules and network structures (Sec.3.2). We use the learned hyper surface to predict the point normal in a direct regression way, instead of explicitly constructing the 3D geometric surface. This allows our method to effectively avoid the problems caused by polynomial fitting, and achieve a significant improvement over the state-of-the-arts.\n\n**Q2-1: G and C are not introduced.**\n\nA2-1: G and C are introduced in Sec.3.3 and Sec.3.4 of the paper, respectively. See Eq.(10) and Eq.(11).\n\n**Q2-2: The $\\tau$ in Eq.(6).**\n\nA2-2: Since we no longer use a polynomial function to fit the surface, but replace this process with a network to learn the hyper surface. Here, the $\\tau$ is the parameters of an MLP-based module, which is designed as a sequence of skip-connected Residual Block units (Fig.2). It is optimized with the training of the network.\n\n**Q2-3: Eq.(9) uses the max over all features.**\n\nA2-3: Our algorithm uses Eq.(9) to estimate the normal of the query point p (i.e. center point). To estimate the normal of each point in a patch, the formula is $\\mathbf{n}\\_i=\\mathcal{H}(\\mathcal{N}_{\\theta,\\tau}(G_i,C_i))$ without maxpooling. In the ablation, we verified that solving the neighbor point normal of p at the same time is not helpful. In addition, the loss function only constrains the normal of the query point, so the features of the query point should be the most valuable and prominent. 
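For concreteness, the two heads being compared here — the per-point variant (without maxpooling) and the max-pooled query-point variant of Eq.(9) — could be sketched as follows; this is a minimal PyTorch-style sketch, where layer sizes and names are illustrative only and the per-point weights $w_i$ are omitted for brevity:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalHead(nn.Module):
    """Regress a 3D normal from the per-point hyper-surface features of a patch."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # H(.): a small MLP mapping a feature vector to an (unnormalized) 3D vector.
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))

    def per_point_normals(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, M, C). One unit normal per point, i.e. n_i = H(N(G_i, C_i)),
        # without any pooling over the patch.
        return F.normalize(self.mlp(feats), dim=-1)

    def query_point_normal(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, M, C). A channel-wise max over the M patch points keeps the most
        # salient response per feature channel before regressing the query-point
        # normal, i.e. n_p = H(MAX_i N(G_i, C_i)), followed by normalization to ||n|| = 1.
        pooled, _ = feats.max(dim=1)  # (B, C)
        return F.normalize(self.mlp(pooled), dim=-1)
```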
These motivate us to use maxpooling over all features in a patch for estimating the normal of p.\n\n**Q2-4: The kNN search and dense blocks in Sec.3.3.**\n\nA2-4: In Sec.3.3, we introduce the local aggregation layer and global shift layer. As we claimed in Line 190, in the local aggregation layer, we group the local neighborhood features at each point by the 3D spatial distance based kNN search and refine each grouped feature via a chain of Dense Block units. That means we perform the kNN search for neighbors in the input 3D space and the Dense Blocks are applied over the concatenated features of all grouped point features in a neighborhood. So, our local aggregation layer cannot be described as a PointNet over the local neighborhood. In the global shift layer, we do not use kNN search.\n\n**Q2-5: Use the last scale $G_{s,i}$ as $G_i$?**\n\nA2-5: Yes, we use the feature of the last scale $G_{s,i}$ as the $G_i$. After recursively using Eq.(10), the last output contains the information from previous scales.\n\n**Q2-6: How to choose the subset of M points?**\n\nA2-6: Because our method only solves the query point normal in a patch, we choose M points as the M nearest neighbors of the query point. In addition, the value of M is consistent with the number of points output by the Global Shift Layer.\n\n**Q2-7: How to convert $C^j_i$ to $C_i$?**\n\nA2-7: We do not define $C_i$ and there will be no $C_i$. The scheme we adopt in Hyper Surface Fitting module (Fig.2) is $F_i= MAX\\lbrace MLP(C^j_i,G_i)\\rbrace$. That means $C_i$ does not exist in our algorithm.\n\n**Q3: MSE loss for the unoriented normal.**\n\nA3: We use the ground-truth normal vector to make the predicted normal vector in the same direction, and then calculate the MSE loss.\n\n**Q4: Paper citations.**\n\nA4: We have added citations to these three papers in the revised version.\n\n**Q5: Change the motivation.**\n\nA5: We will make some adjustments in the final version.", " We would like to thank the reviewer for the insightful comments.\n\n**Q1: The underfitting and overfitting. The dimension ‘c’.**\n\nA1: Generally, due to the limitations of the network itself and the complexity of the data, it is difficult to make a method absolutely free from underfitting and overfitting. Compared with existing methods, experimental results show that our method can overcome this problem to a large extent. In existing fitting based algorithms, underfitting and overfitting are mainly caused by artificially pre-determining a polynomial order in the process of polynomial surface fitting, and the order may not be suitable to the complexity of unpredictable data. Due to the limitations of human knowledge and experience, an optimal polynomial function is usually difficult to be formulated for the given data. The neural network has the advantages of adaptively learning the order and reasonably fitting the data by learning from a large amount of data. The underfitting and overfitting will not invalidate the algorithm, but it does affect the performance of the method.\nTo further verify the relationship between the network capability and the dimension ‘c’, we use different dimensions to verify the performance of the algorithm on the PCPNet dataset and compare it with SOTA AdaFit [ICCV 2021]. 
The results are shown in the following table.\nCategory|Clean|0.12%|0.6%|1.2%|Stripes|Gradient|Average\n-|-|-|-|-|-|-|-\nAdaFit|5.19|9.05|16.45|21.94|6.01|5.90|10.76\nOurs(32)|4.33|8.81|16.24|21.65|5.17|5.02|10.21\nOurs(64)|4.53|8.84|16.24|21.64|5.40|5.07|10.28\nOurs(128)|4.17|8.78|16.25|21.61|4.98|4.86|10.11\nOurs(256)|4.18|8.78|16.23|21.65|5.06|4.96|10.14\n\nBased on this experiment, we typically select 128 as the dimension that allows the algorithm to achieve the best performance, and we use it in experiments of the paper. The results also show that our method has better performance than AdaFit under all dimensions [32, 64, 128, 256].\n\n**Q2: Handle non-metric scale point clouds.**\n\nA2: As described in Line 124 of the paper, the input patch is normalized with its patch radius, and the query point is used as the origin to transform the patch into a unified coordinate system. As shown in Fig.2, our Relative Position Encoding module and the Space Transformation module are working in parallel, but only the transformation module processes the point cloud of different scales. We select $M=N/4$ neighboring points of the query point as the input of the encoding module, and the number of points keeps the same during the processing. Thus, $(p^j_i-p_i)$ in the encoding module will not be affected by different patch size scales.\n\n**Q3: About $||\\mathbf{n}||=1$.**\n\nA3: We use an MLP layer to output 3D normal vectors, and then use an additional normalization to ensure that the output vector $||\\mathbf{n}||=1$. We have added the corresponding symbol to indicate this step in Eq.(9) in the revised version.\n\n**Q4: Handle uneven point clouds.**\n\nA4: We have already verified the effectiveness of our method on unevenly sampled point clouds in experiments. The PCPNet dataset contains unevenly sampled data. Please see the category of density stripes and gradient in Table 1. Examples of point clouds for this dataset are shown in Fig.7 and Fig.8 in the supplementary material. The Semantic3D dataset is scanned by LiDAR in real-world outdoor scenes, and its data is also unevenly distributed, please see Fig.5 in the supplementary material. The performance of all algorithms on uneven point clouds does deteriorate, but our results are still the best.\n\n**Q5: Discussion about AdaFit.**\n\nA5: Our method is completely different from AdaFit. Both AdaFit and DeepFit adopt the PointNet to regress the weights of each point in a patch, and then use the traditional polynomial surface fitting to explicitly fit a 3D geometric surface and solve the normal vector of the surface as the point normal. On the contrary, our method implicitly learns the hyper surface to directly regress the 3D normal vector of the point without requiring any fitting for a geometric surface. We have already shown the difference between our method and AdaFit in Fig.1 of the paper and illustrated the problems of geometry-based polynomial fitting methods such as AdaFit in Lines 39-48. Moreover, all experimental results show that our method achieves better performance than AdaFit.\n\n**Q6: Figures 4 and 5.**\n\nA6: As you suggested, we will update Figures 4 and 5 in the revised version.\n\n**Q7: Lack of applications.**\n\nA7: In Sec.2.3 in the supplementary material, we have already shown three applications using the estimated normal.\n\n**Q8: No limitation.**\n\nA8: In Sec.3 in the supplementary material, we have already elaborated on the limitation and broader impact of our approach. 
We have verified its effectiveness on unevenly sampled point clouds in Table 1 of the paper. As you suggested, we will add a discussion about the network overparameterization in the revised version.", " We would like to thank the reviewer for the insightful comments, in particular for the utilization of features.\n\n**Q1: Organize notations into a table.**\n\nA1: As you suggested, we have added a table of notations in the appendix section of the revised paper.\n\n**Q2: The hierarchical point set abstraction is proposed by PointNet++. Cite the paper.**\n\nA2: We have added a reference to PointNet++ in the revised paper. In fact, our hierarchical point set abstraction is different from the one proposed by PointNet++ and our Space Transformation module (Fig. 3) is also different from the encoder of PointNet++. In order to extract the features for learning hyper surface and further predicting point normal without explicitly fitting polynomial surface, we specifically design Local Aggregation Layer and Global Shift Layer to realize point set abstraction in the Space Transformation module, rather than directly using the PointNet++. The main differences are as follows.\n\n(a) In the Local Aggregation Layer, we group the local neighborhood features at each point by the 3D spatial distance based kNN search. Then we refine each grouped feature via a chain of Dense Block units, rather than using PointNet in PointNet++. In addition, there is no point cloud sampling during this process, so the number of points keeps unchanged.\n\n(b) In the Global Shift Layer, we provide each point with global information by fusing global features extracted from multiple neighborhood size scales of the query point p. To get the global feature, a maxpooling operation is successively performed over neighboring points of the query point p with scale size $N_s=\\lbrace N, N/2, N/4\\rbrace$, where $N$ is the number of points in the input patch. Because our method only estimates the normal of the query point (i.e., center point) in a patch, we simply select the $N_s$ nearest neighbors of the query point as a subset rather than using farthest point sampling in PointNet++.\n\n**Q3: Include a real-world LiDAR example.**\n\nA3: Actually, both Fig. 6 in the main paper and Fig. 5 in the supplementary material show the normal results on the Semantic3D dataset (real-world outdoor LiDAR scan), and Fig. 6 in the main paper is a partial enlarged view of the Semantic3D dataset. As you suggested, we will present more results on LiDAR data in the supplementary material.\n\n**Q4: Utilize the noise-free feature to perform other point processing.**\n\nA4: We believe that such noise-free features based on normal vector estimation tasks should also be helpful for tasks such as point cloud denoising and super-resolution, and we have similar ideas and attempts. Generally, traditional methods implicitly or explicitly determine a surface when solving tasks such as normal vector estimation, point cloud denoising, surface reconstruction, and super-resolution. For example, the normal vector estimation task needs to estimate the vertical vector according to the local plane, the point cloud denoising task needs to pull noisy points onto the surface, and the surface reconstruction task needs to determine a zero iso-surface about SDF, so there is a strong correlation among these tasks. 
The results of the application experiments in the supplementary material show that better normal vectors enable traditional methods, such as point cloud denoising and surface reconstruction, to achieve better results.\nOur method implicitly learns hyper surfaces to estimate point normals rather than explicitly fitting polynomial surfaces, and achieve the-state-of-art performance. We find that our local feature aggregation layer is similar in design to the work ‘Score-Based Point Cloud Denoising’ [ICCV 2021]. Unlike this work, which focuses on the offset of each point, we focus on the normal vector of the query point, so we use Global Shift Layer to obtain the features of different neighborhood scales of the query point. This shows that the network structures used in solving such tasks have a certain generality.\nIn conclusion, we believe that features in normal vector estimation can lead to better results for other related tasks with deep learning based schemes, but the condition is that we need to design corresponding reasonable constraints according to the specific task. In our denoising experiments with this feature, we need to add new constraints about smooth surfaces, otherwise simply using the features in the normal vector estimation task will not get good denoising results. We will continue to investigate the applicability of this feature in different tasks in the follow-up work.", " The authors propose a novel normal estimation method called HSurf-Net. The method works by implicitly learning hyper surfaces in a high dimensional feature space. The method includes a novel space transformation module consisting of a sequence of local aggregation layers and global shift layers, to reliably build the feature space. Experimental results show that the HSurf-Net can accurately predict normals from point clouds with noise and density variations, and achieves state-of-the-art performance on multiple datasets. Strengths\n- The paper is well-written and easy to follow. The method is technically sound. The authors have done abundant experiments covering a broad range of baselines and datasets, making the results convincing.\n- The idea of transforming points into a higher dimensional feature space and fitting hyper surfaces is interesting and novel. The authors have designed the network accordingly to achieve this goal and performed ablation studies to validate each design choice.\n- The method has shown impressive robustness towards noisy inputs. The HSurf-Net outperforms all other baselines under all settings.\n\nWeaknesses\nThis paper is well-presented and proposes a novel model for normal estimation with state-of-the-art performance. I do not see any major issue in its current form. - There are a lot of different notations (especially hyper-parameters) in this paper. Organizing them into a table in the appendix may help readers to follow the paper more easily.\n- The idea of hierarchical point set abstraction (Fig.3) is actually proposed by PointNet++. The authors might want to cite this paper.\n- Include a real-world example (e.g. a lidar scan) could further strengthen the paper.\n- According to the theory proposed in this paper. The points are lifted to noise free features before fitting hyper surfaces. Could we utilize this feature to perform other point processing tasks (maybe jointly) e.g. denoising / super-resolution? Any discussions on this would be appreciated. 
Yes, in the supplementary material.", " - This paper introduces an approach to surface normal estimation from the 3D point clouds. Instead of applying function fitting to the input data that may undergo overfitting or underfitting, the approach introduces hyper surface fitting to learn hyper surfaces in the high dimensional feature space implicitly. The approach learns hyper surfaces from the features and directly predicts normal vectors. The approach (HSurf-Net) is validated with the synthetic and real-world indoor and outdoor datasets. **Strengths**\n\n1. The paper revisits the normal estimation problem and considers the problem as the hyper-surface fitting problem. Since the surface fitting happens on the latent feature domain, such implicit fitting is rather robust to the explicit approaches that are easily affected by additive noise. The reviewer is not fully tracking the literature on this field, but it seems novel and interesting.\n2. The paper comprehensively summarizes the related work in the normal estimation field. The paper is self-contained, so the readers readily follow the problem and the recent advances in this field.\n3. The paper is straightforward to understand, and the approach shows compelling results on the PCPNet, SceneNN, and Semantic3D datasets. The visual quality of the estimated normal is reasonable.\n4. The paper explains the proposed idea in detail. The supportive figures (such as Figures 2 and 3) help to understand the approach better.\n\n**Weakness**\n1. It is unclear that the proposed approach is completely free from ‘underfitting and overfitting'. The results may depend on the ‘c’ dimension of the hyper surface coefficients (Sec. 3.2). Even if the relevant features and network output would fit for the ‘c’, there is a danger that network capacity would be overparameterized. Can the authors mention this? This is an important issue because the main argument of the proposed approach is expressed in this manner. For instance, lines 7-9 “fitting surfaces explicitly from arw point clouds suffers from overfitting or underfitting issues caused by inappropriate polynomial orders and outliers… To address these issues, we introduce hyper surface fitting to implicitly learn hyper-surfaces…”\n2. The relative position encoding module directly embeds the relative position in a local frame (p^j_j-p_i). However, such embedding would work for the specific metric scale. How would the approach handle the point clouds with non-metric scale point clouds? What I mean is that the scale set of the patch size {N, N/2, N/4} would be invariant regardless of the scale of the point cloud, but (p^j_j-p_i) would be directly affected by the scale of the point cloud. In other words, if there are two point clouds {X} and {X/10}, can the network would produce the same output?\n3. It is interesting to see that Eq (9) induces the normal vector. However, the equation itself does not guarantee ||n||=1 because it is the network output of the max-pooled feature vectors. Does any additional post-processing is applied to get the normalized n? Please clarify.\n4. One critical concern is whether the approach would handle uneven point clouds. The paper mentions the ‘patch’ over the paper, and mention that ‘patch size’ as the number of the points (Line224 and so on). However, such a definition of ‘patches’ is readily affected by the density of the point cloud. For instance, in the KITTI dataset, in the coarse region, if we set N=700, it will introduce severely biased point cloud samples. 
The all experiments are conducted with the evenly sampled point clouds, where the density of the point cloud is fixed over the dataset. Please state how the proposed approach would handle density changes or uneven point clouds.\n5. In the experiment section, the paper states the proposed approach outperforms the previous approaches, but it would be better if the paper mentioned the comparison between the second-best approach. For instance, AdaFit[45] is quite similar to the proposed approach. Please provide some discussion or analysis of the suggested approach.\n 1. Figure 5 is hard to distinguish the proposed approach. Is there any way to visualize the errors more effectively? How about utilizing log-scale errors or changing the color map? Similarly, Figure 4 could be improved as well. Because many plots are overlapped, consider adjusting the min and max of PGP to show the other plots better. Consider using other line styles to distinguish the lines better. \n2. Another suggestion is to apply the estimated normal to the various applications. For instance, making a mesh from a raw point cloud remains a challenging problem. How about showing that the noisy raw point clouds can be successfully reconstructed as a mesh using the proposed approach? Maybe using the Poisson reconstruction using the various estimation of surface normal?\n3. Please answer the questions in the paper weakness section.\n - The paper does not address the limitation of the proposed approach. The concern about the network overparameterization and the issue about point cloud densities should be clarified and stated in the limitation section.", " The authors propose a method to estimate normals in point clouds based on a novel patch-based network architecture that integrates global patch information and local neighborhood information inside the patch in multiple steps, making heavy use of densenets and skip connections. A new cross-product-based loss is used to train the network.\n\nI would consider the novel architecture, as well as the novel loss (which seems to be very relevant to achieve good performance according to the ablation) to be the main contributions. Strengths:\n- Normal estimation should be of great interest to the community.\n- The architecture is reasonably novel. It roughly follows a PointNet-like setup in most components, but the component combination into the full architecture seems novel.\n- The evaluation shows clear improvements over the state-of-the-art on multiple datasets, including real-world data.\n- There is a thorough ablation of the design choices (even if their theoretical motivation provided by the authors is misleading, this provides some empirical motivation).\n- The authors include source code and promise to provide their exact data.\n\nWeaknesses:\n- The motivation and presentation of the method is misleading.\n- The exposition is missing some details of the method.\n- Given that the current motivation is misleading, this leaves the design choices for the architecture poorly motivated.\n\nOverall, due to the promising that show a clear improvement over the state-of-the-art on several datasets, I am leaning towards the positive side. However, the misleading exposition and missing details lower the quality of the paper significantly and should be corrected before the paper can be accepted. I believe that this can be done in a minor revision.\n\nDetails:\n- The motivation and presentation of the method is misleading. 
In the introduction, Section 3.1 and Section 3.2, the approach is carefully motivated as surface fitting in a feature space. However it seems like surface fitting is not done explicitly, and instead a generic network is used that may or may not learn to do something similar as the polynomial surface fitting described in Eq. 5. The surface mathcal{N} is never explicitly constructed or encoded in any specific way in the network structure. Considering this I would clearly count the proposed method as regression-based method in the author's terminology, not as a hybrid or fitting-based method. The motivation of the approach as surface fitting seems to me like carefully motivating a method as specifically using a Sobel Filter to detect edges, and then training a generic CNN to obtain the output. It seems quite misleading and should be changed.\n\n- The exposition is missing some details and could be improved.\n - In Section 3.1, the use of two Taylor expansions seems unnecessarily complicated. Also, since both are defined with the same order it does not become clear that the fitted surface can usually only be an approximation of the true surface. It might be clearer to define the ground truth surface with a generic function f(x,y) instead of the truncated Taylor expansion J_beta,n, and then fit J_alpha,n to this generic function.\n - The introduction should clearly mention that the goal is to find *unoriented* normals, i.e. without giving information which side of the surface is inside or outside.\n - In Eq. 5, the dimensionality of the feature space of F is not defined (the output space of mathcal{N}_theta,tau and mathcal{F})\n - The text below Eq. 5 mentions basis vectors that for G and C that are introduced in later sections, but I could not find mention of them in the rest of the text.\n - Eq. 6 is missing a vector norm for the difference between feature vectors.\n - Also, in Eq. 6, would optimizing over tau not always result in the highest possible tau being used? Since higher-order should always be able to achieve a lower error. (This may not be very relevant because no optimization is done in practice - see concerns about the motivation of the method.)\n - Section 3.2 should clarify how weights w_i are computed.\n - Losses should be discussed in the main paper, since they are non-standard and seem to be an essential part of the contribution. Shortening or changing Sections 3.1 and 3.2 could make space.\n - For Eq. 9, some motivation should be given for the design choice of using the max over all features, since this breaks the analogy to 3D surface fitting, where it would not make sense to take the max over all z values of the fitted surface.\n - In Section 3.3, it should be clarified if the kNN search for neighbors is performed in the input 3D space or the feature space. Also in Section 3.3, it should be clarified if the dense blocks are applied per-point or over the concatenated features of all grouped points in a neighborhood. If it is per-point, the local aggregation layer could be described more succinctly as a PointNet over the local neighborhood, with dense blocks instead of MLPs.\n - Section 3.3 is missing a description how G_s,i is converted to G_i. Is the last (coarsest) scale used? This should be clarified.\n - Section 3.3 and 3.4 are missing descriptions how the coarser subset of M points is chosen from the input N points. Farthest point sampling like in PointNet++? A random subset? 
This should be clarified.\n - In Section 3.4, the position encoding function phi is not defined. Is it an MLP as well? This should be clarified.\n - Section 3.4 is missing a description of how C^j_i is converted to C_i. Maxpooling? This should be clarified.\n - Section 4 should briefly mention how many point clouds were used from each dataset and the average point cloud size.\n - Section 4 should clarify if all baselines were trained on the same dataset.\n\n- In the ablation, how was the MSE loss applied to the unoriented normals? This should be clarified (a naive application to output normals would not work since the normals are not oriented).\n\n- The following papers could be added to the related work:\n - Stable and efficient differential estimators on oriented point clouds, Lejemble et al., SGP 2021\n - PCT: Point Cloud Transformer, Guo et al., CVM 2021\n - Point Transformer, Zhao et al., CVPR 2021 (they don't demonstrate normal estimation, but their architecture is for general point cloud processing) The authors should clearly state in the rebuttal if they plan to change the motivation of their method as described above, or alternatively if I missed something that justifies the current motivation given by the authors. The authors have sufficiently addressed limitations and societal impact.", " This paper proposes a new method for the task of normal estimation for point clouds using hyper-surface fitting. The model consists of relative positional encoding, and a space transformation module (MLP) to map 3D point clouds into feature space where the hyper-surfaces are fitted. The method is evaluated on three different datasets, including PCPNet, ScenNN, and Semantic 3D and shows state-of-the-art performance in RMSE and PGP metrics. The paper is mostly well written and easy to follow. The method is useful as shown by the results on multiple datasets. Different elements of the model are well analysed by an ablation study and show the contribution of each of the components. Figures 1 and 2 help the reader follow along and clarify the overall framework. The supplemental material also contains interesting results. There are several aspects that I am not clear on: \n1. The space transformation module is unclear to me and I found Figure 3 confusing. How is it different from a PointNet++ idea? While clearly, the architecture is different the concept of multi-scale and grouping was first proposed there, so how is this different? \n2. Novelty: To my understanding, this method could be interpreted as a newer version of PCPNet with PointNet++ as backbone architecture and Positional encoding as input. Each has already been proposed in previous works and showed its effectiveness and the question is how is combining these building blocks novel? \n3. Insight: The main paper lacks insight into what the network is actually learning. There is some of it in the supplemental but not sure that it is enough. It would be interesting to visualize each point's contribution to the hypersurface fitting as a heatmap. \n4. Will the code be released ? \n The limitations are not clearly presented. Clearly, one limitation is robustness to noise. What are other limitations? When does it fail in the clean point cloud cases? Additionally, no potential societal impact was mentioned (since it is a normal estimation method, I am not sure there are any but even if there aren't, it should at least be mentioned). " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "xz8TPT1Mmq", "o5IMbK_7eyY", "17OSHflLoq_", "jewHO9_HSG", "oY4qrJu4DQw", "Afp9ZqtBk_0", "fp9uZrBsh5W", "inJBplKsrkk", "hg2AXbfYUhY", "oE0n6gkWtJn", "B2jf6KM7boa", "8CIYA-yc_1Q", "LIuBDxA9Ek", "zYiC2E_Ncws", "nips_2022__yEcbgIT68e", "nips_2022__yEcbgIT68e", "nips_2022__yEcbgIT68e", "nips_2022__yEcbgIT68e" ]
nips_2022_gkQkZy-pRik
MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning
As a successful approach to self-supervised learning, contrastive learning aims to learn invariant information shared among distortions of the input sample. While contrastive learning has yielded continuous advancements in sampling strategy and architecture design, it still suffers from two persistent defects: the interference of task-irrelevant information and sample inefficiency, which are related to the recurring existence of trivial constant solutions. From the perspective of dimensional analysis, we find that dimensional redundancy and the dimensional confounder are the intrinsic issues behind these phenomena, and provide experimental evidence to support our viewpoint. We further propose a simple yet effective approach MetaMask, short for the dimensional Mask learned by Meta-learning, to learn representations against dimensional redundancy and the confounder. MetaMask adopts the redundancy-reduction technique to tackle the dimensional redundancy issue and innovatively introduces a dimensional mask to reduce the gradient effects of specific dimensions containing the confounder, which is trained by employing a meta-learning paradigm with the objective of improving the performance of masked representations on a typical self-supervised task. We provide solid theoretical analyses to prove that MetaMask can obtain tighter risk bounds for downstream classification compared to typical contrastive methods. Empirically, our method achieves state-of-the-art performance on various benchmarks.
Accept
This paper starts with the experimental finding that the interference of task-irrelevant information and sample inefficiency in contrastive learning arise from dimensional redundancy and the dimensional confounder. Based on these experimental findings, the authors propose a dimensional mask learning method based on bi-level optimization. Finally, the theoretical basis for learning through the meta mask is also presented. All reviewers agreed on the strengths of the proposed method. There were some concerns, such as an insufficient ablation study, but they were all resolved through the authors' rebuttal and subsequent discussion. I hope the authors make sure that the concerns raised in the review process are resolved in the final version.
train
[ "Qo2HRTixOFi", "jQG_RzcHBV", "b46CeofYmCz", "yBfEe6VQUjn", "YZz1g2rHMEd", "hRcb1F53F0k", "kl1kw9AW-t2", "Sn1bLiONFb", "t9vrMbI192F", "B8zn0d_0Csm", "WRv2OU_GaDF", "tEppK996_3R", "CMHTpKYP5Ik", "lSDlH6ZH5-I", "b15bU2sH65o", "wNLfpPMduBx", "sNFrzDtmfPY", "6DPAbK2hj6w", "jsfIm5ch8xa", "H41PzShwI9", "Zf9nhuzfVu", "mhcrWMxos_y" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Since the discussion phase is closing soon, this response could be our last chance to discuss with the reviewer. We would be grateful for the careful review and constructive suggestions of the reviewer. We hope our rebuttals may make our intuition behind this paper more understandable and clearer. Following the reviewer's suggestions, we improve our paper from multiple aspects, and the we expect that the revised manuscript can address your concerns.", " Before the end of the discussion phase, we would like to thank the reviewer again for his/her careful review and helpful suggestions.", " We thank the reviewer for his/her efforts and time to review our paper. The comments and suggestions are very professional and constructive.", " We are glad to thank the reviewer for the constructive comments and suggestions, and we will add the important rebuttals to the updated draft, including the answers to Q3 and Q5.", " I thank the author to provide thorough clarifications of all the questions I have. I am increasing the score by one point, and recommend the acceptance of this work. I also recommend the authors incorporate the answers provided in this rebuttal to an updated draft, especially the answers to Q3, which provides the details of the experiment that serves as the main motivation of this paper, and Q5, which contrasts with the observation from Barlow Twins because of the scale of dimensions.", " Dear Reviewers,\n\nThank you again for your time and effort in reviewing our paper.\n\nIn our early response, we have included detailed analyses and descriptions of the proposed method. Moreover, we have conducted multiple experiments prove the effectiveness of the proposed method.\n\nSince the discussion stage is closing soon, we would be grateful if you could let us know whether our responses and revised manuscript have addressed your concerns and whether there are further comments.\n\nSincerely, Authors", " We thank the reviewer for the valuable comments and constructive suggestions. We are encouraged that the reviewer found this work is novel, the explored problem is widely ignored, and the theoretical proof is integrated. The mentioned issues are addressed as follows:\n\nQ1: The masks are trainable means that in the end, they are fixed and will ignore several features known as confounders. However, this needs to be verified via ablation experiments. For example, a linear probe on the whole unmasked features should give lower accuracy.\n\nA1: Thanks for the suggestions! First, we clarify that MetaMask trains $\\mathcal{M}$ by adopting a meta-learning-based training approach, which ensures that $\\mathcal{M}$ can partially mask the ''gradient contributions'' of dimensions containing task-irrelevant information and further promote the encoder to focus on learning task-relevant information. So, MetaMask only performs the gradient mask during training instead physically masking dimensions in the test. We provide theoretical explanation and proofs in Appendix A.1 and A.2. The reason behind our behavior (adjusting the gradient weight of each dimension in training instead of directly masking these dimensions in the test) is that even dimensions that contain dimensional confounders are also possible to contain discriminative information so that lowering the gradient contribution of such dimensions can not only prevent the over-interference of the dimension confounders on the representation learning but also preserve the acquisition of the information of these dimensions. 
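For concreteness, the way $\\mathcal{M}$ enters training but not testing could be sketched as follows; this is a minimal PyTorch-style sketch, where `contrastive_loss` is a placeholder for any self-supervised objective and the names are illustrative rather than the released implementation:

```python
import torch
import torch.nn as nn

class DimensionalMask(nn.Module):
    """Learnable per-dimension weights in (0, 1) that rescale gradient contributions."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(dim))  # only `dim` extra parameters

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Dimensions assigned small weights contribute little gradient to the encoder
        # through the self-supervised loss, but they are not physically removed.
        return z * torch.sigmoid(self.logits)

def training_loss(encoder, mask, x1, x2, contrastive_loss):
    # The mask reweights dimensions only inside the training objective ...
    z1, z2 = encoder(x1), encoder(x2)
    return contrastive_loss(mask(z1), mask(z2))

@torch.no_grad()
def test_features(encoder, x):
    # ... while the full, unmasked representation is used for downstream evaluation.
    return encoder(x)
```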
Accordingly, the foundational idea behind self-supervised learning is to learn ''general'' representation that can be generalized to various tasks. In MetaMask, we introduce a meta-learning-based approach to train the dimensional mask $\\mathcal{M}$ with respect to improving the performance of contrastive learning. However, the theorems, proposed by [41] and us (in Section 5, Appendix A.1, and Appendix A.2), only prove that the contrastive learning objective is associated with the downstream classification task, while there is no evidence to demonstrate the connections between contrastive learning objective and other downstream tasks. Therefore, we consider not directly masking the dimensions containing dimensional confounders in the test.\n\nFurthermore, we conduct experiments to explore the performance of the variant that directly masks these dimensions in the test, which is demonstrated in Appendix A.4.7 of the rebuttal revised version. For the exploration of our masking scheme and its variants, we conduct experiments as follows: after training, we collect the final dimensional weight matrix $\\mathcal{M}$ and then choose dimensions with weights below average as the masked dimensions. These dimensions are considered to be associated with dimensional confounders. To prove whether these dimensions have confounders, we perform random dimensional masking to these dimensions, and when the masking rate is 100%, the model turns to the variant that directly masks all these dimensions in the test. The experiments are based on SimCLR + MetaMask. Note that we conduct 10 trials per mask rate (except for 0% and 100% mask rates) for fair comparisons. We observe that the original MetaMask (i.e., mask rate is 0%) achieves the best performance on average, and MetaMask outperforms the variant masking the dimensions with confounders by a significant margin, which proves that our proposed approach, i.e., masking the ''gradient contributions'' of dimensions in the training is more effective than the compared approach, i.e., directly masking dimensions in the test. While, several trials with specific mask rates demonstrate better performance than MetaMask, which proves that the dimensions filtered by MetaMask indeed contain dimensional confounders. Additionally, observing the results reported in Figure 2 (Page 3) and the results in Appendix A.4.7, we find that the results achieved by the proposed variants are better than Barlow Twins with random dimensional masks on average, which can further prove the filtered dimensions containing confounders and that MetaMask indeed assigns lower gradient weights to the dimensions containing confounders.", " Additionally, following the suggestion of the reviewer, we conduct experiments to directly impose a linear probe on the whole masked or unmasked features. Note that the definition of masked features is mentioned above, and the experiments are conducted on CIFAR10 with ResNet-18. As shown in Appendix A.4.7, a linear probe is solely imposed on the whole ''unmasked'' features (without ''masked'' features), which is the same as the 100% mask rate variant for the ''masked’’ features above, and the result is 79.09. We provide the corresponding reasons and analyses above. 
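For reference, the masking scheme used only in this exploration (not by MetaMask itself) could be sketched as below, assuming the final mask weights form a length-$d$ vector `m`; the variable names are illustrative:

```python
import torch

def probe_features(z: torch.Tensor, m: torch.Tensor, mask_rate: float = 1.0) -> torch.Tensor:
    """Zero out a fraction of the dimensions whose learned weight is below average.

    z: (N, d) frozen representations; m: (d,) final mask weights after training.
    mask_rate = 1.0 removes every below-average dimension, 0.0 keeps all of them.
    """
    candidates = torch.nonzero(m < m.mean(), as_tuple=False).flatten()  # "confounder" dims
    k = int(mask_rate * candidates.numel())
    drop = candidates[torch.randperm(candidates.numel())[:k]]  # random subset per trial
    z = z.clone()
    z[:, drop] = 0.0  # the resulting features are fed to a linear probe
    return z
```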
Additionally, such results may be due to the masking scheme for these experiments, i.e., collecting the final dimensional weight matrix $\\mathcal{M}$ and then masking dimensions with weights below average, which is dramatically different from MetaMask's behavior, and this scheme is only used for this exploration. The result achieved by a linear probe on the whole ''masked'' features is 56.30, which demonstrates our consideration that dimensions containing dimensional confounders are also possible to contain discriminative information, because the achieved accuracy is not under 10. The result also proves that MetaMask can indeed assign lower gradient weights to the dimensions containing confounders, since such a model far underperforms MetaMask (86.01).\n\nQ2: The experiments are not convincing due to a lack of fair comparison. The authors use a very customized setting, e.g., AlexNet on various joint-embedding frameworks. No hyperparameter tuning is conducted on these models.\n\nA2: Thanks for the comments. For the experiments using different layers of AlexNet, we follow the experimental setting of [5], and part of the experimental results refer to the corresponding papers, where the hyper-parameters are tuned for different compared models. Additionally, we also provide experiments by using ResNet-18 as the backbone network, and the results also demonstrate the effectiveness of MetaMask. All reimplementations are built by following the official implementations.\n\nTable 1. The complexity comparisons between MetaMask and benchmark methods on the CIFAR10 dataset. Note that for fair comparisons, this experiment is based on 1 GPU of NVIDIA Tesla V100.\n|Methods | Parameters | Training time cost for an epoch |\n| :------| :----| :----|\n| ResNet-18 | 11.2M | - |\n| SimCLR | 13M | 70s |\n| Barlow Twins | 22.7M | 80s |\n| SimCLR + Barlow Twins | 24.6M | 85s |\n| MetaMask | 24.6M + 512 | 210s |\n\nTable 2. The comparisons between MetaMask and benchmark methods on the CIFAR10 dataset by using the same total time costs. Note that for fair comparisons, this experiment is based on 1 GPU of NVIDIA Tesla V100.\n|Methods | Epoch | Training time cost | Accuracy |\n| :------| :----| :----| :----|\n| SimCLR | 2400 | 46h | 81.75 |\n| Barlow Twins | 2100 | 46h | 85.71 |\n| SimCLR + Barlow Twins | 2000 | 47h | 85.79 |\n| MetaMask |800 | 46h | 86.01 |\n\nQ3: Second-order optimization creates significant computational overhead. The authors need to show how much more training time is needed for each model.\n\nA3: To compare the training complexity of MetaMask and benchmark methods, we conduct experiments on CIFAR10 by using ResNet-18 as the backbone network. The results are demonstrated in Table 1, which shows that the parameter number used by MetaMask is close to the ablation model, i.e., SimCLR + Barlow Twins, and Barlow Twins. Compared with Barlow Twins, SimCLR + Barlow Twins only adds a MLP with several layers as the projection head for the contrasting learning of SimCLR. Comparing SimCLR + Barlow Twins and MetaMask, we only add a learnable dimensional mask $\\mathcal{M}$ in the network of MetaMask. 
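Schematically, the parameter budget in Table 1 decomposes as follows; this is a rough sketch in which the projector widths are placeholders rather than the exact configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

feat_dim = 512
encoder = resnet18(num_classes=feat_dim)                  # shared ResNet-18 backbone
simclr_head = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                            nn.Linear(feat_dim, 128))     # extra projector for the contrastive term
bt_head = nn.Sequential(nn.Linear(feat_dim, 2048), nn.ReLU(),
                        nn.Linear(2048, 2048))            # redundancy-reduction projector
dim_mask = nn.Parameter(torch.zeros(feat_dim))            # MetaMask's only further addition

print(dim_mask.numel())  # 512, i.e. the "+ 512" entry in Table 1
```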
Note that to decrease the time complexity of MetaMask, we create an additional parameter space to save the temporary parameters for computing second-derivatives, but such cloned parameters do not participate in the calculation of the network during training.\n\nFor the time complexity, due to the learning paradigm of meta-learning (second-derivatives technique), MetaMask has larger time complexity than benchmark methods (including the ablation model SimCLR + Barlow Twins). While, we further conduct experiments to evaluate the performance of the compared methods using similar total training time costs. The results are reported in Table 2, which demonstrates that MetaMask still achieves the best performance, and MetaMask can also beat the ablation model SimCLR + Barlow Twins. This proves that although the time complexity of MetaMask is a little bit high, the improvement of MetaMask is consistent and solid. We also include these results and the corresponding analysis in Appendix A.4.4 of the rebuttal revised version. Thank you for this suggestion.", " Q4: No experiment details.\n\nA4: We provide the implementation details in the code of the supplementary file. For example, for the experiments on CIFAR10, we use ResNet-18 as the backbone network. We train the network for 800 epochs with the batch size of 512. In particular, for the experiments of BYOL, we adopt Adam to optimize the network with the learning rate of 5e-4. We leverage SGD as the optimizer for all other methods and set the learning rate 1e-3. The cosine annealing strategy is used when updating the learning rate. We adopt the principle experimental settings by following the experiments of the corresponding implementations of benchmark methods, e.g., Barlow Twins, SimCLR, BYOL, etc.\n\nQ5: What's the reason behind MetaMask's collapse for large embedding dimensions? This is not observed in standard BarlowTwins.\n\nA5: Thanks for your comments. We conduct the motivating experiments in Figure 1 (Page 2) by adopting the official implementation of Barlow Twins, and thus the phenomenon of the model collapsing for large embedding dimensions are shared between standard Barlow Twins and MetaMask. We consider the reason why our exploration sharply contrasts with Barlow Twins are: 1) Barlow Twins does not take experiments on small-scale datasets, e.g., CIFAR10; 2) the dimensionality taken by Barlow Twins is not large enough. In particular, the projection head dimensionality range of Barlow Twins is 16 to 16384, while that in our paper is 512 to 40960, and the results of lower dimensionality are consistent between Barlow Twins and ours.\n\nFurthermore, in further exploration, we propose that such a model collapsing phenomenon is due to two reasons: 1) the proposed dimensional confounder; 2) the curse of dimensionality. And we elaborate on the difference and our intuitive analysis in Appendix A.3. The experiments, shown in Figure 4 (Page 9) and Figure 7 (Page 19), demonstrate that MetaMask can yield significant performance boosts against the mode collapse.\n\nQ6: Why is this considered \"meta\"? It's basically second-order optimization.\n\nA6: Thanks for the comment. The original idea of meta-learning is to learn an initial model (with parameters) that can quickly learn discriminative information by a few epochs of training, and the knowledge of such an initial model is considered the ''meta'' knowledge. This goal is achieved by adopting a second-derivative technique to train the initial model. 
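Schematically, this second-derivative update has the following form; the sketch below is simplified PyTorch-style pseudocode rather than the released implementation, where `theta` is a list of encoder parameter tensors, `mask` is a tensor with `requires_grad=True`, and `inner_loss` / `outer_loss` stand for the masked and the plain self-supervised objectives. In MetaMask, the role of the meta-parameter is played by the dimensional mask $\\mathcal{M}$, as the analogy below makes explicit:

```python
import torch

def metamask_step(theta, mask, batch, inner_loss, outer_loss, alpha=1e-3, beta=1e-3):
    """One schematic bi-level update: the mask is trained through the effect it has
    on the encoder after a simulated (second-derivative) inner update."""
    # Inner step on a temporary copy of the encoder parameters; create_graph=True
    # keeps the graph so that gradients can later flow back into the mask.
    grads = torch.autograd.grad(inner_loss(theta, mask, batch), theta, create_graph=True)
    theta_tmp = [p - alpha * g for p, g in zip(theta, grads)]

    # Outer (meta) step: evaluate the updated copy on a plain self-supervised
    # objective and differentiate with respect to the mask only.
    mask_grad, = torch.autograd.grad(outer_loss(theta_tmp, batch), [mask])
    with torch.no_grad():
        mask -= beta * mask_grad
    # theta_tmp is discarded afterwards; the actual encoder is updated by its
    # ordinary first-order step elsewhere in the training loop.
```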
The intuition on this behavior is to train the initial model with respect to ''improving'' the performance of the trained model on various tasks. [40] and our method holds the shared intuition of meta-learning. Specifically, [40] proposes the meta auxiliary learning to train the auxiliary learning model with respect to ''improving'' the self-supervised learning, and our method proposes to train the learnable dimensional mask matrix $\\mathcal{M}$ with respect to ''improving'' the contrastive learning. Therefore, following the naming principle, we call our second-order optimization alike training paradigm as ''meta''.\n\nReferences:\n\n[5] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.\n\n[40] S. Liu, Andrew J Davison, and E. Johns. Self-supervised generalisation with meta auxiliary learning. 2019.", " We thank the reviewer for the valuable comments and constructive suggestions. We are encouraged that the reviewer found that this work is novel and technically sound. The mentioned issues are addressed as follows:\n\nQ1: The ablation study is insufficient, as the method uses both Barlow-twin and contrastive loss. Still, it's unclear that this meta-learning alike optimization scheme is essential compared with training on a loss combination of Barlow-twin and contrastive loss. The comparisons have only been made against either one of them.\n\nA1: Thanks for the suggestions! We added the corresponding analysis in Appendix A.4.8 of the rebuttal revised version. Specifically, we conduct ablation studies obtained by MetaMask on CIFAR-10 with ResNet-18 and conv encoders in Appendix A.4.8, where ''BT'' denotes Barlow Twins, ''w/o ML'' denotes the ablation model without the proposed $\\mathcal{M}$ and the corresponding meta-learning-based learning paradigm, i.e., SimCLR + Barlow Twins, and ''w/o drr'' denotes the ablation model without the dimensional redundancy reduction loss $\\mathcal{L_{drr}}$. (a) We conduct experiments to prove the effectiveness of the proposed dimensional mask $\\mathcal{M}$ on CIFAR10 with ResNet-18, and the results show that only w/o ML (SimCLR + Barlow Twins) cannot improve the performance of the model by a significant margin, and the proposed approach is crucial to the performance improvement. (b) We conduct experiments on CIFAR10 with ResNet-18 and conv, respectively. The results further prove the effectiveness of the proposed $\\mathcal{M}$ and $\\mathcal{L_{drr}}$.\n\nBarlow Twins handles dimensional redundancy but suffers dimensional confounder. MetaMask mitigates dimensional confounders by learning and applying a dimensional mask. We conduct experiments to demonstrate our statement, and the results are reported in Figure 10 (Page 24). For the experiments shown in Figure 10 (Page 24) (a), we demonstrate the effectiveness of the proposed dimensional mask $\\mathcal{M}$ and the corresponding meta-learning-based training paradigm by directly removing such approaches. The results show that the sole w/o ML only improves Barlow Twins by 0.04, while MetaMask can improve BT by 0.29, which proves the proposed $\\mathcal{M}$ is pivotal to the performance promotion. In Figure 10 (Page 24) (b), to verify whether there would be performance gain only from alleviating dimensional confounder without $\\mathcal{L_{drr}}$, we evaluate the performance of w/o drr and MetaMask. 
We observe that for the experiments with ResNet-18, w/o drr (without Barlow Twins) improves SimCLR by 0.2 but cannot reach the performance of BT, and MetaMask can improve both SimCLR and BT by 4.28 and 0.29, respectively. For the experiments with conv, w/o drr improves SimCLR by 1.03 and also outperforms BT by 4.08, and MetaMask improves both SimCLR and BT by 2.27 and 6.22, respectively. Concretely, the performance of w/o drr is related to the performance of SimCLR, and it can always improve SimCLR but underperform the complete MetaMask. MetaMask has consistent best performance by using different encoders.\n\nWe consider that $\\mathcal{L_{drr}}$ (Barlow Twins loss) could exacerbate dimensional confounder, but as our discussion in Appendix A.4.7 (also A2 in the responses), i.e., dimensions containing confounders are also possible to contain discriminative information, more dimensions with confounders due to $\\mathcal{L_{drr}}$ may also carry more discriminative information. Likewise, the model without $\\mathcal{L_{drr}}$ may generate representations with over-redundant dimensions so that the total amount of available discriminative information will decrease. However, roughly using the representations with complex dimensional information (without $\\mathcal{L_{drr}}$) may result in insufficient discriminative information mining, e.g., w/o ML can only outperform SimCLR by a limited margin. Our proposed MetaMask effectively avoids the appearance of such an undesired phenomenon by leveraging $\\mathcal{M}$ and the corresponding meta-learning-based training paradigm, which is supported by the empirical results.", " Table 1. Top-1 and top-5 accuracies (in %) under linear evaluation on ImageNet with ResNet-50 encoder. MetaMask is performed based on Barlow Twins and SimCLR.\n| Model | Top-1 | Top-5 |\n| :------| :----| :----|\n| Supervised | 76.5 | - |\n| MoCo | 60.6 | - |\n| SimCLR | 69.3 | 89.0 |\n| SimSiam | 71.3 | - |\n| BYOL | 74.3 | 91.6 |\n| Barlow Twins | 73.2 | 91.0 |\n| MetaMask | 73.9 | 91.4 |\n\nQ2: Also, in Figure 1, the performance improvement by the random mask is not significant enough to persuade me that the feature redundancy is sufficiently essential (only within 0.1% improvement on ImageNet). Finally, why is the performance not tested on full ImageNet while the motivation figure is?\n\nA2: Thanks for your careful review! The experiments conducted on ImageNet are based on ResNet-50, while the experiments conducted on other benchmark datasets are based on ResNet-18 or conv or fc. Throughout our exploration, we observe a trend: MetaMask improves the benchmark methods with weak encoders by excessively significant margins, but the improvement with strong encoders is relatively limited. Our consideration behind this phenomenon is that although representations learned by both weak and strong encoders (without our method) may contain dimensional confounder, the strong encoder can better capture semantic information so that the dimensional confounder of the learned representation is naturally less, and the useful discriminative information is much more the representation learned by the weak encoder. [41] provided the theorem and corresponding proof to demonstrate that the contrastive loss can bound the cross-entropy loss in downstream tasks. Therefore, strong encoders better minimize the contrastive loss, which means the representations learned by strong encoders contain more semantic information, and accordingly, fewer dimensional confounders. 
However, from the results reported in Table 2 (Page 9), our method can still improve the benchmark methods.\n\nWe further conduct experiments by imposing random dimensional masks on the learned representations for weak and strong encoders. The results are reported in Appendix A.4.3 of the rebuttal revised version. The comparisons demonstrate that under the consistent setting of the 5% random dimensional mask rate, the results of the weak encoder (conv) range from 52.79 to 54.13, and the results of the strong encoder (ResNet-18) range from 60.71 to 61.06. Note that we conduct 10 trials for each experiment to achieve unbiased results. The quality of the strong encoder's representation is better, and the performance of the strong encoder is more stable (the variance of the strong encoder's results is smaller), which proves that the representation learned by the strong encoder contains less dimensional confounder, and each dimension contains more discriminative information, and our method can improve weak encoders more than strong encoders.\n\nImageNet, as a large-scale dataset, has enough training data to improve the encoder to learn discriminative information, and the backbone encoder used on ImageNet is stronger, i.e., ResNet-50. Therefore, in Figure 1, compared with the improvement on CIFAR10, the performance improvement by the random mask on ImageNet is less significant.\n\nTraining one trial of MetaMask on ImageNet takes about 14 days for our server. It is hard for us to impose sufficient hyperparameter tuning experiments, because tuning parameters on the validation set and then retraining is time-consuming. For the current version, we principally adopt the hyperparameter of MetaMask on ImageNet-200 with ResNet-18 for the experiments on ImageNet with ResNet-50. We report the results in Table 1, which demonstrates that MetaMask achieves the top-2 best performance. Note that the major results are reported by [14], and the implementation of MetaMask is based on SimCLR and Barlow Twins. For comparisons with the main baselines, MetaMask beats SimCLR by 4.6 on top-1 accuracy and 2.4 on top-5 accuracy, and MetaMask beats Barlow Twins by 0.7 on top-1 accuracy and 0.4 on top-5 accuracy. The improvement of MetaMask is in accordance with our observation above. Furthermore, ImageNet-200 is a truncated dataset of ImageNet, so the domain shift between these datasets is slight. The comparisons in Table 1 (Page 8), and Table 2 (Page 9) demonstrate that the proposed MetaMask can still improve the benchmark methods in large-scale datasets. We add the newly achieved results and corresponding analysis in Appendix A.4.6 of the rebuttal revised version.", " Q3: Theoretically, the authors show that the conditional variance can be reduced with the optimal mask. Still, since the identity matrix (no mask) is also a special case of masks, this is a seemingly trivial statement.\n\nA3: Thanks for your comments! The identity matrix (without mask) aims to reduce the conditional variance of each specific dimension, i.e., the same dimension of different views of a sample contains similar information. 
Although such an approach can reduce the conditional variance of the representations, the theoretical proof to demonstrate that such an approach can reduce the conditional variance (conditional on the ''label'' not the dimensions themselves) of the learned representation in hidden space is insufficient, because Theorem 4.2 [41] only proves that the conditional variance (conditional on the ''label'') is related to the contrastive objective, and the theoretical evidences proving the connection between such a conditional variance and the identity matrix is deficient. Our Theorem 5.1 (Page 8) further proves that benefiting from the proposed learning paradigm (meta-learning-based approach), MetaMask can further reduce the variance of the learned representation conditional on the label. We provide corresponding proofs in Appendix A.2. Furthermore, the empirical evidences, in Table 1 (Page 8) and Table 2 (Page 9), demonstrate that MetaMask has better performance than benchmark methods, including Barlow Twins (only using the identity matrix), so compared with other methods, given the label, the conditional variance of the representation learned by MetaMask is reduced.\n\nQ4: I think the improvement is good, but I want to know that this meta-learning type is more essential than just optimizing a loss of Barlow-twin and contrastive loss. Also, the theoretical point that mask includes identity seemingly makes the theorem vacuous, and I wonder if the authors can respond to my concern.\n\nA4: Thanks for your careful review! We conduct the corresponding exploration in Appendix A.4.8, and the analysis is mentioned in A1. For the issue of theoretical points, we clarify the significance and explanations in A3.\n\nReferences:\n\n[14] Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised learning via redundancy reduction. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, Proceedings of\nMachine Learning Research. PMLR, 2021.\n\n[41] Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, and Zhouchen Lin. Chaos is a ladder: A new theoretical understanding of contrastive learning via augmentation overlap. arXiv preprint arXiv:2203.13457, 2022.", " We thank the reviewer for the valuable comments and constructive suggestions. We are encouraged that the reviewer found that this work is novel and technically sound, and the presentation is excellent. The mentioned issues are addressed as follows:\n\nTable 1. The complexity comparisons between MetaMask and benchmark methods on the CIFAR10 dataset. Note that for fair comparisons, this experiment is based on 1 GPU of NVIDIA Tesla V100.\n|Methods | Parameters | Training time cost for an epoch |\n| :------| :----| :----|\n| ResNet-18 | 11.2M | - |\n| SimCLR | 13M | 70s |\n| Barlow Twins | 22.7M | 80s |\n| SimCLR + Barlow Twins | 24.6M | 85s |\n| MetaMask | 24.6M + 512 | 210s |\n\nTable 2. The comparisons between MetaMask and benchmark methods on the CIFAR10 dataset by using the same total time costs. 
Note that for fair comparisons, this experiment is based on 1 GPU of NVIDIA Tesla V100.\n|Methods | Epoch | Training time cost | Accuracy |\n| :------| :----| :----| :----|\n| SimCLR | 2400 | 46h | 81.75 |\n| Barlow Twins | 2100 | 46h | 85.71 |\n| SimCLR + Barlow Twins | 2000 | 47h | 85.79 |\n| MetaMask |800 | 46h | 86.01 |\n\nQ1: MetaMask adopts bi-level optimization, so it would necessarily require significant amounts of additional computational cost during computing second-derivatives. However, authors do not provide any analysis of computational costs.\n\nA1: To compare the training complexity of MetaMask and benchmark methods, we conduct experiments on CIFAR10 by using ResNet-18 as the backbone network. The results are demonstrated in Table 1, which shows that the number of parameters used by MetaMask is close to the ablation model, i.e., SimCLR + Barlow Twins, and Barlow Twins. Compared with Barlow Twins, SimCLR + Barlow Twins only adds a MLP with several layers as the projection head for the contrasting learning of SimCLR. Comparing SimCLR + Barlow Twins and MetaMask, we only add a learnable dimensional mask $\\mathcal{M}$ in the network of MetaMask. Note that to decrease the time complexity of MetaMask, we create an additional parameter space to save the temporary parameters for computing second-derivatives, but such cloned parameters do not participate in the calculation of the network during training.\n\nFor the time complexity, due to the learning paradigm of meta-learning (second-derivatives technique), MetaMask has larger time complexity than benchmark methods (including the ablation model SimCLR + Barlow Twins). While, we further conduct experiments to evaluate the performance of the compared methods using similar total training time costs. The results are reported in Table 2, which demonstrates that MetaMask still achieves the best performance, and MetaMask can also beat the ablation model SimCLR + Barlow Twins. This proves that although the time complexity of MetaMask is a little bit high, the improvement of MetaMask is consistent and solid. We also include these results and the corresponding analysis in Appendix A.4.4 of the rebuttal revised version. Thank you for this suggestion.", " Q2: Authors maintain that MetaMask reduces the gradient effect of the dimension containing the confounder (in lines 76-80), but they do not explicitly demonstrate that masked dimensions (having low weights in MetaMask) have confounder (non-discriminative features such as backgrounds). It would be great if the authors suggest experiments about it.\n\nA2: Thanks for your suggestion! First, we clarify that MetaMask trains $\\mathcal{M}$ by adopting a meta-learning-based training approach, which ensures that $\\mathcal{M}$ can partially mask the ''gradient contributions'' of dimensions containing task-irrelevant information and further promote the encoder to focus on learning task-relevant information. So, MetaMask only performs the gradient mask during training instead of physically masking dimensions in the test. We provide theoretical explanation and proofs in Appendix A.1 and A.2. 
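For intuition only, a highly simplified sketch of such a training-time gradient mask and its meta update is given below. This is illustrative PyTorch-style code with assumed names (encoder, ssl_loss, two augmented views per batch, and a leaf tensor mask_logits with requires_grad=True), and it uses a cheap first-order surrogate for the outer step, whereas our actual implementation differentiates through the inner update with cloned parameters and exact second derivatives as described in A1:

```python
import torch

def metamask_step(encoder, mask_logits, batch, meta_batch, ssl_loss, lr=1e-3, meta_lr=1e-2):
    # Inner step: update the encoder under the current (fixed) dimensional mask.
    m = torch.sigmoid(mask_logits).detach()                # per-dimension weights in (0, 1)
    z1, z2 = encoder(batch[0]) * m, encoder(batch[1]) * m  # small m => small gradient from that dimension
    grads = torch.autograd.grad(ssl_loss(z1, z2), list(encoder.parameters()))
    with torch.no_grad():
        for p, g in zip(encoder.parameters(), grads):
            p -= lr * g

    # Outer (meta) step: update the mask so the updated encoder's SSL objective improves.
    m = torch.sigmoid(mask_logits)
    z1, z2 = encoder(meta_batch[0]) * m, encoder(meta_batch[1]) * m
    mask_grad, = torch.autograd.grad(ssl_loss(z1, z2), mask_logits)
    with torch.no_grad():
        mask_logits -= meta_lr * mask_grad
```

At test time the unmasked representation from the encoder is used, matching the behavior described above.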
The reasons behind our choice (adjusting the gradient weight of each dimension during training instead of directly masking these dimensions at test time) include the following: even dimensions that contain dimensional confounders may also contain discriminative information, so lowering the gradient contribution of such dimensions not only prevents the over-interference of the dimensional confounders in representation learning but also preserves the acquisition of the information in these dimensions. In addition, the foundational idea behind self-supervised learning is to learn a ''general'' representation that can be generalized to various tasks. In MetaMask, we introduce a meta-learning-based approach to train the dimensional mask $\\mathcal{M}$ with respect to improving the performance of contrastive learning. However, the theorems proposed by [41] and by us (in Section 5, Appendix A.1, and Appendix A.2) only prove that the contrastive learning objective is associated with the downstream classification task, while there is no evidence demonstrating a connection between the contrastive learning objective and other downstream tasks. Therefore, we did not directly mask the dimensions containing dimensional confounders at test time.\n\nFurthermore, we conduct experiments to explore the performance of the variant that directly masks these dimensions at test time, which is demonstrated in Appendix A.4.7 of the rebuttal revised version. For the exploration of our masking scheme and its variants, we conduct experiments as follows: after training, we collect the final dimensional weight matrix $\\mathcal{M}$ and then choose the dimensions with weights below average as the masked dimensions. These dimensions are considered to be associated with dimensional confounders. To examine whether these dimensions have confounders, we apply random dimensional masking to them; when the masking rate is 100%, the model becomes the variant that directly masks all of these dimensions at test time. The experiments are based on SimCLR + MetaMask. Note that we conduct 10 trials per mask rate (except for the 0% and 100% mask rates) for fair comparisons. We observe that the original MetaMask (i.e., a mask rate of 0%) achieves the best performance on average, and MetaMask outperforms the variant that masks the dimensions with confounders by a significant margin, which shows that our proposed approach, i.e., masking the ''gradient contributions'' of dimensions during training, is more effective than the compared approach, i.e., directly masking dimensions at test time. However, several trials with specific mask rates demonstrate better performance than MetaMask, which shows that the dimensions filtered by MetaMask indeed contain dimensional confounders. Additionally, observing the results reported in Figure 2 (Page 3) and in Appendix A.4.7, we find that the results achieved by the proposed variants are better than Barlow Twins with random dimensional masks on average, which further shows that the filtered dimensions contain confounders and that MetaMask indeed assigns lower gradient weights to the dimensions containing confounders.", " Q3: The improvement of MetaMask seems marginal in important cases although MetaMask improves baseline performance by combining with them in most cases. 
Specifically, in Table 2 (Page 9), MetaMask combined with existing algorithms in the modern architecture (ResNet18) shows comparable performance to the most competitive baselines (such as BYOL in CIFAR-10, NNCLR in CIFAR-100, and NNCLR in IN-200).\n\nA3: Thanks for the comments. We also observed such a trend: MetaMask improves the benchmark methods with weak encoders by excessively significant margins, but the improvement with strong encoders is relatively limited. Our consideration behind this phenomenon is that although representations learned by both weak and strong encoders (without our method) may contain dimensional confounder, the strong encoder can better capture semantic information so that the dimensional confounder of the learned representation is naturally less, and the useful discriminative information is much more than the representation learned by the weak encoder. [41] provided the theorem and corresponding proof to demonstrate that the contrastive loss can bound the cross-entropy loss in downstream tasks. Therefore, strong encoders better minimize the contrastive loss, which means the representations learned by strong encoders contain more semantic information, and accordingly, fewer dimensional confounders. However, from the results reported in Table 2 (Page 9), our method can still improve the benchmark methods.\n\nWe further conduct experiments by imposing random dimensional masks on the learned representations for weak and strong encoders. The results are reported in Appendix A.4.3 of the rebuttal revised version. The comparisons demonstrate that under the consistent setting of the 5% random dimensional mask rate, the results of the weak encoder (conv) range from 52.79 to 54.13, and the results of the strong encoder (ResNet-18) range from 60.71 to 61.06. Note that we conduct 10 trials for each experiment to achieve unbiased results. The quality of the strong encoder's representation is better, and the performance of the strong encoder is more stable (the variance of the strong encoder's results is smaller), which proves that the representation learned by the strong encoder contains less dimensional confounder, and each dimension contains more discriminative information. Therefore, our method improves weak encoders more than strong encoders.\n\nTable 3. The comparisons of Barlow Twins using different dropout ratios on CIFAR10 with ResNet-18.\n| Dropout ratio | Dropout shared between views | Accuracy |\n| :------| :----| :----|\n| 0 | No | 85.72 |\n| 0.01 | No | 85.76 |\n| 0.05 | No | 86.11 |\n| 0.1 | No | 85.72 |\n| 0.2 | No | 85.13 |\n| 0.5 | No | 81.54 |\n| 0.05 | Yes | 31.17 |\n| 0.1 | Yes | 37.15 |\n| 0.5 | Yes | 31.62 |\n\nQ4: (simple baseline) Applying dropout on h_i in Barlow Twins might be the simple baseline for MetaMask and it would be effective in that BarlowTwins with randomly masked dimensions outperform naive BarlowTwins in Figure 2. Would authors provide the performance of Barlow Twins + dropout?\n\nA4: Thank you for this suggestion. We apply dropout to randomly set the features generated by the backbone network to 0 with a given probability. We use two different settings: 1) features from different views are randomly set to 0 independently with 6 probabilities, including 0.01, 0.05, 0.1, 0.2, and 0.5, which is shown as the setting of dropout shared between views is ''No''; 2) for the same sample, we set the same channels of the features from different views to 0. 
In detail, we let the features from the first view pass the dropout layer, and then record the channels which are set to 0. Finally, we set the same feature channels of the second view to 0 and multiply the features with 1/(1-p), where ''p'' refers to the probability of dropout. In this case, we use three probabilities: 0.05, 0.1, and 0.5. This setting is shown as ''Yes'' for the dropout shared between views. The results are reported in Table 3. We observe that models trained by following the second setting are collapsed, and these models far underperform MetaMask. For the first setting, the performance trend is in accordance with our and the reviewer's expectations, using several well-chosen dropout ratios can improve the performance of Barlow Twins to a certain extent, but the improvement is inconsistent. Furthermore, even the best model of Barlow Twins using the dropout trick cannot achieve comparable performance to MetaMask (as shown in Table 2, Page 9, MetaMask achieves 87.53 on CIFAR10 with ResNet-18). We add this exploration in Appendix A.4.5 of the rebuttal revised version.", " Q5: In Figure 1, (a) and (b) are missing in the figure (only mentioned in the caption).\n\nA5: Thanks for your scrutiny! We revise this in the rebuttal revised version.\n\nQ6: (ablation study) Barlow Twins handles dimensional redundancy but suffers dimensional confounder. MetaMask mitigates dimensional confounders by learning and applying a dimensional mask. There would be performance gain only from alleviating dimensional confounder. Can authors show the performance of MetaMask without redundancy-reduction objective function? (I understand that Barlow Twins could exacerbate dimensional confounder.)\n\nA6: Thanks for the suggestions! We added the corresponding analysis in Appendix A.4.8 of the rebuttal revised version. Specifically, we conduct ablation studies obtained by MetaMask on CIFAR-10 with ResNet-18 and conv encoders in Appendix A.4.8, where ''BT'' denotes Barlow Twins, ''w/o ML'' denotes the ablation model without the proposed $\\mathcal{M}$ and the corresponding meta-learning-based learning paradigm, i.e., SimCLR + Barlow Twins, and ''w/o drr'' denotes the ablation model without the dimensional redundancy reduction loss $\\mathcal{L_{drr}}$. (a) We conduct experiments to prove the effectiveness of the proposed dimensional mask $\\mathcal{M}$ on CIFAR10 with ResNet-18, and the results show that only w/o ML (SimCLR + Barlow Twins) can not improve the performance of the model by a significant margin, and the proposed approach is crucial to the performance improvement. (b) We conduct experiments on CIFAR10 with ResNet-18 and conv, respectively. The results further prove the effectiveness of the proposed $\\mathcal{M}$ and $\\mathcal{L_{drr}}$.\n\nBarlow Twins handles dimensional redundancy but suffers dimensional confounder. MetaMask mitigates dimensional confounders by learning and applying a dimensional mask. We conduct experiments to demonstrate our statement, and the results are reported in Figure 10 (Page 24). For the experiments shown in Figure 10 (Page 24) (a), we demonstrate the effectiveness of the proposed dimensional mask $\\mathcal{M}$ and the corresponding meta-learning-based training paradigm by directly removing such approaches. The results show that the sole w/o ML only improves Barlow Twins by 0.04, while MetaMask can improve BT by 0.29, which proves the proposed $\\mathcal{M}$ is pivotal to the performance promotion. 
In Figure 10 (Page 24) (b), to verify whether there would be performance gain only from alleviating dimensional confounder without $\\mathcal{L_{drr}}$, we evaluate the performance of w/o drr and MetaMask. We observe that for the experiments with ResNet-18, w/o drr (without Barlow Twins) improves SimCLR by 0.2 but cannot reach the performance of BT, and MetaMask can improve both SimCLR and BT by 4.28 and 0.29, respectively. For the experiments with conv, w/o drr improves SimCLR by 1.03 and also outperforms BT by 4.08, and MetaMask improves both SimCLR and BT by 2.27 and 6.22, respectively. Concretely, the performance of w/o drr is related to the performance of SimCLR, and it can always improve SimCLR but underperform the complete MetaMask. MetaMask has consistent best performance by using different encoders.\n\nWe consider that $\\mathcal{L_{drr}}$ (Barlow Twins loss) could exacerbate dimensional confounder, but as our discussion in Appendix A.4.7 (also A2 in the responses), i.e., dimensions containing confounders are also possible to contain discriminative information, more dimensions with confounders due to $\\mathcal{L_{drr}}$ may also carry more discriminative information. Likewise, the model without $\\mathcal{L_{drr}}$ may generate representations with over-redundant dimensions so that the total amount of available discriminative information will decrease. However, roughly using the representations with complex dimensional information (without $\\mathcal{L_{drr}}$) may result in insufficient discriminative information mining, e.g., w/o ML can only outperform SimCLR by a limited margin. Our proposed MetaMask effectively avoids the appearance of such an undesired phenomenon by leveraging $\\mathcal{M}$ and the corresponding meta-learning-based training paradigm, which is supported by the empirical results.\n\nReferences:\n\n[41] Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, and Zhouchen Lin. Chaos is a ladder: A new theoretical understanding of contrastive learning via augmentation overlap. arXiv preprint arXiv:2203.13457, 2022.", " We appreciate the thoughtful feedback of the reviewer m2DS. We are glad the reviewer found that this work is novel and technically sound, the writing is clear. The mentioned issues are addressed as follows:\n\nQ1: The correctness of Equation 8.\n\nA1: Thanks for the careful review! Theorem 4.2 in [41] provides sufficient proof to the statement that the contrastive loss in the self-supervised learning stage can constrain the upper and lower bounds of the cross-entropy loss in the supervised learning stage for downstream tasks. In Theorem 5.1 (Page 7), we extend this theorem to fit the learning paradigm of MetaMask, i.e., adding the learnable dimensional mask into the theorem. We have checked our proof and did not find flaws/errors.\n\nQ2: Figure 1(a) is interesting -- the proposed MetaMask seems to have much higher contributions on most dimensions than SimCLR -- why is this and is this desirable?\n\nA2: First, we clarify that the abscissa axis represents the feature dimensions, and the ordinate axis represents samples of different classes in Figure 1(a) (Page 2). The observation shows that compared with SimCLR, our method using redundancy-reduction can indeed learn representations with decoupled dimensions. 
For the classification contributions, we further find that some dimensions of the representation learned by MetaMask indeed have higher contributions, while the difference in the classification contributions of the representation learned by SimCLR is relatively small. We consider the reason behind such a result is that the redundancy-reduction technique, proposed by [14], empowers our method to learn dimension-decoupled representation, while the methods without such a regularization, e.g., SimCLR, may learn a representation that has many redundant dimensions, i.e., many dimensions contain very similar information. Therefore, for our method, the dimensions containing discriminative information naturally have a higher classification contribution. For SimCLR, multiple dimensions contain the shared information, and the differences are not very large so that the contribution of each individual dimension is weakened.\n\nThe foundational idea of self-supervised learning is to learn a general representation that can be generalized in various downstream tasks. The representation learned by SimCLR cannot sufficiently explore the semantic information from the input because of the dimensional redundancy, while the representation learned by our method can learn dimension-decoupled representation so that more semantic information can be contained by the representation if the dimensionality of the projection head is suitable.\n\nQ3: How the experiments for Figure 1(b) are a little unclear. How many layers were you using? Were you using the same dimension for all the layers of the projection head, or did you shrink the dimension at the last layer to serve as a dimension bottleneck? These are important because your results sharply contrast with Barlow Twins'.\n\nA3: We follow the official code of Barlow Twins. Specifically, in the experiments of Figure 1(b) (Page 2), three layers are used. We use the same dimension for the first two layers of the projection head, and only change the dimension of the last layer, which is as same as the implementation in the paper of Barlow Twins (Section 4 in [14]). For the experiments of CIFAR10, the dimensions of the first two layers are both 2048, and the dimension of the last layer is changing from 512 to 12288. For the experiments of ImageNet, the dimensions of the first two layers are 8192, and the dimension of the last layer ranges from 512 to 40960.\n\nWe think the reasons why our results sharply contrast with Barlow Twins are: 1) Barlow Twins does not take experiments on small-scale datasets, e.g., CIFAR10; 2) the dimensionality taken by Barlow Twins is not large enough. In particular, the projection head dimensionality range of Barlow Twins is 16 to 16384, while that in our paper is 512 to 40960, and the results of lower dimensionality are consistent between Barlow Twins and ours.\n\nFurthermore, in further exploration, we propose that such a model collapsing phenomenon is due to two reasons: 1) the proposed dimensional confounder; 2) the curse of dimensionality. And we elaborate on the difference and our intuitive analysis in Appendix A.3. The experiments, shown in Figure 4 (Page 9) and Figure 7 (Page 19), demonstrate that MetaMask can yield significant performance boosts against the mode collapse. We have added the experimental details and explanations on the results between Barlow Twins and our method in Section 6 and Appendix A.4 of the rebuttal revised version. 
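For concreteness, the projector shape described above corresponds to a sketch like the following; the layer widths come from the numbers in this response, while the Linear/BatchNorm/ReLU composition follows the public Barlow Twins reference projector and is an assumption here rather than a copy of our supplementary code:

```python
import torch.nn as nn

def projector(backbone_dim, hidden_dim, out_dim):
    # Three-layer projection head: the first two layers share hidden_dim and only
    # the last layer's out_dim is varied (2048/2048/512-12288 on CIFAR10,
    # 8192/8192/512-40960 on ImageNet in the experiments described above).
    return nn.Sequential(
        nn.Linear(backbone_dim, hidden_dim, bias=False),
        nn.BatchNorm1d(hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, hidden_dim, bias=False),
        nn.BatchNorm1d(hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, out_dim, bias=False),
    )
```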
Please refer to the code in the supplementary file for the detailed implementation.", " Q4: Could you provide the dimensional mask rate that improves the performance on both ImageNet and CIFAR-10? Although you claim the ImageNet rate is higher, the difference seems indistinguishable from Figure 2.\n\nA4: We would like to clarify that the experiments conducted in Figure 2 (Page 3) are based on random masks, which aim to prove the existence of dimensional confounders, i.e., if the dimensional confounders are randomly masked, the performance will improve. The results support our statement. For the dimensional mask rate that improves the performance on both ImageNet and CIFAR-10, we observe from the experiments that a rate of around 0.5% may improve the performance on both datasets. The reasons why using a 0.5% mask rate can improve the performance include: 1) the encoder is strong (ResNet), so most of the dimensions contain discriminative information; 2) the masking scheme is a random process, so it is hard to exactly mask the dimensions containing confounders.\n\nThanks for your careful review! The claim is a typo: it contradicts the explanation that follows from Line 67 to Line 73 and also contradicts the experiments in Figure 2 (Page 3). The dimension-to-class ratio on CIFAR-10 (51.2) is much higher than that on ImageNet (2.048). Therefore, the presence of dimensional confounders naturally remains at a low level on ImageNet, since supporting $1000$-category classification requires a large amount of heterogeneous discriminative information. Thus, the dimensional mask rate should be lower on ImageNet for the performance of the model to improve. We have corrected this typo in the rebuttal revised version.\n\nQ5: The claim you made, that (Line 50) \"the dimensionality of the projection head ... acts as a dimensionality bottleneck\" is not the same as the claim from Barlow Twin, \"the output of the ResNet is kept fixed to 2048, which acts as a dimensionality bottleneck\". The bottleneck, according to Barlow Twin, seems to be the ResNet output dimension, rather than the projection head dimension. Could you clarify this?\n\nA5: Thanks for pointing this out. We consider the statement in our paper to be similar to the statement in Barlow Twins to a certain extent. Specifically, our statement describes the dimensionality bottleneck only in the training phase: the features are generated by the cascading structure of the backbone and projection head, and thus the projection head with a fixed output dimensionality can be treated as the dimensionality bottleneck for the features. The statement in Barlow Twins describes the dimensionality bottleneck in both the training and inference phases. In the inference phase, the projection head is discarded, and the representations are generated only by the backbone. Thus, the backbone can be treated as the dimensionality bottleneck for the representation. The intuition behind both statements is shared, i.e., the fixed dimensionality of the network restricts the dimensionality of the generated representation/feature.\n\nIn summary, we thank the reviewer again, and we will change the statement in the current version to the statement from Barlow Twins' original paper, because that statement is based on an understanding of Barlow Twins's behavior. 
We believe it is better to restate the original claim.\n\nQ6: The authors have not shed light on the negative societal impact.\n\nA6: Thanks for mentioning this. Because this work presents a general method to tackle the dimensional confounder issue for self-supervised contrastive learning, we did not see particular foreseeable negative societal impacts and ethical consequences.\n\nReferences:\n\n[14] Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised learning via redundancy reduction. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, Proceedings of\nMachine Learning Research. PMLR, 2021.\n\n[41] Yifei Wang, Qi Zhang, Yisen Wang, Jiansheng Yang, and Zhouchen Lin. Chaos is a ladder: A new theoretical understanding of contrastive learning via augmentation overlap. arXiv preprint arXiv:2203.13457, 2022.", " This paper first investigates the problem of dimensional confounder in contrastive learning, which refers to a subset of dimensions learning only task-irrelevant background information. The author shows the existence of the dimension confounder, where the performance of Barlow Twin starts to drop and eventually collapses after the dimension of the projection head reaches a certain point. The author then proposes to learn a dimensional mask approach, MetaMask, consisting of learnable weights to reweight each dimension of the encoder output in contrastive learning, so that the dimensional confounder can be assigned lower weights. The authors show theoretically that the proposed MetaMask achieves tighter risk bounds (lower and upper) for downstream classification tasks, and empirically improves the performances on downstream tasks and robustness towards dimension sizes by applying MetaMask to the prior models. Originality: \n\nStrengths: the idea is novel, addressing the issue of dimensional confounders by reweighting encoder outputs using meta-learning techniques. The idea is related to MAXL[1], but the two approaches are different. MAXL applies a mask with weights on the softmax logits in a hierarchical binary prediction setup, where the logits are single scalars. On the other hand, this work applies a mask with weights on the encoder output for contrastive learning, where the output has a large dimension.\n\nThe reviewer is not familiar with meta-learning literature and only evaluates the originality from a contrastive learning perspective.\n\nQuality: \n\nStrengths: the idea overall has technical soundness, with theoretical justification on the proposed MetaMask creating tighter lower and upper bounds of risk, and shows reducing the risk can improve downstream performances. \n\nWeaknesses: The reviewers have some additional clarification questions listed in the next section.\n\nThe reviewer cannot verify the correctness of Equation 8. \n\nClarity: \n\nStrengths: the paper is very well written, with clear motivation, thorough related work, a succinct method section, and well-organized theoretical analysis and experimental results.\n\nSignificance: \n\nStrengths: the community can adopt the method to reduce the impact of the dimensional confounder, especially in large models. The performance improvements are significant in some cases (Table 2, SimCLR on STL-10 and NNCLR on CIFAR-10).\n\n[1] Liu, Shikun, Andrew Davison, and Edward Johns. 
\"Self-supervised generalisation with meta auxiliary learning.\" Advances in Neural Information Processing Systems 32 (2019).\n Q1: Figure 1(a) is interesting -- the proposed MetaMask seems to have much higher contributions on most dimensions than SimCLR -- why is this and is this desirable?\n\nQ2: How the experiments for Figure 1(b) are a little unclear. How many layers were you using? Were you using the same dimension for all the layers of the projection head, or did you shrink the dimension at the last layer to serve as a dimension bottleneck? These are important because your results sharply contrast with Barlow Twins'. \n\nQ3: Could you provide the dimensional mask rate that improves the performance on both ImageNet and CIFAR-10? Although you claim the ImageNet rate is higher, the difference seems indistinguishable from Figure 2.\n \nQ4: The claim you made, that (Line 50) \"the dimensionality of the projection head ... acts as a dimensionality bottleneck\" is not the same as the claim from Barlow Twin, \"the output of the ResNet is kept fixed to 2048, which acts as a dimensionality bottleneck\". The bottleneck, according to Barlow Twin, seems to be the ResNet output dimension, rather than the projection head dimension. Could you clarify this? The authors discussed the limitations of the work. The limitation states that the proposed MetaMask is specific towards improving contrastive learning but could not yet show the theoretical guarantees of MetaMask on other self-supervised learning methods, such as masked image modeling. This is a correct assertion and the discussions of limitations are helpful for the community.\n\nThe authors have not shed light on the negative societal impact.", " This paper handles the problems which existing self-supervised learning algorithms suffer. Authors demonstrate that existing self-supervised learning algorithms suffer dimensional redundancy and dimensional confounder. They propose MetaMask which learns a dimensional mask through meta learning to decrease the learning signals of features having confounders. For the rationales of MetaMask, they provide theoretical support where MetaMask has tighter risk bounds in downstream tasks compared to baselines. They demonstrate that MetaMask combined with competitive self-supervised algorithms has superior performance to baselines on multiple benchmark datasets under two architectures. The paper has several strong and weak points.\n\nStrengths:\n1. This paper shows that the existing self-supervised learning algorithms suffer dimensional redundancy and dimensional confounder. The problem is also quite intuitive.\n2. Theoretical support for the reasons why MetaMask works is provided. (I have not carefully checked the proofs)\n3. Authors conduct extensive experiments and demonstrate that MetaMask combined with existing methods improves performance in most cases.\n\nWeaknesses:\n1. MetaMask adopts bi-level optimization, so it would necessarily require significant amounts of additional computational cost during computing second-derivatives. However, authors do not provide any analysis of computational costs.\n2. Authors maintain that MetaMask reduces the gradient effect of the dimension containing the confounder (in lines 76-80), but they do not explicitly demonstrate that masked dimensions (having low weights in MetaMask) have confounder (non-discriminative features such as backgrounds). It would be great if the authors suggest experiments about it.\n3. 
The improvement of MetaMask seems marginal in important cases although MetaMask improves baseline performance by combining with them in most cases. Specifically, in Table 2, MetaMask combined with existing algorithms in the modern architecture (ResNet18) shows comparable performance to the most competitive baselines (such as BYOL in CIFAR-10, NNCLR in CIFAR-100, and NNCLR in IN-200).\n4. (simple baseline) Applying dropout on $h_i$ in Barlow Twins might be the simple baseline for MetaMask and it would be effective in that BarlowTwins with randomly masked dimensions outperform naive BarlowTwins in Figure 2. Would authors provide the performance of Barlow Twins + dropout?\n\nMiscellaneous minor issues:\n1. In Figure 1, (a) and (b) are missing in the figure (only mentioned in the caption). \n\n---\n\n[After reading authors' answers] I appreciate the detailed responses and authors address several of my concerns. Accordingly, I change my rating to weak accept. 1. (ablation study) Barlow Twins handles dimensional redundancy but suffers dimensional confounder. MetaMask mitigates dimensional confounder by learning and applying a dimensional mask. There would be performance gain only from alleviating dimensional confounder. Can authors show the performance of MetaMask without redundancy-reduction objective function? (I understand that Barlow Twins could exacerbate dimensional confounder.) It would be great if the authors suggest possible issues from the adoption of bi-level optimization such as computational costs.", " The authors propose a self-supervised learning scheme with a mask on top of the contrastive learning and the Barlow-twin method to perform dimensional reduction and confounder elimination. The concept is to use a meta-learning approach to seek a mask deleting redundant features while optimizing the feature representation and model parameters on the fly. The training loss is based on a combination of contrastive and Barlow-twin. Strengths:\n\nThe proposed method seems easy to implement based on existing works and achieve good performance improvement. The method has been validated with theory.\n\nWeakness:\n\nThe ablation study is insufficient, as the method uses both Barlow-twin and contrastive loss. Still, it's unclear that this meta-learning alike optimization scheme is essential compared with training on a loss combination of Barlow-twin and contrastive loss. The comparisons have only been made against either one of them. Also, in Figure 1, the performance improvement by the random mask is not significant enough to persuade me that the feature redundancy is sufficiently essential (only within 0.1% improvement on ImageNet). Finally, why is the performance not tested on full ImageNet while the motivation figure is?\n\nTheoretically, the authors show that the conditional variance can be reduced with the optimal mask. Still, since the identity matrix (no mask) is also a special case of masks, this is a seemingly trivial statement.\n I think the improvement is good, but I want to know that this meta-learning type is more essential than just optimizing a loss of Barlow-twin and contrastive loss. Also, the theoretical point that mask includes identity seemingly makes the theorem vacuous, and I wonder if the authors can respond to my concern. Yes", " This paper proposed a hybrid joint-embedding framework, which adds learnable masks on representations and applies both contrastive loss and redundancy reduction loss. This mask is updated when fixing the rest of the network. 
\nThe experiments show that the proposed method improves most of the recent frameworks (BarlowTwins, SimCLR, SwAV, BYOL, etc.) on various datasets (CIFAR-10, CIFAR-100, STL-10, IN-200) Strengths\n1. The authors point out that joint-embedding methods will learn harmful features. This is a widely ignored problem in the self-supervised learning method.\n2. Experiment results show consistent improvement of the proposed method overall frameworks on various datasets. \n3. The second-order optimization on the mask is supported by theoretical proof. \n\n\nWeakness\n1. The masks are trainable means that in the end, they are fixed and will ignore several features known as confounders. However, this needs to be verified via ablation experiments. For example, a linear probe on the whole unmasked features should give lower accuracy. \n2. The experiments are not convincing due to a lack of fair comparison. The authors use a very customized setting, e.g., AlexNet on various joint-embedding frameworks. No hyperparameter tuning is conducted on these models. \n3. Second-order optimization creates significant computational overhead. The authors need to show how much more training time is needed for each model.\n4. No experiment details. \n\n\n====== post rebuttal comments =====\n\nI've carefully read the authors' responses. All my major concerns are addressed. Though the AlexNet setting on 1 GPU is not convincing enough, the fair amount of controlled experiments in the paper convinces me of the effectiveness of the proposed idea.\n\nI've increased my score.\n 1. What's the reason behind MetaMask's collapse for large embedding dimensions? This is not observed in standard BarlowTwins.\n\n2. Why is this considered \"meta\"? It's basically second-order optimization.\n N/A" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "mhcrWMxos_y", "Zf9nhuzfVu", "H41PzShwI9", "YZz1g2rHMEd", "jsfIm5ch8xa", "nips_2022_gkQkZy-pRik", "mhcrWMxos_y", "mhcrWMxos_y", "mhcrWMxos_y", "Zf9nhuzfVu", "Zf9nhuzfVu", "Zf9nhuzfVu", "H41PzShwI9", "H41PzShwI9", "H41PzShwI9", "H41PzShwI9", "jsfIm5ch8xa", "jsfIm5ch8xa", "nips_2022_gkQkZy-pRik", "nips_2022_gkQkZy-pRik", "nips_2022_gkQkZy-pRik", "nips_2022_gkQkZy-pRik" ]
nips_2022_vKBdabh_WV
Meta Optimal Transport
We study the use of amortized optimization to predict optimal transport (OT) maps from the input measures, which we call Meta OT. This helps repeatedly solve similar OT problems between different measures by leveraging the knowledge and information present from past problems to rapidly predict and solve new problems. Otherwise, standard methods ignore the knowledge of the past solutions and suboptimally re-solve each problem from scratch. Meta OT models surpass the standard convergence rates of log-Sinkhorn solvers in the discrete setting and convex potentials in the continuous setting. We improve the computational time of standard OT solvers by multiple orders of magnitude in discrete and continuous transport settings between images, spherical data, and color palettes.
Reject
This paper proposes an amortized optimization approach for predicting optimal transport (OT) maps. Three reviewers found the proposed method interesting. However, another reviewer raised concerns about the improvement in computational efficiency and the generality of the proposed method: 1) The experiments on computational efficiency are of very small scale and insufficient to justify the improvement. The authors may consider larger-scale problems, where the run time should be significantly larger than other computational overheads, e.g., >10 seconds. 2) As shown in Tables 1 and 2, the improvement of the proposed method in computational efficiency is marginal. 3) The current experiments only use an MLP for small-scale and relatively easy low-dimensional problems. To demonstrate the generality of the proposed method, the authors should consider other neural architectures for more complex data. This paper could be significantly strengthened if these issues were addressed.
train
[ "TEFRc700U-k", "46SihsL-I9", "yi1jfC2PxvY", "t7UlgTjnK1", "7FeKRZ80NAH", "b-KeH74gJgZ", "gBT7GILDQ1j", "hQ6mC4_EhjQ", "MJOZ0gJCMXU", "iHGUYC-zNvY", "qZ445NM_Fdt", "eAep-RDgLYIM", "kGrqHEcmTW", "GOB1t5NIqK", "7Di0JvA6YeV" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " This paper considers learning a meta-model to predict the solution to the OT problem. The idea is novel and the paper is well-written.\n\nCompared to other OT methods that run from scratch, it can save the time of learning OT potential for a new task. However, the experiments in this paper are only 2 or 3 dimensions. And I don't see how it can scale to a high dimension in the current framework (figure 3). For example, if $\\alpha,\\beta$ are a dataset of images and each image is a sample point, then $z_1$ and $z_2$ would be of very high dimension, and I conjecture MLP will not work anymore. And if it's only this kind of low dimension, for example, do people really care about the difference between 1e-5 seconds and 1e-3 seconds? I have a question mark about its real application.\n\nAlso, the pipeline in Figure 3 is quite problem-oriented. Given a new format of distributions $\\alpha,\\beta$ that are not images, for example, single-cell RNA data or a set of point clouds, one needs to re-design or re-select the encoder of $\\alpha,\\beta$. ResNet may not be able to extract the correct information.\n\nRegarding the continuous measure $W_2$ solver, I think W2GN is not a good choice, at least not the best. It primarily considers OT maps in latent space of pre-trained autoencoder, which limits practical application in high dimensional space, see the discussion in the second paragraph of [Rout et al. 2021](https://openreview.net/pdf?id=5JdLZg346Lw). The best continuous $W_2$ solver has been shown to be MM:R in [W2 benchmark](https://openreview.net/pdf?id=CI0T_3l-n1) paper. Why do authors choose to use outdated W2GN instead of state-of-art $W_2$ solver?\n\nSince the computation speed is a highlighted advantage, the authors should also consider the comparison with the GeomLoss package (https://www.kernel-operations.io/geomloss) and “Fast geometric learning with symbolic matrices” Feydy et al., NeurIPS 2020. It's a GPU implementation and it supports arbitrary cost functions and scales up to millions of high-dimensional samples in seconds.\n\nThe authors can be more careful about the literature:\n\nLine 235: Li et al. 2020 didn't use ICNN. Instead, I find these two papers use ICNN\n\nAlvarez-Melis, David, Yair Schiff, and Youssef Mroueh. \"Optimizing functionals on the space of probabilities with input convex neural networks.\" Transactions on Machine Learning Research 2022.\n\nFan, Jiaojiao, Amirhossein Taghvaei, and Yongxin Chen. \"Scalable computations of Wasserstein barycenter via input convex neural networks.\" ICML 2021.\n\n I have a question about the line 49-50 out-of-sample setting. If Meta OT can approximate the potential of W2GN well, then it should be able to apply to any samples in the considered task, not only just training samples. see weakness", " Thank you for the response! We will update the phrasing on the stream of images and continue to think more about other initializations and quantitative comparisons to make.", " **Specific Quantitative Results**\n\nIt would be interesting to see quantitative results using distortion metrics such as PNSR or SSIM where reference images are available. Also, the authors may provide quantitative results using a perceptual metric such as LPIPS to better capture the quality of the transported image samples. \n\n**Regarding initialization**\n\nI would like to see if a better initialization would give results comparable to Meta OT. 
Since the *Rethinking initialization* paper appeared after the submission of this work, I think it is not necessary to compare it with that work at this moment. \n\n**Quantitative metrics**\n\nBy quantitative metrics, I don't mean the OT distance or the runtime because it poorly correlates with the perceptual image quality. I suggest the authors may consider perception/distortion metrics which are commonly used in practice. \n\n**New stream of images**\n\nI kindly ask the authors to rephrase the sentence.\n\nI thank the authors for the detailed response. Having read the rebuttal, I am raising my score from 5 to 6. \n\n", " Dear Reviewer 96EJ, as the discussion period will close tomorrow, we would greatly appreciate if you'd take a look at our response to your review soon and let us know if you have any remaining questions. We look forward to addressing any remaining concerns before the end of the discussion period. If our response was satisfactory, we ask that you consider raising your score for our submission. Thank you for your time.", " Dear authors,\n\nThank you for the very detailed response. The new version of the paper has addressed my questions/comments and I'm deciding to leave my score unchanged.", " I thank the authors for their response.\n\nAfter reading their rebuttal, I believe that the authors have adequately addressed all of my questions. In addition, the revised version of the paper does improve both the quality and clarity of the paper. My only concern is still about the practical applications of Meta OT.\n\nAll in all, I would like to increase my score from 6 to 7.", " > Related work. I came across a recently published paper “Rethinking Initialization of the Sinkhorn Algorithm”. Because both papers try to generate an efficient initialization for speeding up the Sinkhorn algorithm, it may be useful to cite and discuss that paper.\n\nThis is a brilliant paper! It was posted after the submission deadline and proposes an important idea to compare to: using the Wasserstein-2 OT between Gaussian approximations of the measures to initialize the Sinkhorn dual potentials rather than predicting an initialization with Meta OT. We have added a comparison to their approach in all of the relevant settings (in L205-209 Figure 4 and Figure 7) and find that Meta OT predictions often provide better starting points. We also note that their initial setting is mostly scoped to the Euclidean Wasserstein-2 setting while Meta OT methods do not make this assumption: this makes their method not exactly applicable in our spherical setting, which uses the spherical geodesic cost rather than the Euclidean one. We find that their initialization is still useful in the spherical setting as the Euclidean distance in the ambient space is correlated with the geodesic distance. (Very tangentially, one other interesting idea for their setting here is that a Riemannian extension of their method could be created too, that looks at known Riemannian OT-based initializations between, for example, wrapped Gaussians on the sphere.)\n\n> Other than those two, it would be interesting to see the applications of Meta OT on deep learning tasks such as generative modeling or domain adaptation.\n\nPlease see our earlier comment on these not being Meta OT settings and let us know if you have any other questions or comments on this. We hope that you will reconsider your evaluation in light of this.", " Many thanks for the careful reading of our paper! 
We have attached a new version of the paper that we hope will clarify the points that you have raised, and include a more detailed response inline below. Please let us know if there are any other ablations or updates you think would be useful to get in.\n\nWe would like to start by addressing a **misunderstanding** in the applicability of Meta OT methods and emphasize that **Meta OT methods may not be useful in every OT setting**. Your review requests that we evaluate in the standard image and generative modeling settings and state that not including these results is a weakness and limitation of our paper. These settings are out-of-scope and we do not see a useful way of incorporating Meta OT into these settings as they often seek to estimate a single transport map, or they only care about estimating the transport map between the latest model and the data distribution (and make use of warm-starting the dual potentials). We have added a new section to the introduction entitled “settings that are not Meta OT” to attempt to prevent this misunderstanding in the future and will respond in more detail to your points inline below. We hope that you will reconsider your evaluation of our paper in light of this clarification.\n\nHere are further responses inline:\n\n> While the use of amortization in OT is not completely new\n\nWe are not aware of previous work using amortization for computing OT duals. We have a brief discussion of [Amortized Projection Optimization](https://arxiv.org/abs/2203.13417): they use amortized optimization to compute informative directions to project on in the context of *sliced* Wasserstein distances. The use case is deeply different from ours predicting the optimal duals given the input measures. Please let us know if you have any other references in mind.\n\n> However, there is no demonstration of some standard machine learning tasks that famously gain benefits from OT (e.g. generative modeling, domain adaptation). Therefore, it is difficult to judge the “quality” of transport maps of the proposed solvers in practical applications.\n\nPlease see our earlier comment on these not being Meta OT settings and let us know if you have any other questions or comments on this. We hope that you will reconsider your evaluation in light of this.\n\n> There are some minor parts that can be improved to increase the readability of the paper.\n\nWe hope you find the new version of our paper to be improved. Please let us know if there are any other updates you would like for us to get in.\n\n> The sentence in L158 is not clear. Could you explain it in more detail?\n\nThis is on the delicate topic of comparing the convergence of Meta OT to a classical method, such as Sinkhorn. We thought more about this part of the paper and decided to remove it for now, as Meta OT methods are about prediction rather than convergence. Our intention in this part was to say that amortized optimization provides a ~constant-time (and hopefully computationally cheap) way of predicting solutions to optimization problems that are otherwise iteratively solved. The standard theoretical convergence analysis results applied to, for example, gradient descent or Sinkhorn considers the rate the iterates approach the optimal solution: the point we would like to make here is that a Meta OT model’s prediction is often significantly computationally cheaper than running an iterative algorithm to the same level of accuracy. This is because the Meta OT model only amortized a subspace of OT problems. 
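To make the amortization concrete: the Meta OT prediction is a single feedforward pass whose parameters are trained across problem instances with the dual objective itself as the loss, roughly

$$\\max_{\\theta} \\; \\mathbb{E}_{(\\alpha,\\beta)} \\big[ J\\big(h_\\theta(\\alpha,\\beta);\\, \\alpha,\\beta\\big) \\big],$$

where $J$ denotes the dual OT objective and $h_\\theta$ outputs the dual variables (schematic notation, not the exact equations from our paper). This is why a prediction costs one forward pass while an iterative solver pays per-instance optimization.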
Please let us know if you have any outstanding questions or comments on this point, or if you would find it insightful for us to include anything back in the paper.\n\n> What is the formula for the marginal error in Table 1 and Figure 4? Is it Equation (7)? What if we use a stricter threshold like 10e-4, 10e-5, etc. Does MetaOT + Sinkhorn still converge faster?\n\nYes, it’s the marginal error in eq. (7). We have updated this in the text. We arbitrarily selected 1e-3 as the convergence threshold because it is a commonly used default value. We have ablated other values of 1e-2, 1e-4, and 1e-5 in Table 5 and 6 in Appendix C.2 and show that Meta OT’s initialization improves the runtime in all cases.\n\n> What is the formula for the normalized dual objective value in Figure 7? Is 1.0 the optimized value?\n\nWe estimate the dual objective by exactly conjugating the model and then normalize the value for each instance by the smallest and largest values encountered during the W2GN fine-tuning so that the instances are comparable. Without the normalization, the optimal dual objective between the color palette transfers can be significantly different and make it difficult to easily compare how the methods converge.", " > The paper claims that we would receive a stream of new images in deployment which could be different from the images used to obtain the OT map (lines 31-32). However, the individual experiments on MNIST, Spherical, and WikiArt are conducted with samples from the same distribution. I understand that while standard OT solvers need retraining from scratch, Meta OT provides a better initialization. The authors should discuss to what extent Meta OT can handle the stream of new images in deployment.\n\nWhile “new stream of images” can have many interpretations, in this informal phrasing we meant that a new stream of images that’s close to the i.i.d. samples from the meta-distribution used to train the model. It is indeed important in practice for any machine learning system to adapt to a stream of data that is not producing i.i.d. samples and is also likely to have distribution shift: we did not intend to claim any improvements in these settings and are very open to rephrasing any parts of our paper to make it clear that we are not addressing this.\n\n> It would be helpful to assess the performance of color transfer if the authors provide quantitative results. It is hard to judge qualitatively since images visually look all the same. I suggest both perception and distortion metrics to analyze perceptual quality and geometric distortion in the pushforward samples.\n\nWe believe this is another minor misunderstanding, as Figure 7 quantitatively shows the dual objective on the color transfer between test images in comparison to the convergence of standard W2GN training, and Table 2 compares the runtime and dual values. We agree our contribution would be difficult to assess without this. We also hope that you are willing to re-evaluate your assessment of our paper in light of this new information.", " Many thanks for the careful reading of our paper! We have attached a new version of the paper that we hope will clarify the points that you have raised, and include a more detailed response inline below. Please let us know if there are any other ablations or updates you think would be useful to get in.\n\nWe would like to start by addressing a **misunderstanding** in the applicability of Meta OT methods and emphasize that **Meta OT methods may not be useful in every OT setting**. 
Your review requests that we evaluate in the standard image and generative modeling settings and state that not including these results is a weakness and limitation of our paper. These settings are out-of-scope and we do not see a useful way of incorporating Meta OT into these settings as they often seek to estimate a single transport map, or they only care about estimating the transport map between the latest model and the data distribution (and make use of warm-starting the dual potentials). We have added a new section to the introduction entitled “settings that are not Meta OT” to attempt to prevent this misunderstanding in the future and will respond in more detail to your points inline below. We hope that you will reconsider your evaluation of our paper in light of this clarification.\n\nHere are further responses inline:\n> The lack of quantitative results makes it hard to evaluate the overall performance.\n\nMany of our results are quantitative: Table 1 quantifies the runtime of Meta OT in comparison to standard Sinkhorn solves (with zero and Gaussian init) when using the default marginal error of 1e-3, Table 2 quantifies the runtime and dual objective values in comparison to standard W2GN solves, Figure 4 quantitatively shows the marginal error of Meta OT on test data in comparison to Sinkhorn (with zero and Gaussian init). Figure 7 shows the dual objective values in comparison to W2GN on test data. And our new version of the paper contains a few more quantitative details: Table 5 and 6 in the appendix quantitatively compare the runtime against Sinkhorn when varying the convergence threshold, and Figure 10 in the appendix quantitatively compares cross-domain training/evaluations to asses the generalization capabilities of Meta OT methods.\n\nAre there any other specific quantitative settings and evaluations you were referring to in your original comment that you would be curious to see?\n\n> Ablation study is needed to measure the distance of fully trained weights from random initialization and Meta OT predicted initialization.\n\nWe would be open to running additional experiments for ablations like this. If you would be interested in seeing these results, can you please clarify what you would expect to see in this ablation?\n\n> Whether a better initialization would give results comparable to Meta OT?\n\nWhen we submitted the paper, we were not aware of better initialization strategies. The paper [Rethinking Initialization of the Sinkhorn Algorithm](https://arxiv.org/abs/2206.07630) was posted after we submitted this, and we think it is a brilliant idea. We have added a comparison to the Gaussian initializations proposed here. We find that it is indeed able to improve upon Sinkhorn’s initialization and Meta OT’s predictions still further improve upon those. Please let us know if there are any other initialization schemes that you are aware of that would also make sense for us to include.\n\n> One suggestion would be to cite the published version of papers where applicable.\n\nThanks, we have went through and updated the citations to the published versions. Please let us know if we have missed any of them.\n\n> In my opinion, the paper currently lacks sufficient experimentation that may vouch for its clear acceptance. I suggest the authors include stronger experimental results on CIFAR10, CelebA, CelebA-HQ, or other relevant datasets. If time permits, the authors may choose the task of generative modeling or image restoration where OT has shown promising results. 
How efficiently can Meta OT predict the parameters of the OT map in the aforementioned tasks on these harder datasets?\n\nPlease see our earlier comment on these not being Meta OT settings and let us know if you have any other questions or comments on this. We hope that you will reconsider your evaluation in light of this.", " We were delighted to receive your review of our paper! Thanks for the encouraging comments and insightful questions. We have attached a new version of the paper that we hope will clarify the points that you have raised, and include a more detailed response inline below. Please let us know if there are any other ablations or updates you think would be useful to get in.\n\n> I think that the authors should even motivate more their contributions by providing some contexts where the proposed learning procedure could be useful.\n\nWe agree and have added a few more settings in the introduction: to comparing seismic signals as in Engquist et al. and for single-cell perturbations as in multiple Bunne et al. papers. We also think the repeated couplings that arise in reinforcement learning settings may be useful for Meta OT methods.\n\n> How do you take into account that the learned dual potential is also a function of the underlying cost in the discrete setting? As it seems that the MLP takes only into account the atoms of the measures.\n\nWe have not strongly considered this setting as our applications did not require it, but there are a few options worth considering if it comes up in the future. We have added the following text discussing this to the conclusion:\n*In the discrete setting, we only considered settings where\nthe cost remains fixed, but the Meta OT model can also be conditioned\non the cost by considering the entire cost matrix as an input\n(which may be too large for most models to handle), or considering\na lower-dimensional parameterization of the cost that changes between\nthe Meta OT problem instances.*\n\n> How does this method compare to the supervised approach where we consider the exact same parametrization of the potential however we aim at minimizing the distance between the predicted potential and the true one?\n\nWe used regression-based amortization onto the ground-truth potentials in our early prototypes in the discrete/Sinkhorn setting, but did not want to assume access to high-accuracy ground-truth solutions and switched to the objective-based approach presented throughout out paper. In the settings we consider, we do not think there are any insightful ablations between the regression- and objective-based losses.\n\nWe did not try regressing in the continuous Wasserstein-2 setting as approximate ground-truth solutions would have taken ~2 seconds to obtain per pair of images by running W2GN. In contrast, our Meta OT model is able to make a prediction in 1ms on an image pair and locally improves the meta parameters directly by using the gradient of the dual objective.\n\nThe design choice of the loss may be important to consider for future settings, and we have updated the text in Section 2.2 to mention this choice.\n\n> How long is the training stage? It would be nice to provide for each experiment, the plot of the training loss against the CPU/GPU time or the number of operations.\n\nWe’ve added plots to appendix C.3 showing that the MNIST experiments consistently converge after 2 minutes (!) of training on our GPU while the color transfer experiment takes a few hours. 
We unfortunately do not have a good training metric to report when training W2GN as the loss there gives the correct gradients of the dual objective, but unfortunately does not have a meaningful value. We instead try to estimate the dual objective by numerically conjugating the ICNN potential on a set of training instances. While in the main paper we are able to normalize the dual objectives by the maximum and minimum values encountered for each instance, it is not easy to add this into our training code and we have left the dual values unnormalized in this comparison.\n\n> It would be nice also to see how versatile can be the model in the sense that if the network considered is sufficiently large, then it may be able to learn the optimal potential in various settings and not only for a specific distribution of problems.\n\nWe’ve added some cross-domain experiments to appendix D where we find that in many cases, the learned Meta OT potentials can generalize beyond the distribution it is trained on. Here, we considered discrete OT problems between MNIST, Google Doodles, USPS, and random data and train and evaluate on every pairwise combination of these datasets.\n\nPerhaps one dream would be to train a Meta OT model on random data (perhaps even between measures of varying sizes) that is then able to be immediately useful for most downstream problems encountered. We are unfortunately not optimistic that a model like this would be possible to create, and even if it was, it may need to be prohibitively large. For now, we recommend and prefer to only train and evaluate Meta OT models in similar domains.", " We are grateful to the reviewers for carefully going through our paper, and are delighted to hear from the reviewers that *“the paper is of very good quality, very clear and concise, and that the contributions presented here may be of real interests for practitioners using OT”* (uWt4), as well as *“the idea of predicting the parameters of convex Brenier potentials approximated by ICNNs is a major strength”* (96EJ).\n\nWe are fully committed to incorporating all of the reviewer feedback the final version of our paper and have attached a new version of the paper that clarifies a few points that came up. We have run all of the additional experiments and analyses suggested by the reviewers and added the new results to the paper: 1) the Gaussian setting from [Rethinking Initialization of the Sinkhorn Algorithm](https://arxiv.org/abs/2206.07630) as a baseline to Table 1, Figure 4, and Figure 10, 2) cross-domain experiments to appendix D, 3) more training details on the convergence and runtimes to Appendix C.3, and 4) further ablations of the Sinhkorn convergence times as the threshold is varied in Appendix C.2.\n\nWe would also like to address a **misunderstanding** in the applicability of Meta OT methods and emphasize that **Meta OT methods may not be useful in every OT setting**. The reviews request that we evaluate in the standard image and generative modeling settings and state that not including these results is a weakness and limitation of our paper. These settings are out-of-scope and we do not see a useful way of incorporating Meta OT into these settings as they often seek to estimate a single transport map, or they only care about estimating the transport map between the latest model and the data distribution (and make use of warm-starting the dual potentials). 
We have added a new section to the introduction entitled “settings that are not Meta OT” to attempt to prevent this misunderstanding in the future and will respond in more detail to your points inline below. We hope that the reviewers will reconsider their evaluations of our paper in light of this clarification.\n\nWe will respond with more specific comments individually in the review threads.", " In this work, the authors introduce a new and very efficient procedure to solve optimal transport problems between both discrete and continuous distributions using amortized optimization. More precisely, in the setting where one aims at solving multiple OT problems between distributions sharing similar structures, the authors propose to learn, in an unsupervised manner, a predictor able to infer the optimal coupling instead of solving from scratch each OT problem considered. Such learned function can then be applied to quickly predict the optimal coupling between two test distributions (in the sense that they have not been used in the training stage) sharing the same structures as the distributions considered in the training stage. To do so, the authors propose to learn a function parametrized by neural networks mapping the input distributions and the underlying cost to the optimal potential. In order to learn such function, the authors minimize the expectation on a distribution of problems (defined by two input measures and an underlying cost) of the dual objective of the OT problem under the same constraints. Note that if the family of functions generated by the network is sufficiently expressive, then solving such problem is the same as solving each single OT problem and therefore finding the true optimal potential on each of these problems. Note also that the proposed method is completely unsupervised as they do not require to solve each OT problem considered in the training stage. In the discrete setting, the authors consider the dual formulation of the entropic OT which is an unconstrained problem, therefore they end up with an unconstrained optimization problem studied and propose a simple gradient-descent procedure to solve it. In the continuous setting, the authors restrict themselves to the learning of the Wasserstein-2 distance, and take advantage of a reformulation of its dual involving convex potential to recover an unconstrained optimization problem too. Indeed in this case, the authors aim at predicting an ICNN by learning a function mapping the input measures to the parameters of the ICNN. The authors propose also to refine the prediction on test problems by applying few steps of either the Sinkhorn algorithm in the discrete setting or of the W2GN algorithm in the continuous one. Finally, they show on various experiments on real-world data that the proposed approach is able to recover almost the true potentials (and couplings) while being much faster. \n I think that the paper is of very good quality, very clear and concise, and that the contributions presented here may be of real interests for practitioners using OT. More precisely, I think that learning a map that given a geometry is able to infer the optimal coupling for a family of problems is a very promising line of work which may be useful in various setting. For example one could think at some networks which during the forward pass, aim at some point to align data and therefore for each forward pass, one OT problem has to be solved. 
Another example could be that some studies require to compute multiple OT problems between similar point clouds which can be very time-consuming. The approach proposed here could significantly improves the computational time of this kind of tasks. I think that the authors should even motivate more their contributions by providing some contexts where the proposed learning procedure could be useful. How do you take into account that the learned dual potential is also a function of the underlying cost in the discrete setting? As it seems that the MLP takes only into account the atoms of the measures.\n\nHow does this method compare to the supervised approach where we consider the exact same parametrization of the potential however we aim at minimizing the distance between the predicted potential and the true one?\n\n\nHow long is the training stage? \nIt would be nice to provide for each experiment, the plot of the training loss againts the CPU/GPU time or the number of operations. It would be nice also to see how versatile can be the model in the sense that if the network considered is sufficiently large, then it may be able to learn the optimal potential in various settings and not only for a specific distribution of problems.\n Yes.", " As an experimental paper, Meta OT provides better initialization to OT solvers by leveraging shared representations from previous tasks. It helps accelerate the training of OT solvers which is well supported by experimental results provided in this paper. One key requirement is that downstream tasks must follow the identical distribution as the previous tasks. This was acknowledged by the authors under the limitations of meta OT.\n\nIn general, the paper is concise and well-written. The main idea is clear. My major concerns are listed in the main review. I am willing to raise my score if these concerns are properly addressed. \n\n ## Main Review\n### Strengths\n1. The idea of predicting the parameters of convex Brenier potentials approximated by ICNNs is a major strength due to the following reasons. First, directly learning the parameters of an ICNN while satisfying the convexity constraint is shown to be challenging in prior works. Second, the gradient of the ICNN may not contain the true OT map due to its poor expressive power. This paper takes a step towards resolving the first challenge by learning to predict its parameters, e.g. through a ResNet and MLP. As an extension, one might increase the complexity of ICNNs which may resolve the second challenge.\n2. While prior works have tried to directly approximate the gradient of Brenier potentials using a neural network, there is insufficient theoretical evidence on whether the learned neural network is indeed an OT map. On the other hand, this paper uses a meta-learning approach to directly learn the parameters of Brenier potentials. Thereby, it ensures the optimality of the learned transport map as per Brenier’s theorem. \n\n### Paper organization/presentation\n1. The paper is well organized and nicely written. \n2. The presentation is clear and concise. \n\n### Experiments\n1. At present, the experiments are conducted on relatively easier datasets. \n2. The lack of quantitative results makes it hard to evaluate the overall performance. \n3. Ablation study is needed to measure the distance of fully trained weights from random initialization and Meta OT predicted initialization. Whether a better initialization would give results comparable to Meta OT?\n\n### References\n1. 
The paper includes a comprehensive list of relevant literature. \n2. One suggestion would be to cite the published version of papers where applicable. \n\n Please see **Main Review** and **Weaknesses** for questions to address during the rebuttal. ### Weaknesses:\n1. In checklist 2, the authors mention that the paper is not a theory paper. In my opinion, the paper currently lacks sufficient experimentation that may vouch for its clear acceptance. I suggest the authors include stronger experimental results on CIFAR10, CelebA, CelebA-HQ, or other relevant datasets. If time permits, the authors may choose the task of generative modeling or image restoration where OT has shown promising results. How efficiently can Meta OT predict the parameters of the OT map in the aforementioned tasks on these harder datasets?\n2. The paper claims that we would receive a stream of new images in deployment which could be different from the images used to obtain the OT map (lines 31-32). However, the individual experiments on MNIST, Spherical, and WikiArt are conducted with samples from the same distribution. I understand that while standard OT solvers need retraining from scratch, Meta OT provides a better initialization. The authors should discuss to what extent Meta OT can handle the stream of new images in deployment.\n3. It would be helpful to assess the performance of color transfer if the authors provide quantitative results. It is hard to judge qualitatively since images visually look all the same. I suggest both perception and distortion metrics to analyze perceptual quality and geometric distortion in the pushforward samples.", " This paper views the dual problem of optimal transport via the lens of amortized optimization. Then the authors propose a novel method to efficiently predict the optimal transport maps from the input measures. The predicted solutions can be then used as an initialization for standard OT solvers. Empirically, this method improves the computational time of standard OT solvers by multiple orders of magnitude in both discrete and continuous settings.\n While the use of amortization in OT is not completely new, this work proposed a novel approach to find good initializations to solve multiple OT problems more efficiently. Experimental results back the computational advantages of Meta OT. Using the initial prediction from Meta OT, standard OT solvers converge much faster. However, there is no demonstration of some standard machine learning tasks that famously gain benefits from OT (e.g. generative modeling, domain adaptation). Therefore, it is difficult to judge the “quality” of transport maps of the proposed solvers in practical applications. Overall, this paper is well-structured and easy to follow. There are some minor parts that can be improved to increase the readability of the paper. I have the following questions:\n1. The sentence in L158 is not clear. Could you explain it in more detail?\n2. What is the formula for the marginal error in Table 1 and Figure 4? Is it Equation (7)? What if we use a stricter threshold like 10e-4, 10e-5, etc. Does MetaOT + Sinkhorn still converge faster? \n3. What is the formula for the normalized dual objective value in Figure 7? Is 1.0 the optimized value?\n4. **Related work.** I came across a recently published paper “Rethinking Initialization of the Sinkhorn Algorithm”. 
Because both papers try to generate an efficient initialization for speeding up the Sinkhorn algorithm, it may be useful to cite and discuss that paper.\n\n**Minors:**\n* *Sinkhorn algorithm.* The update for $g_i$ in Algorithm 1 should use $f_i$ instead of $f_{i-1}$.\n The authors provided two limitations of Meta OT which may be practically important but are out of their problem settings. Other than those two, it would be interesting to see the applications of Meta OT on deep learning tasks such as generative modeling or domain adaptation.\n" ]
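The reviews and responses above repeatedly refer to warm-starting an entropic OT solver with predicted dual potentials, stopping at a marginal-error threshold such as 1e-3, and comparing against zero or Gaussian initializations. The following is a minimal, generic sketch of that pattern, not the authors' Meta OT code: the cost matrix, problem sizes, regularization strength, and the "predicted" warm start (here just a few cheap iterations) are illustrative assumptions only.

```python
import numpy as np
from scipy.special import logsumexp


def sinkhorn_log(a, b, C, eps=0.05, f_init=None, tol=1e-3, max_iter=10_000):
    """Log-domain Sinkhorn for entropic OT between discrete marginals a, b with cost C.

    f_init lets a caller warm-start the first dual potential (e.g. with an amortized
    prediction); zeros reproduce the usual cold start. Iterations stop once the
    marginal (L1) error falls below tol, mirroring the stopping rule discussed above.
    """
    f = np.zeros_like(a, dtype=float) if f_init is None else np.asarray(f_init, dtype=float).copy()
    g = np.zeros_like(b, dtype=float)
    for it in range(max_iter):
        # The g-update uses the current f, then f is refreshed from the new g.
        g = eps * np.log(b) - eps * logsumexp((f[:, None] - C) / eps, axis=0)
        f = eps * np.log(a) - eps * logsumexp((g[None, :] - C) / eps, axis=1)
        P = np.exp((f[:, None] + g[None, :] - C) / eps)  # entropic transport plan
        err = np.abs(P.sum(axis=0) - b).sum()            # column-marginal error as stopping rule
        if err < tol:
            break
    return f, g, P, it + 1


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 64
    a = np.ones(n) / n
    b = np.ones(n) / n
    x = rng.normal(size=(n, 2))
    y = rng.normal(size=(n, 2))
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

    _, _, _, iters_cold = sinkhorn_log(a, b, C)
    f_guess = sinkhorn_log(a, b, C, max_iter=3)[0]  # stand-in for a learned or Gaussian init
    _, _, _, iters_warm = sinkhorn_log(a, b, C, f_init=f_guess)
    print("iterations (cold vs warm start):", iters_cold, iters_warm)
```

A warm start only changes the first iterate, not the fixed point or the returned plan, which is presumably why the discussion above compares iteration counts and runtimes at a fixed marginal-error threshold rather than solution quality.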
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2022_vKBdabh_WV", "yi1jfC2PxvY", "MJOZ0gJCMXU", "GOB1t5NIqK", "qZ445NM_Fdt", "gBT7GILDQ1j", "hQ6mC4_EhjQ", "7Di0JvA6YeV", "iHGUYC-zNvY", "GOB1t5NIqK", "kGrqHEcmTW", "nips_2022_vKBdabh_WV", "nips_2022_vKBdabh_WV", "nips_2022_vKBdabh_WV", "nips_2022_vKBdabh_WV" ]
nips_2022_5VCT-DptDTs
Heterogeneous Skill Learning for Multi-agent Tasks
Heterogeneous behaviours are widespread in many multi-agent tasks, yet they have received little attention in the multi-agent reinforcement learning community. Efficiently characterizing and automatically discovering heterogeneous behaviours is a key factor for improving learning performance. In this paper, we introduce the concept of the skill to capture heterogeneous behaviours. We propose a novel skill-based multi-agent reinforcement learning framework that enables agents to master diverse skills. Specifically, our framework consists of a skill representation mechanism, a skill selector and a skill-based policy learning mechanism. We design an auto-encoder model that generates latent variables as skill representations by incorporating environment information, which ensures the distinguishability of agents for skill selection and the discriminability needed for skill learning. With this representation, a skill selection mechanism is designed to realize the assignment of agents to skills. Meanwhile, diverse skill-based policies are generated through a novel skill-based policy learning method. To promote efficient skill discovery, a mutual-information-based intrinsic reward function is constructed. Empirical results show that our framework obtains the best performance on three challenging benchmarks, i.e., StarCraft II micromanagement tasks, Google Research Football and GoBigger, over state-of-the-art MARL methods.
Accept
The reviewers largely agreed on the value of the paper's concept (heterogeneous skills for MARL) and appreciated its impressive experimental gains on a range of environments. Each reviewer pointed out distinct areas for improvement: citations to classical work, precision in the derivations around conditional entropy, and general writing improvements, which I find were largely addressed in the rebuttal discussions. Although the precise notion of skills and their application in the multi-agent setting can be debated (is it just an exploration guide, as stated by reviewer j8Gw?), that is quite an interesting debate to engage in. The relatively unique concept for the area and the strong performance on a core set of benchmarks will be quite interesting for the NeurIPS community.
train
[ "i_tW9DWucI", "4ROKEFRvQwZ", "aLsJPqd3Y-X", "8ahvVDHBfa-", "eCwPkBDF5lo", "p2Z3N-zntt", "J0W62fywyJc", "g00EqJSGE7", "bZehBivUbfE", "M2sQe29rYC", "Pvt8gSVFZ-", "dNOBm-1eKFv", "EPlkwVwUcmz", "O7IxswKA8zs" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the inspiring and insightful comments which really helped us to improve our work!\nWe have submitted a revised paper. We hope that we have addressed all your concerns.\nPlease let me know if you have any other issues.", " Thank you for the response. \nTotally, unclear points for me are clarified. \n", " Thanks for your positive feedback on our work and for your valuable comments. \n\nInspired by your second comment and the derivation of $H[z|o]$ in our last response, we find an easier way to construct the lower bound of the intrinsic reward. In this way, we do not have to generate the term $\\mathbb E_{p(a,z|o)}[\\log \\frac{1}{p(z|o)}]$ mentioned in your first comment. Further, we make a careful check on the derivation of the lower bound of the intrinsic reward. Your concerns are answered below.\n\nFirstly, we can construct the correct lower bound of $H[a|s]$ according to the second comment.\n\\begin{align}\nH[a|s] =& \\sum_{s} \\sum_{a} p(a,s) \\log \\frac{1}{p(a|s)} \\\\\\\\\n\\geq& \\mathbb{E}_\\{p(a,z,s)\\}\\Big[\\log\\frac{p(z|a,s)}{p(a,z|s)}\\Big] \\\\\\\\\n=& \\mathbb{E}_\\{p(a,z,s)\\}\\Big[\\log p(z|a,s)\\Big] + \\mathbb{E}_\\{p(a,z,s)\\}\\Big[\\frac{1}{\\log p(a,z|s)}\\Big] \\\\\\\\\n=& \\mathbb{E}_\\{p(a,s)\\}\\Big[\\mathbb{E}_\\{p(z|a,s)\\} \\log p(z|a,s)\\Big] + \\mathbb{E}_\\{p(a,z,s)\\}\\Big[\\frac{1}{\\log p(z|s)p(a|z,s)}\\Big] \\\\\\\\\n=& -\\mathbb{E}_\\{p(a,s)\\}\\Big[H[p(z|a,s)]\\Big] + \\mathbb{E}_\\{p(a,z,s)\\}\\Big[\\frac{1}{\\log p(z|s)}\\Big] + \\mathbb{E}_\\{p(a,z,s)\\}\\Big[\\frac{1}{p(a|z,s)}\\Big] \\\\\\\\\n=& -\\mathbb{E}_\\{p(a,s)\\}\\Big[H[p(z|a,s)]\\Big] + \\mathbb{E}_\\{p(a,z,s)\\}\\Big[\\frac{1}{\\log p(z|s)}\\Big] + \\mathbb{E}_\\{p(z,s)\\}\\Big[H[p(a|z,s)]\\Big]\n\\end{align}\n\nThen we detail the derivation of the lower bound of the intrinsic reward:\n\\begin{align}\nr^m=&I(z;o)+I(a;z|o)+H[a|z,o] \\\\\\\\\n=&(H[z]-H[z|o])+(H[a|o]-H[a|z,o])+H[a|z,o] \\\\\\\\\n=&H[z]-H[z|o]+H[a|o] \\\\\\\\\n\\geq&H[p(z)]-\\mathbb{E}_{p(a,o)}\\Big[H[p(z|a,o)]\\Big]+\\mathbb{E}_\\{p(z,o)\\}\\Big[H[p(a|z,o)]\\Big] -H[z|o]+ \\mathbb E_\\{p(a,z,o)}\\Big[\\log \\frac{1}{p(z|o)}\\Big]\n\\end{align}\n\nNow we focus on the last term $-H[z|o]+ \\mathbb E_\\{p(a,z,o)}\\Big[\\log \\frac{1}{p(z|o)}\\Big]$.\n\\begin{align}\n\\mathbb E_\\{p(a,z,o)}\\Big[\\log \\frac{1}{p(z|o)}\\Big] =& \\sum_{z} \\sum_{o} \\sum_{a} p(z,o, a) \\log \\frac{1}{p(z|o)} \\\\\\\\\n=& \\sum_{z} \\sum_{o} \\sum_{a} p(a)p(z,o | a) \\log \\frac{1}{p(z|o)} \\\\\\\\\n=& \\sum_{z} \\sum_{o} p(z,o) \\log \\frac{1}{p(z|o)} \\\\\\\\\n=& H[z|o]\n\\end{align}\n\nTherefore, we can eliminate the term $-H[z|o]+ \\mathbb E_\\{p(a,z,o)}\\Big[\\log \\frac{1}{p(z|o)}\\Big]$. In summary, we can get the lower bound of the intrinsic reward in our paper in this easier way. We will fix the constructing of the lower bound of the intrinsic reward in the revised version.\n\nThanks again for your inspiration and great suggestions!\n\n", " Thanks for answers to my feedback. The most of concerns are solved, but the derivation of Intrinsic reward isn't still clear.\n\n1. The value $\\mathbb{E}_{p(a,z|o)}[\\log \\frac{1}{p(z|o)}]$ is expectation over the variables $a,z$ not $o$.\n\n \\begin{align}\n \\sum_{a'} \\sum_{z'} \\frac{p(a',o,z')}{p(o)}\\log \\frac{1}{p(z'|o)}\n \\end{align}\n\n2. When the authors find the lower bound on $H[a|s]$ in Theorem 1(Appendix A), the authors should take expectation over the $s$. 
Then the lower bound will become\n\n \\begin{align}\n \\mathbb{E}_{p(a,z,s)}\\Big[\\log\\frac{p(z|a,s)}{p(a,z|s)}\\Big]\n \\end{align}\n", " These two tables show the extended ablation experiment of the skill representation mechanism where HSL no skill repr is the same in ablation study in the main body and HSL skill-id replaces the latent skill variables in the skill selector with skill-ids. We can observe that HSL with latent skill variables in the skill selector achieves the best performance among all scenarios. This shows that latent skill variables play an important role in the skill selector because these variables contain information of the environmental model mentioned in A1. HSL skill-id outperforms HSL no skill repr because skill-ids are orthogonal to each other. Orthogonality ensures the distinguishability of the skill-id, which is useful for skill selection. HSL no skill repr uses raw states in the skill selector and does not includes any distinguishable features for skills. Therefore, it gets the lowest win rates. The lack of information about the environment model makes the quality of skill selection in HSL HSL skill-id is lower than that of HSL skill-id. This is the reason for the performance gap between these two methods.\n\n[1] David Barber Felix Agakov. The im algorithm: a variational approach to information maximization. Advances in Neural Information Processing Systems, 16:201, 2004.", " Now we talk about how to eliminate term $ -H[z|o]+\\mathbb{E}_\\{p(a,z|o)\\} [\\log \\frac{1}{p(z|o)}]$.\n\nFor $H[z|o]$, we get:\n\\begin{aligned}\nH[z|o]=&\\sum_\\{o'\\} p(o')H[z|o=o']\\\\\\\\\n=&\\sum_\\{o'\\}p(o')\\sum_\\{z'\\} p(z'|o') \\log \\frac{1}{p(z'|o')}\\\\\\\\\n=&\\sum_\\{o'\\}\\sum_\\{z'\\}p(o',z')\\log \\frac{1}{p(z'|o')}\\\\\\\\\n=&\\sum_\\{a'\\}p(a')\\sum_\\{o'\\}\\sum_\\{z'\\}p(o',z')\\log \\frac{1}{p(z'|o')}\\\\\\\\\n=&\\sum_\\{a'\\}\\sum_\\{o'\\}\\sum_\\{z'\\}p(a',o',z')\\log \\frac{1}{p(z'|o')}\n\\end{aligned}\n\nFor $\\mathbb{E}_\\{p(a,z|o)\\} [\\log \\frac{1}{p(z|o)}]$, we get:\n\\begin{aligned}\n\\mathbb{E}_\\{p(a,z|o)\\} [\\log \\frac{1}{p(z|o)}]=&&\\sum_\\{a'\\}\\sum_\\{o'\\}\\sum_\\{z'\\} \\frac{p(a',o',z')}{p(o')}\\log \\frac{1}{p(z'|o')}\n\\end{aligned}\n\nThen we can get:\n\\begin{aligned}\n0\\leq p(o') \\leq 1 \\Rightarrow & \\frac{1}{p(o')} \\geq 1 \\Rightarrow \\frac{p(a',o',z')}{p(o')} \\geq p(a',o',z') \\Rightarrow \\mathbb{E}_\\{p(a,z|o)\\} [\\log \\frac{1}{p(z|o)}] \\geq H[z|o]\\\\\\\\\n\\Rightarrow & -H[z|o]+\\mathbb{E}_\\{p(a,z|o)\\} [\\log \\frac{1}{p(z|o)}] \\geq 0\n\\end{aligned}\n\nIn summary, we can get $r^m \\geq H[p(z)]-\\mathbb{E}_\\{p(a|o)\\}[H[p(z|a,o)]]]+\\mathbb{E}_\\{p(z|o)\\}[H[p(a|z,o)]]$, which is the lower bound in our paper. We will fix the constructing of the lower bound of the intrinsic reward in the revised version.\n\n---\n\n**Q6:** Paragraphs on lines 58-70 should go to the relevant work section\n\n**A6:** This paragraph forms a connecting link between the preceding and the following. The previous paragraph introducing the skill learning problem, and then this part introduces some similar works. It explains how those works deal with the skill learning problem and points out their limitations. The possible room for improvement is also described, which provides preparation and groundwork for the method proposed in the following paragraphs. 
\n\nHowever, this paragraph involves many details of previous work, we will organize these details and put them into the related work section in the revised version.\n\n---\n\n**Q7:** There is no description how to compute the local $Q_i^v$ in Secion 3.2\n\n**A7:** The computation of the local $Q_i^v$ is shown in Figure 1 but omitted in Section 3.2. We explain it in detail here. As shown in Figure 1, the computation of Qiv is simple. Once we have latent observation variables of all agents and latent skill variables of all skills, we represent these two variables as matrices. The dimensions of matrices transformed from latent observation variables and latent skill variables are *(num_agents, num_latent_dim)* and *(num_skills, num_latent_dim)*, respectively. Then we perform transpose operation on the second matrix. Finally, we apply matrix multiplication of these two matrices and get the result $Q^v$ with the dimension of *(num_agents, num_skills)*. Row $i$ in $Q^v$ is the local $Q_i^v$ of agent $i$ and each element $j$ in $Q_i^v$ means the probability of choosing skill $j$.\n\n---\n\n**Q8:** The loss function for $q_{θ_z}$ is missing\n\n**A8:** In the implementation, we adopt variational inference to estimate $p(z|o,a)$ and learn the variational distribution $q_{θ_z}(z|o,a)$. The variational inference method is proposed in Agakov, 2004, which is not the main contribution of our paper. We could add these details into the appendix. $q_{θ_z}(z|o,a)$ is parameterized by a neural network and $θ_z$ are parameters of this neural network. This network takes observations and actions of agents as inputs and outputs the estimation of chosen skill $z$. We apply supervised training of this network, with the actual skill selected as the supervision. Therefore, the loss function of $q_{θ_z}$ is $MSE(z_\\{out\\}, z_\\{true\\})$, where MSE represents the MSE loss, $z_\\{out\\}$ is the output of the network and $z_\\{true\\}$ denotes the actual chosen skill $z$.\n\n---\n\n**Q9:** As previously explained, the reviewer believes that the skill-id is sufficient to be used as a latent skill variable. Could you provide the results of using skill-ids as skill varaibles instead of skill representation method\n\n**A9:** We have conducted experiments of replacing latent skill variables. Results are shown in the following table.\n| Scenario | 5m_vs_6m | MMM2 | 27m_vs_30m | Corridor | 3s5z_vs_3s6z | 6h_vs_8z |\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n| HSL no skill repr | 51% | 91% | 84% | 35% | 62% | 44% | \n| HSL skill-id | 67% | 92% | 93% | 54% | 73% | 61% | \n| HSL | 85% | 100% | 100% | 82% | 84% | 82% | \n\n| Scenario | 3_vs_1_with_keeper | Hard Counter-attack | Corner | GoBigger:3_vs_3 | GoBigger:3_vs_3 _with_thorn | \n| ---- | ---- | ---- | ---- | ---- | ---- | \n| HSL no skill repr | 42% | 28% | 17% | 0% | 0% | \n| HSL skill-id | 55% | 37% | 33% | 14% | 26% | \n| HSL | 70% | 65% | 56% | 38% | 55% | \n", " Thank you for the detailed and constructive comments. Below please find the responses to some specific comments.\n\n**Q1:** The reviewer does not understand why the skill representation mechanism is needed in the proposed framework\n\n**A1:** The skill representation mechanism essentially generates skill latent variable $z_{latent}$ which implies certain correlations among latent skill variables, reward function and state transition function through the training of the proposed auto-encoder structure. 
$z_{latent}$ is important for the distinguishable of agents for skill selection, which plays an indirect role in improving the discriminability for the skill learning. For the skill selector, $z_{latent}$ and the observation latent are both used as the input feature. Due to the implied correlation, $z_{latent}$could greatly improve the learning efficiency of the skill selector. In fact, we find that the skill selector works not well with only the observation latent.\n\n---\n\n**Q2:** Two decoders can be trained without the latent skill variable, since two decodes already take observation and actions. So I could not agree that the auto-encoder generates latent skill variables that reveal different effect of different skills\n\n**A2:** The reason of combining latent skill variables with observations and actions is that embedding information from the environment into latent skill variables. The encoder in the auto-encoder model transforms the one-hot variables of skill-ids into latent skill variables. However, only with latent skill variables extracted from the encoder, the skill selector cannot assign proper skill for agents because it does not know how latent skill variables related to rewards and the state transition procedure of the environment. In the implementation of the auto-encoder, we concatenate observations, actions and latent skill variables extracted from the encoder, which is then used as the input feature for the two decoders. We train the decoders with supervised reward and next observations from the environment, which embeds implicated relations among latent skill variables, reward function and state transition function into $z_{latent}$.\n\n---\n\n**Q3:** In equation (2), the cosine distance is used to distinguish latent skill variables, but I think the skill-id is already enough for distinguishability\n\n**A3:** As mentioned in A1 and A2, the skill selector requires not only the observation latent feature but also the skill latent feature. The skill latent feature can be replaced with the skill-id. The different skill-ids are orthogonal to each other, which ensures the distinguishability of these features. However, compared to the latent skill variables, the skill-id lacks the encoding of environmental information. This leads to the fact that the skill selector can only obtain the agents’ perception of the environment and cannot know the interaction between the skill policy and the environment. Therefore, the latent skill variables are very important for the skill selector and cannot be simply replaced by the skill-id. In our implementation, we found that if the cosine distance is not constrained, latent skill variables will tend to be similar with a certain probability during the training process. Thus, we add the constraint of the cosine distance in Equation (2) in order to ensure the distinguishability of latent skill variables.\n\n---\n\n**Q4:** The motivation of introducing $H[a|o,z]$ for intrinsic reward rm is not acceptable. The reviewer thinks that the effect of maximizing policy entorpy is just effective exploration not discrimability of skill\n\n**A4:** Thank the reviewer for pointing out this problem. In fact, the word “discrimability” is not used exactly in this paper. Diversity is more proper here. Our aim of introducing $H[a|o,z]$ is to encourage the diversity of skills, which incentivizes the skills to be as diverse as possible by learning skills that act as randomly as possible. 
This is consistent with the aim of effective exploration.\n\n---\n\n**Q5:** There is something wrong with constructing the lower bound of the intrinsic reward rm. In equation 4, two terms 1 and 2 are conditional entropy, but the author use these conditional entropy as just common shannon entropy.\n\n**A5:** Thank the reviewer for the detailed check. In fact, the lower bound for the intrinsic reward is correct. Some formulas are not precisely written, and here we give more explanations.\n\nTerm 1 and term 2 in Equation (4) are conditional entropy and we cannot simply eliminate these terms. After careful formula derivation, the correct method for constructing the lower bound of the intrinsic reward can be described as follows:\n\n\\begin{aligned}\n r^m=&I(z;o)+I(a;z|o)+H[a|z,o] \\\\\\\\\n =&(H[z]-H[z|o])+(H[a|o]-H[a|z,o])+H[a|z,o] \\\\\\\\\n =&H[z]-H[z|o]+H[a|o] \\\\\\\\\n \\geq&H[p(z)]-\\mathbb{E}_\\{p(a|o)\\}[H[p(z|a,o)]]+\\mathbb{E}_\\{p(z|o)\\}[H[p(a|z,o)]]-H[z|o]+ \\mathbb E_\\{p(a,z|o)\\}[\\log \\frac{1}{p(z|o)}]\n\\end{aligned}\n", " **Q5:** Appendix A: Eq. (12) is different from eq. (4). If s can be replaced with $o$, but what is $x$? Where is the 3rd term of eq. (4)? In Appendix Table 2, $n_{skill}$ of Hard Counter-attack in GRF was 3, but in Appendix C.1 Skill Demonstration on GRF, the authors trained HSL with 4 skills. Which is correct?\n\n**A5:** Thanks very much for your constructive comments. Equation (12) in Appendix A is not consist with Equation (4) in the main body because Equation (12) is a previous wrong version. Equation (12) in Appendix A is a more general case. We replace partially observation features o in Equation (4) with state features s. But x in the second element of the first term is wrong. The correct one is state features $s$. Missing the third term in Equation (4) is also wrong. The correct Equation (12) includes the third term. The correct value of $n_{skill}$ of Hard Counter-attack in GRF is 4. This scenario is harder than the *3_vs_1_with_keeper* scenario. Therefore, Hard Counter-attack scenario requires more skills than *3_vs_1_with_keeper* scenario does. We will correct these issues in the revised version.\n\n---\n\n**Q6:** The limitations and potential negative societal impact were described in Appendices D and E, respectively. However, the latter was not concrete (“Therefore, we do not anticipate a direct negative outcome. In practical applications involving our method, potentially negative outcomes might occur.”).\n\n**A6:** Here we add some examples to show potentially negative outcomes of our method. The first is related to human oversight of the MARL system. The Ethics Guidelines for Trustworthy AI report published in 2019 by the European Commission’s High Level Expert Group on AI states AI systems have to allow for human oversight in order to support human autonomy and decision-making. However, the data flow in a DRL system is acting upon may be incomprehensible to humans, or simply too large and fast-moving for meaningful oversight to be maintained. As for MARL methods, understanding and intervening at the level of a single agents is a more difficult problem, which poses additional challenges for oversight. The other is MARL agents apply trial-and-error approach to explore the environment to discover actions that lead to the highest reward over time. However, this is unacceptable in many real-world contexts. 
For instance, we cannot have self-driving cars running over pedestrians, or an energy control system accidentally switching off electricity in a hospital, before learning not to do these things. ", " Thank you very much for your detailed and constructive comments. Below please find the responses to some specific comments.\n\n**Q1:** L45- “However, extra introduced networks for all agents hinder the application of CDS on large-scale tasks, which will be illustrated in the experiment.” The proposed HSL requires extra networks to obtain the skill diversity. Since HSL outperformed CDS in MARL performance, the problem of CDS can be mentioned differently\n\n**A1:** Our approach does require extra network which is responsible for skill discovery. But the extra network in CDS is not the same as the extra network in our method. The extra network in CDS refers to the policy learning network. Previous MARL methods introduce the parameter sharing mechanism to reduce the policy search space. The parameter sharing mechanism uses only one policy network for all agents. However, this causes the problem of learning similar policies for different agents. In order to alleviate this problem, CDS adds an extra policy learning network for each agent. Note that the parameter sharing mechanism is not applied to extra networks. The final policy of each agent is the weighted sum of the outputs of agent’s two policy networks. For large-scale tasks, the number of extra networks added by CDS increases linearly with the number of agents, which will greatly degrade the performance of CDS. In the method HSL, we apply the parameter sharing mechanism both in the skill selector and the skill policy learning model. The number of neural networks will not increase with the number of agents, which is the reason that HSL performs better than CDS. Besides, we will include some of these details and make this part clearer in the revised version.\n\n---\n\n**Q2:** L248- In association with the above, “Still, the reason is that the number of agents in this scenario is much higher than that in the other two scenarios, which leads to an adverse effect on training extra networks in CDS”. I can understand the former (“the number of agents in this scenario is much higher than that in the other two scenarios”), but the latter (“which leads to an adverse effect on training extra networks in CDS”) was unclear. What did the authors try to mention?\n\n**A2:** As mentioned in A1, the parameter sharing mechanism is often used in MARL methods. In order to address the problem of learning similar policy for agents in the parameter sharing mechanism, CDS adds an extra policy network for each agent to learn heterogeneous policies. The policy network of each agent in CDS is consists of the shared policy network and the individual policy network. The number of individual policy networks need to be trained equals to that of agents in multi-agent tasks. Suppose a large-scale task contains $N$ agents, the typical MARL method QMIX trains only one shared policy network while CDS needs to train $N+1$ policy network. Obviously, the training efficiency of CDS is much lower than that of QMIX, which is called an adverse effect on training extra networks in CDS. \n\n---\n\n**Q3:** Figure 1: How did the authors create the Skill_ID pool? I cannot find this. In particular, I want to know whether this is created in a rule-based or data-driven manner\n\n**A3:** The Skill_ID pool contains one-hot vectors of skill-ids. The creation of the Skill_ID pool is simple. 
For example, if a total of 4 skills are learned, we first set up the skill-id list $\\\\{1,2,3,4\\\\}$. Then we encode each element in the skill-id list into one-hot vectors and we get $\\\\{0001,0010,0100,1000\\\\}$. Finally, we add these one-hot vectors into the Skill_ID pool. One of the features of our HSL is the automatic skill learning. Therefore, we cannot use prior information and expert knowledge to construct skill features or create the Skill_ID pool. We just set up simple one-hot vectors of skill-ids and use these vectors as skill features to learn skill policies automatically. The more these skill features differ from each other, the better. Therefore, the orthogonal one-hot vectors of skill-ids is a very appropriate choice. The creation of the Skill_ID pool is a simple rule-based manner, and the rule is to ensure orthogonality between features in the Skill_ID pool as much as possible.\n\n---\n\n**Q4:** L142- “This procedure is essentially a many-to-many assignment problem”. However, I cannot find how to solve this problem. How did the authors solve this problem?\n\n**A4:** Thanks very much for your constructive comments. We think that we are not using the accurate words. The assignment problem you are referring to should be a particular case of transportation problem where the objective is to assign a number of resources to an equal number of activities so as to minimize total cost or maximize total profit of allocation. The problem mentioned in L142 is that skill selector chooses proper skills for different agent at the same time. However, the statement in L142 is not strict. We will use more appropriate word in the revised version.\n", " **Q5:** the differences between the scenarios in each multi-agent task are not clear, as only some arbitrary naming is used.\n\n**A5:** Here we give more explanations for the differences on all used scenarios.\n\nWe choose six maps in SMAC, i.e., *MMM2, 3s5z_vs_3s6z, 5m_vs_6m, 27m_vs_30m, corridor* and *6h_vs_8z*. The first two maps are heterogeneous maps with different agents. The rest four maps are homogeneous maps with same agents. It is important to learn heterogeneous policies to beat enemies in these maps.\n\nFor the GRF environment, we select three scenarios which are *3_vs_1_with_keeper, Hard_counter_attack* and *Corner*. Agents in these scenarios are homogeneous, resulting in similar observations and action spaces. The scenario *3_vs_1_with_keeper* contains three players and *Hard_counter_attack* is a more difficult scenario with four players to be controlled. *Corner* scenario contains all players in a football match. \n\nIn GoBigger environment, we choose *3_vs_3* and *3_vs_3_with_thorn* for experiments. These two scenarios are very similar, except that the latter has four thorn balls. The existence of thorn balls brings greater uncertainty to the scenario and increases the difficulty for policy learning in MARL methods. \n\n---\n\n**Q6:** it is worth to make examples of skills and actions as the way the paper discusses them they appear to be the same.\n\n**A6:** The skill is the conditioned policy $\\pi(A|S,Z)$. The action generated from the skill is based on both the agent’s observation and its selected skill feature. The action generated from the vanilla policy is only based on the agent’s observation. The skill is more like a mask on original action space when generating skill policies. For example, agents can run in four directions and attack one of enemies in SMAC. 
The running skill works like a mask, the four directions of running are optional in this skill, while attacking one of the enemies is not optional. For the attacking skill, attacking one of the enemies is optional while the four directions of running are not optional. If the appropriate skill assignment can be found, we can achieve efficient policy learning because the policy searching space is reduced. Moreover, another benefit of the skill is the ease of learning heterogeneous policies. \n\n---\n\n**Q7:** It is worth to discuss how the framework could scale to more complex real-world scenarios\n\n**A7:** We consider the scale our framework to a real-world multi-agent autonomous driving task. In this task, each agent needs to not only drive the vehicle, but also interact with vehicles controlled by other agents. Interact between vehicles includes changing lanes, merging and overtaking other vehicles. Therefore, skills can be clearly defined according to the kind of interaction between the agents. Initial training is performed on a simulator, which has many rules and available data. Therefore, to speed up the training process and improve robustness, imitation learning can be used to quickly learn skill policies for different scenarios. When the training on the simulator converges, the learned policies learned need to be migrated to the real-world environment using the sim2real approach. Methods such as domain transformation and transfer learning can be used. Finally, the completed training algorithm framework along with the rule system is deployed to the real-world environment and the fine-tuning technique of our framework is performed on a real-world multi-agent autonomous driving task to finally achieve the desired results.\n\n---\n\n**Q8:** The section about the impact discusses that overall the method itself should not have a direct negative outcome, however notes that in practical applications they could occur. Some examples and ways to address them could be discussed\n\n**A8:** Here we give two examples to show the potential negative outcome of our method in practical applications. \n\nThe first is related to the human oversight of a MARL system. Trusty AI systems must allow for human oversight to support human autonomy and decision-making. However, MARL applications may pose challenges to human oversight because these applications aim to increase the autonomy of machines. For example, a MARL system for monitoring and adjusting energy usage in a building may be constantly making so many small decisions. These decisions are difficult for a human to review and change decisions after the fact. One way to address this problem is to impose constraints while the system is being designed such as waning a human if levels go beyond a certain threshold.\n\nThe second is related to the security of a MARL system. For example, even a demonstrably safe MARL-based multi robot system could be forced into dangerous collision scenarios by perturbing its sensory input or disrupting its reward function. Possible ways to address is to oversee the transparency of the training data and the reward function or to develop safe multi-agent reinforcement learning methods. \n", " We sincerely thank you for your time and efforts. 
Below please find the responses to some specific comments.\n\n**Q1:** While the paper shows an interesting direction, the tasks are relatively constrained making the significance of the proposed framework more limited.\n\n**A1:** The tasks in this paper are representative benchmarks of partially observable multi-agent games in MARL methods, for example, in QMIX, RODE, CDS etc. Most work in this research area takes those tasks to conduct experiments. In fact, those tasks are not easy to be tackled. Below we give some descriptions on those tasks. \n\nAll scenarios in SMAC environment are gradually conquered by MARL algorithm in recent years. Therefore, we introduce more complex environments which are GRF and GoBigger. The difficulty of both these two environments lies in the randomness of the opponent's policy which is built-in strong rules. The strong randomness policies pose a huge challenge for MARL algorithms to learn effective policies. Our algorithm achieves STOA performance on all three environments, which indicates that our algorithm has made great progress compared to other MARL algorithms in this field.\n\nAnother very popular and skill-related MARL research field is multi robot control. However, the action spaces of the agents in the robot environment are continuous, while those in the problem we are working on are discrete action spaces. Our work tries to learn skill policies on discrete action spaces, which is different from the robot control environment.\n\n---\n\n**Q2:** Indeed, as noted also in the limitations, the framework does not appear to be able to learn appropriate policies when more skills are present.\n\n**A2:** As long as the number of skills does not exceed a threshold, our proposed framework can still learn appropriate policies when the number of skills increases. This is proven in experimental results in Figure 5 in Appendix. Our framework is organized as a bi-level learning structure. The policy searching space becomes huge as the number of skills increases. All MARL methods will counter such problem when the task is too complex. Further, increasing the scalability of a MARL method on more complex tasks is always one of the common research directions. It would also a good direction to extend our method for other more complex tasks.\n\n---\n\n**Q3:** In addition, the paper could discuss not only the literature on MARL, but also classic methods of heterogeneous multi-agent task allocation\n\n**A3:** Typical multi-agent task allocation assigns the decomposed subtasks to the agents and arranges the execution order of the subtasks properly according to the priority or constraint relationship of the subtasks. Typical heterogeneous multi-agent task allocation must first split the task into subtasks and then pre-define the priority and the costs of performing tasks and communication. Different from the typical task allocation problem, multi-agent tasks in our paper can be modeled as a partially observable Markov decision process (POMDP). There is no priority and the costs of performing tasks and communication in POMDP. Therefore, we design skill learning method to automatically discover skills without any pre-definition. In the skill selector, the only information we know is the latent skill variables extracted from the skill representation mechanism. It is obviously different from the typical task allocation problem. 
Therefore, we cannot apply classic methods of heterogeneous multi-agent task allocation on the skill selection procedure.\n\n---\n\n**Q4:** the methods compared appear to be from the same group; it is worth to include and/or discuss the choice for the comparison. For example, MAVEN could be used as comparison.\n\n**A4:** The methods compared in experiments are selected from different aspects of the MARL research. The first group includes value decomposition methods such as QMIX and QPlex. These methods designs a suitable model structure and efficient training methods. The second group contains role-based methods such as ROMA and RODE. The idea of these methods is that a MA task can be divided into several sub-tasks. They introduce the concept of role to tackle these sub-tasks, which improves the efficiency of solving the total task. Methods in the third group aim to learn diversity policies. CDS designs a diversity-based approach and HSD introduces a simple skill mechanism. MAVEN, belonging to the third group, encourages effective exploration by learning a hierarchical policy to condition agents’ policies. Our proposed method belongs to the third group as well. Our method designs effective skill discovery mechanism and skill policy learning mechanism and outperforms HSD.\n", " The paper proposes a framework for multi-agent reinforcement learning with the goal of selecting heterogeneous behaviors and allocating them to agents so that an optimal policy can be achieved. The proposed framework represents skills as latent variables which are used to assign skills so that in the end agents can learn heterogeneous policies. The proposed framework is tested on three different multi-agent tasks and compared with other methods. In addition, an ablation study is included. The paper appears to inspiration from a few other papers that are cited, including [4] and [32], but the view taken by the paper in identifying skills to improve MARL is significant to potentially achieve cooperative tasks. The paper is also overall well written, providing the intuition on the main choices made for the proposed framework and presenting sound technical details. It is appreciated also the experiments in three different problems and the evaluation with other methods, as well as the ablation study.\n\nWhile the paper shows an interesting direction, the tasks are relatively constrained making the significance of the proposed framework more limited. Indeed, as noted also in the limitations, the framework does not appear to be able to learn appropriate policies when more skills are present.\n\nIn addition, the paper could discuss not only the literature on MARL, but also classic methods of heterogeneous multi-agent task allocation, which is currently missing, including\n- Wu, H., Ghadami, A., Bayrak, A. E., Smereka, J. M., & Epureanu, B. I. (2021). Impact of Heterogeneity and Risk Aversion on Task Allocation in Multi-Agent Teams. IEEE Robotics and Automation Letters, 6(4), 7065-7072.\n- Korsah, G. A., Stentz, A., & Dias, M. B. (2013). A comprehensive taxonomy for multi-robot task allocation. The International Journal of Robotics Research, 32(12), 1495-1512.\n- Emam, Y., Mayya, S., Notomista, G., Bohannon, A., & Egerstedt, M. (2020, May). Adaptive task allocation for heterogeneous multi-robot teams with evolving and unknown robot capabilities. In 2020 IEEE International Conference on Robotics and Automation (ICRA) (pp. 7719-7725). IEEE.\n- Schillinger, P., Bürger, M., & Dimarogonas, D. V. (2018). 
Simultaneous task allocation and planning for temporal logic goals in heterogeneous multi-robot systems. The international journal of robotics research, 37(7), 818-838. In addition to commenting to the two points raised in \"Strengths and weaknesses\" -- i.e., the non direct applicability to complex scenarios and the comparison with non-MARL classic methods:\n\nA few details that could make the paper clearer:\n- the methods compared appear to be from the same group; it is worth to include and/or discuss the choice for the comparison. For example, MAVEN could be used as comparison.\n- the differences between the scenarios in each multi-agent task are not clear, as only some arbitrary naming is used. \n- it is worth to make examples of skills and actions as the way the paper discusses them they appear to be the same.\n\nThere are a few language problems, including:\n- \"attracts widely attention\" -> \"attracts wide attention\"\n- space after ; -- e.g., \"are represented;(2)\" -> \"are represented; (2)\"\n- \"dispose of\" -> \"consider\" \n- \"With the representation\" -> \"With this representation\"\n- \"one-hot\" -> \"one-shot\"\n- \"should explore differently and access different states of the environment.\" There is different repeated, and \"explore differently\" could be more specific.\n- \"described as follows Equation (10).\" -> \"described as follows.\"\n- \"comparing results\" -> \"comparative results\"\n- \"Another notification\" -> \"Another note\"\n- \"sparce\" -> \"sparse\" The appendix includes a section where the main limitations are discussed and providing some some potential venues of future work. It is worth to discuss how the framework could scale to more complex real-world scenarios.\n\nThe section about the impact discusses that overall the method itself should not have a direct negative outcome, however notes that in practical applications they could occur. Some examples and ways to address them could be discussed.", " The authors suggest a skill-based multi-agent learning algorithm that generates diverse skills for finding heterogeneous behaviours. In the proposed mehtod, there are three mechanisms for capturing heterogeneous skills: skill representation, skil selector and skill-based policy learning. Experiment results show that the proposed method outperformed than other multi-agent methods on three multi-agent RL benchmarks.\n Strenghts\n \n 1. The proposed method (HSL) suggests two types of intrinsic rewards for skill selector and policy learning to enable effective exploration on both skill selection and heterogeneous behaviour.\n\n 2. Experiment results show the significant improvements on several benchmarks.\n\nWeakenesses\n 1. The reviewer does not understand why the skill representation mechanism is needed in the proposed framework. \n 1) Two decoders can be trained without the latent skill varialbe, since two decodes already take observation and actions. So I could not agree that the auto-encoder generates latent skill variables that reveal different effect of different skills.\n 2) In equation (2), the cosine distance is used to distinguish latent skill varialbes, but I think the skill-id is already enough for distinguishability. \n \n 2. The motivation of introducing $H[a|o,z]$ for intrinsic reward $r_m$ is not acceptable. The reviewer thinks that the effect of maximizing policy entorpy is just effective exploration not discrimability of skill.\n\n 3. There is something wrong with constructing the lower bound of the intrinsic reward $r_m$. 
\n\n In equation 4, terms 1 and 2 are conditional entropies, but the authors use these conditional entropies as if they were plain Shannon entropies.\n 1) $H[a|z, o] = \\mathbb{E}_{p(z|o)}[H(p(a|z,o))] \\neq H(p(a|z,o))$ is the conditional entropy of random variable $a$ given $z$ and $o$.\n 2) $H[z|o] =\\mathbb{E}_{p(o)}[H(p(z|o))] \\neq H(p(z|o)) $ is the conditional entropy of random variable $z$ given $o$ 1. Paragraphs on lines 58-70 should go to the related work section.\n\n 2. There is no description of how to compute the local $Q_i^v$ in Section 3.2\n\n 3. The loss function for $q_{\\theta_z}$ is missing.\n\n 4. As previously explained, the reviewer believes that the skill-id is sufficient to be used as a latent skill variable. Could you provide the results of using skill-ids as skill variables instead of the skill representation method? yes", " The authors proposed a skill-based MARL framework to enable agents to master diverse skills. The framework consists of the skill representation mechanism with an auto-encoder model, the skill selector to realize the assignment from agents to skills, and the skill-based policy learning mechanism with a mutual information-based intrinsic reward function. Experimental results show that the framework obtains the best performance on three challenging benchmarks, i.e., StarCraft II micromanagement tasks, Google Research Football and GoBigger, over state-of-the-art MARL methods.\n The strengths of the paper are as follows:\n* Novelty: The authors proposed the skill representation mechanism with an auto-encoder model, the skill selector to realize the assignment from agents to skills, and the skill-based policy learning mechanism with a mutual information-based intrinsic reward function. \n* Clear results: Experimental results show that the framework obtains the best performance on three challenging benchmarks, i.e., StarCraft II micromanagement tasks, Google Research Football and GoBigger, over state-of-the-art MARL methods.\n\nThe weaknesses of the paper are as follows (please also see “Questions” below):\n* The presentation was sometimes inconsistent.\n* The method descriptions were sometimes unclear. \n * L45- “However, extra introduced networks for all agents hinder the application of CDS on large-scale tasks, which will be illustrated in the experiment.” The proposed HSL also requires extra networks to obtain skill diversity. Since HSL outperformed CDS in MARL performance, the problem of CDS could be stated differently. \n* L248- In association with the above, “Still, the reason is that the number of agents in this scenario is much higher than that in the other two scenarios, which leads to an adverse effect on training extra networks in CDS”. I can understand the former (“the number of agents in this scenario is much higher than that in the other two scenarios”), but the latter (“which leads to an adverse effect on training extra networks in CDS”) was unclear. What did the authors mean here? \n* Figure 1: How did the authors create the Skill_ID pool? I cannot find this. In particular, I want to know whether this is created in a rule-based or data-driven manner. \n* L142- “This procedure is essentially a many-to-many assignment problem”. However, I cannot find how this problem is solved. How did the authors solve this problem? \n* Appendix A: Eq. (12) is different from eq. (4). I see that s can be replaced with o, but what is x? Where is the 3rd term of eq. 
(4)?\nIn Appendix Table 2, n_skill of Hard Counter-attack in GRF was 3, but in Appendix C.1 Skill Demonstration on GRF, the authors trained HSL with 4 skills. Which is correct? \n The limitations and potential negative societal impact were described in Appendices D and E, respectively. However, the latter was not concrete (“Therefore, we do not anticipate a direct negative outcome. In practical applications involving our method, potentially negative outcomes might occur.”)." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "8ahvVDHBfa-", "g00EqJSGE7", "8ahvVDHBfa-", "p2Z3N-zntt", "p2Z3N-zntt", "J0W62fywyJc", "EPlkwVwUcmz", "bZehBivUbfE", "O7IxswKA8zs", "Pvt8gSVFZ-", "dNOBm-1eKFv", "nips_2022_5VCT-DptDTs", "nips_2022_5VCT-DptDTs", "nips_2022_5VCT-DptDTs" ]
nips_2022_RF5Lb6NaZp
End-to-End Learning to Index and Search in Large Output Spaces
Extreme multi-label classification (XMC) is a popular framework for solving many real-world problems that require accurate prediction from a very large number of potential output choices. A popular approach for dealing with the large label space is to arrange the labels into a shallow tree-based index and then learn an ML model to efficiently search this index via beam search. Existing methods initialize the tree index by clustering the label space into a few mutually exclusive clusters based on pre-defined features and keep it fixed throughout the training procedure. This approach results in a sub-optimal indexing structure over the label space and limits the search performance to the quality of choices made during the initialization of the index. In this paper, we propose a novel method ELIAS which relaxes the tree-based index to a specialized weighted graph-based index which is learned end-to-end with the final task objective. More specifically, ELIAS models the discrete cluster-to-label assignments in the existing tree-based index as soft learnable parameters that are learned jointly with the rest of the ML model. ELIAS achieves state-of-the-art performance on several large-scale extreme classification benchmarks with millions of labels. In particular, ELIAS can be up to 2.5% better at precision@$1$ and up to 4% better at recall@$100$ than existing XMC methods. A PyTorch implementation of ELIAS along with other resources is available at https://github.com/nilesh2797/ELIAS.
Accept
The paper considers extreme multilabel classification (XMC) and proposes a two-stage retrieval and classification model which replaces the usual initial hard-partitioning with a soft learnable partitioning. The reviewers concur that the end-to-end methodology for jointly training the representation, indexing, classification parameters is novel and leads to notable improvements over the performance of current SOTA XMC methods.
train
[ "OCtOJ1Y0_6", "V1ibGOOn9DH", "MDJHNo2vuLm", "6P9ZIFPYiTlM", "MXXeohPFqN", "1YDUkPQCmKQ", "d-W76dvQ7X0", "xcnxoZ0hXn_", "dtmHfgmFhBL", "OzhxGuCHy_", "SiPq9kAfKVS", "LUi3e6XOQo", "TmFn_GKaGA_", "ArBzhqlbSrd" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We're happy to know that our response helped in addressing your concerns. Thanks again for the helpful feedback and upgrading the score!", " The author-rebuttal phase closes today. Please acknowledge the author rebuttal and state if your position has changed. Thanks!", " Thanks for the authors' response and it has addressed most of my concerns. I'll raise my score accordingly.", " Many thanks for your appreciative comments and for upgrading your score! Please find below our response to the follow-up questions\n\n> So you trained a separate classifier for every label, right? \n\nYes\n\n> How did you mine negative examples for a label? Did you use all datapoints $x_i \\in X_{train}$ such that $y_{i, l} = 0$ as negatives when training classifier for label $\\ell$ or did you do some form of subsampling?\n\nYou're correct, all training points which are not positive get counted as the negative, there is no subsampling performed.\n\n> Is the following a correct description of the classifier? First encode an instance $x$ using BERT and pass the encoded representation through a linear classifier corresponding to label $\\ell$ to classify as 0/1 wrt label $\\ell$. Did you train OvA classifiers by stacking a linear layer with $\\lvert \\mathcal{L} \\rvert$-dim output on top of BERT encoder and using sigmoid over each output unit?\n\nThat's correct, the whole Bert-OvA model is BERT encoder followed by a $d \\times \\lvert \\mathcal{L} \\rvert$ classifier matrix where each column represents a label $\\ell \\in \\mathcal{L}$. Forward pass of this model will generate $\\lvert \\mathcal{L} \\rvert$ outputs which are passed through sigmoid function to produce logits.\n\n> Are BERT model parameters frozen or trained together with linear classifier parameters? If they are trained together, then I am a little surprised that it was possible to train the model in 18 hours! (unless there is some sort of subsampling involved or BERT model parameters are frozen)\n\nBERT parameters are trained together with the linear classifier parameters. Basic operations like matrix multiplication are very heavily parallelized on GPU implementations, because of this even brute-force matrix multiplication on a label space of 670K labels is manageable. On moderately sized datasets ($\\sim$500K), the computational cost of the BERT encoder makes it hard to observe a stark difference in the training times between the brute-force baseline and any subsampling-based XMC method (which uses BERT). Although, as the label space grows ($>$1M) the difference grows larger and larger because then the cost of brute-force matrix multiplication starts to heavily outweigh the constant cost of the BERT encoder.\n\n> Also, what does L323 mean when you say \"We follow the same training procedures as ELIAS for this baseline\"?\n\nWe meant to say that we keep the same training setup as ELIAS's code (i.e. AdamW optimizer, mixed precision training, etc) for implementing this baseline\n\n> ... How were the dense features obtained here? Did you use some pretrained BERT model or did you use a trained model from previous papers such X-Transformer?\n\nWe experimented with two options here 1) obtain clusters using pre-trained BERT and train stage 1 model, 2) obtain clusters using pre-trained BERT and train stage 1 model for a few epochs, then recompute clusters based on current BERT embeddings and continue stage 1 training for the remaining epochs. 
On Wikipedia-500K and Amazon-3M, we don't observe any significant difference in final accuracy, on Amazon-670K the second approach gives slightly better results.\n\n> Would it be reasonable to train a model with sparse+dense features on the final index obtained while keeping dense model fixed and the index fixed? Or will there be some complications due to labels belonging to multiple clusters?\n\nWe believe it should be possible to train such a model and in some sense, ELIAS's sparse re-ranker is trying to do something similar but it's training only the leaf layer. Training each layer one at a time seems reasonable although how to assign training points to cluster nodes is not very straightforward.", " Thank you for the clarification!! This is really great work!! I updated my review rating!\n\nRe: Brute-force OvA baseline:\nSo you trained a separate classifier for every label, right? How did you mine negative examples for a label? Did you use all datapoints $x_i \\in X_{train}$ such that $y_{i, \\ell} = 0$ as negatives when training classifier for label $\\ell$ or did you do some form of subsampling?\n\nIs the following a correct description of the classifier?\nFirst encode an instance $x$ using BERT and pass the encoded representation through a linear classifier corresponding to label $\\ell$ to classify $x$ as 0/1 wrt label $\\ell$. \n\nDid you train OvA classifiers by stacking a linear layer with $|\\mathcal{L}|$-dim output on top of BERT encoder and using sigmoid over each output unit? \n\nAre BERT model parameters frozen or trained together with linear classifier parameters? If they are trained together, then I am a little surprised that it was possible to train the model in 18 hours! (unless there is some sort of subsampling involved or BERT model parameters are frozen).\nAlso, what does L323 mean when you say \"We follow the same training procedures as ELIAS for this baseline\"? \n\n\nRe: Using Dense + Sparse Features\n\nI understand how it might not be possible to perform end-to-end learning with dense+sparse features using existing deep learning frameworks. I was reading through the supplement and realized that dense+sparse features were used for clustering **before** stage 1 training. \nHow were the dense features obtained here? Did you use some pretrained BERT model or did you use a trained model from previous papers such X-Transformer?\n\nJust curious (no need to run this experiment for the rebuttal), instead of training a separate re-ranker using dense+sparse features, would it be reasonable to train a model with sparse+dense features on the final index obtained while keeping dense model fixed and the index fixed? Or will there be some complications due to labels belonging to multiple clusters?\n\n\n\n ", " Yes, we do plan to put results from all the additional experiments and a summary of the rebuttal discussion in the main paper/appendix. \n\n(1) Following other methods like AttentionXML and LightXML, we create the validation set by randomly sampling 5000 points from the training set, this is usually $\\le$ 1% of the training data for most of the large extreme classification datasets (Amazon-670K, Wikipedia-500K, Amazon-3M). You're correct that the validation set is more likely to have head labels but so is the whole extreme classification dataset since the random sample represents a smaller unbiased version of the full dataset. 
Yes, that's definitely a possibility that the validation split completely removes extremely tail labels with 1 or 2 data points from the training set but usually, these labels don't contribute to the final prediction accuracy anyway since label classifiers learned on just 1 or 2 training points is often not of good quality and tends to overfit to those particular training points. One way to overcome the problem of information loss in the validation set is to re-learn the model on the full training set after we have validated the hyperparameters on the split training and validation set. In practice, this gives very minor improvements for the extra computation cost it incurs.\n\n(2) The Bert-OvA model took $\\sim$18 hours to train on Amazon-670K dataset\n\n(3) Table 16 below reports the contribution to P@5 of each decile on Amazon-670K dataset. Sparse re-ranker improves performance in almost all deciles. As you mentioned, we believe a major factor could be the truncated text used in deep encoder but there are other factors which might favor the addition of sparse re-ranker for e.g. a) sparse classifiers have a much bigger parameter space than dense classifiers, b) since sparse re-ranker only ranks the top 100 labels generated by ELIAS, it's addition acts in a similar way as boosting where sparse re-ranker's job is to correct some of the mistakes made in the top 100. Thanks for the suggestions on using Longformer and Big-Bird, we agree with you that there is a need to investigate efficient encoders for large-scale classification settings because even with the truncated text of length 128 the computational cost of BERT models ($T_{\\text{bert}}$) is significantly high which makes them hard to use in practical settings and becomes one of the major computational bottleneck in any state of the art method.\n\n*Table 16. ELIAS-1 vs ELIAS-1 + sparse re-ranker decilewise contribution to P@5 on Amazon-670K dataset*\n| Method | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | P@5 |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| ELIAS-1 | 7.35 | 6.57 | 5.44 | 4.48 | 3.73 | 3.10 | 2.59 | 2.33 | 1.74 | 2.72 | 40.04 |\n| ELIAS-1 + sparse re-ranker | 7.75 | 6.80 | 5.52 | 4.51 | 3.70 | 3.05 | 2.60 | 2.35 | 1.82 | 3.16 | 41.27 |\n\nWe're happy to discuss any further questions/clarifications needed", " Thank you for running these additional experiments and analysis! I hope that you will be able to add this analysis to the main paper or to the appendix! \nI have a few minor lingering questions\n1) It is still not clear to me how the validation data was created. Were labels for validation data sampled uniformly at random? If so, then validation data is more likely to have head labels (i.e. labels with high frequency). Also, it might be possible that a rare label which occurred only once in training data gets moved to validation split. Since the proposed approach can not handle new labels at test time, it will fail for such rare labels that get moved to validation split. I would appreciate some more clarification on this.\n2) How long did it take to train Brute-force OvA baseline?\n3) In order to better understand the contribution of sparse re-ranker towards overall performance, it might be helpful to breakdown performance of ELIAS and ELIAS++ based on label frequency. 
If ELIAS++ consistently improves over ELIAS across all label frequency buckets, then it might suggest that the reason behind ELIAS++ being better than ELIAS is simply because of deep encoder used in ELIAS receiving truncated text while ELIAS++ used the entire text (by converting them to sparse features). \nAlthough clearly beyond the scope of this paper, it might be useful to investigate models such as Longformer [1] and Big-Bird[2] which can encode long documents more efficiently than BERT, thus potentially avoiding the need of any sparse re-ranking model.\n\n[1] Beltagy, Iz, Matthew E. Peters, and Arman Cohan. \"Longformer: The long-document transformer.\" arXiv preprint arXiv:2004.05150 (2020).\n\n[2] Zaheer, Manzil, et al. \"Big bird: Transformers for longer sequences.\" Advances in Neural Information Processing Systems 33 (2020): 17283-17297.", " > **Brute-force OvA details**\n\nYes, Bert-OvA baseline is BERT encoder followed by a linear classification layer with L outputs. It is unlikely that this approach suffers from the same optimization challenges as ELIAS since it doesn’t have any moving assignments i.e. the training feedback that the model gets is always consistent because we always know what are the right labels for a given training point. In ELIAS the major challenge is that since there’s no unique path from the root to a particular label $l$, we don’t have this explicit training signal that what are the right clusters for a training point, this leads to the optimization challenge when jointly training every component of the model from random initialization. Note that this doesn’t happen when training with a fixed index structure where a label is uniquely assigned to a cluster because if a label is uniquely assigned then the right clusters for a training point are always going to be clusters of the positive labels.\n\n> **What is the overall training time for the proposed model**\n\nPlease refer to Table 8 in our response to reviewer BfTa\n\n> **How were hyper-parameters such as $\\kappa$ chosen? Was k-fold cross-validation used?**\n\nMost of the hyperparameters such as $\\kappa$, $\\lambda$, etc are tuned only on the smallest LF-AmazonTitles-131K dataset, on the rest of the bigger datasets we only tune learning rate on a small held-out validation set\n\n> **Why have previous XMC papers such as SiameseXML and DeepXML been not compared with?**\n\nDeepXML numbers are reported in Table 1 under the name of Astec since the DeepXML paper refers \"DeepXML\" name as the framework, and the method as \"Astec\". We don’t compare with SiameseXML because it uses additional label features which most of the standard XMC methods don’t use nor do the standard XMC datasets have these label features (Amazon-670K, Wikipedia-500K, Amazon-3M).\n\n> **Which dataset is used for Figure 5?**\n\nAmazon-670K is used for Figure 5, we'll update the figure caption to mention this\n\n> **Instead of using a separate sparse ranker, why is the proposed model not trained with a combination of dense and sparse features for input as done in baseline methods such X-Transformer, Overlap-XMC?**\n\nJoint training is not possible when learning on combination of dense and sparse features because currently, no deep learning frameworks (pytorch, tf, etc) support efficient learning with sparse features. X-Transformer, Overlap-XMC decouple learning of the deep encoder from the learning of the classifiers i.e. 
they first learn their deep encoder on the matching task with only dense features, they then obtain dense representations from the encoder and learn the ranker classifiers level by level on the concatenated fixed dense and sparse representations of the input using convex LIBLINEAR solvers.\n\n---\nWe hope that our response helped in addressing your concerns, we'll be happy to answer/discuss any further clarifications needed.\n", " We thank reviewer i6NX for their very thorough feedback and helpful suggestions for understanding the learned index structure! We provide below additional analysis and answers to address the important questions raised by the reviewer.\n\n> **Analysis of the learned index**\n\n**Distribution of number of assigned clusters**: Following the suggestion 5a we report the fraction of labels which get assigned to different number of clusters in the learned index. We say that a label $l$ is assigned to a cluster $c$ iff the weight $a_{c,l}$ in the learned adjacency matrix $\\mathbf{A}$ is greater than $0.25$. Most labels ($\\sim 80$%) get assigned to $\\le 2$ clusters.\n\n*Table 12. Number of assigned clusters in learned index vs fraction of such labels in Amazon-670K dataset*\n| Num assigned clusters | 0 | 1 | 2 | 3-5 | 6-10 | 10+ |\n| -- | -- | -- | -- | -- | -- | -- |\n| Percentage of labels | 0.48% | 51.61% | 29.60% | 16.31% | 1.72% | 0.25% |\n\n**Overlap with the stage 1 fixed tree**: [here](https://www.dropbox.com/s/sxrr78b7t4w6etp/ELIAS_stage-1_vs_stage-2_overlap.pdf?dl=0) we plot the fraction of edges of the stage 1 tree that still remain in the learned adjacency matrix $\\mathbf{A}$ after thresholding $\\mathbf{A}$ at various cutoff thresholds (i.e. for a threshold $\\gamma \\in [0, 1]$ we only retain entries in $\\mathbf{A}$ which are greater than $\\gamma$ and evaluate how many edges of stage 1 tree remains). The plot reveals that almost $\\sim 60$% stage 1 cluster assignments remain in the learned $\\mathbf{A}$ with good confidence.\n \n**Threshold based pruning ablation**: in the following table we report the final accuracy numbers of ELIAS-1 model after cutoff threshold based pruning of the learned label-to-cluster assignments. These results indicate that about $\\sim 84$% edges can be pruned without hurting the model performance.\n\n*Table 13. Accuracy numbers after threshold based pruning of learned label-to-cluster assignments on Amazon-670K dataset*\n| Cutoff Threshold | % of edges pruned | P@1 | P@5 | R@10 | R@100 |\n| --- | --- | --- | --- | --- | --- |\n| 0 | 0% | 48.68 | 40.04 | 50.33 | 68.95 |\n| 0.01 | 20.89% | 48.68 | 40.05 | 50.33 | 68.96 |\n| 0.05 | 64.42% | 48.68 | 40.04 | 50.33 | 68.96 |\n| 0.1 | 73.63% | 48.68 | 40.04 | 50.33 | 68.95 |\n| 0.25 | 84.52% | 48.65 | 40.02 | 50.26 | 68.82 |\n| 0.5 | 89.11% | 48.40 | 39.48 | 48.98 | 66.75 |\n| 0.75 | 91.95% | 47.70 | 38.19 | 46.38 | 62.17 |\n| 0.9 | 93.13% | 47.26 | 37.42 | 44.91 | 59.53 |\n\n**Top-k based pruning ablation**: in the following table we report the final accuracy numbers of ELIAS-1 model after top-k based pruning of the learned label-to-cluster assignments (i.e. we retain only top-k label assignments per cluster).\n\n*Table 14. 
Accuracy numbers after top-k based pruning of learned label-to-cluster assignments on Amazon-670K dataset*\n| Top-K | P@1 | P@5 | R@10 | R@100 |\n| --- | --- | --- | --- | --- |\n| 1000 | 48.68 | 40.04 | 50.33 | 68.95 |\n| 750 | 48.70 | 40.05 | 50.34 | 68.95 |\n| 500 | 48.72 | 40.05 | 50.34 | 68.95 |\n| 300 | 48.72 | 40.05 | 50.34 | 68.95 |\n| 200 | 48.71 | 40.05 | 50.32 | 68.87 |\n| 100 | 48.22 | 39.04 | 47.98 | 64.80 |\n| 50 | 46.17 | 33.85 | 38.35 | 49.48 |\n\n**$\\kappa$ ablation**: in the following table we report the effect of choosing different $\\kappa$ (row-wise sparsity parameter) to the final model performance on Amazon-670K dataset. We notice that the model performance increases up to a certain value of $\\kappa$, after that the model performance (specially P@1) saturates and starts degrading slowly.\n\n*Table 15. $\\kappa$ ablation on Amazon-670K*\n| $\\kappa$ | P@1 | P@5 | R@10 | R@100 |\n| --- | --- | --- | --- | --- |\n| 100 | 46.79 | 36.60 | 42.90 | 56.38 |\n| 200 | 47.88 | 38.67 | 46.96 | 63.30 |\n| 500 | 48.68 | 40.04 | 49.99 | 68.48 |\n| 1000 | 48.68 | 40.05 | 50.33 | 68.95 |\n| 2000 | 48.58 | 40.07 | 50.27 | 68.91 |\n| 5000 | 48.57 | 39.93 | 50.15 | 68.91 |\n| 10000 | 48.32 | 39.73 | 49.97 | 68.84 |\n\n**Effectiveness on multi-modal labels**: Please refer to our response to reviewer ubcH for a quantitative and qualitative analysis of learned index's behaviour on labels which get assigned to multiple clusters.\n\n> **Scalability**\n\nWith a reduced embedding dimension ELIAS can scale to datasets with up to 10M labels even on a single GPU but we do agree that scaling ELIAS to much bigger datasets (100M or 1B scale) will require extending the two-level index to deeper hierarchies and is one of the future directions we hope to explore.\n", " We thank reviewer ubcH for their valuable comments and appreciate their understanding of the contribution of this work in the context of recent advances in deep learning and XMC! We provide below additional discussion to make a stronger argument for the claim that ELIAS is better suited for multi-modal label distributions.\n\n**Qualitative analysis**: We qualitatively compare the training point distributions of labels which get assigned to multiple clusters and labels which get assigned to only one cluster by plotting TSNE plots of the training points of such labels and their assigned clusters [here](https://www.dropbox.com/s/eqfw06dk66quo3r/ELIAS_visualize_multimodal_label_training_points.pdf?dl=0). We say that a label $l$ is assigned to a cluster $c$ iff the weight $a_{c,l}$ in the learned adjacency matrix $\\mathbf{A}$ is greater than $0.25$. These plots indicate that labels assigned to multiple clusters often have training points with a more multi-modal distribution than the labels which get assigned to only one cluster. \n\n**XR-Transformer vs ELIAS comparison**: in the following table we compare the contribution to R@100 of labels belonging to different label bins for XR-Transformer-1 and ELIAS-1. Here label bins are created based on number of assigned clusters in learned ELIAS model (for e.g. column 2 presents the contribution to R@100 of all labels which have only 1 clusters assigned to them). The results indicate that the relative improvement in performance in the ELIAS model over XR-Transformer is much more significant for labels which get assigned to multiple clusters than labels which only get assigned to single cluster.\n\n*Table 10. 
R@100 contribution of labels with different numbers of clusters assigned to them in Amazon-670K dataset*\nNum assigned clusters | 1 | 2 | 3-5 | 6-10 | 10+\n|--|--|--|--|--|--|\n| R@100 contribution (XR-Transformer-1) | 24.26 | 16.91 | 17.19 | 4.72 | 1.56\n| R@100 contribution (ELIAS-1) | 25.10 | 17.70 | 18.59 | 5.47 | 2.06\n| Delta | +3.4% | +4.6% | +8.1% | +15.8% | +32.0%\n\n**Decilewise distribution of number of assigned clusters**: in the following table we analyze the distribution of the average number of clusters assigned to a label for each label decile (decile 1 represents the head most decile and decile 10 represents the tail most decile). This demonstrates a clear trend that head labels get assigned to more number of clusters than tail labels in the learned assignments.\n\n*Table 11. Decilewise distribution of the average number of assigned cluster in Amazon-670K dataset*\n| Deciles | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |\n|--|--|--|--|--|--|--|--|--|--|--|\n| **Avg. assigned clusters** | 6.99 | 4.25 | 3.10 | 2.50 | 2.19 | 1.96 | 1.74 | 1.74 | 1.49 | 1.29\n\n---\n\nWe hope that our response helped in addressing your concerns, we'll be happy to answer/discuss any further clarifications needed.", " We would like to thank reviewer BfTa for their valuable and constructive feedback! Below we provide additional discussion and answers to address the valid concerns raised by the reviewer.\n\n**Time complexity analysis**: the time complexity for processing a batch of $\\eta$ data-points is $\\mathcal{O}(\\eta(T_{\\text{bert}} + Cd + b\\kappa + Kd))$ where $T_{\\text{bert}}$ represents the time complexity of the bert encoder, $C$ represents the number of clusters in index, $d$ is the embedding dimension, $b$ is the beam size, $\\kappa$ is the row-wise sparsity of label-to-cluster adjacency matrix $A$, and $K$ is the number of labels shortlisted for classifier evaluation. Assuming $C = \\mathcal{O}(\\sqrt{L})$, $\\kappa = \\mathcal{O}(L/C) = \\mathcal{O}(\\sqrt{L})$ and $K = \\mathcal{O}(\\sqrt{L})$, the final time complexity comes out to be $\\mathcal{O}(\\eta(T_{\\text{bert}} + \\sqrt{L}(2d + b)))$.\n\nIn practice, because of ELIAS’s shallow index and design choices such as a $\\kappa$-row-wise sparse adjacency matrix, each computation involved in the forward pass can be written as tensor operations which are highly parallelizable on a GPU. This results in very fast inference times when doing the inference on GPUs.\n\n**Empirical runtimes**: the following table provides the empirical runtimes and model sizes for the ELIAS-1$^{(d)}$ model. All reported numbers are for A6000 GPU with 24 core standard CPU machine. The reported training times are the total training times i.e. both stage 1 and stage 2 training. Prediction times are reported as average prediction time per point when doing batch prediction.\n\n*Table 8. Empirical prediction time, training time, and model sizes on benchmark datasets*\n| Dataset | Prediction (1 GPU) | Training (1 GPU) | Training (8 GPU) | Model Size |\n| --- | --- | --- | --- | --- |\n| **LF-AmazonTitles-131K** | 0.08 ms/pt | 1.66 hrs | 0.33 hrs | 0.65 GB |\n| **Wikipedia-500K** | 0.55 ms/pt | 33.3 hrs | 6.6 hrs | 2.0 GB |\n| **Amazon-670K** | 0.57 ms/pt | 10.1 hrs | 2.1 hrs | 2.4 GB |\n| **Amazon-3M** | 0.67 ms/pt | 37.6 hrs | 7.5 hrs | 5.9 GB |\n\n**$\\lambda$ ablation**: in the following table we report the final accuracy numbers with different $\\lambda$ on Amazon-670K dataset. 
With a very small $\\lambda$ the loss only focuses on the classification objective which leads to significantly worse R@100 performance, increasing $\\lambda$ improves the overall performance up to a certain point, after that the performance saturates and starts degrading slowly. \n\n*Table 9. Final accuracy numbers on Amazon-670K with varying $\\lambda$*\n| $\\lambda$ | P@1 | P@5 | R@10 | R@100 |\n| --- | --- | --- | --- | --- |\n| **0** | 47.80 | 39.45 | 49.17 | 66.05 |\n| **0.01** | 48.30 | 39.86 | 49.73 | 67.78 |\n| **0.02** | 48.48 | 39.94 | 49.96 | 68.27 |\n| **0.05** | 48.68 | 40.05 | 50.33 | 68.95 |\n| **0.1** | 48.72 | 40.05 | 50.19 | 68.91 |\n| **0.2** | 48.62 | 39.96 | 50.06 | 68.82 |\n| **0.5** | 48.48 | 39.76 | 49.80 | 68.55 |\n\n> **It is not rigorous to say that ELIAS is an end-to-end method...**\n\nWe consider ELIAS to be end-to-end because the stage 2 training (which represents the main contribution of our work) jointly trains the representation, indexing, and classification parameters in an end-to-end fashion allowing each component to adapt to each other w.r.t. final objective. We do acknowledge that the current solution involves a careful initialization of model parameters (based on stage 1 training) before the end-to-end training begins which is not ideal.\n\n> **Experiment results of ELIAS^{(d)} in Table 1 have not been analyzed in Table 3...**\n\nELIAS$^{(d)}$ in Table 1 is indeed Stage 1+Stage 2+3 x ensemble, thanks for pointing it out, we'll make this explicit in the main text.\n\n---\n\nWe hope that our response helped in addressing your concerns, we'll be happy to answer/discuss any further clarifications needed.", " This paper focuses on the extreme multi-label classification (XMC) problem and proposes a method called ELIAS. \n(1) ELIAS adopts a two-layer index for representing extreme-scale labels, and it learns overlapping cluster partitions by assigning each label to multiple clusters.\n(2) ELIAS adopts a two-staged training strategy, where a XMC model is trained with fixed cluster partition generated by k-balanced clustering in the first stage, the cluster partition is generated according to the weighted count of labels assigning to corresponding clusters according to the XMC model trained in first stage, and the XMC model with value of the cluster partition are trained in the second stage.\n(3) ELIAS++ adopts a sparse ranker and a calibration module for achieving further improvement compared to ELIAS.\nExperiments are conducted on benchmark dataset, where ELIAS achieves better performance compared to previous SOTA methods. Strength:\n+ This paper is well motivated and the proposed method achieves significant performance compared to previous SOTA methods.\n+ The writing is good and easy to follow.\n\nWeakness:\n- It is not rigorous to say that ELIAS is an end-to-end method, though Eq. 7 and Eq. 8 can be optimized in an end-to-end manner in theory. In practice, the model is trained in a two-stage manner, and the label-to-cluster assignment is generated according to Eq. 10, which is non-differentiable. \n- Experiment results of ELIAS^{(d)} in Table 1 have not be analyzed in Table 3. Is ELIAS^{(d)} corresponds to Stage 1+Stage 2+3 x ensemble in Table 3?\n- Lack of time complexity analysis. It will be better to add a discussion about empirical wall-clock time of the two-stage training process of ELIAS. How does the loss (i.e., classification and shortlist loss) affect the performance of ELIAS? 
Is it possible to add an ablation study of $\\lambda$?\n None", " The paper addresses a relevant problem in partition based approach to extreme multi-label classification (XMC), where existing methods have 2 stages – the first stage uses a shallow tree-based index to hard partition the label space and the second stage learns classifiers into labels inside that partition. The challenge is that if all the labels are not inside the partition in the first stage, then they cannot be in the output of the second stage at all. This paper introduces a variant, ELIAS, where the first stage is replaced by a weighted graph based index with soft learnable parameters that are learned together with the final task classification objective. Experimental results on popular XMC datasets show reasonable improvements in precision/recall metrics over existing XMC methods. \n\n Originality: \nThe general idea of training the retrieval stage using an objective (supervised, unsupervised or self-supervised) followed by a classification stage is becoming more commonplace now, with works like REALM for language modeling, and retrieval augmented convolutional networks in computer vision. However, this paper appears to be the first work applying jointly training the retrieval and classification stages in XMC for multi-label classification. The authors have taken a natural next step in evolving this general idea.\nQuality:\nThe paper takes a reasonable approach – motivating the problem, challenges with existing approaches to XMC, proposing a solution, describing the details and testing it on datasets that show the value of this approach. \nClarity:\nThe paper is reasonably well written and easy to understand, with sufficient details. While I have not looked at the code, I am glad that they have shared the PyTorch implementation for transparency and enable reproducibility. \nSignificance:\nXMC is an important problem in a number of domains where the output space of labels is large – many information retrieval problems including search, ads, commerce etc. are in this category. As such the work addresses a significant problem in XMC.\n The authors claim earlier in section 1 that ELIAS is better suited to handled multi-modal label distributions. However, this aspect is not really demonstrated in the paper. They hypothesize in the recall comparison section in section 4 that the increased recall might be due to the multi-modal distribution of popular labels. Do they have any quantitative evidence to back this claim? This paper addresses a technical improvement in XMC. As such, I don’t see any significant potential negative societal impact of this specific work. ", " Tree-based methods are amongst the popular approaches for extreme multi-label learning problems. Existing approaches either use a decoupled strategy of training the label classifiers on top of a fixed tree index over labels, or alternate between updating the index and training the classifiers. \n\nThis paper proposes a relaxation of the tree-based index to a directed acyclic graph where each label can belong to multiple clusters with non-zero probability. The probability of label belonging to a cluster is learned together with label classifiers in an end-to-end fashion, and this leads to empirical improvement over using a fixed index over labels. 
Strengths\n\n1) Proposed approach allows for learning label-to-cluster assignments in an end-to-end fashion which provides empirical improvements over using fixed label-to-cluster assignments.\n\n2) The paper is well-written and easy to follow. The experiments support the main claim of the paper that jointly learning classifiers and index structure in an end-to-end fashion can yield improved performance over using a fixed index structure. \n\nWeakness\n\n1) Limited Analysis: It would be interesting to see analysis of the proposed approach beyond final accuracy numbers, for example, how much does the final index structure deviate from the fixed index structure used for stage 1 training. \n\n2) Scalability: Since the proposed approach learns a two-level index, it is not immediately clear if the proposed approach can be applied to relax tree structures with three or more levels into a directed acyclic graph. This might limit the scalability of the proposed approach to larger industry scale datasets. However, this is a minor weakness as the approach is shown to scale well to and also outperform baselines on one of the largest publicly available extreme multi-label classification datasets (i.e. Amazon-3M). Questions/Suggestions\n\nQ1) Brute-force-OvA: Does this model consist of a BERT encoder followed by a multi-layer perceptron with L outputs? How much time did it take to train this model? Is it possible that BERT-OvA model also suffers from similar optimization challenges as training the proposed model from scratch? If so, then it is not necessarily a strong upper-bound on performance of these models.\n\nQ2) What is the overall training time for the proposed model?\n\nQ3) How were hyper-parameters such as $\\kappa$ chosen? Was k-fold cross-validation used? \n\nQ4) Why have previous XMC papers such as SiameseXML and DeepXML been not compared with?\n\nQ5) The main set of results in the paper compare precision/recall of the proposed model with other baselines. But it might be interesting to have some results/insights/observations about the kind the structure of the index learnt using the proposed approach and how much it deviates from the fixed index from stage 1 training. Some ideas are:\n\n 5a) In the final index structure, what fraction of labels get assigned to multiple clusters? Is it possible to prune assignment of a label to multiple clusters after training without affecting the accuracy of the model?\n\n5b) In “Recall Comparison” para in Sec 4, authors hypothesize that improved performance for top 2 deciles is due to the tendency of popular labels to have multi-modal distribution. Is it the case that in the final index, popular labels tend to be assigned to multiple clusters and rare labels tend to pick a single cluster? \n\n5c) How does row-sparsity affect the performance of the model?\n\nQ6) Which dataset is used for Figure 5?\n\nQ7) Instead of using a separate sparse ranker, why is the proposed model not trained with a combination of dense and sparse features for input as done in baseline methods such X-Transformer, Overlap-XMC? Yes." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "MDJHNo2vuLm", "TmFn_GKaGA_", "SiPq9kAfKVS", "MXXeohPFqN", "1YDUkPQCmKQ", "d-W76dvQ7X0", "xcnxoZ0hXn_", "dtmHfgmFhBL", "ArBzhqlbSrd", "TmFn_GKaGA_", "LUi3e6XOQo", "nips_2022_RF5Lb6NaZp", "nips_2022_RF5Lb6NaZp", "nips_2022_RF5Lb6NaZp" ]
nips_2022_oWx_9VJgyV7
SNAKE: Shape-aware Neural 3D Keypoint Field
Detecting 3D keypoints from point clouds is important for shape reconstruction, while this work investigates the dual question: can shape reconstruction benefit 3D keypoint detection? Existing methods either seek salient features according to statistics of different orders or learn to predict keypoints that are invariant to transformation. Nevertheless, the idea of incorporating shape reconstruction into 3D keypoint detection is under-explored. We argue that this is restricted by former problem formulations. To this end, a novel unsupervised paradigm named SNAKE is proposed, which is short for shape-aware neural 3D keypoint field. Similar to recent coordinate-based radiance or distance field, our network takes 3D coordinates as inputs and predicts implicit shape indicators and keypoint saliency simultaneously, thus naturally entangling 3D keypoint detection and shape reconstruction. We achieve superior performance on various public benchmarks, including standalone object datasets ModelNet40, KeypointNet, SMPL meshes and scene-level datasets 3DMatch and Redwood. Intrinsic shape awareness brings several advantages as follows. (1) SNAKE generates 3D keypoints consistent with human semantic annotation, even without such supervision. (2) SNAKE outperforms counterparts in terms of repeatability, especially when the input point clouds are down-sampled. (3) the generated keypoints allow accurate geometric registration, notably in a zero-shot setting. Codes and models are available at https://github.com/zhongcl-thu/SNAKE.
Accept
Post-rebuttal, the paper had split reviews, with three reviewers in favor of acceptance (6, 6, 5 but noted as a 6 in the final comment from 2TwX) and one reviewer strongly arguing for rejection (3). The AC examined the reviews, the paper, and the discussion, and is inclined to accept the paper. The AC is persuaded by the arguments presented by the reviewers in favor of acceptance. While xHwu has raised a number of concerns, the AC believes that the authors have addressed most of these well in public discussion. The AC understands xHwu's positions, but does not see the remaining concerns as grounds for rejection in light of the more positive views of the other reviewers. Given the extensiveness of the discussion, the AC would encourage the authors to use their extra page to incorporate some of the experimental results into the final version of the paper.
train
[ "jgwi7mxF2Bk", "0DeLpyqBYF", "piE8lF-Ei3s", "Ax7RUIPMpmp", "7IyN4BNcdYP", "ewce3jLLMX", "pt2FaIMXNLkF", "SWUOBwOpXDT", "fefSnZhYs72", "hwVFOCsLiENd", "jabu1QKCRRz", "OI37yz4nDUo", "C-c4H4cWr2x", "dcARkPno-5", "WFiXaP9Q0Xz", "PDx5_h8F0Ns" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi Reviewers,\n\nThe discussion period is closing soon. Please take a look at the responses from the authors. If you have further questions, please ask them now, since the authors will be unable to respond soon. It's substantially more productive, effective, and reasonable to have a quick back-and-forth with authors now than to raise additional questions or concerns post-discussion period that the authors are unable to address. \n\nThanks,\n\nAC", " Dear reviewer,\n\nPlease let us know if our responses have addressed the issues raised in your review. We hope that our corrections, clarifications, and additional results address the concerns you've raised. We are happy to address any further concerns.", " Dear reviewer,\n\nPlease let us know if our responses have addressed the issues raised in your review. We hope that our corrections, clarifications, and additional results address the concerns you've raised. We are happy to address any further concerns.", " We would like to thank R#xHwu for the detailed feedback. Here we respond to raised concerns one by one.\n\n**Q1 and Q2**\n> 1. The proposed method is not unsupervised. \n\n> 2. It makes no sense to use occupancy probability to represent the inverse distances between the query and the input.\n\n**A1 and A2**\n\nSorry for the insufficient description of the occupancy we predict in this paper, which is indeed surface occupancy instead of shape occupancy. \n\nConventionally, points inside or on the input surface are considered occupied, as you stated. One needs ground-truth occupancy information (e.g., generated from CAD models) to learn an implicit occupancy function under this formulation. \n\nHowever, in our formulation, occupied points are those on the input surface, and the others are all considered unoccupied, including the points inside the surface. In order to avoid this ambiguity, we refer to occupancy in the revised paper as surface occupancy (now marked blue).\n\nUnder this surface occupancy definition, the training for the implicit shape field is unsupervised. We randomly sample positives (occupied points) from the input point cloud and negatives (unoccupied points) in the unit 3D space. We visualize the predicted occupancy field in Fig.16 of the revised supplementary. These five samples are taken from the unseen test set. As shown by the second row, only points on the input surface have a high occupancy value, and the other points (inside or outside of the surface) have a near-zero occupancy value. Under our formulation, two surfaces can be obtained through the marching cube algorithm (using a threshold of 0.4), and we only show the outer surface. It also could be found in Fig.16, the (surface) occupancy probability can represent the inverse distances between the query and the input surface.\n\nBecause we do not use any additional information that other methods do not have access to, the comparison is fair. Finally, since you insist that our formulation is not possible, we recommend you try our codes which clearly show that it is indeed possible.\n\n**Q3**:\n> It is not conclusive that the occupancy information can improve the keypoint detection accuracy in Table.2.\n\n**A3**:\n\n(1) As you stated, keypoints should be on the input surface. Without the surface occupancy decoder, keypoints cannot be constrained on the input surface so that some of them would float in the air, as seen in Fig.7-(a). 
If the detector has a surface occupancy decoder, it could encourage the keypoint to be located on the underlying surface, with the help of proposed loss functions.\n\n(2) To reconstruct the shape across instances of the same category, our model naturally encourages semantic consistency of the intermediate feature embedding. So the keypoints detected could be semantically consistent. Without the shape decoder, the model performs poorly on the KeypointNet dataset.\n\n(3) We would like to clarify that SE(3) repeatability is important but not equivalent to accuracy. Our evaluation and analysis clearly show that SE(3) repeatability has its own limitations. And it is clearly shown that, apart from this important but imperfect metric, our method generates meaningful keypoints consistent with human annotation and achieves better registration performance.\n\n**Q4**\n> Novelty and contributions.\n\n**A4**:\n\n(1) UKPGAN reconstructs input point cloud coordinates instead of the underlying shape manifold. Moreover, it can only predict keypoints from the input point cloud, which cannot maintain a consistent performance when the test point clouds are disturbed. We propose the first implicit keypoint field that is tightly combined with the implicit surface occupancy field, which is novel. \n\n(2) Input point clouds are not equivalent to the input surface. A detector should extract consistent keypoints under a number of disturbances that can affect the input point clouds, e.g., missing parts, point density, and sensor noise. A continuous keypoint field that can be queried at any point in 3D space is more consistent under input disturbances.\n\n(3) We respectfully disagree with your point that shape reconstruction is harder than 3D keypoint detection. It is ambiguous for humans to label keypoints, so learning-based keypoint detection must be done in an unsupervised manner. Meanwhile, shape reconstruction is definitely another hard task and incorporating it helps the continuous keypoint field better capture shape cues.", " Thank the authors for their efforts in the rebuttal. I read through all the reviews from all reviewers and all your replies in this rebuttal. Some of your replies are helpful to make your paper clearer to me. Unfortunately, some of your replies are not convincing at all.\n \n**1. The proposed method is not unsupervised. It needs ground truth occupancy information as an additional supervision, which results in unfair comparisons with other methods.**\n \nAs the authors claimed many times, the proposed method is unsupervised, while this is not true. As shown by the loss Lo in Eq.(4), it requires occupancy supervision to learn an implicit function for surface reconstruction. This is much different from the self-reconstruction for point clouds, as illustrated by UKPGAN in Fig.1(b), since the input point clouds are also the reconstruction target, and no additional supervision is required.\n \nThe authors replied to me in A3.(2) in the 3/3 section, they merely obtained the occupancy information by sampling positives from the input point cloud and negatives from the unit 3D space. But you definitely have to determine the area that is not occupied and the area that is occupied. If you did not know this information, you would have lots of queries sampled inside of the shape while labelling them as negative. \n \n**2. 
It makes no sense to use occupancy probability to represent the inverse distances between the query and the input.**\n \nThe authors replied in A.7 in the 2/3 section, they said that “in the modern literature on occupancy, the shape is not consider solid”. This is not common sense about the occupancy. To learn a valid implicit function, we regard the area inside of the shape as the occupied and outside of the shape as the unoccupied, no matter if the shape is a voxel grid or mesh. It seems there is no one merely regarding the surface as the occupied while other areas are left as the unoccupied. If the authors used the occupancy in this way, I do not think they can reconstruct surfaces like the ones shown in paper.\n \nIf the shape is regarded as an occupied solid, it makes no sense to model the inverse distance between the query and the input using the occupancy probability. Since the changing patterns of occupancy probability and inverse distances across the surface are so different.\n \n \n**3. It is not conclusive that the occupancy information can improve the keypoint detection accuracy in Table.2.**\nThe results without occ and with all in the second column indicates that the surface reconstruction cannot improve the keypoint detection accuracy under ModelNet40. There should be no reason to obtain results like this if the reconstruction really helps. I can not accept the authors’ explanation in A11 in section 3/3.\n \n**4. Novelty and contributions**\n- Although the authors proposed some loss terms to make them work for keypoint detection, I do not think the novelty is high enough compared to the UKPGAN. The idea of using shape reconstruction to improve the keypoint detection is the same.\n \n- Moreover, I do not think it is a wise solution to detect keypoints from a continuous field defined in the whole 3D space since we know keypoints should appear on the surface represented by the input point clouds. \n \n- Surface reconstruction from point clouds without normals is a much harder problem than keypoint detection, which is still a challenge. Detecting keypoints based on surface reconstruction makes the problem even harder to resolve. I do not think the authors aim to learn an occupancy field that we usually discussed and have a common sense in surface reconstruction. More importantly, it does no improve keypoint detection performance under ModelNet40.\n", " **Q10**: \n> How to determine the keypoint number? I noticed that the numbers of keypoints produced by different methods are different even in the same case, such as the visual comparison in Fig.6. Why do not use the same number of points to perform the comparison?\n\n**A10**:\n\nSorry for the insufficient description of the visualization. For all quantitative experiments, we fixed the number of keypoints detected by each method for a fair comparison. However, the visualized keypoints in Fig. 6 are selected by Non-Maximum Suppression with a radius of 0.1, similar to how the keypoints are demonstrated in USIP. Therefore, the different methods may show a different number of keypoints of the same object. Notably, ISS cannot predict keypoint saliency, so we randomly choose 30 points from all predicted keypoints for an object and 100 points for an indoor scene. In the supplementary, more qualitative results can be found in Fig.7-Fig.15. \n\n---\n\n**Q11**:\n\n> The reason why I do not think the reconstruction can improve the performance of keypoint detection is the results in Table 2. 
The results without surface reconstruction are better than the results with surface reconstruction. I do not think this is a good support for the argument.\n\n**A11**:\n\nAs we stated in lines 262-267 of the main paper, although the model without shape reconstruction could detect more repeatable keypoints on the ModelNet40 dataset, it fails to give semantically consistent keypoints on the KeypointNet dataset. Fig. 7-a in the main paper shows that SNAKE is unable to output symmetric and meaningful keypoints without the shape-aware technique. \n\n---\n\n**Q12**: \n\n> With the ground truth occupancy supervision, it is not a fair comparison with other methods.\n\n**A12**:\n\nNo, we do not use ground truth occupancy. We do not use any additional information that other methods do not have access to, so the comparison is fair. The only supervision signal we use is the input point cloud itself. See A3 for more details.\n", " **Q4**:\n\n> The proposed method is also not novel. UKPGAN has explored the feasibility of combining shape reconstruction and keypoint detection together, although UKPGAN reconstructs the input point cloud rather than learning an implicit function to represent the same shape. The minor difference here is what representation is used to represent 3D shapes.\n\n**A4**:\n\nThank R#xHwu for the comment, but we cannot fully agree with the comment. \n\n(1) We are the first to propose a keypoint field that predicts a keypoint saliency value for each continuous input query point coordinate. The advantages of the keypoint field are stated in A1.\n\n(2) We propose several novel loss functions that exploit the mutual relationship between two keypoint and occupancy decoders, which is quite different from UKPGAN.\n\n(3) We design a gradient-based optimization strategy for refining the keypoint localization during inference.\n\n---\n\n**Q5**:\n\n> Learning occupancy field results in using an additional surface constraint. This term should not be there due to the learning of the occupancy field. Fig.2(b) may be confusing, I think the close to 0 line should be parallel to the x axis rather than y axis.\n\n**A5**:\n\nThe middle panel of Fig.2 indicates the loss functions for keypoint field learning. Surface constraint loss, which entangles the occupancy and keypoint fields, enforces the saliency of the query that is far from input close to 0 (see Eq.6 in the main paper). Since it plays a vital role in formulating the keypoint field, it should appear there. \n\nThank R#xHwu for the suggestion. We rotate 'close to 0 line' in the revised main paper.\n\n---\n\n**Q6**:\n\n> The term of repeatability loss is hard to understand. Why do we need a term like this? For shapes, they are well aligned, why do we need to consider the rigid transformation? Another question is about the cosine similarity, the probabilities are scalars, right? Is there any reason to use cosine similarity to evaluate the difference between two numbers?\n\n**A6**:\n\nRepeatability means we need to detect the same keypoints under various transformations so that later geometric tasks can be successfully done (like registration), so applying SE(3) transformations is a natural choice. Enforcing repeatability under SE(3) transformations allows us to detect keypoints in an unsupervised manner (i.e., without human annotated keypoints for supervision).\n\nYes, for a single point, the value is a scaler. 
But we calculate the cosine similarity between vectorized values in a local grid so that contextual information that reflects local shape can be captured. (see lines 140-149 in the main paper)\n\n---\n\n**Q7**:\n\n> The authors use occupancy probability to represent the inverse distance between the query and the input. I do not think it works here. Since occupancy probability always gets smaller from 1 to 0 when moving a query from inside to outside while the inverse distance gets smaller to larger before going across the surface, and then gets smaller after that.\n\n**A7**:\n\nIn the modern literature on occupancy, the shape is not considered solid. Again, note that we do not use the solid CAD model for ground truth occupancy calculation. All we need is the input point cloud for training. See A3 for details.\n\n---\n\n**Q8**:\n\n> Explicit keypoint extraction via optimization is also an operation that I do not understand. An intuitive idea of extracting keypoints is to use points of the input point clouds as queries to predict the occupancy values since keypoints can only locate on the surface. Why do we have to extract keypoints via updating the locations of randomly sampled queries in the 3D space?\n\n**A8**:\n\nNo, we do not optimize from randomly sampled points. At inference, the initial query set is evenly distributed in the input space, and then there is a surface filtering step (see Algorithm 1 and Fig.2-inference in the main paper). This step refines keypoint locations without moving queries into the void because the local maxima of the saliency field lie on the input surface. If the initial query set is the input point cloud, its number and distribution will be affected by the input.\n\n---\n\n**Q9**: \n> In the visualization of saliency field slices, how to select the slices to visualize?\n\n**A9**:\n\nSorry for this misunderstanding. We implement this visualization by projecting the keypoint field onto an axis (i.e., using torch.max()). We have updated the paper and renamed it as the 'projected slice'.\n", " We would like to thank R#xHwu for the detaield comments. Here we respond to raised concerns one by one.\n\n---\n\n**Q1**:\n\n> First of all, I do not think it makes any sense to learn the probability of keypoints as a field. Keypoints should be located on surfaces, they are not floating in the space. Specifically, the authors aim to detect keypoints from input point clouds, the keypoints to be determined should be some of the input points. It should be much easier to learn the probability of keypoints among the discrete input points than in the continuous whole space.\n\n**A1**:\n\n(1) We do agree that enforcing keypoints to lie on the surface is a reasonable choice. To this end, we have proposed two techniques: surface constraint loss during training and occupancy filtering during inference.\n\n(2) Point clouds are limited in terms of the number of points that may not contain the exact keypoints but only the points near the keypoints. Therefore, we want the detector to have the potential to predict keypoints drift away from the input points to improve keypoint localization. USIP holds the same view that it is unnecessary for keypoints to be any of the input points. Since SNAKE learns a continuous keypoint field that can be queried at any point in 3D space, it can do so. The various experimental results in the main paper also verify the effectiveness of our method. 
If R#xHwu insists that the keypoints must be derived from the input points, then use the query set sampled from input points and set no optimization when making inference for our network.\n\n(3) In classical 2d keypoint detection methods like SIFT[1] or SURF[2], subsequent steps for sub-pixel refinement are widely used to improve keypoint localization, which means that 2D keypoints do not necessarily lie on the input pixel grid.\n\n(4) When the test point clouds are disturbed by downsampling or noises, the methods like UKPGAN, which generates keypoints from discrete inputs, cannot maintain a consistent performance, which can be found in section C.1 in the supplementary.\n\n(5) We do not use any additional information that other methods do not have access to (see A3 for more details). Our entire training process is easy and stable.\n\n[1] Lowe, David G. \"Object recognition from local scale-invariant features.\" Proceedings of the seventh IEEE international conference on computer vision. Vol. 2. Ieee, 1999.\n\n[2]Bay, Herbert, Tinne Tuytelaars, and Luc Van Gool. \"Surf: Speeded up robust features.\" European conference on computer vision. Springer, Berlin, Heidelberg, 2006.\n\n---\n\n**Q2**:\n\n> The motivation of improving the performance with surface reconstruction does not make any sense either. Surface reconstruction is a harder task than keypoint detection, since point clouds have been given as input which provides a range as a good constraint to find solutions for kepoint detection. We already know the structure of shapes represented by point clouds and why we have to learn an implicit function to reconstruct shapes. I believe it just makes this problem more complex to get resolved.\n\n**A2**:\n\n(1) Point cloud is not a complete 3D representation for 3D data because its sampling rate is always limited. Furthermore, it cannot represent topological relations. In contrast, implicit functions can represent 3D data of arbitrary topology and at arbitrary resolution. Moreover, training for the occupancy network is simple without using any additional information that other methods do not have access to (see A3 for more details).\n\n(2) The occupancy field collaborates well with the keypoint field. Although the saliency of an arbitrary point can be obtained through querying the feature point field, the occupancy of the point also needs to be known to further filter the points that lie on the input's surface. Compared with the point cloud, the occupancy field can easily tell us the geometric information of any query point.\n\n---\n\n**Q3**:\n\n> The authors claim that the proposed method is a novel unsupervised method. This is also not correct, since this method requires ground truth occupancy information as a supervision.\n\n**A3**:\n\nSorry for this misunderstanding. \n\n(1) Unsupervised means the keypoint location must be learned in an unsupervised manner (like USIP and UKPGAN) because it is ambiguous for humans to label keypoints.\n\n(2) Meanwhile, we do not use ground truth occupancy values. We only use input point clouds to learn this implicit occupancy function. We randomly sample the positives from the input point cloud. Moreover, the negatives are randomly sampled in the unit 3D space. Although some of the negatives are indeed on the surface of the object, their number is so limited compared to the whole query sets that they do not affect the training. Therefore, we do not use any additional information that USIP and UKPGAN do not use. 
We have added these notes in the training details (see section B.1) in the revised supplementary.", " We would like to thank R#2TwX for the professional assessment. Here we respond to raised concerns one by one.\n\n---\n\n**Weaknesses**:\n\n> Lack of related work discussion: FULLY CONVOLUTIONAL MESH AUTOENCODER USING EFFICIENT SPATIALLY VARYING KERNELS\n\n**A**:\n\nWe thank R#2TwX for suggesting this highly related paper. We have referred to it as [43] (line 110) in the revised main paper. We believe it to be applicable, after some adaptation, to our framework as a shape encoding/reconstruction component. For now, the major obstacle to use it is that an additional meshing step is needed since our input is a point cloud without mesh connectivity. We think this can be left for future work.\n\n---\n\n**Q1**:\n\n> Are there any other newer methods to compare with? The UKPGAN seems to be in the year 2020. \n\n**A1**:\n\nTo clarify, the UKPGAN paper firstly appeared on Arxiv in 2020, but it was finally accepted to CVPR 2022. This fact can be checked on this link:\n\nhttps://openaccess.thecvf.com/content/CVPR2022/html/You_UKPGAN_A_General_Self-Supervised_Keypoint_Detector_CVPR_2022_paper.html\n\nDuring 2020-2022, this paper underwent substantial revision, and we refer to its latest version. We believe a CVPR 2022 paper can be considered a state-of-the-art and reasonable baseline. To avoid this kind of confusion, we have updated the reference in the main paper to the CVPR 2022 version.\n\n---\n\n\n**Q2**:\n\n> Can you elaborate more of the difference when compared with \"R2d2: Reliable and repeatable detector and descriptor\"?\n\n**A2**: \n\nIn our opinion, the differences between SNAKE and R2D2 include:\n\n(1) Since R2d2 predicts saliency scores for a 2D image, the keypoint location comes from discrete grids of pixels. By contrast, we use the coordinate-based networks, which parameterize keypoint probability as a continuous function.\n\n(2) Our method tightly entangles the shape reconstruction and keypoint detection, which brings several advantages. While R2d2 does not introduce a task for image reconstruction.\n\n(3) Because 3D keypoints are encouraged to lie on the surface of the input, we propose a novel surface constraint loss that utilizes the occupancy probability and keypoint saliency. In contrast, since R2d2 detects keypoints from the 2D image plane, R2d2 does not propose a loss function for similar mutual constraint purposes.\n\n(4) We can further refine the coordinates of keypoints by gradient-based optimization in the continuous keypoint field, which is also different from R2d2.\n\n---\n\n**Q3**:\n\n> Can you elaborate more on discrete space and continuous space? \n\n**A3**:\n\nTraditional signal representations are discrete - for example, 2D images are discrete grids of pixels, and 3D data are often represented as meshes, voxels, or point clouds. While implicit neural representation aims to parameterize a signal as a *continuous function* by a neural network that outputs whatever is at a given coordinate, for example, occupancy, SDF, and radiance. \n\nThis paper focuses on 3D keypoint detection from point clouds. The former work UKPGAN estimates the saliency of each point in the input point clouds and selects the most salient point as keypoints. However, discrete point clouds use a finite number of points to represent the 3D object or scene. It's possible that keypoints selected from this discrete set are sub-optimal due to limited sampling of the input. 
So we propose a keypoint field that is a continuous representation. We can predict the saliency of points at arbitrary continuous coordinates. When the input point clouds are down-sampled or affected by noises, our method outperforms counterparts that rely on discrete keypoint representations. ", " We would like to thank R#5p2u for the professional assessment. Here we respond to raised concerns one by one.\n\n---\n\n**Weakness**:\n\n> Keypoint extraction in inference time requires iterative gradient descent and query the implicitly defined saliency field. Thus the computational cost is high and is not suitable for real-world applications at its current form.\n\n**A**: \n\nThanks for pointing out this. We would like to note that the gradient descent optimization step is an optional add-on, which trades off computational overhead for higher accuracy. Without this step, our method can still achieve strong keypoint detection results, as evidenced by:\n\n(1) Figure 7-c in the main paper shows that when no optimization is used (0-step), the repeatability is still as high as around 81%.\n\n(2) Table 4 in the supplementary material shows that when no optimization is used, our method is as fast as the traditional method ISS. Note that ISS is widely used in online robotics applications like [1][2]. \n\nIn addition, SNAKE requires the lowest GPU memory cost to generate keypoints compared with other deep learning based methods, as shown by Figure 2 in the supplementary material. It shows that SNAKE is not only efficient in terms of speed but also memory usage. \n\nFinally, to better illustrate the trade-off between speed and accuracy, we have added keypoint repeatability into Table 4 of the supplementary material, which is also shown below.\n\nTable 1: Average time (s) taken to compute keypoints from input point clouds on ModelNet40 dataset. Decimals in parentheses in italics are relative repeatability (%). Here, the experiment setting is the same as section 4.2 in the main paper. $J$ is the optimization step.\n\n| Input Point # | ISS | USIP | Ours $J$=0 | Ours $J$=5 | Ours $J$=10 |\n| :-----------: | :------------: | :-------------: | :------------: | :------------: | :------------: |\n| 2048 | 0.07 (*0.088*) | 0.006 (*0.748*) | 0.08 (*0.795*) | 0.50 (*0.835*) | 0.81 (*0.851*) |\n| 4096 | 0.11 (*0.096*) | 0.007 (*0.799*) | 0.09 (*0.811*) | 0.50 (*0.850*) | 0.83 (*0.864*) |\n\n\n[1] F. Fadri, et al. Autonomous robotic stone stacking with online next best object target pose planning. In IEEE international conference on robotics and automation (ICRA), pages: 2350-2356, 2017.\n\n[2] Jiadong Guo, et al. Local Descriptor for Robust Place Recognition Using LiDAR Intensity. IEEE Robotics and Automation Letters, 4(2):1470–1477, 2019.\n\n---\n\n**Q1**: \n\n> Additional visualization of repeatability of keypoints under SE3 transformation as well as different sparsity of points can make the contribution of the paper more clear. The plots in Figure 5 alone is not visual enough to show the quality of the proposed method in terms of repeatability.\n\n**A1**:\n\nWe thank R#5p2u for this suggestion. We have added some new qualitative visualization figures (Figure 13-15) in the supplementary material, which compare the repeatability of different methods using the same randomly generated SE(3) transformation. ", " **Q2**:\n\n> The registration experiments (sec 4.3) rely on D3Feat descriptors for the detected keypoints. I am aware that this descriptor is commonly used in the UKPGAN’s experiment. 
I am interested in understanding the bottleneck of the problem. Since D3Feat detector + D3Feat descriptor still serves as an upper bound in this experiment, I wonder whether it is possible that certain detector requires specifically designed descriptors in order to work perfectly in the registration problem. D3Feat descriptor may not provide the best features for SNAKE or UKPGAN. \n\n**A2**:\n\n(1) Firstly, we would like to note a fact: the D3Feat detector + D3Feat descriptor is hard to surpass but not a strict upper bound. As demonstrated by Table.1 in the main paper, SNAKE + D3Feat descriptor outperforms D3Feat detector + D3Feat descriptor under 1000/500 points for feature matching recall, 2500 points for registration recall, and 2500 points for inlier ratio. This fact indicates that when the number of keypoints is high enough, SNAKE can collaborate well with off-the-shelf descriptors.\n\nAs such, if it is allowed to use many keypoints (e.g., 2500), the performance bottleneck actually lies in the registration problem itself (e.g., local minima under a certain SE(3) transformation).\n\n(2) Secondly, if only a small number of keypoints are allowed, the mismatch between detector and descriptor is the bottleneck. For example, by the shape-aware mechanism, SNAKE could consider the center of a flat table top to be a keypoint as it reflects the geometric center. In contrast, the descriptors of D3Feat near the center of a flat table top may not be discriminative enough for registration.", " We would like to thank R#ZdeR for the professional assessment. Here we respond to raised concerns one by one.\n\n---\n\n**Weakness and Q1**:\n> Missing comparison with UKPGAN in section 4.2. UKPGAN is a competitive baseline and its code is publicly available. \n> \n> L224 states that UKPGAN is not involved in the experiments due to the absence of pretrained model. Since their code is publicly available, I wonder if training from scratch is possible? If not, I wonder if it is possible to compare alternative datasets?\n\n**A1**:\n\nWe thank R#ZdeR for this suggestion. We have tried to train UKPGAN (official implementation) on the ModelNet40 and 3DMatch datasets from scratch but observed divergence under default hyper-parameters. The training always reports NaN losses in early epochs. This instability also implies limitations in implementing the idea of joint reconstruction and keypoint detection with GAN-based methods.\n\nHowever, we do agree that comparing repeatability (apart from semantic consistency and registration accuracy) between UKPGAN and SNAKE is necessary. As such, we provide a new experiment to compare their repeatability on the KeypointNet dataset, on which the UKPGAN provided a pre-trained model. We randomly perform SE(3) transformation on the test point clouds to generate the second view point clouds. Then, we select top-32 salient keypoints with NMS (radius=0.03) in each sample and show the keypoint repeatability under different distance thresholds $\\epsilon$, downsample rates, and Gaussian noise scales. 
The relative repeatability (%) results are summarized as follows:\n\nTable 1: Relative repeatability (%) with different distance thresholds $\\epsilon$ on the KeypointNet dataset.\n\n| $\\epsilon$ | 0.03 | 0.04 | 0.05 | 0.06 | 0.07 | 0.08 | 0.09 | 0.10 |\n| :--------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: |\n| UKPGAN | 0.199 | 0.322 | 0.454 | 0.564 | 0.661 | 0.741 | 0.810 | 0.864 |\n| Ours | **0.643** | **0.734** | **0.806** | **0.856** | **0.892** | **0.918** | **0.936** | **0.948** |\n\nTable 2: Relative repeatability (%) when input point clouds are disturbed ($\\epsilon$=0.03) on the KeypointNet dataset.\n\n| | Original | Down 4x | Down 8x | Noise std=0.02 | Noise std=0.03 |\n| :----: | :-------: | :-------: | :-------: | :------------: | :------------: |\n| UKPGAN | 0.199 | 0.570 | 0.427 | 0.608 | **0.558** |\n| Ours | **0.643** | **0.594** | **0.525** | **0.626** | 0.536 |\n\nThese two tables show that SNAKE achieves significant gains over UKPGAN in most cases. Interestingly, when the inputs are disturbed, the performance of UKPGAN increases rather than decreases. Via visualizing the results (see Fig.1 of the updated supplementary material), we find that when the input point clouds are disturbed, the keypoints predicted by UKPGAN are clustered in a small area, which improves the repeatability of keypoints but fails to cover the input uniformly. This illustrates that the GAN-based method adopted by UKPGAN to control the keypoint sparsity is not robust to input point cloud disturbance. The keypoints of ours still remain meaningful under the drastic changes of inputs.\nThese results are also updated in the revised supplementary material, as can be found in section C.1.", " The paper presents a novel unsupervised method SNAKE to detect 3D keypoints from point clouds based on implicit neural representations. The key idea is to combine shape reconstruction and keypoint detection during training. Experiments show that jointly learning 3D shapes and key points improves semantical consistency, better repeatability under disturbance, and accurate geometric registration under zero-shot settings. \n + Though the idea of combining reconstruction and saliency prediction is not new (UKPGAN also reconstructs shape), this paper takes advantage of implicit representation and shows advantages over the GAN-based method. The proposed method is simple but effective. The proposed four loss functions are intuitive and ablations show that they are important.\n+ The experiments are mostly thorough and show good qualitative and quantitative results compared to competitive baselines. \n+ The paper is well written and the figures are easy to understand. \n- Missing comparison with UKPGAN in section 4.2. UKPGAN is a competitive baseline and its code is publicly available. \n\nPost-rebuttal:\nMy final rating is weak accept. Thanks to the authors and reviewers for their effort. The rebuttal mostly answers my questions. I think the paper is novel and has enough difference from UKPGAN -- the method does not have a GAN and the experiment results are better. However, I do think D3Feat descriptor may not be the best way to evaluate the proposed method in experiments since it is designed for D3Feat detector and may not be discriminative enough for different feature detectors. 
Since the prior works are using this protocol, I won't criticize too much for following it.\n - L224 states that UKPGAN is not involved in the experiments due to the absence of a pretrained model. Since their code is publicly available, I wonder if training from scratch is possible? If not, I wonder if it is possible to compare on alternative datasets? \n- The registration experiments (sec 4.3) rely on D3Feat descriptors for the detected keypoints. I am aware that this descriptor is commonly used in the UKPGAN’s experiment. I am interested in understanding the bottleneck of the problem. Since D3Feat detector + D3Feat descriptor still serves as an upper bound in this experiment, I wonder whether it is possible that a certain detector requires specifically designed descriptors in order to work perfectly in the registration problem. The D3Feat descriptor may not provide the best features for SNAKE or UKPGAN. \n Yes. ", " The paper is well written, and the motivation and technical details are clearly presented. The idea of estimating a saliency field from sparse keypoints seems novel, and it is shown to be effective in producing repeatable and consistent keypoint detection results. Strength:\n\nThe paper is well written, and the motivation and technical details are clearly presented. The idea of estimating a saliency field from sparse keypoints seems novel, and it is shown to be effective in producing repeatable and consistent keypoint detection results.\n\nWeakness:\n\nKeypoint extraction at inference time requires iterative gradient descent and querying the implicitly defined saliency field. Thus the computational cost is high, and the method is not suitable for real-world applications in its current form.\n\n Additional visualization of the repeatability of keypoints under SE3 transformations as well as different sparsity of points can make the contribution of the paper clearer. The plots in Figure 5 alone are not visual enough to show the quality of the proposed method in terms of repeatability. Keypoint extraction during inference requires considerable computational cost, making it not suitable for real-time on-device applications.", " The paper presents an unsupervised method to predict 3D keypoints from point clouds. Several novel losses are proposed to enforce repeatability, Surface Constraint, and Sparsity. They also achieve superior performance on various public benchmarks. They also use two novel heads to model occupancy and saliency separately to better disentangle these two tasks and let them serve different functions independently. Strengths:\nI like the intuition of starting from continuous instead of discrete space. \nThe proposed architectures and losses are novel as far as I can see. \nThe presentation is very clear.\nIt achieves SOTA performance on several datasets.\nWeaknesses:\nLack of related work discussion: FULLY CONVOLUTIONAL MESH AUTOENCODER USING EFFICIENT SPATIALLY VARYING KERNELS\n\n Are there any other newer methods to compare with? The UKPGAN seems to be from the year 2020.\nCan you elaborate more on discrete space and continuous space?\nCan you elaborate more on the difference when compared with \"R2d2: Reliable and repeatable detector and descriptor\"? Yes", " The authors introduced a method to detect keypoints from point clouds. They leverage a deep learning model to learn an implicit function that maps each location to a probability of being a keypoint.
To make the method shape-aware, the authors use deep learning models to learn the kepoint field along with the learning of an occupancy field to explore whether the shape information can improve the keypoint detection. The contribution lies in the way of combining the learning of implicit fields and keypoint fields. The authors evaluate the effectiveness by comparing it with the latest methods under the widely used benchmark. Strengths:\n1. The visualization is good.\n2. The paper is easy to follow.\n \n \nWeaknesses:\n1. The motivation is not convincing at all.\n2. The experimental results cannot justify the effectiveness of the method.\n 1. First of all, I do not think it makes any sense to learn the probability of keypoints as a field. Keypoints should be located on surfaces, they are not floating in the space. Specifically, the authors aim to detect keypoints from input point clouds, the keypoints to be determined should be some of the input points. It should be much easier to learn the probability of keypoints among the discrete input points than in the continuous whole space.\n \n2. The motivation of improving the performance with surface reconstruction does not make any sense either. Surface reconstruction is a harder task than keypoint detection, since point clouds have been given as input which provides a range as a good constraint to find solutions for kepoint detection. We already know the structure of shapes represented by point clouds and why we have to learn an implicit function to reconstruct shapes. I believe it just makes this problem more complex to get resolved.\n \n3. The authors claim that the proposed method is a novel unsupervised method. This is also not correct, since this method requires ground truth occupancy information as a supervision.\n \n4. The proposed method is also not novel. UKPGAN has explored the feasibility of combining shape reconstruction and keypoint detection together, although UKPGAN reconstructs the input point cloud rather than learning an implicit function to represent the same shape. The minor difference here is what representation is used to represent 3D shapes.\n \n5. Learning occupancy field results in using an additional surface constraint. This term should not be there due to the learning of the occupancy field. Fig.2(b) may be confusing, I think the close to 0 line should be parallel to the x axis rather than y axis.\n \n6. The term of repeatability loss is hard to understand. Why do we need a term like this? For shapes, they are well aligned, why do we need to consider the rigid transformation? Another question is about the cosine similarity, the probabilities are scalars, right? Is there any reason to use cosine similarity to evaluate the difference between two numbers? \n \n7. The authors use occupancy probability to represent the inverse distance between the query and the input. I do not think it works here. Since occupancy probability always gets smaller from 1 to 0 when moving a query from inside to outside while the inverse distance gets smaller to larger before going across the surface, and then gets smaller after that.\n \n8. Explicit keypoint extraction via optimization is also an operation that I do not understand. An intuitive idea of extracting keypoints is to use points of the input point clouds as queries to predict the occupancy values since keypoints can only locate on the surface. Why do we have to extract keypoints via updating the locations of randomly sampled queries in the 3D space?\n \n9. 
In the visualization of saliency field slices, how to select the slices to visualize? \n \n10. How to determine the keypoint number? I noticed that the numbers of keypoints produced by different methods are different even in the same case, such as the visual comparison in Fig.6. Why not use the same number of points to perform the comparison?\n \n11. The reason why I do not think the reconstruction can improve the performance of keypoint detection is the results in Table 2. The results without surface reconstruction are better than the results with surface reconstruction. I do not think this is a good support for the argument.\n \n12. With the ground truth occupancy supervision, it is not a fair comparison with other methods.\n Yes, the authors addressed the limitations and potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 5 ]
[ "nips_2022_oWx_9VJgyV7", "WFiXaP9Q0Xz", "dcARkPno-5", "7IyN4BNcdYP", "SWUOBwOpXDT", "PDx5_h8F0Ns", "PDx5_h8F0Ns", "PDx5_h8F0Ns", "WFiXaP9Q0Xz", "dcARkPno-5", "OI37yz4nDUo", "C-c4H4cWr2x", "nips_2022_oWx_9VJgyV7", "nips_2022_oWx_9VJgyV7", "nips_2022_oWx_9VJgyV7", "nips_2022_oWx_9VJgyV7" ]
nips_2022_m7CmxlpHTiu
Self-Supervised Aggregation of Diverse Experts for Test-Agnostic Long-Tailed Recognition
Existing long-tailed recognition methods, aiming to train class-balanced models from long-tailed data, generally assume the models would be evaluated on the uniform test class distribution. However, practical test class distributions often violate this assumption (e.g., being either long-tailed or even inversely long-tailed), which may lead existing methods to fail in real applications. In this paper, we study a more practical yet challenging task, called test-agnostic long-tailed recognition, where the training class distribution is long-tailed while the test class distribution is agnostic and not necessarily uniform. In addition to the issue of class imbalance, this task poses another challenge: the class distribution shift between the training and test data is unknown. To tackle this task, we propose a novel approach, called Self-supervised Aggregation of Diverse Experts, which consists of two strategies: (i) a new skill-diverse expert learning strategy that trains multiple experts from a single and stationary long-tailed dataset to separately handle different class distributions; (ii) a novel test-time expert aggregation strategy that leverages self-supervision to aggregate the learned multiple experts for handling unknown test class distributions. We theoretically show that our self-supervised strategy has a provable ability to simulate test-agnostic class distributions. Promising empirical results demonstrate the effectiveness of our method on both vanilla and test-agnostic long-tailed recognition. Source code is available in the supplementary material.
Accept
The reviewers agreed the paper provides some nice insights into tackling the difficult and under-explored problem of test-agnostic long-tailed recognition. The reviewers appreciated the thorough experiments and ablations provided. The author response sufficiently addressed the key concerns the reviewers had.
train
[ "NcbgvC_6Kr-", "bRCnqyrFlbo", "zV3mhhcxq37", "srMXzOosNBf", "UBvffFZIxJxK", "5IcsTqXEDK-", "EzZ4q0kP8F", "dRrZDKkq8Gl", "7pvsNHBytc", "GpK7cIjIc9", "jN63a45nMz9", "OQJbmFlT1DJ", "CjoqrvlidJx", "hmtLGfyuyP-" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate your time and effort in reviewing our paper. Based on your constructive suggestions, the work has become more solid and thorough now.", " Thank the authors for addressing some of my concerns. To reflect the authors' efforts, I will change my score to 5.", " Thanks very much for the response. We have tried our best to address all the mentioned concerns. Following your constructive suggestions, our work has become more solid and thorough now. Could you please kindly re-evaluate the work based on the current version? We would like to know whether there is any remaining question that we can resolve.", " I would like to thank the authors for addressing my concerns and conducting additional experiments during the short rebuttal period. I will keep my rating as \"Borderline accept\".", " We sincerely appreciate all reviewers‘ time and effort in reviewing our paper and providing constructive feedback. Besides the response to each reviewer, here we would like to further 1) thank reviewers for their recognition of our work, 2) highlight the new results added during the rebuttal, and 3) highlight the revision in the revised paper:\n\n**1) We are glad that the reviewers appreciate and recognize our contributions.**\n\n* The proposed test-agnostic long-tailed recognition setting is challenging, interesting, and of great practical significance [eBs7,rg9d]\n* The proposed test-time aggregation strategy is interesting and has proven to be useful [eBs7]\n* The experiments and ablation studies are comprehensive, convincing, and thorough. [uSnn,eBs7,rg9d] \n* This paper is well-written and easy to follow. [uSnn,eBs7,rg9d]\n \n\n**2) In the rebuttal, we have provided more supporting results following the reviewers’ suggestions.**\n\n* Performance of our test-time self-supervised strategy on streaming test data [uSnn,rg9d]\n* Performance of our test-time self-supervised strategy on Forward-LT test class distributions with higher imbalance ratios [uSnn]\n* Performance of using MC-Dropout as a metric for test-time expert aggregation [uSnn]\n* Comparison results between model accuracy and the number of shared blocks between experts [eBs7]\n* Comparison with the optimal expert weights searched by Grid Search [rg9d] \n* Actual average additional time of test-time aggregation per sample [rg9d]\n\n**3) We make the following modifications in our revision to address reviewers' questions (highlighted in blue).**\n\n* We further clarify that our self-supervised aggregation strategy can be conducted in an online manner for streaming test data, and add corresponding empirical verification [uSnn,rg9d]\n* We add more implementation details [uSnn,eBs7] \n* We add more explanations for Appendix E.1 [eBs7] \n \n\n\nSince the discussion period will end in a few days, we would like to know whether there is any remaining question that we can resolve. We look forward to your response.", " Thank the reviewer for the constructive comments, particularly for recognizing the studied problem is interesting and under-explored. We address the concerns point by point as follows.\n\n---\n\n**Q1. Concern on the technical significance** \n\nWe see your point, but we are afraid that we cannot agree with the comment on technical significance based on the following facts.\n\n**(Q1-1) \"The technical significance is not enough.\"** \n\nThe test-agnostic LT problem we attempt to address is highly challenging (as recognized by Reviewer eBs7). 
To the best of our knowledge, there is no existing feasible solution so far --- previous methods either assume the test class distribution to be fixed as uniform, or the imbalanced test class distribution is known *a priori*. **Our SADE is the first feasible approach to solve this problem and achieve superior performance** (cf. Table 5), which we believe is already a nontrivial technical contribution.\n\n\n**(Q1-2) \"The skill-diverse expert learning does not make new contribution to the field because multiple experts have been used in existing literature, e.g., [1-3].\"** \n\nThis is not true. **Our proposed inverse softmax loss (cf. Eq.3) is new to the community**. Despite its simplicity, it effectively increases expert diversity, which leads to higher ensemble performance (cf. Table 6), and enables our model to cover inverse LT class distributions, making it a core component for solving the challenging test-agnostic LT problem. \n\n**Existing multi-expert methods are not directly applicable**. Simply training and aggregating multiple experts [1,2,3] cannot handle the challenge of unknown class distribution shifts. As shown in the following table, when the test class distribution varies, the performance distribution of the multi-expert model learned by RIDE [2] in terms of many-, medium- and few-shot classes remains essentially unchanged, suggesting that the method cannot adapt to different test class distributions. \n\n\n| Test distribution | Method | Many | Medium | Few | All classes |\n| --------------------- | --------- | :--: | :----: | :--: | :--: |\n| Forward-LT-50 | RIDE | 68.3 | 51.6 | 36.8 | 67.6 |\n| Forward-LT-10 | RIDE | 68.9 | 52.9 | 38.5 | 64.0 |\n| Uniform | RIDE | 68.0 | 52.9 | 35.1 | 56.3 |\n| Backward-LT-10 | RIDE | 68.3 | 53.3 | 36.2 | 48.7 |\n| Backward-LT-50 | RIDE | 70.8 | 52.5 | 36.1 | 44.0 |\n| | | | | | |\n| Forward-LT-50 | SADE (ours) | 70.0 | 53.2 | 33.1 | 69.4 (+1.8) |\n| Forward-LT-10 | SADE (ours) | 69.9 | 54.3 | 34.7 | 65.4 (+1.4) |\n| Uniform | SADE (ours) | 66.5 | 57.0 | 43.5 | 58.8 (+2.5) |\n| Backward-LT-10 | SADE (ours) | 60.9 | 57.5 | 50.1 | 54.5 (+5.8) |\n| Backward-LT-50 | SADE (ours) | 60.7 | 56.2 | 50.7 | 53.1 (+9.1) |\n\n\n[1] Learning from multiple experts: Self-paced knowledge distillation for long-tailed classification. In ECCV, 2020.\n\n[2] Long-tailed recognition by routing diverse distribution-aware experts. In ICLR, 2021.\n\n[3] Cross-Domain Empirical Risk Minimization for Unbiased Long-tailed Classification. In AAAI, 2022.\n\n**(Q1-3) \"The Test-time Self-supervised Aggregation is simply a re-weighting of three models. The idea of aggregating multiple diverse models was also explored in [3].\"** \n\n\nOur test-time self-supervised strategy is not a simple expert aggregation strategy. To the best of our knowledge, **maximizing prediction consistency between unlabeled test data's perturbed views for expert aggregation is new**. Such a strategy is well-motivated (cf. Table 2), theoretically guaranteed (cf. Theorem 1), and empirically effective (cf. Tables 5&8) in handling unknown class distribution shifts. The novelty of this strategy has been highly recognized by Reviewer eBs7 (\"*The proposed test-time aggregation strategy is interesting and has proven to be useful*\").\n\nIn addition, please note that existing LT methods for aggregating multiple experts are unable to tackle test-agnostic LT. For example, the mentioned method [3] uses **ground-truth labels to compute the weights for different experts**.
However, when facing test-agnostic LT, this method does not make sense anymore, because the ground-truth labels are unavailable at test time.\n\n \nIn light of the above technical innovations we have introduced, it is not fair to criticize the technical significance of our method just because of its simplicity.", " **Q2. Can the Test-time Self-supervised Aggregation learn optimal weights?**\n\nIt is hard to theoretically prove that our method can find the global optimum, but it performs well in our experiments (cf. Tables 7-10). In addition, **we also empirically find that the solution of our method is close to the optimum**. Specifically, we conduct grid search to find optimal weights, where the values of the three weights are selected from [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9] and the sum of them is 1. The obtained optimal weights by grid search and the corresponding model performance on ImageNet-LT are reported in the following table. We find that our self-supervised strategy is able to obtain near-optimal weights and model performance compared to the results of the grid search, which further demonstrates the effectiveness of our method. Moreover, analyzing the theoretical optimum is beyond the scope of this paper, which we thus leave as future work. \n\n| Test Distributions | Method | Weight of Expert 1 | Weight of Expert 2 | Weight of Expert 3 | Performance |\n| ---------------- | ----------------------- | :----------------: | :----------------: | :----------------: | :----------------: |\n| Forward-LT-50 | SADE (ours) | 0.46 | 0.35 | 0.13 | 69.4 |\n| Forward-LT-50 | **Grid search** (optimal) | 0.50 | 0.40 | 0.10 | 69.8 |\n| | | | | | |\n| Forward-LT-10 | SADE (ours) | 0.46 | 0.36 | 0.18 | 65.4 |\n| Forward-LT-10 | **Grid search** (optimal) | 0.50 | 0.40 | 0.10 | 65.7 |\n| | | | | | |\n| Backward-LT-10 | SADE (ours) | 0.21 | 0.29 | 0.50 | 54.5 |\n| Backward-LT-10 | **Grid search** (optimal) | 0.20 | 0.30 | 0.50 | 54.7 |\n| | | | | | |\n| Backward-LT-50 | SADE (ours) | 0.17 | 0.27 | 0.56 | 53.1 |\n| Backward-LT-50 | **Grid search** (optimal) | 0.10 | 0.30 | 0.60 | 53.5 |\n\n---\n\n**Q3. Does the proposed test-time strategy have to require all test data to be available in advance?**\n\n \nIndeed not. After submission, we also tried to apply our method for streaming test data, and found that **our test-time self-supervised strategy also works well in an online manner**, and **does not require access to all the test data in advance**. More specifically, as shown in the following table, our test-time strategy performs well on the *streaming test data* of ImageNet-LT. Even when the test data come in one by one, our test-time self-supervised strategy still outperforms the SOTA baseline (i.e., offline Tent [2]) by a large margin, with the same multi-expert model. \n\nIn the revised paper, we have clarified in Section 4.2 that our test-time strategy can be conducted in an online manner for streaming test data, and also added the following new results to Appendix F.6. 
The source code of our online strategy will be released.\n\n| Test-time Strategy | Forward-LT-50 | Forward-LT-5 | Backward-LT-5 | Backward-LT-50 |\n| ----------------------------------- | :-----------: | :----------: | :-----------: | :------------: |\n| No test-time adaptation | 65.5 | 62.0 | 54.7 | 49.8 |\n| Entropy minimization with Tent [2] | 68.0 | 62.8 | 53.2 | 45.7 |\n| offline SADE (reported in paper) | 69.4 | 63.0 | 55.5 | 53.1 |\n| online SADE with batch size 64 | 69.5 | 63.6 | 55.8 | 53.1 |\n| online SADE with batch size 8 | 69.8 | 63.0 | 55.4 | 53.0 |\n| online SADE with batch size 1 | 69.0 | 62.8 | 55.2 | 52.8 |\n\n\n[4] Tent: Fully test-time adaptation by entropy minimization. In ICLR, 2021.", " **Q4. Concern on incurring more computational cost by our test-time strategy**\n\nAs the first feasible method to handle the challenging task of test-agnostic LT, we believe that incurring a little additional computational cost is acceptable. This is also supported by Reviewer eBs7 (\"*The problem of computation complexity seems tolerable since it is a quite challenging problem after all*\"). Moreover, as clarified in the above answer to Q3, our test-time self-supervised strategy can be conducted in an online manner, which is efficient in practice. In fact, the actual average additional time is *only 0.009 seconds per sample* at test time on V100 GPUs. In the future, we will further extend the proposed method for better computational efficiency, e.g., exploring dynamic network routing. \n\n---\n\n**Q5. Why does this paper choose Balanced softmax and a variant logit adjustment loss?**\n\nWe aim to learn multiple experts with diverse skills that excel at handling different class distributions, while developing more sophisticated learning methods for training each expert is not our focus. Therefore, **our first principle is to utilize the representative loss functions which have been proved simple and effective previously**. Promising results (cf. Tables 6&18) also confirmed the effectiveness of our strategy on the loss design in learning skill-diverse experts.\n\n---\n\n**Q6. How should we decide the number of experts?**\n\nWe set the number of experts according to the degree of whether the skill-diverse experts cover the potential Forward-LT, Uniform, and Backward-LT class distributions. Through our experiments, we found that **three experts are sufficient to handle varied unknown test class distribution** (cf. Tables 5-6), while further adding additional experts does not lead to significant performance gains (cf. Appendix E.1). \n\n\n---\n\n**Q7. Why should we use the proposed method instead of existing long-tailed methods such as RIDE?**\n\nCompared to existing LT methods like RIDE, our SADE method offers the following advantages:\n\n* Empirical superiority: **SADE consistently outperforms existing LT methods (like RIDE) on various test class distributions** (cf. Table 5), including Forward LT, Uniform, and Backward LT class distributions. Taking ImageNet-LT as an example, compared to RIDE, our SADE achieves *1.8%* accuracy improvement on the Forward-LT-50 test distribution, *2.5%* accuracy gain on the uniform test distribution, and *9.1%* gain on the Backward-LT-50 test distribution. The empirical superiority of our method has been highly recognized by both Reviewer uSnn (\"*achieved consistent performance improvements on all these benchmarks and testing scenarios*\") and Reviewer eBs7 (\"*The experiments and ablation studies are comprehensive and convincing*\"). 
\n\n* Theoretical aspect: **our method has a provable ability to simulate test-agnostic class distributions, while RIDE does not enjoy any theoretical guarantees**. This has also been recognized by Reviewer eBs7 (\"*The proposed test-time aggregation strategy is interesting and has proven to be useful*\"). \n\n\nBecause of the above superiority, our method provides a better solution to handle real-world long-tailed applications, where the test class distribution may follow any kind of class distribution.\n\n---\n\n**Q8**. Typo in Table 9: Thanks for pointing this out. We have corrected the typo in the revision.\n\n\n\nThanks again for your constructive comments. We welcome and are happy to discuss any further questions you may have.", " Thanks a lot for your valuable comments. We are glad to see that the thorough experiments and good writing of this paper are appreciated. We address your concerns point by point as follows.\n\n---\n \n**Q1. Concern on the rationality of test-agnostic LT**\n\n**Please note that test-agnostic LT is not an entirely new task**. As stated in Lines 37-38, it has been explored by LADE [1] (published in CVPR 2021) where the test class distribution can be Forward LT, Uniform, or Backward LT distributions. However, LADE assumes that the test class distribution is known *a priori*, which does not hold in realistic applications. This makes LADE less applicable in practice. \n\n**Our main contribution is to eliminate such a restrictive assumption made by LADE and explore test-agnostic LT under the setting of unknown test class distributions**. Hence, the developed solutions would have more practical value, and may motivate future LT research. The value of our task has been highly recognized by both Reviewer eBs7 \"*The proposed test-agnostic setting is challenging and of great practical significance*\" and Reviewer rg9d \"*The studies problem is interesting and under-explored*\". \n\nIn addition, an example of backward class distributions in realistic scenarios is autonomous driving, which has been described in Lines 32-36 of our submission.\n\n\n[1] Disentangling label distribution for long-tailed visual recognition. In CVPR, 2021.\n\n---\n\n**Q2. The performance improvement on Forward-LT and Uniform is not significant**\n\nThanks for recognizing \"*the proposed method achieved consistent performance improvements on all benchmarks and testing scenarios*\". Please note that **improving the model performance for ALL kinds of test class distributions is nontrivial, since tackling unknown class distribution shifts between training and test data is highly challenging**. To the best of our knowledge, no existing long-tailed method can address this problem, and our work provides the first feasible solution (cf. Tables 5&8-10). \n\nBesides, the forward-LT and uniform settings have been extensively studied. Even compared with the strong SOTA methods like RIDE and LADE, our SADE method can still bring at least *2.5%* (for uniform) and *1.8%* (for forward-LT-50) accuracy improvement, which we believe is significant w.r.t. such strong baselines.\n \nWe would like to highlight that our method is advantageous in handling large class distribution shifts between the training and test data, as mentioned in Lines 305-306 (\"*the performance advantages of SADE become larger as the test data get more imbalanced*\"). 
For the forward-LT setting, when the test imbalance ratio becomes more severe, SADE demonstrates larger performance gains, as shown in the following table.\n\n\n| Method | Forward-LT-500 | Forward-LT-300 | Forward-LT-200 | Forward-LT-100 | Forward-LT-50 |\n| --------- | :------------: | :------------: | :------------: | :------------: | :-----------: |\n| RIDE | 70.2 | 69.8 | 69.0 | 68.6 | 67.6 |\n| SADE (ours) | 74.3 (+4.1) | 73.9 (+4.1) | 72.7 (+3.7) | 72.2 (+3.6) | 69.4 (+1.8) |\n", " **Q3. Does the proposed test-time strategy have to use all test data for test-time training?**\n\nIndeed not. After submission, we also tried to apply our method for streaming test data, and found that **our test-time self-supervised strategy also works well in an online manner**, and **does not require access to all the test data in advance**. More specifically, as shown in the following table, our test-time strategy performs well on the *streaming test data* of ImageNet-LT. Even when the test data come in one by one, our test-time self-supervised strategy still outperforms the SOTA baseline (i.e., offline Tent [2]) by a large margin, with the same multi-expert model. \n\nIn the revised paper, we have clarified in Section 4.2 that our test-time strategy can be conducted in an online manner for streaming test data, and also added the following new results to Appendix F.6. The source code of our online strategy will be released.\n\n| Test-time Strategy | Forward-LT-50 | Forward-LT-5 | Backward-LT-5 | Backward-LT-50 |\n| ----------------------------------- | :-----------: | :----------: | :-----------: | :------------: |\n| No test-time adaptation | 65.5 | 62.0 | 54.7 | 49.8 |\n| Entropy minimization with Tent [2] | 68.0 | 62.8 | 53.2 | 45.7 |\n| offline SADE (reported in paper) | 69.4 | 63.0 | 55.5 | 53.1 |\n| online SADE with batch size 64 | 69.5 | 63.6 | 55.8 | 53.1 |\n| online SADE with batch size 8 | 69.8 | 63.0 | 55.4 | 53.0 |\n| online SADE with batch size 1 | 69.0 | 62.8 | 55.2 | 52.8 |\n\n[2] Tent: Fully test-time adaptation by entropy minimization. ICLR, 2021.\n\n---\n\n**Q4. Implementation of RIDE and the fairness of comparisons**\n\nWe used the same setup for all the baselines and our method, where **RIDE was also trained for 200 epochs on ImageNet-LT** based on its official code (https://github.com/frank-xwang/RIDE-LongTailRecognition). Thus the empirical comparisons between RIDE and our method are fair. The following table shows the reproduced results of RIDE based on 100 and 200 training epochs on ImageNet-LT. \n\n| Methods | Many-shot | Medium-shot | Few-shot | All classes |\n| :---------------: | :-------: | :---------: | :------: | :--: |\n| RIDE - 100 epochs | 67.4 | 52.5 | 34.5 | 55.8 |\n| RIDE - 200 epochs | 68.0 | 52.9 | 35.1 | 56.3 |\n\n---\n\n**Q5. Exploration on MC-Dropout**\n\nThanks for your constructive suggestion. We did not consider MC-Dropout for expert aggregation before. Following your suggestion, we further explore MC-Dropout to estimate the uncertainty of the trained experts and use the uncertainty as a metric to aggregate experts at test time. The results based on three experts on ImageNet-LT are shown in the following table. We find that **using MC-Dropout as an aggregation metric provides reasonable performance, but still worse than our self-supervised aggregation strategy, particularly when the test imbalance ratio is large**. Such a result further demonstrates the superiority of our method. 
We agree that exploring model uncertainty to aggregate experts is interesting, and will explore it in the future.\n\n\n\n| Test-time Strategy | Forward-LT-50 | Forward-LT-25 | Uniform | Backward-LT-25 | Backward-LT-50 |\n| :----------------: | :-----------: | :-----------: | :------: | :------------: | :------------: |\n| No | 65.5 | 64.4 | **58.8** | 51.5 | 49.8 |\n| MC-Dropout | 67.8 | 66.4 | 58.7 | 52.2 | 51.0 |\n| SADE (ours) | **69.4** | **67.4** | **58.8** | **53.7** | **53.1** |\n\n\n\n\nWe welcome and are happy to discuss any further questions.", " Thank you very much for your encouraging comments on our paper, particularly for recognizing the value of the studied problem and our proposed method. We hope that our work can motivate more future long-tailed studies to tackle this practical yet challenging problem, i.e., unknown test class distribution shifts. We address your questions as follows.\n\n---\n\n**Q1. Trade-off between computation complexity and performance. Whether is it a near-linear relationship between higher accuracy and experts with fewer shared modules?**\n\nWe are glad to see the reviewer agrees that the additional computation complexity is acceptable since our studied problem is challenging. As mentioned in Appendix C.3 (Lines 634-640), we also made efforts to reduce the computational costs by sharing the majority of the model backbone between experts. We only set the top network blocks of ResNet/ResNeXt and the classifier as independent components of each expert, and reduce their number of convolutional channels by 1/4. We found this design provides a good computation-performance trade-off. \n\nHere, we further evaluate the relationship between the number of shared model blocks and model performance based on ImageNet-LT under the same hyper-parameter setting. As shown in the following table, **the relationship between the number of the shared blocks and model accuracy is not near linear**. **Sharing two blocks is already a good trade-off** between model accuracy and total computational complexity (in terms of MACs) on ImageNet-LT.\n \n\n| Model | MACs (G) | Forward-LT-50 | Forward-LT-25 | Uniform | Backward-LT-25 | Backward-LT-50 |\n| ------------------- | :-------------: | :-----------: | :-----------: | :-----: | :------------: | :------------: |\n| Share all blocks | 3.29 | 65.9 | 64.0 | 52.9 | 49.7 | 49.8 |\n| Share first 3 blocks | 4.27 | 69.0 | 67.0 | 58.0 | 53.1 | 52.5 |\n| Share first 2 blocks (ours) | 6.08 | 69.4 | 67.4 | 58.8 | 53.7 | 53.1 |\n| Share first 1 block | 8.33 | 69.2 | 67.7 | 59.0 | 53.9 | 53.4 |\n| Share 0 block | 9.64 | 68.9 | 66.9 | 58.9 | 53.4 | 52.9 |\n\n\n\n\n---\n\n**Q2. How exactly are the expertise-guided loss functions changed for more experts in Appendix E.1 (Line 762)?**\n\nIn the experiments of having more experts in Appendix E.1, **we adjusted the hyper-parameter $\\lambda$ in Eq. (3) for new experts, while keeping the hyper-parameters of the original three experts unchanged**. Specifically, when there are four experts, we set $\\lambda=1$ for the new expert; while when there are five experts, we set $\\lambda=0.5$ and $\\lambda=1$ for the two newly-added experts, respectively. In the revised paper, we have clarified this in Appendix E.1. 
\n\n\n\nWe are glad to discuss any further questions you may have.", " The aim of this paper is to develop a mixture-of-expert (MOE) model to solve the test-agnostic long-tailed recognition problem, where the test class distribution may follow a uniform, forward or backward long-tailed distribution. The method is developed on the basis of RIDE with three experts and consists of two strategies. At training time, SADE utilizes skill-diverse expert learning strategies that require each expert to handle a different class distribution in order to solve distribution-agnostic long-tailed recognition problems. At test time, SADE utilizes a test-time expert aggregation strategy, which is based on a self-supervised learning approach, to determine expert aggregation methods that handle unknown class distributions. Experiments were conducted on various test-time training strategies dealing with class distribution transfer. SADE achieves state-of-the-art performance on multiple long-tailed datasets, including CIFAR100-LT, ImageNet-LT, Places-LT and iNaturalist 2018. Strengths:\n- Evaluating on Forward-LT and Uniform test-class distributions can help us better understanding the performance of various long-tailed algorithms on different testing scenarios. It is shown that SADE can achieve SOTA performance on all these testing distributions.\n- This paper is well-written and easy to follow. \n- Authors conducted thorough experiments on various benchmarks and testing scenarios and achieved consistent performance improvements on all these benchmarks. \n\nFor weaknesses, my main concerns are twofold:\n- The biggest improvement comes from test data with Backward-LT distributions, however, I find it hard to believe that backward class distributions are common in real-world applications. The many-shot (few-shot) classes in the training data are often the many-shot (few-shot) classes in the testing data as well. Thus, Forward-LT and Uniform test class distributions make more sense, however, SADE achieves marginal improvements in these two testing cases. \n- The test-time self-supervised aggregation strategy requires the model to see all test data (unlabeled) before deployment, however, in real-world applications we more commonly see only 1 test image. This part is similar to my first concern, which is whether this setup is a practical setup that can be used in real-world applications. I have a few questions on the fairness of the result comparisons and the setting of the test-time adaptation:\n- When comparing with the current SOTA method RIDE [46], the results of RIDE (https://github.com/frank-xwang/RIDE-LongTailRecognition/blob/main/MODEL_ZOO.md) is actually achieved by training the model for 100 epochs. However, the results reported in the paper is achieved by training the model for 200 epochs. Therefore, I am concerned about the fairness of the comparisons. \n- For test-time adaptation, have you tried using MC-Dropout [1] as a metric for expert aggregation? You can get the uncertainty of each expert and decide the weight of each expert based on the uncertainty. Does it save you from using all test data for expert aggregation? I think it might help if MC-Dropout is used with a strong data augmentation to produce different inputs.\n\n[1] Gal, Yarin, and Zoubin Ghahramani. \"Dropout as a bayesian approximation: Representing model uncertainty in deep learning.\" international conference on machine learning. PMLR, 2016. 
Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work. The long-tail recognition algorithm aims to alleviate the problem of ignoring underrepresented minorities, which is necessary to obtain an unbiased model and facilitate the development of CNN models for social justice.", " This paper extends the conventional long-tailed learning to the \"test-agnostic one\", in which the model trained on a long-tailed class distribution should generalize to arbitrary testing distribution not necessarily being uniform. To handle such a problem, this paper proposes a novel method consisting of two modules: (1) diverse experts with different class expertise and (2) a self-supervised test-time weighting strategy that adaptively aggregates the experts to tackle unknown testing distribution. Extensive experiments have shown the efficacy of the proposed method on handling arbitrary class distributions in testing.\n ## Strengths\n- The proposed test-agnostic setting is challenging and of great practical significance.\n- The paper is well-written and easy to follow.\n- The proposed test-time aggregation strategy is interesting and has proven to be useful.\n- The experiments and ablation studies are comprehensive and convincing.\n\n## Weaknesses\n- The weakness is mainly focused on the computation complexity. As mentioned in the paper, the three experts are independent in ResNet blocks (later stages) and fully-connected layers. Though it seems tolerable since it is a quite challenging problem after all, have the authors explored the trade-off between accuracy and complexity? For instance, is it a near-linear relationship between higher accuracy and experts with fewer shared modules? I would like to see how far can it go at the two extreme points: (1) when nothing is shared between experts and (2) everything is shared except the fully-connected layers.\n Line 762: how exactly is the expertise-guided loss functions changed to suit different types of distributions?\n The limitations are carefully discussed in the paper, which mainly encompass the extensibility to different tasks and the model complexity of the proposed method.\n", " This paper studies an interesting problem in long-tailed recognition, i.e., the training class distribution is long-tailed while the test class distribution is agnostic rather than a uniform distribution as assumed in previous works. To deal with the problem, this paper proposes a new approach that outperforms existing methods in both vanilla and test-agnostic long-tailed recognition settings. ### Strength\n\n1. The studies problem is interesting and under-explored.\n2. Extensive experiments are conducted to justify the effectiveness of the proposed method.\n3. The writing is clear and easy to understand.\n\n## Weakness\n\n1. The technical significance is not enough. Specifically, there are two aspects. First, **the** **skill-diverse expert learning** does not make new contribution to the field because multiple experts have been used in many existing literature, e.g., [1-3]. Moreover, the idea of aggregating multiple diverse models was also explored in [3] though the studied problem is different. Second, **the** **Test-time Self-supervised Aggregation** simply a re-weighting of three models. The key contribution might be the prediction stability maximization, but optimizing this objective does not ensure to obtain the optimal weights. \n2. 
This paper only considers the transductive setting where the entire test data are accessible at once. However, in many applications, the assumption is not satisfied.\n3. This paper incurs more computational cost than previous methods. The Test-time Self-supervised Aggregation has to be performed at each test time.\n4. nitpick: some bold numbers in Table 9 are not the best results.\n\n[1] Learning from multiple experts: Self-paced knowledge distillation for long-tailed classification\n\n[2] Long-tailed recognition by routing diverse distribution-aware experts\n\n[3] Cross-Domain Empirical Risk Minimization for Unbiased Long-tailed Classification 1. Why does this paper choose Balanced softmax and a variant logit adjustment loss?\n2. How should we decide the number of experts?\n3. Is the classifier re-weighting strategy optimal? And can the Test-time Self-supervised Aggregation learn optimal weights?\n4. Why should we use the proposed method instead of existing long-tailed methods such as RIDE? The performance improvement is not significant and the proposed method incurs additional computational costs. Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ "bRCnqyrFlbo", "dRrZDKkq8Gl", "srMXzOosNBf", "GpK7cIjIc9", "nips_2022_m7CmxlpHTiu", "hmtLGfyuyP-", "hmtLGfyuyP-", "hmtLGfyuyP-", "OQJbmFlT1DJ", "OQJbmFlT1DJ", "CjoqrvlidJx", "nips_2022_m7CmxlpHTiu", "nips_2022_m7CmxlpHTiu", "nips_2022_m7CmxlpHTiu" ]
nips_2022_A6O79ipjlJC
A Novel Matrix-Encoding Method for Privacy-Preserving Neural Networks (Inference)
In this work, we present a novel matrix-encoding method that is particularly convenient for neural networks to make predictions in a privacy-preserving manner using homomorphic encryption. Based on this encoding method, we implement a convolutional neural network for handwritten image classification over encryption. To homomorphically multiply two matrices A and B, the main idea, in a simple version, is to encrypt matrix A and the transpose of matrix B into two ciphertexts respectively. With additional operations, the homomorphic matrix multiplication can then be calculated over the encrypted matrices efficiently. For the convolution operation, we expand each convolution kernel in advance to a matrix space of the same size as the input image so as to generate several ciphertexts, each of which is later used together with the ciphertext encrypting the input images to calculate some of the final convolution results. We accumulate all these intermediate results and thus complete the convolution operation. In a public cloud with 40 vCPUs, our convolutional neural network implementation on the MNIST testing dataset takes ~287 seconds to compute the ten likelihoods of 32 encrypted images of size 28 x 28 simultaneously. The data owner only needs to upload one ciphertext (~19.8 MB) encrypting these 32 images to the public cloud.
Reject
The reviewers were unanimous in their recommendation to reject the paper. The authors responded to the reviews but recognized the limitations of their submission, particularly in terms of missing comparisons to related work. I want to take this opportunity to address the author who wrote in their rebuttal: *"I will see the papers that referred to Gazelle if I can restart my Ph.D. study again. I hope so."* I sincerely hope you have the chance to restart your Ph.D. program and continue your research. The conference review process can be daunting -- yet it is an important step in pushing our field forward. In the case of your paper, the reviewers appreciated your ideas and the quality of the presentation, describing it as clear and easy to understand. What was missing was a more comprehensive comparison with related work -- a common misstep, even for seasoned researchers. Please **do not** let this discourage you from engaging in research. In fact, I hope this experience demonstrates the value of peer review, serves as a learning experience, and helps you write a better paper. I look forward to crossing paths with your work again.
train
[ "v0V4gEkiRsh", "zrDqENROuWF", "6WD5lYSvOhC", "IWIlYkbJ58", "hY50ZRw-qLP", "4oDwB3PiUJ0", "6sdM6jZfx_k", "K5yLl2hdMSy", "GTvm9knzZfV", "0ZlkDuOCWCy", "LqA5Eqf37kb", "zOvaqO_U-LJ" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have noticed that this paper lacks a comparison with previous methods. \n\nAnd I am very grateful for the time you and other reviewers spent reading my work.\n\nI hope this work doesn't waste your time.", " I don't think I have read the paper on GAZELLE or the one about MiniONN (both look familiar to me, though).\n\nI just read four or five papers based on only HE technique and came up with the basic ideas behind this work. Jiang et al.'s work gave me a lot of inspiration and thinking.\n\nIndeed, I did few investigations for prior works.\n\nI will see the papers that referred to Gazelle if I can restart my Ph.D. study again. I hope so.", " I agree that the proposed method provides parallel algorithms which make the inference more efficient. However, to be approved of the novelty of the efficiency, as I mentioned previously, a comparison with the previous method is necessary. In my opinion, if the quantitative comparison with other previous work is hard to be done at this moment, the paper should be submitted later after the comparison is done. (e.g. runtime.) The authors should analyze or set a fair experiment to compare with other previous works.", " I see the point of the authors. However, the main problem of this work is the lack of sufficient investigations for prior works. The main contribution of Jiang et al.'s work is the general matrix multiplication on the homomorphic encryption scheme, not the convolution operation itself. There are so many prior works for the convolution operation on the homomorphic encryption schemes. For example, did you see the following paper?\n\nJuvekar, Chiraag, Vinod Vaikuntanathan, and Anantha Chandrakasan. \"GAZELLE: A low latency framework for secure neural network inference.\" 27th USENIX Security Symposium (USENIX Security 18). 2018.\n\nThis is not even the recent work about the convolution operation on homomorphic encryption, and there have been many prior works based on this work. See the papers that referred to Gazelle. The authors have to consider these works.", " We would like to thank the reviewers for their input and appreciate their comments.\n\n\nSince $F$ only contains public information, so a constant-ciphertext multiplication would be more efficient. After we completed the HE programming and paper writing, we did find that it was unnecessary to encrypt the filter matrix $F$. But it was too late to update the source code and paper.\n\nThere are three steps to implement SumForConv in terms of rotations and element-wise operations. For example, given a ciphertext $ct_{0}$ encrypting an $h \\times w$ image and the kernel size is $kh \\times kw$, we could do the following steps: Step 1. we perform a series of the incomplete column shifting on the ciphertext $ct_{0}$, obtaining a new ciphertext $ct_{1}$. Namely, $\\texttt{Rot}$($ct_{0}$, $0$) $\\oplus$ $\\texttt{Rot}$($ct_{0}$, $1$) $\\oplus \\cdots $ $\\oplus \\texttt{Rot}$($ct_{0}$, $kw - 1$) $=$ $ct_{1}$. \nStep 2. we then perform a series of the row shifting on the ciphertext $ct_{1}$, obtaining a new ciphertext $ct_{2}$. Namely, $\\texttt{Rot}$($ct_{1}$, $0 \\times w$) $\\oplus$ $\\texttt{Rot}$($ct_{1}$, $1 \\times w$) $\\oplus \\cdots $ $\\oplus \\texttt{Rot}$($ct_{1}$, $(kh - 1) \\times w$) $=$ $ct_{2}$. Now, the ciphertext $ct_{2}$ has already encrypted the information we want, but with some garbage information. 
In the final step, we only need to design a special constant vector and perform a public-private multiplication to filter out the garbage information, obtaining the ciphertext we desire. \n\nMore comparisons with state-of-the-art work would be made in our future submissions to other conferences or journals. We would like to thank reviewer rzzf for this great suggestion.\n", " We would like to thank the reviewers for their input and appreciate their comments.\n\nBefore and after each layer such as the CNN layer, the message size is reduced and the encoding representation is destroyed. Therefore, the message packed in the ciphertext needs to be reconstructed for future use as the input ciphertexts to the next layer.\nTaking the CNN layer in our implementation as an example, after our convolution operation algorithm is finished (our impractical Algorithm 2 running on several simulation virtual ciphertexts), we actually need to do some following work to reconstruct the output ciphertexts to the same encoding representation as the input ciphertexts, which needs about $h$ rotations for an image matrix with $h$ rows. If stride two or higher is adopted, the reconstructing process needs more rotations, about $h \\times w$ for an image of size $h \\times w$. That is why we don't favor using stride two or higher in our encoding method. It seems that stride one is the common setting.\n\n\nVolley Revolver can be used for the dataset CIFAR-10/100 under the same parameter setting as in the paper (with the available $2^{15}$ slots of a single ciphertext). Another setting of $logN=17$ and $logQ=1200$ to achieve a $128$-bit security level enables our encoding method to be used to a colored image of size $256 \\times 256$, in which case the available slots number is $2^{16}$ and 3 ciphertexts are needed to encrypt one such colored image. Most common original ImageNet images of size $500 \\times 500$ are too harsh to be adopted in the HE domain and are even unnecessary to be used in the clear. We wonder if there is research work dealing with the original ImageNet dataset. \n\nFuture work about building deep CNN in the HE domain via Volley Revolver will be done and the dataset CIFAR-10 will be certainly used in that work.\n\n\nAs far as we can see, encrypting the transpose of matrix B for the multiplication A x B is a simple idea but will be widely adopted in future research work.\n", " We would like to thank the reviewers for their input and appreciate their comments.\n\nWe use the word efficient a lot. Here we mean that the main frameworks of the matrix computation and convolution operation can be computed in parallel. Multiple image data being encoded in a single ciphertext allows one ciphertext to encrypt semantically-complete information for each image, facilitating the design of parallel algorithms. The response time varies significantly on how many vCPUs the cloud has. For example, we first test our final artifact on a server with 12 vCPUs, and the response is 30 minutes or so for 32 MNIST images. The experimental result in our paper is obtained by running our CNN implementation on a cloud with 40 vCPUs, taking about 285 seconds to respond. \n\nOur encoding method is applicable for encrypted image data with multiple channels. Suppose that there are 32 32x32 color images (CIFAR-10 images) with three channels. For simplicity, we adopt the same parameter setting as in the paper. 
For this toy example, our encoding method only needs to use 3 ciphertexts to encrypt the three channels of the 32 color images respectively. Each ciphertext encrypts the corresponding channel of the 32 colored images just like our encoding method encrypting the MNIST grey images. Note that in this special case, not a single slot of the three ciphertexts is wasted, which is the best optimum ciphertext size. In conclusion, our encoding method needs 3 ciphertexts to encrypt one colored image. ", " We would like to thank the reviewers for their input and appreciate their comments.\n\nFrom our own perspective, the main reason that Jiang et al.'s work is difficult to be applied in CNN with over two convolutional layers is that the encoding representations before the CNN layer and after the CNN layer are too different. Supposing that there is another CNN layer stacking after the first CNN layer, it would take a lot of HE operations to reconstruct the encoding representation of the output ciphertexts of the first CNN layer to the same as that of the input ciphertexts of the following CNN layer. That is why we think it is difficult for us to use the baseline method to implement CNN with over two CNN layers. However, Jiang et al. know their method better than we do and probably would come up with another novelty idea to overcome the problem to us.\n\nThe only advantage of our method in the Neural Networks Inference compared to the state-of-the-art techniques is probably that our encoding method only takes one ciphertext to encrypt several images while others do not. Other works need several ciphertexts to encrypt several images. More comparisons with state-of-the-art work would be made in our future submissions to other conferences or journals. We would like to thank reviewer sp2h for this great suggestion.\n\n", " This work deals with the new matrix encoding method, called Volley Revolver, for efficient matrix multiplication on homomorphic encryption scheme. To efficiently use the data structure and the homomorphic operation in CKKS scheme for matrix multiplication, they adequately transpose and concatenate one of the input matrices and use RowShifter and SumColVec operation. The matrix multiplication and convolution operation are efficiently designed for CKKS scheme, and the simulation is conducted with HEAAN library and MNIST dataset. Strengths\n- The authors deal with matrix multiplication and convolution with the general dimensions of matrices.\n- The ideas are well conveyed with specific matrix formulas to ease the understanding.\n\nWeaknesses\n- No simulation comparison with state-of-the-art matrix multiplication or convolution with CKKS scheme. As far as I know, the matrix multiplication and convolution in CKKS scheme is well researched in homomorphic encryption academic area, and thus this proposed technique can be compared with many prior works regarding matrix multiplication. They only show the classification accuracy and the latency with their proposed model without any comparison, so I cannot be assured that the proposed technique is superior to the state-of-the-art techniques.\n- Reference is too weak. There are far more works regarding matrix multiplication and convolution in HE, but they only refer to only a small subset of these works.\n- They refers to the work of Jiang et al. [9] and they claimed that the technique in this work \"might\" be difficult to be applied in practical application for CNN with over two convolutional layers. 
However, the authors do not suggest why it is difficult. The authors should have dealt with the limitation of the prior works thoroughly before proposing their technique. I cannot be assured that the prior techniques are really inadequate for CNN with over two convolutional layers.\n- Overall, the authors failed to prove that their proposed technique is superior to the state-of-the-art techniques, and thus the authors should justify their contribution more strongly if they want to be accepted in the NeurIPS conference. - Why Jiang et al.'s work is difficult to be applied in CNN with over two convolutional layers?\n- Please show the comparison with the state-of-the-art prior techniques. They did not deal with the limitation of this work, and I think there is no potential negative societal impact.", " This work provides the method, named Volley Revolver, of encoding encrypted matrix to perform matrix multiplication efficiently in the HE scheme. Volley Revolver provides an encoding method that packs multiple encrypted data in a single ciphertext, which makes matrix multiplication efficient. Strengths:\n\nThe authors described the details of the proposed method with proper figures which are easy to understand.\n\n---\n\nWeaknesses:\n\nThe authors use the word *efficient* a lot, but its meaning should be described more definitely. From my understanding, the meaning of \"efficient\" refers that multiple image data can be encoded in a single ciphertext. If this is the authors' main contribution, please emphasize this point in Abstract, Introduction, Related Works, and Conclusions. \n\nMoreover, there is no quantitative discussion about the efficiency of the proposed method. Previous works are mentioned, but I cannot find the exact improvement compared to the previous works. Are there any other previous works about matrix encoding for matrix multiplication or convolution operation? If so, the authors should refer those works. If not, please mention it. And please show the performance which indicates the improvement from the previous works (e.g., runtime, the number of multiplications). Without these discussions, I think the proposed method is hard to be considered as an efficient method. Is the proposed method applicable for encrypted image data with multiple channels? Most of the colored image dataset such as CIFAR-10 are formed with three channels. The limitation is mentioned in the introduction (lines 40--41).", " This paper proposes a new matrix multiplication method (called \"volly revolver\") suitable for homomorphic encryption. They present an efficient way of adding intermediate steps of the Convolution operation and simulate the operation on the packed ciphertext. In applying homomorphic encryption to neural networks, it is necessary to perform matrix operations efficiently. In this process, homomorphic encryption requires an efficient method because the movement of the encrypted message is done through rotation, which has much computational cost. \nThis paper does not reflect the latest research results. See, for example, C. Juvekar, et al., “GAZELLE: a low latency framework for secure neural network inference,” Proceedings of the 27th USENIX Conference on Security Symposium, August 2018, pp. 1651–1668, which covers how to perform matrix operations for homomorphic operations efficiently.\n\nThis paper is not properly compared with homomorphic matrix multiplication results in other papers. 
There is a limit to directly applying to real CNNs considering only matrix operation without considering the flow of computation in the entire network. The tests conducted only in MNEST appear to be too outdated compared to the recent results of CIFAR-10/100 or ImageNet. The paper deals only with matrix multiplication, but it is unclear what advantages there are from multiplication alone. Recent works have already reported a faster computation run-time than reported in the paper. Therefore, a simulation comparison with the latest results is necessary to clarify the advantage of the proposed method in the paper, but there is currently none.\n\nCNN operation does not end with just one matrix operation, but it leads to another operation. In this process, the message size is reduced, and the message packed in the ciphertext needs to be reconstructed for efficient operations. Considering the whole network, convolution operation (and matrix multiplication) should be considered. Otherwise, it can be rather disadvantageous due to additional packing requirements and a corresponding new data rotation for the followed processes.\n\n(215-216 lines) said that \"Our method based on Volley Revolver can build convolutional neural networks as deep as it needs.\" However, they do not \n\nAlthough this paper targets gray images, other studies handle color images such as CIFAR-10/100 and ImageNet well. Obviously, the proposed method in this paper can also be used for color images, applying three channels independently. However, the amount of computation should be three times larger.\n\nAt 78-79 lines, the method in this paper is only applicable to datasets with a smaller number of pixels than the available slots of ciphertext, which is also a limitation of the proposed method. In the case of ImageNet, it is not possible to fit one image even in 2^15 slots, so another efficient method is needed.\n\nOn page 4, is the extended form of B included in the single ciphertext? Then, when the extended area is considered, the actually applicable sample size is likely to be further reduced.\n\nIt also does not consider how stride two or higher is applicable. The social impact or limitations of the paper are not specifically described.", " The paper presents methods for matrix multiplication and 2D convolution in lattice-based homomorphic encryption that allows parallel multiplication of a number of cleartext values. The authors have implemented in an open source framework and provide a benchmark for a simple MNIST network.\n I'm missing a differentation to the prior work by Dathrati et al. [1]. They propose a matrix multiplication that only uses one ciphertext multiplication as opposed to $2p$ in this work for $p$ being the number of columns in the output, and they report on inference of three versions of a LeNet network with similar accuracy in 2.5 seconds where this work achieves an inference time of 287 seconds for a batch of 32. Even prior art cited by the paper (CryptoNets) achieves 250 seconds on a batch of 4096. The authors claim that memory usage is an issue with CryptoNets but they don't comment on the memory usage of their solution.\n\nI'm also missing the specification SumForConv in terms of basic operations. The underlying encryption scheme only allows element-wise operations and rotations so any other operations has to reduced to these.\n\nWhy do the authors encrypt $F$ (line 11 in Algorithm 1 and line 22 in Algorithm 2)? 
F only contains public information, so a public-private multiplication would be more efficient.\n\n[1] https://dl.acm.org/doi/10.1145/3314221.3314628\n How did you implement SumForConv in terms of rotations and element-wise operations?\n yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "6WD5lYSvOhC", "IWIlYkbJ58", "6sdM6jZfx_k", "K5yLl2hdMSy", "zOvaqO_U-LJ", "LqA5Eqf37kb", "0ZlkDuOCWCy", "GTvm9knzZfV", "nips_2022_A6O79ipjlJC", "nips_2022_A6O79ipjlJC", "nips_2022_A6O79ipjlJC", "nips_2022_A6O79ipjlJC" ]
nips_2022_gwsnBjNcVEe
Phase Transition from Clean Training to Adversarial Training
Adversarial training is one important algorithm to achieve robust machine learning models. However, numerous empirical results show a great performance degradation from clean training to adversarial training (e.g., 90+\% vs 67\% testing accuracy on CIFAR-10 dataset), which does not match the theoretical guarantee delivered by the existing studies. Such a gap inspires us to explore the existence of an (asymptotic) phase transition phenomenon with respect to the attack strength: adversarial training is as well behaved as clean training in the small-attack regime, but there is a sharp transition from clean training to adversarial training in the large-attack regime. We validate this conjecture in linear regression models, and conduct comprehensive experiments in deep neural networks.
Accept
Reviewers all agree that the theory in the paper is interesting and that it helps us understand the robustness accuracy tradeoff. Several reviewers raise the issue that they are unsure about how “phase transition” is defined in this article, and whether the observed behavior is indeed a phase transition in a typical sense. The reviewers also are unclear about why the values of epsilon were chosen for the experiments, and whether the experiments adequately demonstrate the behavior that the authors are intending to display. I think the first issue is a semantic one, and does not rise to the level of rejecting the paper. The second issue is not shared by all reviewers, and the justification for the choices of epsilon is explained in the rebuttal. For this reason I feel that the outstanding issues have been adequately addressed by the authors. Some suggestions to the authors for the camera ready: I feel that the main body of the paper is more fluid than the introduction, and I suggest carefully editing the introduction to ensure each sentence is clear. I also suggest that the authors be clear about what is meant by "an asymptotic order" since some readers will not be clear on the meaning of this terminology.
train
[ "wdsxc-gB-By", "YBJJSeYnDB", "IJrEn1DreGV", "tlKR0KF0-RE", "Y5Sy_J0Ii44", "aGgh0fOA1vi", "edWJc3Xl6II", "lFFZlA4bGdv", "lsa5yYLtmk" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for reviewing our paper! Below are some responses to your questions and comments:\n\n1. Q1, $\\varepsilon$ and $\\epsilon$: These two are different notations. The former notation $\\varepsilon$ is the noise term in the response $y$, which has nothing to do with the attack. The latter notation $\\epsilon$ denotes the strength of the adversarial attack.\n\n2. Q2, Table1: Although we ran experiments for various $\\epsilon$, our aim is to find $\\epsilon^*$. As a result, we only present several trials where $\\epsilon$ is around $\\epsilon^*$(=0.5).\n\n3. Weakness: \n\n * Our paper is not trying to show that $\\epsilon=0$ and $\\epsilon=\\infty$ lead to different adversarial training trajectories. We aim to justify that the two cases, $\\lim \\epsilon/\\epsilon^*=0$ or $\\lim\\epsilon/\\epsilon^*=\\infty$, have different adversarial training trajectories, where we consider asymptotically $d$ and $n$ increase toward infinite.\n\n * Since our phase transition is an asymptotic order instead of a critical value (e.g., the melting point of ice), under the finite sample setting, the transition will be smooth when increasing $\\epsilon$ smoothly. Therefore, we take $\\epsilon$ on the two sides of $\\epsilon^*$ and make them as large/small as possible representing a $\\epsilon$ value that is larger or smaller than $\\epsilon^*$ in asymptotic order. To make our numerical results more convincing, we prepare to add more simulation results for a grid value of $\\epsilon$ around $\\epsilon^*$, aiming to show that the change of adversarial training behavior (which can be characterized via p-value of hypothesis testing) starts to explode at $\\epsilon^*$. \n", " Thank you for reviewing our paper! Below are some answers for your questions:\n\n1. Weakness, simulation: We appreciate you sharing this comment with us! We will add some more experiments to show the existence of the phase transition, as described later.\n\n2. Weakness, the choice of $\\epsilon$ in different experiments: Yes, we change the choice of $\\epsilon$ in different experiments. \n\n * For Table 1, we sequentially try $\\epsilon=0, 0.5, 1, \\dots$, and find that $\\epsilon^*=0.5$, thus we stop the further searching.\n\n * For Figures 3 and 4, since our phase transition boundary is an asymptotic order instead of a critical value (e.g., the melting point of ice), we take $\\epsilon$ on the two sides of $\\epsilon^*$ and make them as large/small as possible to represent a $\\epsilon$ value that is larger or smaller than $\\epsilon^*$ in asymptotic order. And also, for this reason, Figures 3 and 4 take different choices of $\\epsilon$ than Table 1. \n \n * In terms of the difference between Figure 3 and Figure 4, it is just because Figure 4 already includes too many choices of $\\epsilon$. We leave the very large/small $\\epsilon$ and pick some other moderate $\\epsilon$ so that Figure 4 is clean.\n\n * On the other hand, we prepare to add more simulation results for a grid value of $\\epsilon$ around $\\epsilon^*$, aiming to show that the change of adversarial training behavior (which can be characterized via p-value of hypothesis testing) starts to explode at $\\epsilon^*$. \n\n3. Weakness, proof: Thank you for pointing this out! We are providing stronger proof in the appendix than what is used in the paper. We will consider rephrasing the appendix so that it is more clear to check the proofs.\n\n4. 
Question, fix $d$ and $n$: In our paper, we are considering asymptotic changes in $d$ and $n$.", " We appreciate the reviewers spending time in reviewing our paper. Below is a clarification for our paper:\n \nReviewers rFja, Xy6A, and T2si raised some questions about the phase transition phenomenon. We would like to emphasize that the transition in our paper is a mathematical phase transition in an asymptotic sense, where the transition occurs within the range $\epsilon\asymp\epsilon^*$ rather than at an exact critical value. The adversarial training trajectories under $\lim \epsilon/\epsilon^*=0$ and $\lim\epsilon/\epsilon^*=\infty$ are very different, where we consider $d$ and $n$ asymptotically increasing toward infinity. On the other hand, in the finite-sample situation, as $\epsilon$ increases, the change is always smooth, but the rate of change is significantly different between the attack ranges $\epsilon<\epsilon^*$ and $\epsilon>\epsilon^*$.\n\n1. 
This still counts as phase transition, where the boundary between phases is represented by asymptotic relationship. Similar terminology can be found in [1] (Reviewer rFja)\n\n2. We run experiments for very large/small $\\epsilon$ in Figures 3 and 4. (Reviewer Xy6A)\n\n3. When $\\epsilon$ just changes around $\\epsilon^*$ with tiny perturbation, it is expected to only observe smooth changes in the adversarial robustness and related properties. (Reviewer T2si)\n\n[1] Barbara M Smith. Constructing an asymptotic phase transition in random binary constraint satisfaction problems. Theoretical Computer Science, 265(1-2):265–283, 2001\n", " The paper studies how to mitigate the gap between adversarial training and clean training by finding the optimal $\\epsilon$. They first validate the phase transition boundary in simple linear regression model then extend to large-scale neural network. Based on their observation, the commonly used attack strengths are greater than \u000foptimal $\\epsilon$. They proposed an efficient way to approximate the $\\epsilon^\\star$ which can make the adversarial training more reliable and evaluate on three dataset. Strength: the paper is well organized and the proof of theorem is solid. The analysis of connectivity is good.\n\nWeaknesses: Some analysis can be added. The author claims the proposed method can find an optimal $\\epsilon$ for adversarial training. I'm wondering if there has a tradeoff between robust acc. and clean acc.. Table 1. shows the loss and acc. value between training and testing, but the author only focus on the explanation of loss value.\n\nCan it be used to evaluate on the ensemble attack method such as AutoAttack? Yes", " This paper propose a way to study the effects of the strength of adversarial training on the trajectories of optimization of machine learning models. The specific examples used in this paper is deep neural networks. The proven theorems are on a much simpler linear model. Weakness:\n1. The phrase \"phase transition\" must be use with care. In the theory of critical phenomena, phase transitions means existence of singularities in the thermodynamics limit. Most common types of phase transitions are first order and second order phase transitions. I recommend the authors to use another descriptor other than 'phase transition'.\n2. The paper prove theorems for simple linear model, I am skeptical that these results can be generalised to arbitrary models.\n3. The key equation in the paper is the definition of \\epsilon^* in line 151. I cannot make the connection of the theorems with this statement in line 150: \"This equivalency justifies our idea of . . . . \". I can hardly understand what was being justified. I feel that definition of \\epsilon^* may be intuitive.\n4. Is there a typo in line 132? \"lim inf\" means what? Also, for taking limits, it is better to state what quantity tends to this limit.\n5. Line 216: about connectivity. this analysis may not bring too much value in the understanding of the content of this paper.\n6. Table 2 is not being explained well.\n\nStrengths:\n1. While many past papers focus on prediction accuracies, this paper focus on understanding of the training process. Indeed the focus on tuning parameters and network to make predictions just that little bit better may not help this research field move forward. Papers focusing on understanding brings more value to this field of research. 
Any paper that push the boundaries of understanding should be encouraged.\n\n\n\n\n\n no questions The authors have not discuss potential negative societal impact.", " Adversarial training does not achieve performance comparable to the standard training on clean data. This paper investigates the critical point of the magnitude of the adversarial perturbations with which training trajectories of adversarial training become significantly different from those of standard training. By using simple linear regression models, this paper claims that the order of the critical point is in $\\Theta(\\sqrt{d/n})$ where $d$ is the dimension of data, and $n$ is data size.\nIn addition, it claims that the critical point can be estimated by using the difference between training loss of adversarial training and test loss of standard training. To support the claims, this paper provides results of various experiments, e.g., evaluation of the difference of parameters in standard training and adversarial training in the cases (d<<n and n<<d), connectivity of the loss, and empirical evaluation of the critical point.\n ## Strengths\n- This paper provides interesting analyses of training trajectories of adversarial training.\nThis study will help readers better understand adversarial training and may inspire the creation of new and stronger methods for adversarial training.\n\n- This paper claims the relationship between the critical point and catastrophic overfitting of FGSM.\nIf we know the magnitude of attacks against which FGSM can make models sufficiently robust, it is worth in practice because FGSM is an efficient method. However, the results of the experiments (Table 2) are less informative because they do not list the robust accuracy of FGSM against PGD.\n\n## Weaknesses\n- The theoretical analysis is based on simple models. Though I admit the analysis for deep models is difficult, experimental evidence of deep models can be obtained instead of theoretical results. In the current paper, the experimental results are not sufficient as described below.\n\n- The experimental conditions are not complete, and the results do not fully support the claim. The simulation on the left of Figure 2 provides little information because $\\varepsilon$ varies across conditions of n<<d and d<<n. I think that if $d$ is fixed across the conditions, small and large $\\varepsilon$ can be fixed across conditions to compare the difference between n<<d and d<<n conditions on the same scale. It would be useful if these results support Theorem 1. In addition, the experimental conditions on the right side of Figure 2 are not shown. It seems to be important to know what the value of $\\sqrt{n/d}$ is in this setting.\n\n- If this paper claims a phase transition, it is not convincing unless experiments show the discontinuous change in the trajectory against $\\varepsilon$. In Table 1, the evaluation interval of $\\varepsilon$ is too coarse and the losses seem to increase almost linearly, and there is no discontinuity like a phase transition. In addition, Experimental conditions are chosen artificially. For example, Figure 4 shows the results of $\\varepsilon$=1/255, 4/255, but the results of $\\varepsilon$=1/255, 4/255 are not in Table 1 and Figure 3. Conversely, the results of $\\varepsilon$=2/255 are shown in Table 1 and Figure 3 but not in Figure 4. These discrepancies in experimental conditions could be seen as cherry-picking in order to claim that the critical point is $\\varepsilon=0.5/255$. 
In my opinion, many experiments should be conducted on the interval [0,0.5/255] and on the interval [0.5/255,2/255] to obtain more convincing results.\n\n- The proof of theoretical results is unclear, and its validity is difficult to evaluate. Since the claims in theorems are different from the claims in the proof, it is difficult to follow the proof. For example, Theorem 1 uses lim inf d/n > 0 and lim d/n=0 in the claims, but its proof uses $d/n<\\infty$ and $d/n\\rightarrow \\infty$, and the relationship is not explained. In line 635, A, B, C, D should be written by using $\\hat{\\theta}(\\varepsilon)$. To improve clarity, theorems and proofs should be written in a consistent and easy-to-understand notation.\n\n## Minor issues and Comments.\n- In Table 1, $\\varepsilon$ is divided by 255, but in the text, $\\varepsilon$ is 0.5 without being divided by 255. It is necessary to rewrite the text to determine which is correct.\n\n- In my opinion, Proposition 1 is incomplete. This theoretical claim contains unclearness though it is supplemented by the footnote. I think the term \"the similar property but not the same\" should be explained exactly in this proposition.\n\n- [a] reports that adversarial training can improve the accuracy on clean data under certain conditions. If this paper can connect such studies to the study fo the difficulty of adversarial training, this paper might be more valuable. For example, if the paper can reveal how weak attacks contribute to clean accuracy, this paper would be worth publishing.\n\n[a] Xie, Cihang, et al. \"Adversarial examples improve image recognition.\" CVPR2020. What do lim inf d/n > 0 and lim d/n=0 in Theorem 1 mean? I think d and n are fixed in training. Theoretical results are restricted to simple linear models and specific two-layer networks. Though I admit that deriving theoretical results for deep neural networks are difficult due to nonlinearity, experiments might be made more sophisticated to convince that theoretical results are valid for deep models. For example, $\\epsilon$ in Table 1 could be evaluated at finer intervals to show the phase transition experimentally.", " The authors study the behavior of a simple linear model (and a shallow NN) under adversarial training and theoretically discover qualitative differences between the small-$\\epsilon$ regime and the large-$\\epsilon$ regime. They claim that these regimes are separated by a critical perturbation strength $\\epsilon^*$ and propose a theoretically motivated method of determining it. They run experiments on both their simple models as well as standard ResNets in order to validate their theoretical findings. he paper is well-structured and given that the paper is technically complicated in some parts, the authors do a good job of clearly conveying their message.\n\nA central claim of the paper is the existence of a phase transition when increasing $\\epsilon$ through some critical value. Usually, phase transitions are characterized by a discontinuity in one or more properties of the system. To me it seems that (at fixed dimension $d$ and samples $n$) the paper simply shows that $\\epsilon=0$ and $\\epsilon=\\infty$ differ in loss and in optimal solution. This is completely obvious. What would be interesting is if some property of the system actually varies non-smoothly as we pass through a critical value of $\\epsilon$. 
The most direct way of studying this is by varying $\\epsilon$ in a fine grained way and demonstrating a discontinuity in one or several properties of the network. The authors justify why they do not do this by the high cost of running adversarial training (if this is a problem, the authors could consider using the method of [1] for their experiments).\n\nAs it is, I do not think that the authors provide enough evidence of an interesting discontinuity. For example, on the right-hand side of Figure 2, it seems fairly clear that the loss distribution is varying smoothly as a function of $\\epsilon$. This is not a surprising result. A similar observation is true for Figure 3 and an even more striking counter-example to the authors claim of a phase transition seems to be their own Figure A.1 in the appendix. The only weak evidence that I think the paper provides for the phase transition claim is Figure 4: here, looking at the generalization gap ratio after overfitting at 200 epochs, it seems that there might be a non-smooth increase as we vary $\\epsilon$. \n\n[1] \"Fast is better than free: Revisiting adversarial training\" Wong, Rice, Kolter at ICLR'20 In Theorem 1, it appears that the $\\epsilon$ refers not only to the attack strength but actually to the true Gaussian noise present in the data. Why does it make sense to jointly vary the true noise on the data and the budget of the adversarial training? Is the $\\epsilon$ even referring to the adversarial attack strength at all here?\n\nIn line 208, the authors talk about running adversarial training for a wide range of $\\epsilon$'s. Are they referring to the results in Table 1? Because, if so, that is not a wide range of $\\epsilon$'s. If not, then more results should be shown, ideally as plots. The authors do not discuss societal impact of their work, but I agree with their assessment in the checklist that this does not apply to their work, as it is fairly theoretical." ]
[ -1, -1, -1, -1, -1, 4, 6, 4, 3 ]
[ -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "lsa5yYLtmk", "lFFZlA4bGdv", "edWJc3Xl6II", "aGgh0fOA1vi", "nips_2022_gwsnBjNcVEe", "nips_2022_gwsnBjNcVEe", "nips_2022_gwsnBjNcVEe", "nips_2022_gwsnBjNcVEe", "nips_2022_gwsnBjNcVEe" ]
nips_2022_G6cJsOOx2R3
On Enforcing Better Conditioned Meta-Learning for Rapid Few-Shot Adaptation
Inspired by the concept of preconditioning, we propose a novel method to increase adaptation speed for gradient-based meta-learning methods without incurring extra parameters. We demonstrate that recasting the optimisation problem to a non-linear least-squares formulation provides a principled way to actively enforce a well-conditioned parameter space for meta-learning models based on the concepts of the condition number and local curvature. Our comprehensive evaluations show that the proposed method significantly outperforms its unconstrained counterpart especially during initial adaptation steps, while achieving comparable or better overall results on several few-shot classification tasks – creating the possibility of dynamically choosing the number of adaptation steps at inference time.
Accept
This paper was quite well received by reviewers, with scores of 5, 6, 6, 8. Reviewers felt the paper was well written and clear, and that it expressed an interesting core idea. Experimental results compare against MAML and show clear improvements. The key idea is inspired by preconditioning: the method aims to increase adaptation speed for gradient-based meta-learning methods without incurring extra parameters. The paper recasts the optimisation as a non-linear least-squares formulation and proposes a way to enforce a well-conditioned parameter space for meta-learning through the condition-number and local-curvature perspectives. Experiments show that the approach significantly outperforms unconstrained optimization and does particularly well during the initial adaptation phase. The AC recommends acceptance.
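As a small, hedged illustration of the conditioning quantities this record revolves around, the sketch below compares the log condition number with the variance of the log-eigenvalues for a Gauss-Newton-style curvature matrix $J^\top J$. It is not the paper's implementation; the Jacobian here is synthetic and serves only to show how the two objectives are computed.

```python
# Illustrative sketch (not the paper's implementation): two conditioning objectives
# computed from the eigenvalues of a Gauss-Newton-style curvature matrix J^T J.
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(64, 16))            # synthetic Jacobian of residuals w.r.t. parameters
H = J.T @ J / J.shape[0]                 # Gauss-Newton curvature approximation
eig = np.clip(np.linalg.eigvalsh(H), 1e-12, None)

log_cond = np.log(eig.max() / eig.min())      # uses only the extreme eigenvalue pair
var_log_eig = np.var(np.log(eig))             # uses the whole spectrum as a training signal

print(f"log condition number : {log_cond:.3f}")
print(f"var(log eigenvalues) : {var_log_eig:.3f}")
```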
train
[ "56XVBG8hplI", "-Opey45F2n7", "8-G7uh-F55d", "AIjNkHawo-w", "sjDLDAgOCnD", "8t1wvhkn6Mx", "b8QBMAd12iZ", "Z-TfgTGxfWI", "jCArqoIB-O0", "a2Aa0bR8M_I", "bRsV3yi02it", "ZtKem4aEMSX", "Y6DrS_4etf_", "hnnjHQZRJCB", "lAP9NWvj-P" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. I do appreciate it. My concerns are partially resolved and I will increase my score from 4 to 5, but I still think this paper is at the borderline for NeurIPS. I strongly encourage the authors to perform the apples-to-apple comparison against other recent methods and present the results in the final version if this gets in. \n\n1. Please include the comparisons against other preconditioning-based methods, e.g. MC, ModGrad, Warp-MAML, which all of them achieved better performance than the suggested method. You can also present the number of parameters required for those methods as you did in the bs8R's response.\n\n2. Please release the code publicly. Currently, no codes are available, but I believe a few lines of codes are enough to implement the suggested method, which can be easily plugged into many existing open-sourced meta-learning software.", " I would like to thank the authors for providing me a strong and detailed author response. Clearly, they have put in a lot of effort and thought into answering the raised questions. After reading the author response, I feel that the core concept of the paper not only is quite neat and well supported with empirically results but also has the potential to inspire interesting future research into fundamentally improving the performance of meta-learning. Therefore, I am keeping my score 6: Weak Accept.", " Thank you for your continued feedback!\n\n>_Has this occurred in previous works? [...]_\n\nAlthough there have been several works (e.g. [Y], [Z]) analyzing different aspects of the MAML algorithm, we are to the best of our knowledge the first to investigate the contribution of each individual update step towards the optimization objective / overall performance in more detail.\n\n[Y] Raghu et al.: Rapid learning or feature reuse? Towards understanding the effectiveness of MAML. ICLR, 2019\n\n[Z] Arnold et al.: When MAML Can Adapt Fast and How to Assist When It Cannot. AISTATS, 2021\n\n>_[...] actual value of the number of additional parameters [...]_\n\nWe agree that explicit numbers will provide much better insight for the reader. Note that these numbers are dependent on the specific architecture and ’way of incorporating’ the respective method since most approaches (MC, ModGrad and Warp-MAML) provide different possibilities regarding how many gradients shall be modulated/preconditioned.\n\nWe provide the arch. chosen for the experimental result reported in the respective paper (Conv4) here, and will include a more comprehensive discussion into our work. \nAssume we split the Conv4 arch. into its four parameter sub-groups (layers) and a classifier as follows: $\\boldsymbol{\\theta}$={$\n\\boldsymbol{\\theta}^{l1},\n\\boldsymbol{\\theta}^{l2},\n\\boldsymbol{\\theta}^{l3},\n\\boldsymbol{\\theta}^{l4},\n\\boldsymbol{\\theta}^{cl}$}. Each of the first 4 sub-groups then contains a set of convolutional weights $\\boldsymbol{\\theta}^{\\mathrm{li}}_{\\mathrm{conv}}$ and potentially additional parameters (e.g. bias, BN). **We focus on the parameters of the convolutional operation within the 2nd layer in the following**. We provide a table to give a concise overview over the additional parameters required to precondition this specific subgroup, and the explanations afterwards in more detail:\n\n| Method | Architecture | sub-group params.|Add. params. for precond. 
sub-group|\n|:----|:----|:------|:----|\n|MAML [A]| Conv4 (64) | $\\|\\boldsymbol{\\theta}^{\\mathrm{l2}}_{\\mathrm{conv}}\\| = 36{,}864\\$| -- | \n|ours| Conv4 (64)| $\\|\\boldsymbol{\\theta}^{\\mathrm{l2}}_{\\mathrm{conv}}\\| = 36{,}864$| -- | \n|Meta-SGD [B]| Conv4 (64)| $\\|\\boldsymbol{\\theta}^{\\mathrm{l2}}_{\\mathrm{conv}}\\| = 36{,}864$| $\\|\\boldsymbol{\\phi}^{\\mathrm{l2}}_{\\mathrm{conv}}\\|=36{,}864$| \n|MC [C]| Conv4 (128)| $\\|\\boldsymbol{\\theta}^{\\mathrm{l2}}_{\\mathrm{conv}}\\| = 147{,}456$| $\\|\\boldsymbol{\\psi}^{\\mathrm{l2}}_{\\mathrm{conv}}\\|=32{,}849$ | \n|ModGrad [D]| Conv4 (64)| $\\|\\boldsymbol{\\theta}^{\\mathrm{l2}}_{\\mathrm{conv}}\\| = 36{,}864$| $\\|\\boldsymbol{\\Psi}^{\\mathrm{l2}}_{\\mathrm{conv}}\\|=485{,}760$| \n|Warp-MAML [E]| Conv4 (128)| $\\|\\boldsymbol{\\theta}^{\\mathrm{l2}}_{\\mathrm{conv}}\\| = 147{,}456$| $\\|\\boldsymbol{\\zeta}^{\\mathrm{l2}}_{\\mathrm{conv}}\\|=147{,}456$ | \n\n- **Ours**: **No add. parameters**\n\n- **Meta-SGD**: One add. parameter for each network parameter, i.e. **doubles** the number of parameters.\n\n- **MC**: Three additional matrices for each parameter sub-group: $M_{i}\\in\\mathbb{R}^{C_{\\mathrm{in}} \\times C_{in}}$, $M_{o}\\in\\mathbb{R}^{C_{\\mathrm{out}} \\times C_{\\mathrm{out}}}$, $M_{f}\\in\\mathbb{R}^{d \\times d}$ with $C_{\\mathrm{in}}$, $C_{out}$ the number of input, output channels; $d$ is kernel-dim.\nExample convolutional weights of $\\boldsymbol{\\theta}^{\\mathrm{l2}}$ with 128 filters used in their paper (without bias potential batch-norm params): $C_{\\mathrm{in}}=128$, $C_{\\mathrm{out}}=128$, $d=9$; **additional $32{,}849$ parameters** ($\\|M_{i}\\|=16{,}384$, $\\|M_{o}\\|=16{,}384$ and $\\|M_{f}\\|=81$).\n- **ModGrad**: Modulation of each parameter sub-group via two sister-networks $(\\phi_1,\\phi_1)$ (three if bias used), with 2 FC-layers each. Additional parameters for each network $\\phi_i$ are $\\phi_i^{\\mathrm{fc1}}\\in \\mathbb{R}^{v\\times D_{j}}$ and $\\phi_i^{\\mathrm{fc2}}\\in \\mathbb{R}^{D_{j}\\times (u+uD_{j})}$; $u$ and $v$ are hyperparameters chosen to $5$ and $300$, $D_{j}$ is the dimension dependent on the weights that shall be modulated. \nExample for $\\boldsymbol{\\theta}^{\\mathrm{l2}}$ using Conv4 with 64 filters as in their paper: plus **additional $485{,}760$ parameters** ($|\\phi_1^{\\mathrm{fc1}}|=57{,}600$, $|\\phi_1^{\\mathrm{fc2}}|=185{,}280$, $|\\phi_2^{\\mathrm{fc1}}|=57{,}600$, $|\\phi_2^{\\mathrm{fc2}}|=185{,}280$).\n- **Warp-MAML**: Modulation of each parameter sub-group via dedicated warp-modules for each, which differ between architectures (e.g. conv vs ResNet). For the example $\\boldsymbol{\\theta}^{\\mathrm{l2}}$ of a Conv4 architecture with 128 filters as used in their paper, they insert an additional conv-warp-module after each convolutional block, resulting in **additional $147{,}456$ parameters**, i.e. **doubles** the number of parameters.\n\nWe hope this provides better insight and further outlines the significant difference in the number of parameters used to achieve preconditioning in other works. ", " I would like to thank the authors for the response and clarifications. \n\n> We suspect that MAML is able to find a meta-initialization that overfits to the tasks and essentially learns that it does not need to perform the initial few steps into the ’best’ possible direction to still sufficiently decrease it’s overall loss. 
By doing so, it is still able to achieve satisfying performance during the last few updates but can no longer reach its highest possible one – as the comparison to our conditioned MAML clearly demonstrates (Table 3). This behavior further shows the instability / inefficiency of the initial steps of the unconstrained counterpart and hints at the more general challenge: it is not clear in advance for any application how many steps and at which step size one should choose. We see our method as first step towards alleviating this usually heuristically approached problem.\n\nHas this occurred in previous works? If so, it would be helpful to reference this as I have not seen it happen before. \n\nI also believe it would help the table that compare with the existing preconditioning methods if an actual value of the number of additional parameters is included since it is unclear, for example, how many parameters $\\zeta$ is. For example, Method A requires $\\theta$ additional parameters. \n\nPractically, I'm unsure about the benefit of using this method, but I believe the insights derived from this paper is interesting.", " ... continued:\n\n>_The authors mention that \"computing the condition number by using the max and\nmin eigenvalues... would unnecessarily weaken the training signal\" (section 3.2). Does using the condition\nnumber as opposed to Hessian’s logarithmic eigenvalues actually lead to a meaningful difference in adaptation\nperformance?_\n\nUsing the actual condition number, or rather the more stable version log(cond-nmbr), does lead to a meaningful difference in adaptation performance, however takes much longer to converge and is outperformed by our proposed loss using the variance of the eigenvalues. Results below show the validation accuracy achieved after each update step after training a Conv6 architecture on miniImageNet in a 5-way 5-shot setting:\n| Conv6 Val Acc. | step1 | step2 |step3 |step4 |step5 |\n|:----|:----:|------:|:----:|------:|:----:|\n| var(log(ev))| 63.93±1.76| 68.44±1.70| 69.15±1.70| 69.78±1.69| 69.83±1.73|\n| log(cond-nmbr) |55.30±2.04 |62.03±1.87 |63.95±1.82| 64.98±1.87| 65.36±1.86|\n\n> _Some of the adaptation works have found that only adapting bath norm parameters,\ninstead of the whole network parameters, to reduce the computational cost can be sufficient to improve the\nadaptation performance. In that regard, it is interesting that adapting cls+emb+eBN parameters performs\nworse than adapting cls+emb parameters (Table 2), any insight on why additionally adapting bn parameters\nleads to worse performance?_\n\nOne reason that other authors limit their focus on adapting only the the batchnorm\nparameters is that this significantly reduces the dimensionality of the update space, which prevents\noverfitting and can tend to also reduce the condition number for these updates. One possible intuition\nwhy we observe that conditioning on the cls+emb parameter set achieves better performance than\non the cls+emb+eBN set is that by adding the additional parameters, we actually increase the\ndimensionality of our condition-enforced update space and might thus provoke other negative\nimpacts. 
Our investigations presented in the supplementary material in Figure A1 also show that\nenforcing the conditioning constraint on only the batch norm parameters does not prove particularly\nhelpful to reduce the overall condition number of the network compared to other sets.", " We thank you for your review and feedback.\n\n>_Maybe more baseline approaches from meta-learning and/or few-shot learning, other than MAML, could be\nadded for comparison. Can MAML + the conditioned parameter space (proposed method) outperform more\nrecent meta-learning algorithms?_\n\nWe provide results comparing to other preconditioning methods using a Conv4 backbone in the table below, and indicate the use of additional parameters that require optimization across all methods. Please note that the goal of work is to provide insights into the benefits of a well- conditioned parameter space and the role of the condition number, rather than optimizing w.r.t. to outperforming other more complex and parameter-intense methods:\n\n| Method | Inner Loop Parameter Update| Add. Params | Test Acc. |\n|:----|:----|:----:|:--:|\n| MAML [A] | $\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}\\left(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau}\\right) $ | -- | $63.1\\pm0.9$ |\n| Ours | $\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}\\left(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau}\\right) $ | -- | $65.3\\pm0.7$|\n| Meta-SGD [B] | $\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\mathrm{diag}(\\phi) \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau})$ | $\\mathrm{diag}(\\phi)$ | $64.0±0.9$ |\n| MC [C]|$\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha M(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau},\\psi) \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau})$ |$\\psi$ | $68.0\\pm0.7$ |\n| ModGrad [D]|$\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha M\\_{\\tau}^{(k-1)}(\\Psi) \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau})$ | $\\Psi$ | $69.2\\pm0.7$ |\n| Warp-MAML [E]|$\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau}, \\zeta)$|$\\zeta$ | $68.4\\pm0.6$|\n\n[A] Finn et al.: Model-agnostic meta-learning for fast adaptation of deep networks. ICML, 2017 \n\n[B] Li et al.: Meta-SGD: Learning to Learn Quickly for Few Shot Learning. arXiv:1707.09835, 2017\n\n[C] Park et al.: Meta-Curvature. NeurIPS, 2019\n\n[D] Simon et al.: On Modulating the Gradient for Meta-Learning. ECCV, 2020\n\n[E] Flennerhag et al.: Meta-Learning with Warped Gradient Descent. ICLR, 2020\n\n>_Also, correct me if I’m wrong, but it appears that the parameter space conditioning can be integrated with other meta-learning algorithms as well. Does it lead to performance improvement regardless of the choice of base meta-learning algorithm?_\n\nWhile this was beyond the scope of our paper, we agree that our approach should in principle be applicable to other meta-learning algorithms as well. 
We chose MAML in this specific setting due to its popularity and seminal character, as well as simplicity which allowed us to clearly investigate the contribution and effects of our imposed constraints without interference of additional modules. We would welcome further investigations of our proposed and other previous preconditioning methods within further meta-learning algorithms.\n\n>_Can the proposed method also improve MAML’s performance in reinforcement learning setting? Experimentally demonstrating the applicability of the proposed method across different problem set-ups or tasks would strengthen the paper’s contribution._\n\nIncluding our proposed approach into a reinforcement learning setting is an interesting suggestion. It is worth considering that reinforcement learning has a *much* noisier (and weaker) training signal than conventional meta learning applied to classification (like the one used within this work). We suspect that it might take longer to realize gains from reducing the condition number on a batch (which is a much noisier estimate of the true condition number) and see this as interesting future work.\n", " ... continued\n\n>_Although there has been previous work in preconditioning, this is the first method that I’m aware of that does so without additional parameters. On Page 9 Line 299-300, it is mentioned that existing preconditioning methods \"[incur] an often extensive number of additional parameters\". This seems to suggest that some methods may not require a lot more parameters. If so, how does the number of parameters\nrequired compare to the existing preconditioning methods and how do the performance differ? This would allow a potential user to make a decision between which method to try._\n\nWe thank you for drawing our attention to this formulation, and acknowledge our unfortunate way of wording this part. We agree with you on the fact that to the best of our knowledge, we are the first to show preconditioning without additional parameters (please see Table above). To provide better comparison and outline additionally require parameters of other methods, we are adding a section detailing the difference between other preconditioning methods regarding parameters they require as well as the performance gains they can achieve to the supplementary material of our revised version.\n\n>_I find it interesting that the proposed method can scale to many more gradient\nsteps (Table 3). How does MAML fare when scaling to those same number of gradient steps at test time?\nDoes MAML continue to perform similarly or better than with 5 gradient steps? I am not convinced of the\nbenefit of choosing the number of adaptation steps at inference steps. For example, at test time, it makes\nsense to evaluate the model on the same number of inner loop updates as it was trained on. Although, it is\nbeneficial if improvements can be further gained with more training (as we see in Table 3)._\n\nWe provide the detailed results regarding further adaptation beyond the training horizon for our method as well as MAML in the supplementary for all combinations of backbones and datasets in Table A2. MAML shows similar behavior in that it is able to further improve its performance when evaluated for more steps, but is generally not able to entirely ’catch up’ with the results of our\nmethod. 
We agree that running an experiment in the same configuration is generally a valid approach for\nloosely or unconstrained setups (compute, energy, time) and when the method is trained on only\nfew steps; However, if methods could achieve better results via additional update steps (as shown in\nour work) or when trained for a significantly higher number of steps, it might prove more beneficial\nto be able to only take a few in case a fast but slightly less accurate answer is required, or spend\nmore time and get a higher accuracy if the situation and constraints permit.\n\n>_On Page 9 Line 316, it’s mentioned that the proposed method \"significantly\n[outperforms] unconstrained methods during initial adaptation\". However, the only method compared is\nMAML, so it would be more precise to say \"significantly [outperforms] its unconstrained counterpart during\ninitial adaptation\"_\n\nWe agree with you and thank you for pointing this out; We have corrected the wording in\nour revised version.\n\n>_If the parameter space is well-conditioned, then I would expect that a single\nlarge gradient step from the meta-initialization could already train a good model. For example, instead of\nevaluating the model by fine-tuning with 5 inner loop updates with an inner loop learning rate of 0.1, I’m\ncurious as to the performance of fine-tuning with 1 inner loop update with an inner loop learning rate of say\n0.5._\n\nWe thank you for proposing this interesting ablation. We have run additional experiments with our approach and MAML trained on 5 steps while evaluating on one step with scaled-up learning rate. While our approach evaluated on the same setting as used in training still outperforms its 1-step variants, note that our well-conditioned method with 1 step using 5*lr outperforms the 5-step MAML model.\n\n| Conv6 |1step - lr|1step - 2*lr|1step - 3*lr|1step - 4*lr|1step - 5*lr| 5steps - lr|\n|----------|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\n| Ours| 62.31±0.72| 65.67±0.71| 65.87±0.71| 65.72±0.71| **66.54±0.70**| **68.43±0.71**|\n| MAML| 20.19±0.07| 20.00±0.00| 20.14±0.05| 23.28±0.37| 26.64±0.51| 65.96±0.71|\n", " We thank you for your review!\n\n>_Computing the conditioning term is expensive. As such, instead of applying the conditioning constraint to the entire model, only a small subset of parameters can have the conditioning constraint applied._\n\nWhile we agree that computing the condition number w.r.t. all parameters of the network proves expensive, we demonstrate in the paper that enforcing our conditioning loss on a small subset of parameters (e.g. classifier) is sufficient to significantly reduce the condition number\nof the entire network’s parameter space (e.g. Figure 4 of the main paper and Figure A1 of the supplementary material).\n\n>_Missing comparison with existing preconditioning methods (For example, see Section 2.1 of WarpGrad)_\n\nWe thank you for raising this point. While we discuss the related preconditioning methods\nwithin our related work and the introduction, we agree that a concise overview over the differences\nin update rules and parameter sets will prove beneficial to ease comparison between methods. We\nprovide an overview of our method in context to existing preconditioning methods in the following\ntable. 
Please note that the goal of this work is to provide insights into the benefits of a well-\nconditioned parameter space for learning-based methods and the role of the condition number,\nwhich we hope will motivate further research in this area (rather than optimizing towards competing\nwith other more complex and parameter-intense methods).\n\n| Method | Inner Loop Parameter Update| Add. Params | Test Acc. |\n|:----------|:-------------|:------:|:----:|\n| MAML [A] | $\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}\\left(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau}\\right) $ | -- | $63.1\\pm0.9$ |\n| Ours | $\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}\\left(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau}\\right) $ | -- | $65.3\\pm0.7$|\n| Meta-SGD [B] | $\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\mathrm{diag}(\\phi) \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau})$ | $\\mathrm{diag}(\\phi)$ | $64.0±0.9$ |\n| MC [C]|$\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha M(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau},\\psi) \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau})$ |$\\psi$ | $68.0\\pm0.7$ |\n| ModGrad [D]|$\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha M\\_{\\tau}^{(k-1)}(\\Psi) \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau})$ | $\\Psi$ | $69.2\\pm0.7$ |\n| Warp-MAML [E]|$\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau}, \\zeta)$|$\\zeta$ | $68.4\\pm0.6$|\n\n[A] Finn et al.: Model-agnostic meta-learning for fast adaptation of deep networks. ICML, 2017 \n\n[B] Li et al.: Meta-SGD: Learning to Learn Quickly for Few Shot Learning. arXiv:1707.09835, 2017\n\n[C] Park et al.: Meta-Curvature. NeurIPS, 2019\n\n[D] Simon et al.: On Modulating the Gradient for Meta-Learning. ECCV, 2020\n\n[E] Flennerhag et al.: Meta-Learning with Warped Gradient Descent. ICLR, 2020\n\n> _It is not clear to me why MAML’s training collapses in Figure 2._\n\nWe suspect that MAML is able to find a meta-initialization that overfits to the tasks and essentially learns that it does not need to perform the initial few steps into the ’best’ possible direction to still sufficiently decrease it’s overall loss. By doing so, it is still able to achieve satisfying performance during the last few updates but can no longer reach its highest possible one – as the comparison to our conditioned MAML clearly demonstrates (Table 3). This behavior further shows the instability / inefficiency of the initial steps of the unconstrained counterpart and hints at the more general challenge: it is not clear in advance for any application how many steps and at which step size one should choose. 
We see our method as first step towards alleviating this usually\nheuristically approached problem.\n", " We thank you for your feedback.\n\n> _precondition loss only applied in top layers (as mentioned by authors), but this may not be a big problem, as\nthe top layers are the ones that are adapted the most during meta-testing_\n\nTo add to the fact you mention that mostly the top layers are adapted, we also show in Figure 4 of the main paper as well as Figure 1 of the supplementary material that the condition number of an appropriately chosen parameter subset is a very good predictor for the condition number of the overall network, and applying the loss to the subset does indeed reduce the entire network’s condition number.\n\n>_Is it the real condition number or the variance of logs in Figure 2?_\n\nIn Figure 2(c), we show the progress of the ‘real’ condition number of the Hessian w.r.t. all\nparameters of the network, computed via the ratio of maximum and minimum absolute eigenvalue\nas defined in equation (4) using our approximation of the Hessian defined in equation (8). We will\nreformulate the figure’s caption to improve clarity of this fact in our revised version.", " ... continued from previous\n\n> _[...] your answers for checklist 3-(c) should be ‘no’._\n\nWhile we have run our initial experiments (small architectures) for three different seeds, it turned out not to be feasible for the entire number of experiments given our limited computational resources. We observed in these experiments that training our method proved very stable/repeatable with little variation in the outcomes. To provide some insight, training a Conv6 architecture with three different random initializations on miniImageNet using our method yields top mean validation accuracies of 69.79%, 69.76% and 69.82%. We have further investigated different seeds for test evaluations and did not observe major differences that would not support our conclusions drawn from the single runs (see table below). We are happy to include some training curves with error bars into the supplementary material to demonstrate the stability of our training method, if the reviewers consider this data to be helpful/supportive. Nevertheless, we agree that despite these insights the fitting answer to 3-(c) considering the reported results should\nbe ‘no’.\n| Conv6 | step 1| step 2 | step 3 | step 4 | step 5 |\n|:---|:-----:|------:|:-----:|:-----:|:-----:|\n|seed1| 63.57±0.73| 67.05±0.73| 68.12±0.72| 68.58±0.73| 68.88±0.73|\n|seed2| 62.75±0.69 |66.96±0.70| 67.77±0.69| 68.08±0.70| 68.31±0.70|\n|seed3| 62.38±0.70| 66.33±0.71| 67.49±0.70| 67.74±0.70| 68.05±0.70|\n|seed4| 62.26±0.68| 66.44±0.70| 67.40±0.70| 67.73±0.71| 67.88±0.71|\n|seed5| 62.53±0.68 |67.09±0.68 |67.96±0.68| 68.52±0.67| 68.78±0.68|\nreported| 62.31±0.72| 66.66±0.71| 67.63±0.72| 68.21±0.71| 68.43±0.7|\n\n> _How did you compute eigenvalues? is it differentiable so that you can easily use\nthem as a part of loss function?_\n\nWe compute the eigenvalues in fully differentiable form via eigendecomposition (ED).\nOur approximation of the Hessian via $JJ^\\top$ (eq. (8)) yields a real symmetric matrix for which the eigendecomposition always exists and can be written as $H = U \\Lambda U^\\top$, with both $U$ and $\\Lambda$ real-valued. 
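For concreteness, a minimal PyTorch sketch of this kind of batched, differentiable eigenvalue penalty is given below; the tensor shapes, the function name, and the small `eps` clamp are illustrative assumptions rather than details taken from the paper.

```python
import torch

def conditioning_loss(J: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Variance of the log absolute eigenvalues of H = J J^T, one matrix per inner-loop step."""
    # J: (K, n, p) batch of Jacobians; H = J J^T is real symmetric, so eigh applies.
    H = J @ J.transpose(-1, -2)
    eigvals, _ = torch.linalg.eigh(H)  # batched, fully differentiable eigenvalues
    log_ev = torch.log(eigvals.abs().clamp_min(eps))
    return log_ev.var(dim=-1).mean()
```

Such a term would typically be added to the outer-loop objective with a weighting coefficient.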
The work of Ionescu [F] is worth\nnoting, where the authors lay out the background of computing the partial derivatives in matrix form for our case of the eigendecomposition of a real symmetric matrix in their Proposition 2 (equations (12) - (14)).\nDeep learning libraries like PyTorch include several methods to compute the decomposition in a stable and fully differentiable manner. We use the ‘eigh()’ method from the ‘linalg’ library which provides an efficient way of computation for an entire batch of real symmetric matrices (one computation to retrieve the eigenvalues of all update steps at once) and compared favorably regarding speed to other methods in our timing tests.\n\n[F] Ionescu et al.: Matrix backpropagation for deep networks with structured layers, ICCV 2015\n\n>_Also, are there other related works that are using eigenvalues as a part of loss function?_\n\nApproaches like [G,H] propose losses that are inspired by or based on the specific functionality/meaning eigenvalues have in their respective scenarios (e.g. their importance in projections/multi-view geometry). We are, however, not aware of any other approach using eigenvalues within the actual loss function.\n\n[G] Zheng Dang et al.: Eigendecomposition-free training of deep networks with zero eigenvalue-based losses, ECCV 2018\n[H] Kwang Moo Yi et al.: Learning to find good correspondences, CVPR 2018\n\n> _Figure 2-(c), you plotted L_k, which is considered as condition numbers. But, was it approximated (by eq 8) and also surrogate condition numbers (by eq (9))? Since you are using L_k as an additional loss function, it is obvious that this number decreases. I am curious if the real condition number is decreasing._\n\nWe apologize if this has been unclear. As indicated in the description of the y-axis and the figure caption, Figure 2-(c) shows the progress of the **actual ‘real’ condition number**, i.e. max(ev)/min(ev), of the approximated Hessian of our network – displayed in log10 scale simply\nfor better visualization, not the loss. The Hessian is approximated as introduced in eq (8) via the Jacobian product. The decrease is thus not directly caused due to the enforced loss, but instead rather shows that our loss is indeed able to significantly reduce the actual condition number of the network’s parameters. We provide additional insights about the progress of the condition number computed for the parameters of each inner-loop update step in the supplementary material in Figure 3.\n\n>In figure 1, (a) and (b), were they a toy example? or from MAML? If MAML, how did you pick only two dimensions in the parameters?_\n\nThe experiment presented in the motivational Figure 1 is indeed a toy example that we\nhave chosen to visualize the effect of a higher and lower condition number on optimization problems\nsolved with first-order methods. We have updated the Figure’s caption to clarify this.", " We thank you for the constructive feedback and address your concerns point-by-point in the following:\n\n>_All 5 step training was used for all experiments for both MAML and the proposed method. Since MAML might overfit the initial parameters for 5 step performance, so they learned not to update too much in the first few iterations. I am curious how MAML performs, using 1 step training, and 1 step testing accuracy, which is a fairer comparison I think._\n\nPark et al. (2019) provide analyses regarding the 1-step results for MAML and demonstrate that 5-step training and testing notably outperforms the 1-step experiments: 63.92% vs. 
59.26% on 5-way 5-shot and 48.85% vs. 46.28% on 5-way 1-shot scenarios for 5-steps vs. 1-step, respectively (Table 3 in their paper, using a Conv4 architecture on the miniImageNet dataset). We thus adopted the comparison to the higher-performing versions of MAML for fairness, i.e. the 5-step approach. Note that both the 5-step and 1-step are outperformed by our method, achieving accuracies of 65.26% and 48.94% on 5-shot and 1-shot settings, respectively.\n\n>_Missing MAML results on CUB for step 5 - step 100 in Table 3._\n\nDue to space limitations, we were unfortunately not able to include all experimental results into main body of the paper, but detailed results for both our conditioned version and the original MAML across all backbones and datasets are provided in the supplementary material in Table A1 (for steps 1-5) and Table A2 (for steps 5-100).\n\n>_Missing comparisons to prior arts. Only MAML baseline is not good enough as time of now. For examples, miniImageNet results, the accuracies are way lower than current state-of-the-arts methods. I am not complaining about not achieving SOTA, but it’s better to be upfront with the readers. Especially, you should have shown the performance of methods related to ‘preconditioning’, e,g, [1-3], you can still argue that your method does not require additional parameters._\n\nAs pointed out in the question, achieving state of the art results has not been the objective of this paper, but rather the demonstration of how the condition number of the approximated Hessian can be helpful in learning environments, the empirical findings that the condition number of certain parameter subsets seems to be a good predictor for the overall condition number of the network, its correlation with the few-step adaptation performance, and others. We chose MAML as a basis due to its popularity and simplicity, which provides a well-suited basis for such analyses (without interference of too many added complexities). We do however agree that comparison to other state of the art preconditioning methods (additional parameters and achieved performance) will indeed prove helpful to determine the best method for any individual use case, provide an initial comparison in the table below and propose to include this together with the respective discussion into the supplementary material of our revised paper (supplemented by a reference in the main paper pointing towards these results and the discussion, due to space limitations).\n| Method | Inner Loop Parameter Update| Add. Params | Test Acc. 
|\n|:----------|:-------------|:------:|:----:|\n| MAML [A] | $\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}\\left(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau}\\right) $ | -- | $63.1\\pm0.9$ |\n| Ours | $\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}\\left(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau}\\right) $ | -- | $65.3\\pm0.7$|\n| Meta-SGD [B] | $\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\mathrm{diag}(\\phi) \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau})$ | $\\mathrm{diag}(\\phi)$ | $64.0±0.9$ |\n| MC [C]|$\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha M(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau},\\psi) \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau})$ |$\\psi$ | $68.0\\pm0.7$ |\n| ModGrad [D]|$\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha M\\_{\\tau}^{(k-1)}(\\Psi) \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau})$ | $\\Psi$ | $69.2\\pm0.7$ |\n| Warp-MAML [E]|$\\boldsymbol{\\theta}^{(k)}\\_{\\tau} = \\boldsymbol{\\theta}^{(k-1)}\\_{\\tau} - \\alpha \\nabla\\_{\\boldsymbol{\\theta}^{(k-1)}}\\mathcal{L}(\\boldsymbol{\\theta}^{(k-1)}\\_{\\tau}, \\zeta)$|$\\zeta$ | $68.4\\pm0.6$|\n\n[A] Finn et al.: Model-agnostic meta-learning for fast adaptation of deep networks. ICML, 2017 \n\n[B] Li et al.: Meta-SGD: Learning to Learn Quickly for Few Shot Learning. arXiv:1707.09835, 2017\n\n[C] Park et al.: Meta-Curvature. NeurIPS, 2019\n\n[D] Simon et al.: On Modulating the Gradient for Meta-Learning. ECCV, 2020\n\n[E] Flennerhag et al.: Meta-Learning with Warped Gradient Descent. ICLR, 2020", " This paper presents the idea of enforcing better condition numbers for inner-loops in meta-learning frameworks. They reformulated meta-learning loss as the least-square formulation, which enables them to easily approximate condition numbers. Then, it was used as an additional loss term to minimize it, which results in better performance in few-shot classification tasks. They demonstrated that their approach converged rapidly, especially for the first few iterations. I generally like the idea of achieving rapid convergence without introducing more parameters. The paper is well written and easy to follow. The idea of using least-square reformulation to easily approximate eigenvalues is quite interesting. However, I have some concerns about experimental results below in the question section. I would revise my score depending on the authors' response. 1. All 5 step training was used for all experiments for both MAML and the proposed method. Since MAML might overfit the initial parameters for 5 step performance, so they learned not to update too much in the first few iterations. I am curious how MAML performs, using 1 step training, and 1 step testing accuracy, which is a fairer comparison I think.\n\n2. Missing MAML results on CUB for step 5 - step 100 in Table 3.\n\n3. Missing comparisons to prior arts. Only MAML baseline is not good enough as time of now. For examples, miniImageNet results, the accuracies are way lower than current state-of-the-arts methods. I am not complaining about not achieving SOTA, but it’s better to be upfront with the readers. 
Especially, you should have shown the performance of methods related to ‘preconditioning’, e,g, [1-3], you can still argue that your method does not require additional parameters.\n\n4. checklist 3-(c), the answer was yes, but I can’t find any error bars w/ different random seeds. I was expecting the ‘shaded area’ in training curves, e.g., standard deviation or confidence interval, at least for the main curves. The standard deviations you reported in the tables are not w/ different random seeds. It’s average over test tasks from my understanding. So, your answers for checklist 3-(c) should be ‘no’.\n\n5. How did you compute eigenvalues? is it differentiable so that you can easily use them as a part of loss function? no explanation about this in the current texts. Also, are there other related works that are using eigenvalues as a part of loss function? \n\n6. Figure 2-(c), you plotted L_k, which is considered as condition numbers. But, was it approximated (by eq 8) and also surrogate condition numbers (by eq (9))? Since you are using L_k as an additional loss function, it is obvious that this number decreases. I am curious if the real condition number is decreasing.\n\n7. In figure 1, (a) and (b), were they a toy example? or from MAML? If MAML, how did you pick only two dimensions in the parameters? not clear given the current texts.\n\n[1] Meta-Curvature, Park et al., NeurIPS 2019\n[2] On Modulating the Gradient for Meta-Learning, Simon et al., ECCV 2020\n[3] Meta-learning with warped gradient descent, Flennerhag et al., ICLR 2020 The authors addressed some limitations in the main text. and I do not see any negative societal impact.", " - The authors propose a regularization method for improving the conditioning of MAML.\n- Experiments show the method consistently reduces number of steps for adaption, and improves accuracy Strengths:\n- method seems very effective at reducing number of required MAML steps\n- consistently outperforms MAML\n- simple additional loss during training, no cost at inference time\n- authors study effect of using parameter subset\n\nWeaknesses:\n- precondition loss only applied in top layers (as mentioned by authors), but this may not be a big problem, as the top layers are the ones that are adapted the most during meta-testing Is it the real condition number or the variance of logs in Figure 2? Yes", " The paper proposes a regularisation term for the outer-loop of MAML to encourage a well conditioned parameter space that improves inner loop adaptation. The experiments suggest that the condition number is correlated to the performance within a few gradient updates. Training with the proposed regularisation term is shown to empirically improve upon MAML. \n\n \nStrengths:\n - Does not require additional parameters for adaptation unlike existing preconditioning methods. \n - Interesting analysis regarding the connection between condition number and few-step performance on an unseen task. \n - Promising experiment results show an empirical benefit of preconditioning. \n\nWeaknesses:\n - Requires higher-order gradients (computationally expensive) like MAML\n - Computing the conditioning term is expensive. 
As such, instead of applying the conditioning constraint to the entire model, only a small subset of parameters can have the conditioning constraint applied.\n - Missing comparison with existing preconditioning methods (For example, see Section 2.1 of WarpGrad) \n\nOverall, I would rate the paper:\n\nNovelty: Medium\n\nClarity: High\n\nSignificance: Medium\n\n \nIt is not clear to me why MAML's training collapses in Figure 2. \n\n\nAlthough there has been previous work in preconditioning, this is the first method that I'm aware of that does so without additional parameters. On Page 9 Line 299-300, it is mentioned that existing preconditioning methods \"[incur] an often extensive number of additional parameters\". This seems to suggest that some methods may not require a lot more parameters. If so, how does the number of parameters required compare to the existing preconditioning methods and how do the performance differ? This would allow a potential user to make a decision between which method to try.\n\nI find it interesting that the proposed method can scale to many more gradient steps (Table 3). How does MAML fare when scaling to those same number of gradient steps at test time? Does MAML continue to perform similarly or better than with 5 gradient steps? I am not convinced of the benefit of choosing the number of adaptation steps at inference steps. For example, at test time, it makes sense to evaluate the model on the same number of inner loop updates as it was trained on. Although, it is beneficial if improvements can be further gained with more training (as we see in Table 3). \n\n\nOn Page 9 Line 316, it's mentioned that the proposed method \"significantly [outperforms] unconstrained methods during initial adaptation\". However, the only method compared is MAML, so it would be more precise to say \"significantly [outperforms] its unconstrained counterpart during initial adaptation\"\n\nIf the parameter space is well-conditioned, then I would expect that a single large gradient step from the meta-initialization could already train a good model. For example, instead of evaluating the model by fine-tuning with 5 inner loop updates with an inner loop learning rate of 0.1, I'm curious as to the performance of fine-tuning with 1 inner loop update with an inner loop learning rate of say 0.5. Yes", " Improving the performance of a deep neural network on a new, unseen task with a limited number of new datapoints and adaptation epochs is one of the central problems in modern deep learning. The authors propose to improve the performance of MAML, a benchmark algorithm for few-shot adaptation problems, by pre-conditioning the parameter space using the condition number of the network. Instead of directly using the condition number to condition the bi-level optimization problem of MAML, the authors propose to consider the distribution of all eigenvalues using the approximated logarithmic eigenvalues for increased expressiveness. This simple modification to MAML is shown to be highly effective at improving the few-shot adaptation process across diverse datasets. Strength\n- The proposed method is theoretically sound, interesting, and novel. Although it is a simple modification to an existing algorithm (MAML), there is enough novelty to be recognized in the idea to better-condition a parameter space using the condition number of the Hessian matrix. 
In addition to the strong theoretical motivation behind the proposed approach, the authors experimentally demonstrate the relationship between the condition number of a network and its adaptation capabilities. \n- The proposed method is shown to be highly effective at allowing more rapid few-shot adaptation. According to the experimental results, the proposed method improves adaptation performance of MAML across all adaptation steps, but the degree of improvement is particularly noteworthy under a limited number of adaptation steps (1 or 2). \n- The writing is concise and clear. It was easy to follow how the modified objective function is derived by introducing the conditioning constraint to the MAML bi-level optimization problem. \n\nWeakness\n- Maybe more baseline approaches from meta-learning and/or few-shot learning, other than MAML, could be added for comparison. Can MAML + the conditioned parameter space (proposed method) outperform more recent meta-learning algorithms? Also, correct me if I'm wrong, but it appears that the parameter space conditioning can be integrated with other meta-learning algorithms as well. Does it lead to performance improvement regardless of the choice of base meta-learning algorithm? - Can the proposed method also improve MAML's performance in reinforcement learning setting? Experimentally demonstrating the applicability of the proposed method across different problem set-ups or tasks would strengthen the paper's contribution.\n- The authors mention that \"computing the condition number by using the max and min eigenvalues... would unnecessarily weaken the training signal\" (section 3.2). Does using the condition number as opposed to Hessian's logarithmic eigenvalues actually lead to a meaningful difference in adaptation performance?\n- Some of the adaptation works have found that only adapting bath norm parameters, instead of the whole network parameters, to reduce the computational cost can be sufficient to improve the adaptation performance. In that regard, it is interesting that adapting cls+emb+eBN parameters performs worse than adapting cls+emb parameters (Table 2), any insight on why additionally adapting bn parameters leads to worse performance? The authors adequately acknowledge the computational complexity of their method in section 5.\nPlease refer to weakness and questions for additional concerns and questions. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "a2Aa0bR8M_I", "lAP9NWvj-P", "AIjNkHawo-w", "b8QBMAd12iZ", "8t1wvhkn6Mx", "lAP9NWvj-P", "Z-TfgTGxfWI", "hnnjHQZRJCB", "Y6DrS_4etf_", "bRsV3yi02it", "ZtKem4aEMSX", "nips_2022_G6cJsOOx2R3", "nips_2022_G6cJsOOx2R3", "nips_2022_G6cJsOOx2R3", "nips_2022_G6cJsOOx2R3" ]
nips_2022_VhgC3SMTiy
Audio-Driven Co-Speech Gesture Video Generation
Co-speech gesture is crucial for human-machine interaction and digital entertainment. While previous works mostly map speech audio to human skeletons (e.g., 2D keypoints), directly generating speakers' gestures in the image domain remains unsolved. In this work, we formally define and study this challenging problem of audio-driven co-speech gesture video generation, i.e., using a unified framework to generate speaker image sequences driven by speech audio. Our key insight is that the co-speech gestures can be decomposed into common motion patterns and subtle rhythmic dynamics. To this end, we propose a novel framework, Audio-driveN Gesture vIdeo gEneration (ANGIE), to effectively capture the reusable co-speech gesture patterns as well as fine-grained rhythmic movements. To achieve high-fidelity image sequence generation, we leverage an unsupervised motion representation instead of a structural human body prior (e.g., 2D skeletons). Specifically, 1) we propose a vector quantized motion extractor (VQ-Motion Extractor) to summarize common co-speech gesture patterns from implicit motion representation to codebooks. 2) Moreover, a co-speech gesture GPT with motion refinement (Co-Speech GPT) is devised to complement the subtle prosodic motion details. Extensive experiments demonstrate that our framework renders realistic and vivid co-speech gesture videos. Demo video and more resources can be found at: https://alvinliu0.github.io/projects/ANGIE
Accept
This paper enjoyed a reasonable interaction between the authors and the reviewers, with the authors addressing the reviewers' concerns about the novelty of the proposed method, its specificity to the "talking head" scenario, the fact that the model is used in a speaker-dependent fashion, and some concerns about specific details in the writing. Three of the four reviewers responded to the authors during the discussion period, and the fourth reviewer acknowledged having read the rebuttal during the discussion between the AC and reviewers. In the end, all reviewers recommend acceptance of the paper, citing the good performance of the model, the novelty of co-speech gesture generation in the image domain, and the nice design of the model (specifically, the VQ motion plus residual structure).
train
[ "wYOgBQJ_gfC", "hAWsVGewjz-", "jD1b31Ixh4C", "a-JeA-_0dev", "CnhVJ2vPt78", "6UyygMWoxqG", "t1NSaIj9XBV", "Oq-Aub4X3IN", "phl6UWx9FvV", "Il19lzvNRefs", "Vei3dn-EmSV", "aMGqLSOJUN-", "aDTH8wQA0b9", "34wu6JnMnWD", "7MHxmiDDi7k", "ArVUpvZA8l", "VFOyRIY4lYJ", "xltQySJn4d_", "IzV7UiQ3W2-", "BSAJ6QD7gaS", "uSSf9VZAo-k", "lNwj0Lrufb", "N6A451n51Ed" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer WZ3S:\n\nWe are delighted to hear that your concerns are addressed! Many thanks again for your very constructive comments, which have helped us improve the quality of the paper significantly.\n\nBest,\n\nPaper 365 Authors.\n\n", " We sincerely thank the reviewer for the additional feedbacks. We use the constant-Q chromagram mainly for two reasons: **1)** As mentioned in L212 of the main submission, previous studies [a, b] suggest that the onset strength information is more suitable for cross-modal pattern learning. Therefore, we generally follow [a] to use the constant-Q chromagram as one of the input features to predict the quantized motion pattern codes. **2)** Since the chromagram could reflect the harmonic and energy changes, we use it as supplemental information to the onset features in a more explicit manner. We agree with the reviewer that some chromagram information is already captured in the spectral flux-based onset strength. We will include the ablation experiments on the audio feature choice in the final version.\n\nWe are delighted to hear that the reviewer is mostly satisfied with our responses! Many thanks again for your very constructive comments, which have helped us improve the quality of the paper significantly. Please don't hesitate to let us know if there are further clarifications that we could offer!\n\n****\n\n[a] - Li et al., \"Bailando: 3D Dance Generation by Actor-Critic GPT with Choreographic Memory.\"\n\n[b] - Tang et al., \"Dance with Melody: An LSTM-Autoencoder Approach to Music-Oriented Dance Synthesis.\"", " Dear Paper365 Authors:\n\nThank you for the response. All of my concerns have been addressed, so I'll be updating my rating accordingly.", " I thank the authors for responding to my review with various changes and additions to the paper.\n\nI am mostly satisfied with the responses. I have one follow-up question though about the onset features used. Is there any motivation behind using the constant-Q chromagram? All the other features make sense for onset or rhythm information, but the chromagram is typically used in music information retrieval to track harmonic changes. If you are expecting the change in energy in this feature to be informative to the network, then that is already captured in the spectral flux-based onset strength that you extracted from librosa.", " The authors have provided standard deviations of the MOS scores and explicated their feature processing. Please read their response and reply.\n\nThanks", " The authors have directly addressed your question: \"How well does the approach generalize to speakers that are not in the training data?\"\n\nPlease read their response and reply to it.\n\nThanks", " Dear Reviewer WZ3S:\n\nWe sincerely thank you again for your great efforts in reviewing this paper, especially for the precious advice that has helped us improve the quality of this paper significantly!\n\nWe have polished the related work, updated the citation and included the limitation examples in the revised version. Please don't hesitate to let us know if there are further clarifications or experiments that we could offer!\n\nBest,\n\nPaper 365 Authors.", " Dear Reviewer S12n:\n\nSorry for the bothering. We are very delighted that your concerns have been addressed and we sincerely thank your comments for acknowledging that our responses are satisfactory! We would like to kindly remind that your rating of this paper seems unchanged. If you have any further question, please let us know. 
Many thanks again!\n\nBest,\n\nPaper 365 Authors.", " Dear Reviewer FuNA:\n\nWe sincerely thank you again for your great efforts in reviewing this paper, especially for the precious advice that has helped us improve the quality of this paper significantly!\n\nWe have included the user study standard deviation, fixed the variable index typo and elaborated the audio feature extraction details in the revised version. Please don't hesitate to let us know if there are further clarifications or experiments that we could offer!\n\nBest,\n\nPaper 365 Authors.", " Dear Reviewer SBef:\n\nWe sincerely thank you again for your great efforts in reviewing this paper, especially for the precious advice that has helped us improve the quality of this paper significantly!\n\nWe have clarified the novelty of this work, included the discussions on model's generalization ability and implemented additional experiments in the revised version. Please don't hesitate to let us know if there are further clarifications or experiments that we could offer!\n\nBest,\n\nPaper 365 Authors.", " Dear Reviewer S12n:\n\nWe are delighted to hear that your concerns are addressed! Many thanks again for your very constructive comments, which have helped us improve the quality of the paper significantly.\n\nBest,\n\nPaper 365 Authors.", " Thank you for the detailed responses to my concerns. I appreciate the time you have taken to address the issues and run the additional experiments. In light of this, the opinions of the other reviewers, and your response to their reviews, I am satisfied that you have addressed my concerns. I will revise my decision accordingly.", " We sincerely thank the reviewer for your insightful comments and recognitions to this work, especially for acknowledging that our approach is novel with potential benefits to the community and relevant research domains. We have polished the paper and made the clarifications in the revised version. \n\nThe technical contributions and novelty of this work are highlighted in the General Response. Please kindly refer to it for details. Note that the following polishments have been made according to your advice: \n\n* We have polished to highlight the potential impact of our work on relevant research domains in Section “Related Work” of the main submission.\n* We have eliminated the Wikipedia article reference (in L301 of the main submission) and elaborated the concept of Fleiss's Kappa statistic in the supplemental document (in Section J of the supplemental document). \n* We have included the motion estimator training details in Section C of the supplemental document.\n\nThanks again for your very constructive comments, which have helped us improve the quality of the paper significantly! Below we would like to provide point-to-point responses to all the raised questions:\n\n> **Q1: \"The related works shown in section 2 is limited in the application viewpoint. It would be much better if authors could make a connection to slightly other domains that the proposed method could be also applied. In overall, the scope of the paper is too specific on the co-speech gesture image generation. To meet broader NeurIPS readership, I wish there would be some implications outside the main problem.\"**\n\n**A1:** Thank you for the precious advice! We have polished the writing of related work part in the main submission accordingly. 
Though focusing on the specific task of co-speech gesture generation in this paper, the novel constrained vector quantization design and residual refinement idea could potentially benefit relevant research domains like constrained VQ problem and video generation. We sincerely thank the reviewer for appreciating this!\n\n> **Q2: \"Please do not cite Wikipedia article as a reference (line 303, Fleiss's Kappa statistic). Please briefly describe about the concept in the appendix.\"**\n\n**A2:** Many thanks for your suggestion! We have polished this part in the revised version. We eliminate the Wikipedia article reference (L301) and elaborate the concept of Fleiss's Kappa statistic in the supplemental document (Section J).\n\n> **Q3: \"Regarding the line 272 - \"The motion estimator is pretrained for knowledge distillation\" - please elaborate on this. The authors might want to describe such implementation details on the appendix.\"**\n\n**A3:** We first follow the pipeline of MRAA [a] to pretrain the motion estimator and image generator via self-reconstruction. Then, the motion estimator module is freezed and serves as supervision in VQ-Motion Estimator training, i.e., we train the VQ-VAE model to reconstruct the motion representation from the pretrained MRAA motion estimator. In this way, the model gradually learns the knowledge of the pretrained MRAA motion estimator, which resembles the process of knowledge distillation. We have included the implementation details in Section C of the supplemental document.\n\n> **Q4: \"What is a new dataset that the authors have collected, as described in line 92? Is it simply an extension of PATS dataset by some post-processing steps?.\"**\n\n**A4:** We complement the original PATS dataset with more processed features, including: **1)** pre-processed image frames and **2)** onset strength audio features (as detailed in L238 of the main submission). These new features are important to the co-speech gesture image generation task. We hope this could facilitate future research in the community.\n\n> **Q5: \"It would be better if the authors could provide specific limitation examples (discussed in line 344).\"**\n\n**A5:** Many thanks for your advice! Due to the difficulty of this challenging task, ANGIE fails to generate human images of extreme pose. Besides, with the lack of high-resolution co-speech upper body images, in some cases ANGIE fails to synthesize the high-fidelity face and upper body simultaneously. Please kindly refer to Section D of the supplemental document for specific examples of the limitation, future work and discussions.\n\n****\n\n[a] - Siarohin et al. \"Motion Representations for Articulated Animation.\"\n\n****\n\nPlease don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer!\n", " We sincerely thank the reviewer for your insightful and constructive feedbacks. We have polished the paper, added the experiments and made the clarifications in the revised version.\n\nThe technical contributions and novelty of this work are highlighted in the General Response. Please kindly refer to it for details. Note that the following polishments have been made according to your advice: \n\n* We have corrected the misunderstanding of “generalize ability” by ”capacity” in L35, L82 of the main submission.\n* We have highlighted “position-irrelevant” denotes “image location invariant” in L194 of the main submission. 
\n* We have included the analysis on the model’s generalization ability in Section E of the supplemental document.\n* We have added the results for unseen audio from a different person in Section E of the supplemental document.\n* We have included the analysis on additional text information input in Section H of the supplemental document.\n* We have included the discussion of model’s ability in conversation in Section D of the supplemental document.\n\nThanks again for your very constructive comments, which have helped us improve the quality of the paper significantly! Below we would like to first address your common concerns, then provide point-to-point responses to all the raised questions.\n\n> **Common Q1: \"Why not use the language model or text (semantic) information in this work, so that some semantics relevant gestures like iconics, metaphorics, and deictics are unlikely to be generated. Only the beat gestures could be generated.\"**\n\n**Common A1:** Thank you for the precious advice! We give our responses below:\n\n**1)** We would like to clarify that our main focus and technical contributions are how to animate the co-speech gesture in the image domain, but not on the influence of input modality. We choose the audio-driven setting mainly for two reasons: \n\n* We generally follow the problem setting of baselines [a, c, d], where all the compared methods take only the speech audio as input modality for fair comparison. \n\n* In order to use the text information, we have to pre-process the raw transcripts to be temporally aligned with audio using tools like Gentle. To prevent the potential alignment inaccuracy in the pre-processing step, and to simplify the problem setting for better focusing on audio-driven co-speech gesture image animation, we do not involve the text input in this work.\n\n**2)** As proved in Automatic Speech Recognition (ASR) [e, f] and recent co-speech gesture studies [g], the speech audio actually contains some high-level semantic information. Such implicit semantic information in the speech audio could guide the model to capture some specific co-speech gesture patterns like metaphorics [g]. Besides, as shown in the Kubinec chemistry lecture setting, our model manages to synthesize deictic gestures of pointing to the screen by learning such implicit audio-gesture correlations (the reasons and analysis on why we could learn this are elaborated in **Common A2**). We would like to respectfully claim that the generated gestures of our model are not only limited to the beat gestures, but some semantic gestures could also be synthesized.\n\n**3)** We agree with the reviewer that language model/text information contains rich semantics, which are beneficial to the learning of semantics relevant gestures like iconics, metaphorics, and deictics. Previous study [h] has also verified the influence of each input modality on co-speech gesture, including audio, text and speaker identity, etc. Therefore, we additionally complement an experiment of using text feature: we encode the transcripts by TextTCN [i] and further concatenate with audio features. The combined audio text features are fed into Co-Speech GPT with Motion Refinement network to predict the quantized code as well as motion residuals. The results are reported below, which suggest that the text feature could indeed facilitate better co-speech gesture generation. 
We have included the experiment results in Section H of the supplemental document.\n\n|Methods|FGD$\\downarrow$|BC$\\uparrow$|Diversity$\\uparrow$| \n|-|-|-|-|\n|ANGIE (Extra Text Concat)|1.30|0.73|49.8|\n|ANGIE (Ours)|1.35|0.72|49.4|\n\n**4)** As an early attempt to explore audio-driven co-speech image generation, this work could serve as a baseline for further studies in the research community. However, how to effectively fuse the multiple modality information (including audio and text) and better map to the implicit motion representation remains an open problem. We will explore this in future work.\n", " > **Common Q2: \"Since no text or language models is used, why could we generate “pointing to screen” gesture in Kubinec subset? Is the screen always to the speaker’s left? Seems overfit to the specific sequences.\"**\n\n**Common A2: 1)** In the Kubinec chemistry lecture setting, the screen is always to the speaker’s left. Such dataset bias indeed exists, which is verified and visualized by previous work [c] (please kindly refer to the Figure 2 of [c] for the gesture heatmap). They refer to such phenomenon as “individual speaking style”, where the behavior of frequently pointing to the screen is the speaker style learned by the model, but not overfitting [a, c, d].\n\n**2)** Since the reference video of the same audio does not show such behavior, our model is neither overfitting nor memorizing the specific hand location (otherwise, the ground truth will point to the same location as well). We further analyze which module leads to such phenomenon by visualizing the generated results of each module. We find that the speaker’s hand is pointing to the screen (not exactly on the “four quadrants”) after the vector quantize module, while the motion refinement module refines the height of hand, so that the hand is pointing to that location. This shows that the VQ network determines a certain motion pattern of pointing to screen, and residual learning refines the hand moving height, which demonstrates the effectiveness of our method.\n\n**3)** As mentioned in the **Common A1 (2)**, the speech audio indeed contains some high-level semantic information. With the proposed vector quantization design, it simplifies the learning from a harder regression problem to an easier classification problem (as detailed in L172-174 of the main submission). In this way, we ease the difficulty of learning such a cross-modal mapping. The model could thus better grasp the connections between speech audio and the “point to the screen” gesture.\n\n> **Common Q3: \"Separate model for each speaker, but criticize prior work for limited generalization ability.\"**\n\n**Common A3: 1)** We would like to clarify that our meaning of “limited generalization ability” is under the context of comparisons between MoCap data based methods and video pseudo annotation based methods (L34-38 of the main submission). We want to express that the model capacity is limited due to the limited dataset scale. Sorry for the misunderstanding. We have revised the writing in the revised version. \n\n**2)** As shown in previous studies [c, d], the co-speech gesture motions and styles vary a lot for different speakers, which is termed as “individual speaking style” [c]. Therefore, it is suitable to train a separate co-speech gesture generation model for each person following the experiment settings of baselines [a, c, d]. 
However, we explore a more challenging task of co-speech gesture image generation in a unified framework without structural prior and achieve superior performance. Even for a single-person subset, it is non-trivial to animate non-rigid human body in image space by speech audio, especially with the interference of complex background scenes.\n\n**3)** It is difficult to achieve a person-agnostic co-speech gesture **image** generation model with **currently available datasets**. In particular, the commonly used datasets are TED Gesture [h] and PATS [c, d]. TED Gesture is based on TED Talk videos, while PATS contains 25 speakers of talk shows, lectures, etc. Due to the frequent camera movements and viewpoint shift in TED videos, there lacks clear co-speech gesture clips for **image** generation. Hence we narrow down the experiments to PATS dataset in this work. A dataset with high-quality co-speech gesture image frames of multiple speakers is needed to learn a model of novel person generalization ability. We will strive for this in future work.\n\n**4)** We verify the potential generalization ability of our approach in two aspects: \n\n* We could animate the same speaker’s different appearances with speech audio (as shown in the codebook analysis part of demo video, we could animate Oliver’s different appearances), while previous studies [a, c] that resort to off-the-shelf pose2img generator only support a single appearance. \n\n* We additionally implement the experiments of animating with unseen audio from a different person. The evaluated results are reported below. It shows that the model’s performance is still effective with the unseen audio input. With the proposed vector quantize design, each codebook entry defines a reasonable co-speech gesture pattern. In contrast to directly mapping to the continuous coordinate space, such technical design guarantees a valid gesture even when generalizing to the unseen audio from a different person. The results are included in Section E of the supplemental document.\n\n|Methods|FGD$\\downarrow$|BC$\\uparrow$|Diversity$\\uparrow$| \n|-|-|-|-|\n|ANGIE (Novel Audio)|1.46|0.69|48.5|\n|ANGIE (Ours)|1.35|0.72|49.4|\n", " > **Q1: \"The paper refers to gestures as “common motion patterns” and “rhythmic dynamics”. What do these mean? Use standard terminology from the co-speech gesture literature.\"**\n\n**A1: 1)** The “common motion patterns” determine the general appearance of gestures in a generated sequence, while the “rhythmic dynamics” mean the subtle movements to match the speech audio [a, b]. Previous studies refer to the common motion pattern as similar terms like “motion template” [a] or “pose mode” [b]. \n\n**2)** In this work, with the vector quantize design, we could extract the reusable motion patterns from training data as codebook entries. Since those gestures of similar general appearance tend to be mapped into the same quantized code, we refer to such motion patterns as “common”.\n\n> **Q2: \"Seems overfit to specific sequences. Why does the speaker points towards the screen in Kubinec? Is the screen always to the speaker’s left? No language model/speech content, why we would see this behavior.\"**\n\n**A2:** Please kindly refer to the Common A2.\n\n> **Q3: \"Train a separate model for each speaker, but criticized prior work for limited generalization ability.\"**\n\n**A3:** Please kindly refer to the Common A3.\n\n> **Q4: \"Consider gesture in a very limited context - talk show host speaking monologue to camera. 
How well could generalize to conversational gestures that are likely more subtle and drawn from a larger lexicon?\"**\n\n**A4:** Thank you for the precious advice. Generalizing co-speech gesture avatars to more complex and general settings like conversation is a promising idea of great practical usage. Currently, the biggest bottleneck could be the lack of high-quality conversational image dataset. Although CMU Panoptic [j] contains multi-view conversation videos, the image quality is poor for co-speech image animation. Besides, since the social co-speech gesture is more diverse, some model designs like VQ codebook size should be well studied. We will delve into this interesting problem in future work. The discussions are included in Section D of the supplemental document.\n\n> **Q5: \"Fail to cappture iconics, metaphorics, and deictics without text. Only beat gesture.\"**\n\n**A5:** Please kindly refer to the Common A1.\n\n> **Q6: \"The audio clips are variable length, are these clipped to be a fixed size?\"**\n\n**A6:** At the audio pre-processing step, they are not clipped to a fixed size. At the training stage, since we have to input a certain length of audio to the model, we sample a sliding window of 96 frame audio clip with stride of 32. The details are elaborated in L256 of the main submission.\n\n> **Q7: \"Consistency when generating longer sequences.\"**\n\n**A7: 1)** The long-term consistency is guaranteed by Co-Speech GPT, where the attention mechanism in transformer model would take the self-attention of previous gestures and cross-attention of input speech audio for coherent results. We will include longer sequence results in the final version. \n\n**2)** Compared to previous setting of generating co-speech gesture skeleton, animating long sequence in the image domain is harder. Actually, generating long sequence remains an open problem in video generation, let alone the more challenging cross-modal audio-to-gesture generation. We will keep exploring how to generate the long-sequence results in future work.\n\n> **Q8: \"Position-irrelevant. It was cleared up in later text that you mean image location invariant.\"**\n\n**A8:** Yes, your understanding is correct! We have polished accordingly in the revised version.\n\n> **Q9: \"Only use the speech envelope and MFCCs to generate gestures that seem to point to four quadrants.\"**\n\n**A9:** Please kindly refer to the Common A2.\n\n> **Q10: \"Clarify that the approach is only for beat gestures.\"**\n\n**A10:** Please kindly refer to the Common A1.\n\n****\n\n[a] - Qian et al. \"Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates.\"\n\n[b] - Xu et al. \"Freeform Body Motion Generation from Speech.\"\n\n[c] - Ginosar et al. \"Learning Individual Styles of Conversational Gesture.\"\n\n[d] - Ahuja et al. \"Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach.\"\n\n[e] - Yu et al. \"Audio-Visual Recognition of Overlapped Speech for the Irs2 Dataset.\"\n\n[f] - Winata et al. \"Lightweight and Efficient End-to-end Speech Recognition Using Low-rank Transformer.\"\n\n[g] - Liu et al. \"Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation.\"\n\n[h] - Yoon et al. \"Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity.\"\n\n[i] - Bai et al. 
\"An Empirical Evaluation of Generic Convolutional and Recurrent Networks\nfor Sequence Modeling.\"\n\n[j] - Joo et al., \"Panoptic Studio: A Massively Multiview System for Social Interaction Capture.\"\n\n****\n\nPlease don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer!\n", " We sincerely thank the reviewer for your insightful comments and recognitions to this work, especially for acknowledging that our approach is technically sound with novel vector quantization design. We have corrected the typo and made the clarifications in the revised version.\n\nNote that the following polishments have been made according to your advice: \n\n* The standard deviation of user study result is included in Section F of the supplemental document. \n* The variable index typo is corrected in Section 3.1 of the main submission.\n* The details on audio feature extraction are included in Section G of the supplemental document.\n \nThanks again for your very constructive comments, which have helped us improve the quality of the paper significantly! Below we would like to provide point-to-point responses to all the raised questions:\n\n> **Q1: \"The subjective evaluation involves mean opinion score (MOS). The authors do not add 95% confidence intervals or a standard deviation to their scores to see the spread in the scores given to each system.\"**\n\n**A1:** Many thanks for your precious advice! The standard deviation of each user study score is reported below, which is included in Section F of the supplemental document. Besides, in the initial submission we have measured the participants’ scoring disagreement with Fleiss’s-Kappa statistic in L301 of the main submission. The value shows that the agreement among scorers is highly consistent.\n\n|Methods|Ground Truth|S2G|HA2G|SDT|TriCon|ANGIE (Ours)| \n|-|-|-|-|-|-|-|\n|Realness|0.492|0.413|0.293|0.206|0.385|0.312|\n|Synchrony|0.480|0.574|0.214|0.265|0.350|0.345|\n|Diversity|0.252|0.629|0.471|0.313|0.221|0.287|\n\n> **Q2: \"L140: Shouldn't the output frames be from I^hat_(2:N) if I_1 is given? If that is the case, perhaps the authors can just say I_0 is given and then the rest of the paper need not be changed.\"**\n\n**A2:** Thank you for pointing out this typo! We have corrected the variable indexes according to your advice in Section 3.1 of the main submission. Please kindly check the highlighted changes of blue text color in the formula and problem definition part.\n\n> **Q3: \"How is the onset feature strength computed? What is the window size or hop size for spectral flux computation? What the 426 dimensions are in the onset feature?\"**\n\n**A3:** The audio onset strength feature of T frames is of shape (T, 426), where T is the temporal dimension (frame number) and 426 is the feature channel dimension. It is the concatenation of constant-Q chromagram, tempogram, onset beat, onset tempo and onset strength. Most features are derived from the audio onset strength/envelope and the channel dimension is summed up to 426. We utilize the librosa onset functions to extract the features, including “librosa.onset.onset_strength”, “librosa.feature.tempogram” and “librosa.beat.beat_track”, etc. The audio sample rate is 16000, the time lag for computing differences is 1, the hop length is 512 and the window length is 384.\n\n> **Q4: Similarly, how are the mfcc features computed? Again, I don't understand the dimensions of the MFCCs. Are you using 12 mfccs or 28 mfccs. 
In any case, if the onset strength length is 426, why is the temporal dimension of the mfccs so small? And you mention that they are computed with a window size of 10 ms. What about hop size? What about fft block size?\"**\n\n**A4:** The original audio mfcc feature was calculated as 12 mfccs which has the dimension of (T’, 12) and T’ is the original audio frame number. In our implementation, we use a 28-dim sliding window to further unfold the mfcc feature into a final shape of (T, 28, 12), where T denotes the final temporal dimension (video frame number), 28 denotes the size of sliding window and 12 is the mfcc feature dimension. MFCC feature is extracted with a sample rate of 16000, window length of 25 ms, window step of 10 ms, cepstrum number of 13, filters number of 26 and FFT block size of 512. In the motion refinement module, we use the a certain frame’s mfcc feature of shape (28, 12) and forward a series of convolution and linear layers to extract the per-frame audio feature of dimension 128.\n\n> **Q5: Does the network also infer mouth movement? It would seem that way since there is no special treatment of head/face from the body gestures. Why have the authors not also evaluated using audio-driven face animation metrics?\"**\n\n**A5:** Our main focus in this work is the upper body co-speech gesture. We follow previous studies [a, b] that post-process the facial movement merely for demo visualization. Please kindly refer to L105-117 of the supplemental document for more details, analysis and discussions.\n\n****\n\n[a] - Ginosar et al. \"Learning Individual Styles of Conversational Gesture.\"\n\n[b] - Qian et al. \"Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates.\"\n\n****\n\nPlease don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer!\n", " We sincerely thank the reviewer for your insightful comments and recognitions to this work, especially for acknowledging that our approach is technically effective with superior performance. We have polished the paper, added the experiments and clarified the below points in the revised version. \n\n> **Q1: \"Learning the representation and mapping from utterances to the representations are not new. The main novelty is applying the idea on gesture generations.\"**\n\n**A1: 1)** We would like to clarify that our main novelty lies in the vector quantize design to extract the common motion pattern and residual refinement to complement subtle movement details (as recognized by Reviewer FuNA, WZ3S), while the motion representations are not the key focus of this paper. Therefore, we follow MRAA and introduce them merely as a preliminary section for self-contained contents.\n\n**2)** As specified in L172-174 of the main submission, our mapping design from utterances to representations is **actually different from previous studies** in two-folds: \n\n* Previous studies directly map utterances to pose coordinates in a continuous space, which is a _harder regression_ problem. On the contrary, we ease the problem by predicting a category of quantized codebook (i.e., codebook entry), which is an _easier classification_ problem. We thus alleviate the cross-modal audio-to-gesture learning difficulty. 
\n\n* With the quantized motion code sequences, we could use powerful attention-based Transformer for better mapping learning (as recognized by Reviewer WZ3S).\n\n**3)** As elaborated in the General Response, we provide a solution on how to deal with the constrained vector quantization problem and how to complement sequential results with missing details. Such novelty could potentially benefit relevant researches like constrained vector quantization problem and video generation, which is not limited to co-speech gesture generation.\n\n> **Q2: \"How well does the approach generalize to speakers that are not in the training data?\"**\n\n**A2: 1)** As shown in previous studies [a, b], the co-speech gesture motions and styles vary a lot for different speakers, which is termed as “individual speaking style” [b]. Therefore, it is suitable to train a separate model for each person following the baseline’ experiment settings [a, b, c]. However, we explore a more challenging task of co-speech gesture image generation in a unified framework without structural prior and achieve superior performance. Even in a single-person subset, it is non-trivial to animate non-rigid human body in image space by speech audio, especially with the interference of complex background scenes.\n\n**2)** It is hard to generalize to speakers that are not in the training data with **currently available co-speech gesture image datasets**. In particular, the commonly used datasets are TED Gesture [d] and PATS [a, b]. TED Gesture is based on TED Talk videos, while PATS contains 25 speakers of talk shows, lectures, etc. Due to the frequent camera movements and viewpoint shift in TED videos, there lacks clear co-speech gesture clips for **image** generation. Hence we narrow down the experiments to PATS dataset in this work. A dataset with high-quality co-speech gesture image frames of multiple speakers is needed to learn a model of unseen person generalization ability. We will strive for this in future work.\n\n**3)** We verify the potential generalization ability of our approach in two aspects: \n\n* We could animate the same speaker’s different appearances with speech audio (as shown in the codebook analysis part of demo video, we could animate Oliver’s different appearances), while previous studies [b, c] that resort to off-the-shelf pose2img generator only support a single appearance. \n\n* We additionally implement the experiments of animating with unseen audio from a different person. The evaluated results are reported below. It shows that the model’s performance is still effective with the unseen audio input. With the proposed vector quantize design, each codebook entry defines a reasonable co-speech gesture pattern. In contrast to directly mapping to the continuous coordinate space, such technical design guarantees a valid gesture even when generalizing to the unseen audio from a different person.\n\n|Methods|FGD$\\downarrow$|BC$\\uparrow$|Diversity$\\uparrow$| \n|-|-|-|-|\n|ANGIE (Novel Audio)|1.46|0.69|48.5|\n|ANGIE (Ours)|1.35|0.72|49.4|\n\n****\n\n[a] - Ahuja et al. \"Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach.\"\n\n[b] - Ginosar et al. \"Learning Individual Styles of Conversational Gesture.\"\n\n[c] - Qian et al. \"Speech Drives Templates: Co-Speech Gesture Synthesis with Learned Templates.\"\n\n[d] - Yoon et al. 
\"Speech Gesture Generation from the Trimodal Context of Text, Audio, and Speaker Identity.\"\n\n****\n\nPlease don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer!\n", " We sincerely thank all the reviewers for your constructive feedbacks and recognitions to this work, especially for acknowledging that **the novel vector quantize design could benefit further research** (Reviewer FuNA, WZ3S), **the motion refinement module is novel** (Reviewer WZ3S), **the performance is superior** (all reviewers), and **the ablation study is thorough** (Reviewer SBef, FuNA, WZ3S). We have polished the paper, added the experiments, and made the clarifications in the revised version. \n\nWe would like to re-emphasize the novelty and technical contributions of this work:\n\n* To the best of our knowledge, we are one of the earliest attempts to explore such a challenging setting of generating co-speech gesture images in a unified framework without structural prior annotation. Actually, it is non-trivial to animate non-rigid human body in image space by speech audio, especially with the interference of complex background scenes. In spite of this, a novel framework is proposed with superior performance than baselines. We sincerely hope that our contributions could be appreciated.\n\n* We design two novel modules named VQ-Motion Extractor and Co-Speech GPT with Motion Refinement. **1)** Instead of naively applying VQ-VAE to motion representations, we design a cholesky decomposition strategy to solve the constrained vector quantization problem (i.e., guarantee that the reconstructed covariance matrix is symmetric positive definite). **2)** Then, we improve the quantization scheme to encode the relative motion representation that is position (absolute location) irrelevant. **3)** The motion refinement module is further devised to complement the subtle motion details. The effectiveness of all modules is verified by extensive experiments.\n\n* As an early attempt to explore audio-driven co-speech image generation, this work could pave way for further studies in the audio-visual generation community. Besides, our approach gives an idea on how to deal with the constraints in vector quantization and how to complement sequential results with missing details. We hope this paper could provide insights for relevant domains like constrained vector quantization problem and video generation tasks.\n\n****\n\nWe have revised our manuscript to include the following changes according to all the reviewers’ insightful comments. 
Note that all the polishments on the main submission and supplemental document are highlighted with **blue** text color for better visualization.\n\n* We have polished to highlight the potential impact of our work on relevant research domains in Section “Related Work” of the main submission.\n\n* We have corrected several typos/misunderstandings in the main submission, which includes: change the wording from “generalization ability” to ”capacity” (L35, L82); correct the variable index typo (Section 3.1); highlight that the “position-irrelevant” denotes “image location invariant” (L194); eliminate the Wikipedia article reference (L301) and elaborate the concept of Fleiss's Kappa statistic in the supplemental document (Section J).\n\n* We have included the analysis on the model’s generalization ability in Section E of the supplemental document.\n\n* We have added the experimental results for unseen audio from a different person in Section E of the supplemental document.\n\n* We have included the user study score standard deviation in Section F of the supplemental document.\n\n* We have included the audio feature extraction details in Section G of the supplemental document.\n\n* We have included the analysis on additional input modality of text information in Section H of the supplemental document.\n\n* We have included the discussion of model’s potential ability in general conversational setting in Section D of the supplemental document.\n\n* We have included the motion estimator training details in Section C of the supplemental document.\n\nPlease don't hesitate to let us know of any additional comments on the manuscript or the changes.\n", " This paper looks into the topic of generating gesture videos from given speech and images. The proposed approach learns the representations of gesture motions as a codebook, and learns the mapping from the speech to the codebook. With the learned representation and the speech-to-codebook mapping, the approach can animate the given image with gestures based on the given speech. The speech representation for the image is based on previous work “Motion Representations for Articulated Animation”. Since the codebook provides quantized representation and can miss some details, the approach includes an additional residual learning approach to further construct the image details from speech. User studies showed improvement compared to previous work. Ablation studies are provided to show the effectiveness of the proposed techniques. Strength:\nThe approach generates good quality gesture videos based on the given image, and showed improvement compared to previous work.\n\nAblation studies demonstrated the effectiveness of the proposed techniques.\n\n\nWeaknesses:\nThe idea of learning representations and learning the mapping from utterances to the representations are not new, and the motion representations used in the work is proposed in the previous work. The main novelty is applying the idea on gesture generations.\n\nIt is not clear how the approach performs on speakers that are not in the training data.\n How well does the approach generalize to speakers that are not in the training data? The authors have properly discussed the limitations.", " The paper presents ANGIE, an approach for audio-driven image animation with specific focus on upper body gestures. 
ANGIE leverages an implicit motion model (MRAA) as the intermediate motion representation instead of operating on a landmark-based skeleton representation which allows it to work directly in the image domain. The problem is divided into 3 parts: a motion representation learning module, a motion prediction module, and a motion refinement module. First, a VQVAE is trained to discretize the motion into a fixed set of codes. This model learns a position-independent codebook of co-speech gestures. Next, a Co-Speech Gesture GPT learns to map audio onset features to the discrete codes of motion extracted from training videos. This GPT network is also trained with a residual loss which focuses on refining the motion predicted from the discrete codes.\n\nThe authors compare their approach to previous state-of-the-art audio-driven gesture generation networks and evaluate using objective metrics and subjective user studies. Overall, the proposed method outperforms the baselines. One of the baselines: HA2G tends to have better or similar beat consistency than the proposed method. The authors also perform ablations studies to compare different vector quantization strategies and confirming the benefit of the motion refinement step. The authors also analyze the quantized motion representations. Strengths:\n\n- This paper is the first to animate face and torso images without using a skeleton representation of the images and demonstrates improved performance over those approaches. \n- The method is another interesting application of VQ-VAE followed by a prediction model. The novelty also lies in the design of the VQVAE. Instead of naively learning a codebook for the motion features of MRAA, the authors transform their data to address constraints in their problem, such as learning a codebook from the lower triangular decomposition of their covariance matrix and using relative motion parameters instead of absolute parameters. Their decisions are also ablated in their evaluation showing the benefits of these transformations.\n\nWeaknesses:\n\nThere are some minor issues with the paper in terms of details. \n- The subjective evaluation involves some kind of mean opinion score (MOS). The authors do not add 95% confidence intervals or a standard deviation to their scores. This is important to see the spread in the scores given to each system.\n- Some details regarding audio feature extraction are missing in the paper. Please refer to the questions section.\n - L140: Shouldn't the output frames be from I^hat_(2:N) if I_1 is given? If that is the case, perhaps the authors can just say I_0 is given and then the rest of the paper need not be changed.\n- How is the onset feature strength computed? What is the window size or hop size for spectral flux computation? I don't understand what the 426 dimensions are in the onset feature. Onset strength should be a one dimensional feature per frame. Do you mean to say that the training videos are a fixed length and have 496 audio frames? It does seem like the videos are fixed length (96 frames @ 25 fps) but then you also need to mention some technical specifications: sample rate of the audio.\n- Similarly, how are the mfcc features computed? Again, I don't understand the dimensions of the MFCCs. Are you using 12 mfccs or 28 mfccs. In any case, if the onset strength length is 426, why is the temporal dimension of the mfccs so small? And you mention that they are computed with a window size of 10 ms. What about hop size? 
What about fft block size?\n- Does the network also infer mouth movement? It would seem that way since there is no special treatment of head/face from the body gestures, but then I am really surprised by the quality of the mouth motion in the sample videos shared. Why have the authors not also evaluated using audio-driven face animation metrics?\n The authors adequately discuss the limitations of their work along with ethics considerations of such algorithms.", " This paper concerns synthesis of co-speech gestures, gestures are coded in terms of entries in a VQ code book, and a GPT-like model is used to generate the sequence of symbols corresponding to input audio. In the generation, first the broad gestures are created, and then these are refined to improve the overall fidelity. + To construct the training data, an existing dataset was augmented with new features, and this will be made available.\n+ The generated image sequences look compelling.\n+ The approach was compared against, and beat, several baselines.\n\n- The paper refers to gestures as “common motion patterns” and “rhythmic dynamics”. What do these mean? Use standard terminology from the co-speech gesture literature.\n- It looks like the models are over fit to the specific sequences. For example, I think it’s the Kubinec sequence: when the speaker is referring to information on the screen to the speaker’s left, the speaker points and gestures towards the screen. Why? Is the screen always to the speaker’s left? There is no language model and there is no explicit capturing of the speech content, so it is not clear to me why we would see this behavior.\n- I believe you are training a separate model for each speaker, but you criticized prior work for limited generalization ability.\n- The paper considers gesture only in a very limited context - talk show host speaking monologue to camera. How well do you expect this to generalize, e.g., to conversational gestures that are likely more subtle and drawn from a larger lexicon?\n- There are broad categories of gesture that this approach will fail to capture. Specifically without the use of language, iconics, metaphorics, and deictics, are unlikely to be generated. You ought to clarify that the focus is specifically only on beats.\n The audio clips are variable length, are these clipped to be a fixed size?\n\nHow do you ensure consistency when generating longer sequences? Only short segments are shown in the demo video, it would be nice to see longer generated sequences.\n\nI was confused by “position-irrelevant” because position is important as position gives the sense of scale in gesture space. It was cleared up in later text that you mean image location invariant.\n\nIn the demo video, the speaker refers to “four quadrants” and it looks like the model produces gestures that seem to point to four quadrants. This seems suspicious since the “common motion patterns” are generated from just the speech envelope, and the refined motion from the MFCCs.\n Clarify that the approach is specifically targeting only beat gestures.", " This paper works on audio-driven gesture image generation, which receives a still image and audio to generate a sequence of images (video). Compared to previous works, it does not rely on skeletal pose annotation to prevent error accumulation caused by cascading of synthesis networks. 
The proposed system (named as ANGIE) is consisted of VQ-motion extractor and Co-speech GPT, each mapping the image to vector quantized representation and transformer network autoregressively predict next frame's quantized representation, respectively. Strenghts\n- By vector-quantizing the images, the proposed network could enjoy powerful capability of the Transformer. This kind of framework might be also applied at various video generation networks.\n- The motion refinement network to aid missing details of reconstructed images from quantized representation seems novel.\n- Ablation studies (Table 3) clearly justifies the choice of the components.\n\nWeaknesses\n- The related works shown in section 2 is limited in the application viewpoint. It would be much better if authors could make a connection to slightly other domains that the proposed method could be also applied. In overall, the scope of the paper is too specific on the co-speech gesture image generation. To meet broader NeurIPS readership, I wish there would be some implications outside the main problem.\n- Please do not cite Wikipedia article as a reference (line 303, Fleiss's Kappa statistic). Please briefly describe about the concept in the appendix. - Regarding the line 272 - \"The motion estimator is pretrained for knowledge distillation\" - please elaborate on this. The authors might want to describe such implementation details on the appendix.\n- What is a new dataset that the authors have collected, as described in line 92? Is it simply an extension of PATS dataset by some post-processing steps? It would be better if the authors could provide specific examples of the limitation of ANGIE (discussed in line 344)." ]
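The audio pre-processing described in the rebuttal responses above (onset-style features extracted with librosa at 16 kHz with hop length 512 and tempogram window 384, plus 12 MFCCs unfolded with a 28-frame sliding window) can be illustrated with the minimal sketch below. This is not the authors' code: the exact feature set making up the stated 426 channels, the alignment of audio frames to video frames, and several parameter choices (e.g., the MFCC FFT size and window/hop in samples) are assumptions.

```python
# Hedged sketch of librosa-based onset / MFCC feature extraction, assuming the
# parameters quoted in the responses above; channel counts will not match the
# paper's 426 exactly because the full feature list is not specified.
import numpy as np
import librosa

SR = 16000          # sample rate stated in the response
HOP = 512           # hop length stated in the response
TEMPO_WIN = 384     # tempogram window length stated in the response

def onset_style_features(wav_path):
    y, _ = librosa.load(wav_path, sr=SR)
    onset_env = librosa.onset.onset_strength(y=y, sr=SR, hop_length=HOP)            # (T,)
    tempogram = librosa.feature.tempogram(onset_envelope=onset_env, sr=SR,
                                          hop_length=HOP, win_length=TEMPO_WIN)     # (384, T)
    chroma = librosa.feature.chroma_cqt(y=y, sr=SR, hop_length=HOP)                 # (12, T)
    tempo, beats = librosa.beat.beat_track(onset_envelope=onset_env, sr=SR,
                                           hop_length=HOP)
    T = min(onset_env.shape[0], tempogram.shape[1], chroma.shape[1])
    beat_flag = np.zeros(T)
    beat_flag[beats[beats < T]] = 1.0                        # per-frame beat indicator
    tempo_feat = np.full(T, float(np.atleast_1d(tempo)[0]))  # broadcast the scalar tempo
    feats = np.concatenate([onset_env[None, :T], tempogram[:, :T], chroma[:, :T],
                            beat_flag[None], tempo_feat[None]], axis=0)             # (C, T)
    return feats.T                                           # (T, C)

def windowed_mfcc(wav_path, n_mfcc=12, win=28):
    y, _ = librosa.load(wav_path, sr=SR)
    # 25 ms window / 10 ms hop at 16 kHz -> 400 / 160 samples (values stated above)
    mfcc = librosa.feature.mfcc(y=y, sr=SR, n_mfcc=n_mfcc, n_fft=512,
                                win_length=400, hop_length=160).T                   # (T', 12)
    # unfold with a sliding window of 28 audio frames per video frame
    idx = np.arange(win)[None, :] + np.arange(len(mfcc) - win + 1)[:, None]
    return mfcc[idx]                                          # (T, 28, 12)
```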
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "jD1b31Ixh4C", "a-JeA-_0dev", "t1NSaIj9XBV", "phl6UWx9FvV", "uSSf9VZAo-k", "BSAJ6QD7gaS", "N6A451n51Ed", "aMGqLSOJUN-", "uSSf9VZAo-k", "BSAJ6QD7gaS", "aMGqLSOJUN-", "ArVUpvZA8l", "N6A451n51Ed", "lNwj0Lrufb", "lNwj0Lrufb", "lNwj0Lrufb", "uSSf9VZAo-k", "BSAJ6QD7gaS", "nips_2022_VhgC3SMTiy", "nips_2022_VhgC3SMTiy", "nips_2022_VhgC3SMTiy", "nips_2022_VhgC3SMTiy", "nips_2022_VhgC3SMTiy" ]
nips_2022_rWgfLdqVVl_
Visual Concepts Tokenization
Obtaining the human-like perception ability of abstracting visual concepts from concrete pixels has always been a fundamental and important target in machine learning research fields such as disentangled representation learning and scene decomposition. Towards this goal, we propose an unsupervised transformer-based Visual Concepts Tokenization framework, dubbed VCT, to perceive an image as a set of disentangled visual concept tokens, with each concept token corresponding to one type of independent visual concept. In particular, to obtain these concept tokens, we use only cross-attention to extract visual information from the image tokens layer by layer, without self-attention between concept tokens, preventing information leakage across concept tokens. We further propose a Concept Disentangling Loss to encourage different concept tokens to represent independent visual concepts. The cross-attention and the disentangling loss play the roles of induction and mutual exclusion for the concept tokens, respectively. Extensive experiments on several popular datasets verify the effectiveness of VCT on the tasks of disentangled representation learning and scene decomposition. VCT achieves state-of-the-art results by a large margin.
Accept
This paper proposes an unsupervised transformer-based framework called Visual Concepts Tokenization (VCT) to extract visual concepts from concrete pixels for tackling disentangled representation learning and scene decomposition. Experiments on several popular datasets validate the effectiveness of VCT on both tasks, where it outperforms previous works significantly. Reviewers generally agree that the proposed VCT framework is novel and the empirical results are promising (though the results on real-world images are not as strong as on synthesized data, and there is clearly room to improve on real-world image data). The authors did a great job in the rebuttal, making extensive revisions and giving comprehensive answers in response to the reviewers' concerns. Overall, this is a solid paper with sufficient contributions to disentangled representation learning and is therefore recommended for acceptance.
train
[ "bc56DbjYPKo", "jy3FH6FW6vX", "VtZ2Ppoiolp", "8a3YI6vyzST", "-_GNt9dNbXw", "OIoXPsCcBa", "axKfB2z49R", "5IhwGBwnaRh", "LL_EBAusiCb", "lISA7-OS_Xr", "qveEFlrv1L", "aSomoh-iMUx", "0z9uBf76IP-", "ktp95JErHSc", "ActI4mOS43t", "k3-VjkAMY-0", "Te6zzn1HE6", "W7YQqOgf7Ab", "I4H-S6LwU1R" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer yDux,\n\nWe want to send you a kindly reminder for the discussion, since the stage of discussion will be soon concluded.\n\nWe thank you again for your valuable comments, and we are happy to extend our response if you have any other concerns left.\n\nThanks.", " Dear Reviewer YRhc,\n\nWe want to send you a kindly reminder for the discussion, since the stage of discussion will be soon concluded.\n\nWe thank you again for your valuable comments, and we are happy to extend our response if you have any other concerns left.\n\nThanks.\n\n", " Dear Reviewer zBAX,\n\nWe want to send you a kindly reminder for the discussion, since the stage of discussion will be soon concluded.\n\nWe thank you again for your valuable comments, and we are happy to extend our response if you have any other concerns left.\n\nThanks.", " Thanks for your reply. Your remained concerns are addressed below:\n\n- As approbated by Reviewer yDux and Reviewer zBAX, the generalizability of our work to the real world is well demonstrated via results on the following representative and popular real-world datasets (MSCOCO, KITTI, LSUN cat/church, FFHQ, ImageNet) in Appendix B.5. \nWe want to remind you that our statement (\"the total number of concepts is large and unknown, and the number of concepts is image specific'') is to emphasize the difficulties existing in the real-world dataset, but does not mean that our method has those limitations (\"the number of visual concepts can be non-trivial to define in advance'', \"It may not always hold that different instances from the same domain would share the same number of visual tokens.'').\nPlease note that we don't assume that the specific value of the number of visual concepts is known in our method. In addition, 1) Even though the number of visual concepts is smaller than GT concept numbers, our method does not catastrophic fail but still works to some extent (see results of ``tokens number = 3'' in the second Table of our response). 2) Even though the number of visual concepts is unknown, we still have promising results on real-world datasets like KITTI, which is impressive to Reviewer zBAX. \n\n- From a theoretical perspective, we provide an analytic intuition here. Independence is one of the key requirements for disentangled representation in the literature [8, 5, 22, 27, 41]. In some prior works [8, 9, 5,22, 27, 41], the Total Correlation ($p(z_1,\\dots,z_m) = \\Pi_i p(z_i)$, where $[z_1,\\dots,z_m]$ is the representation derived by the encoder) was regarded as a theoretical guarantee and constraint of the independence applied on extracted representation. \nIn our paper, we constrain the independence of the extraction process: no interference between the process of extracting concept tokens, i.e., a concept token is the function of only the corresponding prototype, which means that other prototypes do not affect this concept token. We provide proof of such independence as an analytic intuition.\n\nTarget of proof: the concept token $c_i$ is the function of prototype $p_i$ but is independent of other prototypes $p_j, j\\neq i$.\n\n\n***Proof*** We denote the output of the cross-attention operation as $U \\in \\mathbb{R}^{M\\times D_v}$,\n$$\nU = \\text{cross-attention}(P,Z,Z) = softmax(\\frac{1}{\\sqrt D_q}Q_PK_Z^T)V_Z, $$\nwhere $Q_P=PW_Q \\in \\mathbb{R}^{M\\times D_q}$ is the projection of the prototypes $P \\in \\mathbb{R}^{M\\times D}$ via projection parameters $W_Q \\in \\mathbb{R}^{D\\times D_q}$. 
$p_i, i=1,2\\dots,M$ is the $i$-th row of $P$.\n$K_Z=ZW_K \\in \\mathbb{R}^{N\\times D_q}$ is the projection of the image tokens $Z\\in \\mathbb{R}^{N\\times D}$ via projection parameters $W_K \\in \\mathbb{R}^{D\\times D_q}$.\n$V_Z = ZW_V \\in \\mathbb{R}^{N\\times D_v}$ is the projection of the image tokens $Z\\in \\mathbb{R}^{N\\times D}$ via projection parameters $W_V \\in \\mathbb{R}^{D\\times D_v}$.\n\nSince the softmax operator here applies the softmax function on every row of its input matrix, the output of the cross-attention operation can be reformulated as:\n$$\nU = softmax(\\frac{1}{\\sqrt D_q}PW_QK_Z^T)V_Z $$\n$$\\quad= [softmax(\\frac{1}{\\sqrt D_q}p_1W_QK_Z^T)V_Z, \\dots, softmax(\\frac{1}{\\sqrt D_q}p_MW_QK_Z^T)V_Z] \n$$\nWe use $f$ to denote other operations (layer norm, feed-forward network, skip connection, which is operated on token-level) followed the cross attention operation. Therefore, we can derive the output (concept tokens $C\\in\\mathbb{R}^{M\\times D_v}$) of the cross-attention layer:\n$$ [c_1, \\dots,c_M] = C = f(U) = f(softmax(\\frac{1}{\\sqrt D_q}Q_PK_Z^T)V_Z)$$\n$$\\quad= [f(softmax(\\frac{1}{\\sqrt D_q}p_1W_QK_Z^T)V_Z), \\dots, f(softmax(\\frac{1}{\\sqrt D_q}p_MW_QK_Z^T)V_Z)] \n$$\nTherefore, we have $c_i = f\\left(softmax(\\frac{1}{\\sqrt D_q}p_iW_QK_Z^T)V_Z\\right)$. Therefore, the changes of prototype $p_j$ will not influence $c_i$, if $i \\neq j$. Similarly, since $C$ is the query in the next cross-attention layer, this also holds for the multi-layers case. Therefore, there is no interference between the process of extracting concept tokens.", " Thanks a lot for appreciating our work and the generalizability of the proposed method. In addition, thanks a lot for your acknowledgment of our rebuttal.\n\n\n\n\n\n\n\nFor the remaining concern, we may not have made it clear in our response before that this process is completely automated when GT labels are available. Specifically, for each factor of variation, we sample a set of (at least two) images with only this factor of variation different, but others kept the same and extract concept tokens of these images, the token with the largest variance represents that it corresponds to this factor of variation. No matter the operation of sampling images, encoding, calculating variance, and taking the largest value, all parts of this process can be automated. Also, please note that the above identifying process does not require locating the meaningful tokens first because the variance of the meaningless token itself is almost 0. In the absence of GT, as far as we know, there is currently no method in the literature (e.g., neither all kinds of VAEs nor DisCo) that can automatically find these correspondences between representation and factors of variation. In this context, this is a very interesting and worthwhile question to explore.", " The authors provided a detailed response to my questions and concerns. Their response solves most of my concerns. For the concern about real-world generalization, they offered more case studies on KITTI and MSCOCO. The real-world cases on the KITTI dataset look pretty impressive to me. They also solved the presentation issue in Section 4.3 and Section 4.4, now Figure 6 is much easier to understand than before. From the additional experiment on VQ-VAE, they demonstrated that VQ-VAE itself has almost no effect on disentanglement. The only concern I have now is about the procedure of identifying the factor of variation for each meaning token. 
From my point of view, the whole process is more of a manual step rather than decided by an algorithm automatically. In summary, I still believe this is a pretty good paper that makes enough contribution to the field of disentangled representation learning. Therefore, after reading both the author response and other review comments, I decided to keep my original rating.", " I have read the response and other reviews carefully. \n\nTwo concerns remain:\n- from the practical side, it is not always clear how the proposed disentanglement transfers to real-world cases, where the number of visual concepts can be non-trivial to define in advance. It may not always hold that different instances from the same domain would share the same number of visual tokens. However, this may not be a specific issue for this paper but not uncommon for disentangle learning works.\n- from the theoretical perspective, there seems a lacking of analytic intuition while the main contribution is architectural. Experiment evidence is valid while not always convincing.\n\nI am therefore not in particular impressed but wouldn't argue if the paper gets in, considering the approach is generic and possibly principal. ", " Thanks for approbating the generalizability of the proposed method. Verifying such correlation is a good question for evaluation in the disentangled representation literature, which is an important evaluation aspect that the disentanglement metrics already considered (MIG)[8],(DCI)[14],(FactorVAE score)[27],($\\beta$-VAE score)[22] , e.g., Figure 5 in [22] and Figure 2 in [27]. Currently, flowing previous works, using the four metrics [8,14,27,22] (also used in our paper), is the common way to verify such correlation (higher metrics indicate higher correlation).", " Dear reviewer zBAX,\n\nWe would appreciate your feedback, and would be happy to address any your remaining concerns.\n\nBest,\n\nPaper362 Authors", " Dear reviewer YRhc,\n\nWe would appreciate your feedback, and would be happy to address any your remaining concerns.\n\nBest,\n\nPaper362 Authors", " I thank the authors for proving the additional results on naturalistic images, which show the generalizability of the proposed method. Regarding my second question, I was asking if there is a way to verify the correlation between concept tokens (i.e., prototypes) and visual concepts. Currently, the concept tokens are implicitly learned, and it is unclear which concepts they encode (e.g., token 1 corresponds to green color).", " Thanks for providing constructive comments. Your concerns are addressed below. \n\n**W1&Q1**: Thanks for pointing out this. We find that we have a typo here. In Table 2, the term “VQVAE” should be “pretrained VQVAE.” Therefore, comparing the following two cases: (i) AE + VCT vs AE + VCT w/o $\\mathcal L_{dis}$; (ii) pretrained VQVAE + VCT vs pretrained VQVAE + VCT w/o $\\mathcal L_{dis}$ The results support the statement in the main paper: without $\\mathcal L_{dis}$, VCT significantly drops but can still learn a disentangled representation to some extent. In addition, we also provide the results of “pretrained AE + VCT w/o $\\mathcal L_{dis}$” below, which also conforms to our statements.\n| Models | MIG | DCI |\n| :-----:| :----: | :----: |\n| pretrained AE + VCT |0.560 | 0.849 | \n| pretrained AE + VCT w/o $\\mathcal L_{dis}$ | 0.180 | 0.674 | \n\n\n**W1&Q2**: Yes, for a dataset of more complex scenarios (i.e., with more GT concepts), a larger token number M is needed in VCT. 
Specifically, M should be no smaller than the number of GT concepts. Since we apply the disentangling loss inside each batch, to ensure the diversity inside a batch, the batch size should also be no smaller than the number of GT concepts. As the number of GT concepts is usually relatively small in the synthesized data, e.g., 6 for Shape3D, the setting of M and batch size are often satisfied. In this sense, the statement “more complex scenarios are more vulnerable with respect to the number of tokens’’ is held.\nFurthermore, in order to verify this, we add an experiment on Shapes3D with token number M and batch size set to 3, which are smaller than the number of GT factors/concepts number 6. As the table shows below, the performance significantly drops. However, if the concept number is already >= GT factors/concepts number, the performance is robust to the concept tokens number (See Table 2). Therefore, this phenomenon supports the claim that “batch sizes, token numbers influence sample diversey… but VCT is still robust to batch sizes, token numbers”, under the condition that token number/ batch size>= GT factors/concepts number. We also added this condition to the revised version (highlighted in blue). \n\n| Settings| MIG | DCI |\n| :-----:| :----: | :----: |\n| tokens number = 3 | 0.450 | 0.599 | \n| tokens number = 10 | 0.533 | 0.867 | \n| tokens number = 20 | 0.525 | 0.884 |\n| tokens number = 30 | 0.493 | 0.885 |\n\n| Settings| MIG | DCI |\n| :-----:| :----: | :----: |\n| batchsize = 3 | 0.418 | 0.790 | \n| batchsize = 16 | 0.497 | 0.862 | \n| batchsize = 32 | 0.525 | 0.884 | \n| batchsize = 64 | 0.535 | 0.900 | \n\n**W1&Q3**: As claimed in our paper, to the best of our knowledge, we are the first transformer-based architecture for learning disentangled visual representation. Here we want to emphasize that, in VCT, the operation of obtaining concept tokens (retrieving from input via cross attention) is well aligned with the kernel requirement of getting disentanglement representation, i.e., the process of extracting different concepts should be independent. \n\n**W2**: Thanks for this suggestion. To make clear how our VCT would transfer to real-world datasets, we conduct experiments on KITTI and also MSCOCO datasets. As mentioned in the common response (to all reviewers), disentanglement in the real world is still **quite challenging** [17]. Compared to synthesized data, the real-world data contains more diverse and unlimited scene variations, the total number of concepts is large and unknown, and the number of concepts is image specific. However, we find that **VCT still** produces some **promising** results on those two datasets. The results and implementation details are shown in Appendix B.5 (highlighted in blue). We also find that VCT can be combined with pretrained GAN, which can further unleash the power of VCT in the real world. It can work well and find some disentangled concepts on ImageNet (BigGAN), LSUN cat/church (StyleGAN), and FFHQ (StyleGAN). We think those promising results can inspire the way to totally solve disentanglement in the real world. \n", " Thanks for providing constructive comments. Your concerns are addressed below. \n\n**W1 & Q1**: Thanks for your suggestion. 
To demonstrate that VCT can generalize to broader scenarios and validate the effectiveness in editing applications, we conduct the following experiments: (i) we apply VCT on real-world dataset KITTI, and also MSCOCO datasets **(224x224)** and (ii) We combine VCT with pretrained GAN to discovering the latent disentangled directions for editing. As mentioned in the common response (to all reviewers), disentanglement in the real world is still **quite challenging** [17]. Compared to synthesized data, the real-world data contains more diverse and unlimited scene variations, the total number of concepts is large and unknown, and the number of concepts is image specific. However, we find that **VCT still** produces some **promising** results on those two datasets. The results and implementation details are shown in Appendix B.5. We also find that VCT can combine with pretrained GAN, which can further unleash the power of VCT in the real world. It can work well and find some disentangled concepts on ImageNet (BigGAN), and LSUN cat/church (StyleGAN) with **larger image size (i.e., 256x256)**. We think those promising results can inspire the way to totally solve disentanglement in the real world. \n\n**W2 & Q2**: Generally, in disentangled representation learning literature, the metrics used in our paper (DCI disentanglement, MIG, betaVAE score, Factor VAE score) are commonly used and well accepted for evaluating the disentangled representation [5, 8,9,22,27,28,29,30,36,41]. These metrics not only evaluate how to disentangle between tokens but also how the tokens are associated with independent visual concepts [8,14,27,22]. Among those metrics, only MIG is information-based, but others are not. \nIn our paper, following the previous works [8,14, 22,27], these metrics are computed by using the concept ground truth labels. The matching degree of concept tokens and independent concept ground truth labels is also evaluated by these metrics.\n\nWe suppose the meaning of “justify the techniques for attributing prototypes to concepts’’ is “why the concept prototypes can learn these concepts.” If we have a misunderstanding, please give us a more detailed description. In a nutshell, to get disentangled concept tokens from a given image, 1. the process of extracting those tokens should be **independent** (there is no interference between the process of extracting concept tokens), 2. ensure that each concept token can only reflect one kind of visual concept variation. These two key points can be well implemented by using cross attention operation (for point 2) without self-attention operation (for point 1) in extracting concept tokens. Further, the proposed Concept Disentangling Loss encourages the **mutual exclusivity** between the visual variations caused by modifying different concept tokens. In the cross-attention operation, each concept prototype encodes each kind of concept variation, and each concept variation corresponds to a visual concept.\n\n\n**W3 & Q3**: The prototype Y is not so important for VCT. Even replacing the Concept Detokenizer with the original transformer (see ”Transformer DeTokenizer“ in Table 2, which does not have Y), the performance does not have catastrophic drops. Please note that the use of the prototype Y makes the Concept DeTokenizer symmetric with Concept Tokenizer since a symmetric autoencoder architecture is a common design in the literature (e.g., VAE). 
The prototype Y behaves as placeholders or containers to allow the concept tokens to inject information into the decoding process. \n\n**W4**: Thanks for pointing out it and inspiring us to use VCT editing images. The answer is yes. Considering these works [20,37] that utilize pretrained GANs without applying reconstruction loss, thus we propose a method for using VCT by taking pretrained GANs as a decoder and discarding the reconstruction loss, as shown in Figure 10 in the Appendix, our VCT for editing achieves promising results on more challenging datasets including ImageNet and LSUN cat and church and FFHQ.", " Thanks for providing constructive comments. Your concerns are addressed below. \n\n**W1 &Q1**: Thanks for pointing out this. To verify that VCT can generalize to the real world, we conduct experiments on KITTI and also MSCOCO datasets. As mentioned in the common response (to all reviewers), disentanglement in the real world is still **quite challenging** [17]. Compared to synthesized data, the real-world data contains more diverse and unlimited scene variations, the total number of concepts is large and unknown, and the number of concepts is image specific. However, we find that **VCT still** produces some **promising** results on those two datasets. The results and implementation details are shown in the Appendix. We also find that VCT can be combined with pretrained GAN, which can further unleash the power of VCT in the real world. It can work well and find some disentangled concepts on ImageNet (BigGAN), and LSUN cat/church (StyleGAN). The results and implementation details are shown in Appendix B.5 (highlighted in blue). We think those promising results can inspire the way to totally solve disentanglement in the real world. \n\n**W2 & Q2**: Thanks for pointing this out. We provide the details on how to decide which learned concept token corresponds to which factor of variation. The details can be divided into the following two steps: \n(i) locating the meaningful tokens: we present a method to identify these meaningful concept tokens in the Appendix. As shown in Figure 2 in the Appendix, we calculate the variance of concept tokens across a batch of instances and obtain a variance vector for each concept. Then, we calculate the l2 norm of the variance vector. The norms of meaningful concept tokens are significantly larger than the rest of the tokens. \n\n(ii) identifying the factor of variation for each meaning token. If the ground truth concept is available, we take a set of (at least two) images (with only one target concept different) and extract their concept tokens, the token with the largest variance represents the target concept . If the ground truth is not available, we swap the concept token of two different images and then manually observe the change of the decoded images, and determine the concept of this token. This is quite similar to previous works in identifying the factor of variation by traversing the disentangled representation [8,9,27,28,29,30,36,41].\n\n**W3**: Thanks for this suggestion. We conduct an evaluation on the representation of pretrained VQ-VAE by regarding the quantized vector as concept tokens. As the results are shown below, the representation of pretrained VQ-VAE (Pretrained VQ-VAE) has almost no disentanglement. To further verify the effectiveness of VCT, we take a randomly initialized vanilla AE (AE + VCT) and pretrained vanilla AE (pretrained AE) as Image Tokenizer. 
VCT slightly drops and even has gains on MIG compared to the default setting (pretrained VQ-VAE + VCT). These results demonstrate that the power of learning visual concepts is not dependent on pretrained VQ-VAE. \n| Model| MIG |\n| :-----:| :----: |\n| pretrained VQ-VAE | 0.0185 |\n| AE + VCT | 0.484 |\n|pretrained AE + VCT | 0.560 |\n|pretrained VQ-VAE + VCT| 0.525 | \n\n**W4**:Thanks for pointing this out. We agree with you that 4.3 and 4.4 are not well presented here due to space limitations. We have also modified the corresponding part in our revised version (highlighted in blue) and added more details in the appendix (highlighted in blue). As for figure 6, we reformed it in the main paper and put the original figure 6 with the refined descriptions into the Appendix. ", " **W3**: Thanks for your suggestion. We have some discussion on the motivation of our design in the introduction of our paper. We should emphasize more on it. In a nutshell, to get disentangled concept tokens from a given image, we have two key points. 1. the process of extracting those tokens should be **independent** (there is no interference between the process of extracting concept tokens). 2. Ensure that each concept token can only reflect one kind of visual concept variation. These two key points can be well implemented using cross-attention operation (for point 2) without self-attention operation (for point 1) in extracting concept tokens. Further, the proposed Concept Disentangling Loss encourages the **mutual exclusivity** between the visual variations caused by modifying different concept tokens. Lastly, The extracted concept tokens should be **complete** to represent the image, i.e., the image can be well reconstructed from the concept tokens. This inspires us to adopt an auto-encoder architecture. Considering the disordered nature of concepts, the ranking order of tokens should not carry any information, so we do not adopt positional embedding for concept tokens and prototypes. ", " **We thank all the reviewers for the positive feedback and constructive comments. VCT is principal (Reviewer YRhc) and interesting (yDux), and achieves decent (Reviewer YRhc) and SOTA (Reviewer zBAX) performance.**\n\n**The main common concern is whether VCT can be generalized to real-world data or not.** Please note that disentanglement in the real world is still **quite challenging** [17]. Compared to synthesized data, the real-world data contains more diverse and unlimited scene variations, and the total number of concepts is large and unknown. The number of concepts in a single image is image specific. The previous SOTA method can only address CeleBA [8,9,27,28,29,30,41] (with limited scene variations as pointed out by Reviewer zBAX). Please note that [20,37] do not target learning disentangled representation but discovering the latent direction of the pretrained GAN instead. Besides, they highly rely on pretrained GANs.\\\nWe generalize VCT to more complex real-world datasets MSCOCO and KITTI, and find that **VCT still** produces some **promising** results on those two datasets. The results and implementation details are shown in Appendix B.5 (highlighted in blue). Furthermore, inspired by the suggestion of Reviewer yDux and DisCo [36], we combine the tokenizer of VCT with a pretrained GAN, resulting in a new architecture. 
As the results are shown in Appendix B.5 (highlighted with blue), this new architecture can learn a lot of disentangled **concepts on ImageNet (BigGAN), LSUN cat/church (StyleGAN), and FFHQ (StyleGAN).** \\\nThose observations all show the potential capability of VCT in real-world scenarios. However, we do not mean this challenging problem is totally solved by VCT, as some failure cases are shown in Appendix B.5 (highlighted in blue). We think that VCT, with a novel architecture, is promising in addressing this challenge, which can be beneficial to this research community. \n", " This paper proposes a new unsupervised transformer architecture for learning disentangled visual concepts. Main contributions include the novel transformer-based architecture to represent images as a set of tokens, each reflecting a visual concept; and a concept disentangling loss, which asks the model to predict mutated concept tokens. Experiments are conducted on Shapes3D, MPI3D, Cars3D, which demonstrate better disentangling and scene decomposition capabilities of the propose model compared with other existing works. When using CLIP encoder as image tokenizer, the framework also enables language-aligned disentanglement. S1: The propose architecture seems principle, and as the paper shows can be used in combination with different model architectures.\n\nS2: The concept disentangling loss is beneficial to further learn disentangled visual representation, without additional manual annotations.\n\nS3: Quantitative and qualitative results are decent in demonstrating the quality of the obtained disentangled representation on the commonly used datasets.\n\nS4: The presentation is smooth and easy to follow. \n\nW1: Some parts of the writing may be under-supported by evidence. See Questions.\n\nW2: Disentanglement results are mostly on simulated datasets. It is not completely clear how the proposed method would transfer to real-world datasets, such as KITTI. The results on CelebA in the supplementary is not always promising, with attributes tangled in some cases.\n\nW3: The design may seem not very well motivated. Contributions are mostly on the architecture side, with no strong intuition why or whether such a design leads to superior performance in general. 1. From Table 2, how L_dis will affect the result if AE / VQ-VAE are pre-trained? \n2. From Table 2, are the sensitivity results (batch sizes, token numbers) dependent on the datasets? For example, I would expect more complex scenarios are more vulnerable with respect to the number of tokens.\n3. Has transformer-based architecture used previously for learning disentangled visual representation? What are the main points to highlight in VCT in comparison to them? The key limitation is the lack of convincing results on real-world scenarios, considering the conclusion are most experimental rather theoretical. In addition, it is not always clear why such an architecture leads to better disentangled representation in general, due to the lacking in a good motivation.", " The paper proposes an unsupervised framework called Visual Concepts Tokenization (VCT) to extract visual concepts from concrete pixels for tackling disentangled representation learning and scene decomposition. VCT adopts a cross-attention-based tokenizer to abstract visual information into concept tokens. In addition, it also utilizes a concept disentangling loss based on the visual concept token manipulation to ensure the exclusivity of different tokens. 
The authors conduct extensive experiments to demonstrate the superiority of VCT against SOTA methods on multiple popular disentanglement benchmarks. Furthermore, the authors perform many ablation studies to verify the design choices of cross-attention only concept tokenizer and concept disentangling loss. Even trained without any dedicated design for scene decomposition tasks, VCT can still well decompose a scene into object-level visual representations. Strengths:\n1. The paper is well-written and easy to follow. The proposed VCT approach performs well on disentangled representation learning and scene decomposition tasks. It achieves SOTA performances on three datasets and beats all four types of baseline methods (VAE-based, GAN-based, pre-trained GAN-based, concept-based) by a relatively large margin.\n2. As shown in the quantitative results in Figure 3, 4, and 5, VCT performs pretty well on decomposing, reconstructing, and manipulating the learned visual concepts, which demonstrates great model interpretability.\n3. Even not trained with explicit object mask annotations, VCT can still abstract high-level visual representations and achieve scene decomposition by representing each object as a concept token.\n4. Extensive ablation studies are conducted to confirm the design choices of image tokenizer, concept disentangling loss, concept tokenizer, and concept detokenizer. Another big advantage of VCT is that it is robust to hyper-parameter variations.\n\nWeaknesses:\n1. Except for the face dataset CeleBA, the paper conducts all experiments on synthesized datasets. CeleBA is also with limited scene variations. It is doubtful whether the proposed VCT can generalize well to real-world images such as those in ImageNet or MSCOCO.\n2. From the qualitative results presented in the Appendix, for predefined $M$ visual concept tokens, only a proportion of them have explicit meanings after training. Moreover, it is unclear in the paper how to decide which learned concept token corresponds to which factor of variation.\n3. The default setting used the image tokenizer and detokenizer from a pre-trained VQ-VAE. However, VQ-VAE might already learn meaningful quantized representations that correspond to visual concepts via the vector-quantization layer.\n4. Section 4.3 and Section 4.4 are not well presented. The descriptions of the decomposition and recombination processes are hard to understand. Meanwhile, the caption of Figure 6 seems not to match its content, especially for Figure 6 (b). I suggest the authors further improve the presentation for scene decomposition experiments. 1. Can VCT generalize well to real-world image datasets?\n2. How to decide which learned concept token corresponds to which factor of variation? No obvious negative societal impact was found.", " This paper proposes a new transformer-based method for learning a set of prototypes that describe different visual concepts. It leverages the cross-attention/self-attention mechanism to model the interactions between visual inputs and trainable prototypes, and presents a concept disentangling loss to encourage the learning of their correlations. Experimental results on several 3D datasets demonstrate the usefulness of the proposed method. This paper has the following strengths:\n+ It is an interesting idea to leverage the attention mechanism for decomposing visual inputs into unique concepts. 
\n\n+ The proposed method is able to learn visual decomposition in an unsupervised manner.\n\n+ The paper provides extensive ablation study and analysis with different settings, and the concept disentanglement based on language is also interesting.\n\nHowever, there are also weaknesses that can not be overlooked:\n- The paper only carries out experiments on synthetic datasets, and the image size is typically very small (i.e., 64x64). It is unclear how well the method could generalize to broader scenarios, for example, prior studies (e.g., [20, 37]) typically consider naturalistic images with more complicated visual scenes. \n\n\n- A major claim of the method is to learn disentangled prototypes that correspond to unique concepts. The paper mainly demonstrates the effectiveness of disentanglement based on information-based metrics, and it is difficult to validate if the disentangled representations are truly associated with independent visual concepts.\n\n- Besides the targeted concept prototypes, the method also learns a set of dataset-specific prototypes Y. This is relatively counterintuitive, considering that the visual concepts are supposed to represent a diverse range of visual scenes. What do these dataset prototypes learn? And how important are they to the overall method?\n\n- In addition to removing/adding concepts or swapping concepts between images, is it possible to utilize the proposed method for image editing? I am aware of the qualitative results shown in the supplementary materials, however, they are either limited to specific domains (synthetic data) or not very impressive (causing distortion or no significant effects on facial images).\n (1) Please consider experiments with data from broader domains, and also validate the effectiveness of the method via applications such as image editing. \n\n(2) Please justify the unique concepts learned from the proposed method, and the techniques for attributing prototypes to concepts.\n\n(3) Please provide the details about the mechanism and impacts of the dataset-specific prototypes.\n I did not find significant negative societal impacts in this work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "qveEFlrv1L", "axKfB2z49R", "OIoXPsCcBa", "axKfB2z49R", "OIoXPsCcBa", "ktp95JErHSc", "ActI4mOS43t", "qveEFlrv1L", "W7YQqOgf7Ab", "Te6zzn1HE6", "0z9uBf76IP-", "Te6zzn1HE6", "I4H-S6LwU1R", "W7YQqOgf7Ab", "Te6zzn1HE6", "nips_2022_rWgfLdqVVl_", "nips_2022_rWgfLdqVVl_", "nips_2022_rWgfLdqVVl_", "nips_2022_rWgfLdqVVl_" ]
nips_2022_GGtH47T31ZC
Orthogonal Transformer: An Efficient Vision Transformer Backbone with Token Orthogonalization
We present a general vision transformer backbone, called as Orthogonal Transformer, in pursuit of both efficiency and effectiveness. A major challenge for vision transformer is that self-attention, as the key element in capturing long-range dependency, is very computationally expensive for dense prediction tasks (e.g., object detection). Coarse global self-attention and local self-attention are then designed to reduce the cost, but they suffer from either neglecting local correlations or hurting global modeling. We present an orthogonal self-attention mechanism to alleviate these issues. Specifically, self-attention is computed in the orthogonal space that is reversible to the spatial domain but has much lower resolution. The capabilities of learning global dependency and exploring local correlations are maintained because every orthogonal token in self-attention can attend to the entire visual tokens. Remarkably, orthogonality is realized by constructing an endogenously orthogonal matrix that is friendly to neural networks and can be optimized as arbitrary orthogonal matrices. We also introduce Positional MLP to incorporate position information for arbitrary input resolutions as well as enhance the capacity of MLP. Finally, we develop a hierarchical architecture for Orthogonal Transformer. Extensive experiments demonstrate its strong performance on a broad range of vision tasks, including image classification, object detection, instance segmentation and semantic segmentation.
Accept
The paper presents an orthogonal attention mechanism for vision transformers. All reviewers found that the overall system has good performance and that the introduced orthogonal attention has the potential to be widely used. The authors' rebuttal resolves the majority of the questions. The authors should add their promised additional experiments in the final version.
train
[ "-DQoS1bMu3", "rq0E2qNW1Cl", "FrAHCTU86Cb", "Vpipui5yqbw", "xebHUytYJim", "4EtRy1xBpnY", "nBLDZOpLEK", "CDjIURn-UzT", "XvY-nWgGhUQ", "OpeZGExH5jL" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank the Area Chair and the reviewers for your efforts and valuable comments. We thank you for recognizing the positive aspects of our paper, such as the novelty of the orthogonal attention, the extensiveness of experiments, and the impressive performance of our models. \n\nThe major concern is that the orthogonal attention's contribution might be mixed with other architecture tweaks. We would like to highlight that as shown in Table 6 and Table I, based on the same backbone except for the attention layers, the orthogonal attention can achieve better performance than other efficient self-attention mechanisms. \nTo better isolate its contribution, we would like to compare the self-attention mechanisms on two additional backbones, including vanilla ViT and vanilla Swin Transformer. We will also compare the performance gain by the orthogonal attention among different backbones (Ortho/ViT/Swin). This might further validate whether or how much the orthogonal attention's contribution is mixed with other architecture tweaks. For time constraints, additional experiments will be included in the revised version.\n", " Dear authors,\n\nBelow please see the discussions from reviewers after the rebuttal. In general, the majority of the reviewers still have concerns that your orthogonal attention's contribution might be mixed with other architecture tweaks. Additional experiments are expected from them. Could you please state your plan or show some further quick results (if feasible) on how to better isolate the contribution of your orthogonal attention layer.\n\nReviewer hQkC:\nI think the paper has an impressive system-level performance by mixing up with many other architecture tweaks. However, the ablation and analysis of the paper's own contribution have not been well demonstrated. The response from the author has only addressed part of my concerns.\n\nI intend to raise my score to 5 as I believe the proposed Orth SA is somewhat new to the community and may benefit the research of SA mechanism. But I strongly encourage the authors to add more content to analyze the proposed method to strengthen their contributions.\n\nReviewer SBMR:\nOverall, I think the orthogonal attention layer is a neat trick, and the experiments are pretty extensive. I still have some concern that this change is mixed up with many other architectural tweaks that mask the value of this component.\n\nI think on balance this paper meets the bar for me. The Orth SA layer could be something more widely used. I see many concerns about its novelty/value of the Orth SA layer; if these are not addressed by the rebuttal and ablations (e.g. Table I, Appendix D), then I would not fight strongly for acceptance.\n\nReviewer 2jJ5:\nI think the paper overall presents a system-level ViT variants to achieve its current performance. Regarding each of its core components, novelty is limited. The core component it claimed (orthogonal. sa) only gives about 0.3% top-1 imagenet improvement, as shown in Table 6, which is a big concern for me. I'm leaning towards borderline accept if authors can add more studies regarding the orthogonal operation or rephrase its contributions. Would not fight for acceptance as well if other reviewers still have unsolved concerns.\n", " We sincerely appreciate your hard work and positive comments on our paper. We will address your concerns in the following parts.\n\n**Q1:** About the orthogonalization layer.\n\n**R1**: Thanks for your constructive suggestion. 
It is a nice idea to run a vanilla ViT/DINO setup with just the OT layer replacing self-attention. However, vanilla ViT uses a large patch-size (e.g., 16 in ViT-S) to reduce the number of tokens. The major superiority of the proposed orthogonal self-attention (OSA) over vanilla self-attention is enabling transformer to compute self-attention in high-resolution space with low computation complexity. Adopting a large patch-size and computing self-attention in low-resolution space cannot validate the superiority of OSA. If we set a small patch-size for ViT, the complexity would increase explosively. For example, the value of FLOPs increases from 4.6G to 157G when the patch-size changes from 16 to 4 (which is the commonly used patch size in many vision transformers). Limited by the computing resources, we choose to compare our method with Swin Transformer rather than vanilla ViT in a simple backbone.\n\nSpecifically, we adopt the setup of Ortho-T except that we replace convolutional position encoding with absolute position encoding, adopt the same patch embedding with vanilla ViT, and employ outside transition like Swin Transformer. We compare the proposed orthogonal self-attention layer with Swin’s self-attention layer. The proposed OSA achieves better performance than shifted window attention, specifically 69.9% vs 67.6% on ImageNet image classification. This verifies the effectiveness of OSA. \n\nBesides, we also conduct comparisons between different self-attention mechanisms in Appendix D.1. Except the self-attention layer, other parts in the transformer network are kept same. Our OSA outperforms other self-attention mechanisms consistently on three vision tasks, i.e., ImageNet Image classification, COCO object detection and instance segmentation, and ADE20K semantic segmentation. This also verifies that the orthogonal self-attention mechanism can boost the performance on various vision tasks.\n\n**Q2:** Name.\n\n**R2:** Thanks for this constructive suggestion. We follow previous vision transformer works, such as Swin Transformer and Focal Transformer, to name our model as Orthogonal Transformer. We expect to generalize it to other fields, like NLP and video processing in the future work (possible manner may be replacing the 2D convolutions with 1D/3D ones). We will modify the name if accepted.\n\n**Q3:** Typos.\n\n**R3** Thanks. We will correct them and check the paper carefully.\n\n**Q4:** Code.\n\n**R4:** Thanks for your kind reminder. We will release the code as soon as possible if accepted.\n", " We thank the reviewer for recognizing the positive aspects of our paper, and we will address the reviewer’s concerns in the following parts.\n\n**Q1:** Motivation.\n\n**R1:** Thanks. We would like to clarify that the claim *\"lose fine-level details for coarse global self-attention or hurting long-range modeling for local self-attention\"* is borrowed from and proved in [Tang et al., 2022], [Yu et al., 2021]. Related descriptions are listed below.\n\n*To reduce the computational cost, the PVT uses downsampled keys and values, which is harmful to capture pixel-level details. In comparison, the Swin Transformer restricts the attention in local windows in a single attention block, which might hurt long range dependencies* (Tang et al., 2022)\n\n*Spatial-reduction attention can reduce memory and computation costs to learn high-resolution feature maps, yet with a price of losing details which are expected from the high-resolution feature maps. 
Adopting self-attention within local windows is efficient with linear complexity, but it sacrifices the most significant advantage of Transformers in modeling long-range dependencies.* (Yu et al., 2021)\n\n*Although the Glance branch can effectively capture long-range representations, it misses the local connections across partitions* (Yu et al., 2021)(Glance branch uses dilated self-attention)\n\nThe proposed OSA can capture global dependency without losing fine-level details for two reasons: 1) It can capture global dependency because it can always cover the whole images; 2) Orthogonal transformation is reversible and builds local connections among adjacent tokens explicitly, leading to strong capacity of local correlation learning.\n\nAs shown in Table 6 and Table I (in Appendix), compared with other self-attention (SA) mechanisms, orthogonal self-attention performs better on various vision tasks, which may be attributed to its strong capacity of learning global and local dependencies. Besides, from the visualization of orthogonal tokens, we found that some of them capture low-frequency global information while some capture high-frequency local textures. The attention scores illustrate that OSA can capture global dependency on low-frequency tokens and model local correlation on high-frequency tokens. The comparisons on the attention scores against other SA mechanisms also show the superiority of OSA in modeling global and local dependencies. We will add visual comparisons on tokens and attention scores if accepted.\n\n[1] Shitao Tang, Jiahui Zhang, Siyu Zhu, and Ping Tan. Quadtree attention for vision transformers. ICLR, 2022.\n\n[2] Qihang Yu, Yingda Xia, Yutong Bai, Yongyi Lu, Alan L Yuille, and Wei Shen. Glance-and-gaze vision transformer. NeurIPS, 2021.\n\n**Q2:** Contribution. \n\n**R1:** Thanks. We would like to highlight that the key novelty is that we construct a trainable orthogonalization module and the orthogonal self-attention layer is simple and general. The orthogonality is realized by constructing an endogenously orthogonal matrix that can be optimized as arbitrary orthogonal matrices without extra regularizer. Based on it, Orthogonal Transformer is built and achieves competitive performance on various vision tasks. We believe that Orthogonal Transformer can serve as a strong general baseline and considerably advance a broad range of vision tasks. \n\nBesides, we agree that depth-wise convolution (DConv) has been applied in previous works. Our major difference is that strided DConv is used to perform downsampling within the transformer block. In this way, we do not need extra patch merging layer to perform feature downsampling. Table 6 shows that it can achieve better performance than outside the transformer block. We also conduct extensive experiments to investigate the best location of DConv in Appendix D.3. Note that there exist various manners to employ DConv in vision transformers (e.g., PVT v2, CeiT, CMT and ours adopt DConv in different locations). We believe our work is suggestive to find appropriate manner of incorporating depth-wise convolution into transformers. We will discuss about PVT v2, CeiT, and CMT along with our differences if accepted.\n\n**Q3:** Shifted WSA.\n\n**R3:** Thanks. We have conducted experiments with variants using shifted window. As reported below, shifted window SA achieves similar performance with window SA. The shifted windowing scheme is designed to allow for cross-window connection. 
In the Ortho-S backbone, DConv in MLP allows for cross-window interaction. That may be why shifted window SA is not superior to window SA herein.\n\n| Method | #Param (M) | FLOPS (G) | Acc. (\\%) | AP$^b$| AP$^m$|mIoU|\n| :------ | -------: |-------: |-------: |-------: |-------: |-------: |\n|window sa | 24.0 | 4.5 | 82.6 | 46.2 | 42.0 | 47.5|\n|shifted window sa | 24.0 | 4.5 | 82.5 | 46.4 | 42.1 | 47.6|\n|dilated sa| 24.0 | 4.5 | 82.9 | 46.2 | 41.7 | 46.6|\n|ortho. sa| 24.0 | 4.6 | 83.2 | 47.0 | 42.3 | 48.0|\n|window/ortho. sa | 24.0 | 4.5 | 83.4 | 47.0 | 42.5 |48.2 |\n", " We thank the reviewer for recognizing the contributions of our paper and giving us constructive suggestions. We will address the reviewer’s concerns in the following parts.\n\n**Q1:** The essentialness of using orthogonal matrix. \n\n**R1:** Thank you for your constructive suggestion. We have investigated the essentialness of orthogonality in Appendix D.2. We compare our method with variants where $A$ is initialized randomly or orthogonally, and $A$ is trained with or without the orthogonal loss $L_{ortho} = \\frac{1}{n^2}\\| \\mathbf{I} - \\mathbf{A}^{\\mathrm{T}}\\mathbf{A}\\|^2$ or not regularized. When trained without $L_{ortho}$, the orthogonality of $A$ is not enforced. \nAs shown in Table II (repeated below), the performance degradation without orthogonal regularization implies the essentialness of orthogonality. More details are in Appendix D.2.\n\n|Random Init.| Ortho. Init. | Ortho. Loss | presented Ortho. | Acc. (\\%)|\n|:-----------|----------:|----------:|----------:|----------:|\n|$\\surd$ | | | | 73.5 |\n|$\\surd$ | | $\\surd$ | | 73.4 |\n||$\\surd$ | | | 73.8 |\n||$\\surd$ | $\\surd$ | | 73.3 |\n||$\\surd$ | | $\\surd$ | **74.0** |\n\nThanks for your precious suggestion. We realized the above analysis neglects the case where the matrices are different in Step 1 and Step 3. We further conduct experiments using two different matrices, i.e., $\\mathbf{A}\\in \\mathbb{R}^{n\\times n}$ in Step 1 and $\\mathbf{B}\\in \\mathbb{R}^{n\\times n}$ in Step 3. We explore two variants: $\\mathbf{A}$ and $\\mathbf{B}$ are randomly initialized without additional regularization; $\\mathbf{A}$ is randomly initialized, $\\mathbf{B}$ is initialized as the pseudo inverse of $\\mathbf{A}$ and they are regularized with the reverse loss $L_{rev} = \\frac{1}{n^2}\\| \\mathbf{I} - \\mathbf{B}\\mathbf{A}\\|^2$. As shown below, our method using orthogonal matrix surpasses those without orthogonal matrix, validating the essentialness of orthogonality. The reversibility between Step 1 and Step 3 can improve the performance, but the regularization way with $L_{rev}$ lags the presented endogenously orthogonality construction.\n|Reverse Loss |presented Ortho. | Acc. (\\%)|\n|:----------|------------:|------------:|\n|||69.3|\n|$\\surd$||71.5|\n||$\\surd$|74.0|\n\n\n**Q2:** Lower resolution in the orthogonal space.\n\n**R2:** Thanks. It is true that the tokens obtained from multiplying the orthogonal matrix will not change the total dimensions. Line 41 means that tokens in the orthogonal space have lower spatial resolution. As described from Line 157 to Line 163, token orthogonalization would transform the input feature $Z\\in \\mathbf{R}^{(h\\times w)\\times c}$ into $n_o$ groups of orthogonal tokens $\\hat{Z}^j\\in \\mathbf{R}^{(\\frac{h}{m_o}\\times \\frac{w}{m_o}) \\times c}$ (where $j=0,\\ldots,n_o-1$). The spatial dimension is reduced from $h \\times w$ to $\\frac{h}{m_o}\\times \\frac{w}{m_o}$. 
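For concreteness, the sketch below illustrates how such an endogenously orthogonal matrix can be built as a product of Householder reflectors and then used to regroup window tokens into $n_o = m_o^2$ lower-resolution groups. The tensor shapes, the grouping order and all names are illustrative assumptions for this response rather than the exact code of our implementation:

```python
import torch
import torch.nn as nn

class HouseholderOrthogonal(nn.Module):
    """Trainable n x n matrix that is orthogonal by construction (product of Householder reflectors)."""
    def __init__(self, n, num_reflectors=None):
        super().__init__()
        self.n = n
        self.v = nn.Parameter(torch.randn(num_reflectors or n, n))

    def forward(self):
        A = torch.eye(self.n, device=self.v.device, dtype=self.v.dtype)
        for v in self.v:
            v = v / v.norm().clamp_min(1e-8)
            # (I - 2 v v^T) is exactly orthogonal for any v, so no extra regularizer is needed.
            A = A - 2.0 * torch.outer(v, v @ A)
        return A

def orthogonal_token_grouping(z, A, m_o):
    """z: (B, H, W, C) feature map, A: (m_o^2, m_o^2) orthogonal matrix.
    Returns n_o = m_o^2 groups of tokens, each covering a (H/m_o) x (W/m_o) grid."""
    B, H, W, C = z.shape
    z = z.view(B, H // m_o, m_o, W // m_o, m_o, C)
    z = z.permute(0, 1, 3, 2, 4, 5).reshape(B, (H // m_o) * (W // m_o), m_o * m_o, C)
    z = torch.einsum('ij,bwjc->bwic', A, z)   # orthogonalize the m_o^2 tokens inside every window
    return z.transpose(1, 2)                  # (B, n_o, H/m_o * W/m_o, C): one low-resolution map per group
```

Self-attention is then run within each group, and multiplying by $\mathbf{A}^{\mathrm{T}}$ (which equals $\mathbf{A}^{-1}$) and undoing the reshape maps the tokens back to the spatial domain.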
We will clarify this in the final version.\n\n**Q3:** Window size for WSA and OTB.\n\n**R3:** Following Swin Transformer, we set the window size for WSA as 7. We set the window size for OSA to make the complexity of OSA similar with that of WSA in the same stage. For clarity, we repeat the complexities of OSA and WSA in the following:\n\\begin{equation}\n \\Omega(OSA) = 4hwC^2 + \\frac{1}{m_o^2}2(hw)^2C + 2n_ohwC,\n\\end{equation}\n\\begin{equation}\n \\Omega(WSA) = 4hwC^2+2m_w^2hwC.\n\\end{equation}\nThe window size $m_o$ for OSA is set as $m_o=\\frac{\\sqrt{hw}}{m_w}$, leading to the second terms in $\\Omega(OSA)$ and $ \\Omega(WSA) $ equaling to each other. For example, given an input image of pixel-size $224\\times 224$, for the first stage where $h=w=56$, we set $m_w=7$ and $m_o=8$.\n\nThanks for your constructive suggestion. We conduct additional experiments to investigate the relationship between the window size and the performance. We vary the window size and build several variants on the backbone of Ortho-T. The results are reported in the following.\n|$m_w$ for WSA |$m_o$ for OSA| Params (M) | FLOPs (G)| Acc. (\\%)|\n|----------:|----------:|----------:|----------:|----------:|\nVarying $m_w$\n|4 | 8,4,2,1 | 3.9| 0.71 | 73.6|\n|7 | 8,4,2,1 | 3.9| 0.71 | 74.0|\n|14|8,4,2,1 | 3.9| 0.72 | 74.1|\nVarying $m_o$\n|7 | 8,4,2,1 | 3.9| 0.71 | 74.0|\n|7 | 4,4,2,1 | 3.9| 0.73 | 73.9|\n|7 | 2,2,2,1 | 3.9| 0.86 | 74.1|\n\nThe enlargement of the window size $m_w$ for WSA will bring about both performance gain and complexity increase. $m_w=14$ can achieve slightly better performance than $m_w=7$. We choose $m_w=7$ following Swin Transformer.\n\nFor an input image of pixel-size $224\\times 224$, we set $m_o$ to ensure that the height/width is divisible by $m_o$ in every stage. Hence, the largest window sizes for OSA in the four stages are 8, 4, 2, 1, respectively. As shown above, when $m_o$ varies, the performances are close, but the complexity increases significantly when $m_o$ decreases. We set the values of $m_o$ as 8, 4, 2, 1 to achieve low computation complexity with competitive performance.\n", " We thank the reviewer for recognizing the positive aspects of our paper, and we will address the reviewer’s concerns in the following parts.\n\n**Q1:** Linear independency.\n\n**R1:** Thank you for your constructive suggestion. Linear independency would help self-attention explore different properties of representation. We empirically found that the learned orthogonal transformation can split feature maps into groups that capture different characteristics. For example, some may capture low-frequency information, and some may capture high-frequency textures. The attention scores vary for different groups, leading to stronger capability of representation via different views. If accepted, we will add visualizations of orthogonal tokens as well as the attention scores to show how linear independency help explore image properties in different views.\n\nBesides, we also conduct ablation experiments to validate the role of linear independency. We modify OSA\n\\begin{equation}\nf_{OSA}(Z) = \\mathbf{A}^{\\mathrm{T}} f_{MSA}(f_{LN}( \\mathbf{A}Z)) + Z ,\n\\end{equation}\nas\n\\begin{equation}\nf_{OSA}(Z) = \\mathbf{B} f_{MSA}(f_{LN}( \\mathbf{A}Z)) + Z ,\n\\end{equation}\nwhere $\\mathbf{A}\\in \\mathbb{R}^{n\\times n}$ is randomly initialized and $\\mathbf{B}\\in \\mathbb{R}^{n\\times n}$ is initialized as the pseudo inverse of $A$. 
They are regularized with\n\\begin{equation}\nL_{rev} = \\frac{1}{n^2}\\| \\mathbf{I} - \\mathbf{B}\\mathbf{A}\\|^2.\n\\end{equation}\nSuch variant remains the properties of reduced resolution, reversibility (the converged value of $L_{rev}$ is 1e-4 that is very close to zero), and token connections, but removes linear independency of orthogonal transformation. As shown in the following, the performance degradation implies that linear independency is important for the overall performance. \n| Method | Acc. (\\%) |\n| :---------- | -------: |\n| w/o Linear Independency| 71.5 |\n| with Linear Independency | 74.0 |\n\n**Q2:** Complexity analysis. \n\n**R2:** Thanks. We compare complexity against the global self-attention to show that the proposed OSA can reduce the computation complexity of self-attention, especially for high-resolution vision tasks.\n\nOSA has higher complexity than dilated self-attention but the gain is marginal, which is exactly the third term in Eq. (5), i.e., $2n_o hwC$. It can be ignored when $n_o\\ll\\sqrt{hw}$ (this is usually true for high-resolution vision tasks). \nFurthermore, OSA can achieve better performance than dilated self-attention. \nAs shown in Table 6 (the related parts are repeated in the following), compared with dilated self-attention, the network with OSA has comparable FLOPs (4.6G vs 4.5G) and achieves better performance on ImageNet classification, COCO detection and ADE20K segmentation. \nTherefore, we believe the marginal gain of complexity for OSA is acceptable considering the obvious performance improvements. \n\nWe will compare the complexity against dilated self-attention with a detailed analysis if accepted.\n\n| Method | #Param (M) | FLOPS (G) | Acc. (\\%) | AP$^b$| AP$^m$|mIoU|\n| :------ | -------: |-------: |-------: |-------: |-------: |-------: |\n|dilated sa| 24.0 | 4.5 | 82.9 | 46.2 | 41.7 | 46.6|\n|ortho. sa| 24.0 | 4.6 | 83.2 | 47.0 | 42.3 | 48.0|\n\n**Q3:** Novelty.\n\n**R3:** Thanks. We would like to highlight that the key novelty is that we construct a trainable orthogonalization module and the orthogonal self-attention layer is simple and general. The orthogonality is realized by constructing an endogenously orthogonal matrix that can be optimized as arbitrary orthogonal matrices without extra regularizer. Based on it, Orthogonal Transformer is built and achieves competitive performance on various vision tasks. We believe that Orthogonal Transformer can serve as a strong general baseline and considerably advance a broad range of vision tasks. The idea of orthogonal self-attention may inspire researchers to explore efficient and effective attention mechanisms for high-dimensional data, such as high-definition images and long videos.\n", " This paper introduces orthogonal transformation of tokens and combines it with the idea of interleaving window self attention (WSA) and dilated self-attention (DSA) to capture both local and global interactions without incurring the prohibitive cost of full global self attention. They showed that initialization and optimization of orthogonal matrices can be simplified by using Houholder transformations and leveraging already established optimization techniques to learn these matrices. In addition, the paper also make use of Positional MLPs where MLPs are equipped with depth-wise convolutions to allow for downsampling within the transformer block. 
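(For context, the block described here — an MLP whose hidden features pass through a depth-wise convolution, with the stride used for downsampling — could look roughly like the following sketch; the layer ordering and the stride placement are assumptions for illustration, not the paper's exact design.)

```python
import torch.nn as nn

class PositionalMLP(nn.Module):
    """Sketch: MLP with a depth-wise 3x3 convolution; stride > 1 downsamples the token map."""
    def __init__(self, dim, hidden_dim, stride=1):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.dwconv = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3,
                                stride=stride, padding=1, groups=hidden_dim)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x, H, W):
        # x: (B, H*W, C) token sequence for an H x W feature map
        B, N, C = x.shape
        x = self.fc1(x)
        x = x.transpose(1, 2).reshape(B, -1, H, W)
        x = self.dwconv(x)                    # injects positional information; stride > 1 reduces H and W
        x = x.flatten(2).transpose(1, 2)
        x = self.act(x)
        return self.fc2(x)
```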
\n\nAuthors individually ablated different techniques proposed in the paper and combined the best configurations to achieve top performance on wide range of image tasks. The main differentiating contribution of the paper is using the Householder transformation trick to introduce orthogonal transformations. This idea is coupled with already existing techniques of window self-attention and dilated self-attention. Authors present extensive experiments ablating each individual technique introduced. The paper is also well written and easy to read and understand. line 44: Authors claim that one of the advantages of orthogonal transformations is that the tokens can be separated into linearly independent groups. But, there is no further explanation as to why this linear independency is important or how this contributes to the overall performance of the model.\n\nequation 5: Authors present complexity analysis in comparison with the global self-attention. But, it is only fair to compare against dilated self attention complexity. Given that the paper is just concatenation of already existing ideas with minimal novelty of newly introduced techniques, I am not inclined to accept the paper. ", " The paper proposes a new efficient self-attention form (Orthogonal Self-Attention, OSA) that conducts self-attention within groups, where orthogonalization of window tokens are performed first before forming the groups. The final architecture consists of alternative OSA and window self-attention (WSA) as attention, with FFN equipped with depth-wise convolutions. It shows competitive results on ImageNet classification, COCO object detection, ADE20k semantic segmentation. Strengths: \n \tThe way to construct orthogonal matrix endogenously by multiplying householder reflectors is novel and interesting. The OSA form proposed enabled the token connections within local window and links across windows. \n\tThe experiments results show competitive performance across different vision tasks, including image classification, object detection, semantic segmentation.\n\nWeaknesses:\n\tThe essentialness of using orthogonal matrix is not studied. \nThe whole OSA process \n1.\tconnects tokens within local windows with local window token orthogonalization, is serves as MLP layer within local windows, except the weight matrix of MLP is naturally orthogonal.\n2.\tConnects tokens beyond local windows by forming new groups across previous local window. \n3.\tToken reverse as the inverse of orthogonal matrix is easy to get, just the transpose of the matrix. \n Step 2 can be done regardless of the weight matrix of this local window MLP is orthogonal or not. \n\tStep 3 is the vital part that only orthogonal matrix weight can perform, I believe this should be studied, which is not presented, for validating the essentialness of using orthogonal matrix rather than just following the form that connects local and connects beyond local windows. \n 1.\tAs mentioned in weakness, The reversible operation is the novel part, but no ablation on that. Experiments do not show the essentialness of using orthogonal matrix instead of regular one. What if we keep the form (i.e. using non-orthogonal matrix to transform tokens within local windows, and then group them in same way as current strategy to perform self-attention), then no token reverse as inverse of weights is hard to get, maybe replacing multiplying the inverse of A with another new weights A’. 
\n\n2.\tLine 41 mentioned that tokens have lower resolution in the orthogonal space, can the authors provide more details regarding this? From my understanding, the new tokens obtained from multiplying the orthogonal matrix will not change dimensions, Z’ = AZ, Z’ is (n x c), A is (n x n), Z is (n x c), n = h x w. The computation saving for attention here is from conducting group-wise attention, which does not seem to be brought from orthogonalization of tokens, but from window and group partitions of tokens. \n3.\tHow to choose window size for WSA and OTB? Will different size affect the final performance? \n\n\n\n-----post-rebuttal---\nThanks for the authors presenting the ablation study results on the essentialness of orthogonalization during rebuttal. Still, considering that the gain from ortho. sa is limited, 0.3% compared to dilated sa as in Table 6, I'll keep my initial rating as borderline accept. NA", " This paper proposed an efficient Transformer backbone, Orthogonal Transformer, for efficient processing of visual inputs. The backbone is mainly built on the proposed efficient self-attention mechanism, orthogonal self-attention (OSA), to model dependencies between tokens in the orthogonal space. The proposed backbone achieves excellent performance-FLOPs tradeoff on visual tasks. ### Strengths\nThe performances of the proposed backbone are superior.\nThe paper is well written and easy to follow, and the illustration is clear to me.\n\n### Weaknesses\nThe motivation is not clear to me. Why do other efficient self-attention approaches fail? The author claimed other efficient self-attention mechanisms \"lose fine-level details for coarse global self-attention or hurting long-range modeling for local self-attention\" (L35-36). However, I cannot find any pieces of evidence to support the claim. In addition, how does the proposed orthogonal self-attention (OSA) work? The author claimed OSA \"captures global dependency without losing fine-level details\" (L37-38). I have found no evidence to support the claim, either theoretically or empirically.\n\nThe contribution is not significant. I believe the only novel part of this paper is the proposed efficient self-attention approach. The proposed Positional MLP in Sec 3.4 has been studied in PVT v2 [CVMJ 22], CeiT [ICCV 21], and CMT [CVPR 22]. So I would not say this is the novel part. \n\nThe ablation is not sufficient. In Tab 6, the authors compared the proposed OSA with window SA and dilated SA. How about the window SA with shified window like Swin? The questions about the paper are listed in the Weaknesses. Yes.", " The paper proposed an token orthogonalization layer, there self-attention is performed on groups of orthogonalized tokens. The paper then designs the Orthogonal Transformer (OT) which combines the orthogonal self-attention layer with a pyramid Transformer architecture, positional MLPs (adding a depthwise-convolution into the MLP layer), early convolutions, and a novel downsampling method between stages. OT performs well on ImageNet-from-scratch, COCO detection and instance segmentation, and ADE20k semantic segmentation. Strengths\n\n* I like the proposal of constructing a trainable orthogonalization module from the product of Householder matrices. 
The orthogonal self-attention layer is fairly simple and general, and I think it is a nice contribution.\n\n* The experiments cover multiple tasks, appear thorough, and compare to many state-of-the-art alternatives.\n\n* There is an ablation of the four components of the orthogonal Transformer.\n\nWeaknesses\n\n* I think the main weakness is the large number of moving parts in OT: the combination of both windowed attention and orthogonal attention, additional convolutions at the start and in the middle of the network, and a new downsampling mechanism. The combination of these factors significantly increases the complexity of the network over the original Transformer/Vision Transformer, potentially limiting adoption. The ablation study shows that some of the \"minor\" components have an equal-or-greater impact than the orthogonal attention layer, which is the main \"selling point\" for OT. E.g. Conv position embeddings improve the scores on all tasks over absolute position embeddings more than orthogonal attention does. Therefore, it feels like the orthogonal self-attention layer is not the key driver of performance in OT.\n\n* Given that the network contains many vision-specific components (convolutions), I feel that \"Orthogonal Transformer\" is over-selling or over-generalizing the network. A name like \"Orthogonal Vision/Image Transformer\" would make it clearer that this is a vision-specific variant.\n\n* There are a few typos, such as:\n\n\"tokens has a lower resolution\" -> have\n\n\"singe OTB\" -> single OTB Please address my main concern about the orthogonalization layer. Given that the orthogonal self-attention layer is marketed as the main contribution of the paper, it would be useful to have an experiment that demonstrates the value of this layer in isolation. I think the best way to do this would be to run a vanilla ViT/DINO setup with just the OT layer replacing self-attention, but without any of the additional changes/tricks. This would help determine whether the OT layer is useful in isolation, or whether it requires the other network modifications to be a useful component.\n\nSecond, you answered \"Yes\" to \"Did you include the code, data, and instructions needed to reproduce the main experimental result?\". I did not see any mention of open-sourced code; will code be made available? Section F in the appendix is dedicated to limitations and societal impact. The societal impact is adequately addressed. The limitations section is brief, but I think it is adequate, and I believe it captures the main limitation that the study is restricted only to image-based tasks." ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "rq0E2qNW1Cl", "nips_2022_GGtH47T31ZC", "OpeZGExH5jL", "XvY-nWgGhUQ", "CDjIURn-UzT", "nBLDZOpLEK", "nips_2022_GGtH47T31ZC", "nips_2022_GGtH47T31ZC", "nips_2022_GGtH47T31ZC", "nips_2022_GGtH47T31ZC" ]
nips_2022_Aisi2oEq1sc
Finding Differences Between Transformers and ConvNets Using Counterfactual Simulation Testing
Modern deep neural networks tend to be evaluated on static test sets. One shortcoming of this is the fact that these deep neural networks cannot be easily evaluated for robustness issues with respect to specific scene variations. For example, it is hard to study the robustness of these networks to variations of object scale, object pose, scene lighting and 3D occlusions. The main reason is that collecting real datasets with fine-grained naturalistic variations of sufficient scale can be extremely time-consuming and expensive. In this work, we present Counterfactual Simulation Testing, a counterfactual framework that allows us to study the robustness of neural networks with respect to some of these naturalistic variations by building realistic synthetic scenes that allow us to ask counterfactual questions to the models, ultimately providing answers to questions such as "Would your classification still be correct if the object were viewed from the top?" or "Would your classification still be correct if the object were partially occluded by another object?". Our method allows for a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers, with respect to these naturalistic variations. We find evidence that ConvNext is more robust to pose and scale variations than Swin, that ConvNext generalizes better to our simulated domain and that Swin handles partial occlusion better than ConvNext. We also find that robustness for all networks improves with network scale and with data scale and variety. We release the Naturalistic Variation Object Dataset (NVD), a large simulated dataset of 272k images of everyday objects with naturalistic variations such as object pose, scale, viewpoint, lighting and occlusions. Project page: https://counterfactualsimulation.github.io.git
Accept
After the rebuttal and discussion all reviewers are positive, and recommend acceptance. The AC agrees with this recommendation.
val
[ "t2r3KdfkACi", "aio625Nr5J0", "KrX1dIF8kOj", "qWRkqBKlcw", "meOucUMY8hw", "cQ-1ZZSOFx", "VKckjHSv7TL", "Z60R7HNHSH", "6F4vJOC4Kjc", "1zQQSIMfVuH", "BPCHAZC_hdM", "WlPvjYK4bcM", "7q_fREyNQs3", "LiMxGPuSgw" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' response to my questions and comments and their speed in updating their submission. I think this is a well-written study on differences between ViTs and CNNs. I currently feel though that without an experiment that grounds these results on real-world data, I have some uncertainty about how well these trends transfer to general image classification. I'll maintain my score.", " Absolutely, we will do so. Thank you again for your time and the thorough review.", " Thank you for including confidence intervals. I did not zoomed into enough. Make sure you mention the confidence intervals shortly in the caption in a possible camera ready version. ", " On confidence intervals, I believe we correctly understood the initial suggestion, we think that it is a good idea and that is what we have done (i.e. bootstrap resampling on the finite test set without re-training). The new figures are included in the revised paper. You might have to zoom in quite a bit since the error bars are very small for some of the figures. We also decreased the plot line width in order for the error bars to be more visible. We also agree that given the size of the dataset the variance is not a large concern but we think that it is a good idea to include the error bars.", " Thank you for your clarifications regarding open sourcing NVD. I also appreciate the updated figures and conclusion.\n\nMy suggestion regarding the confidence intervals was meant a bit different. I did not want to suggest re-training the models, but only to quantify the confidence given your finite test set. My concern was that the test set was relatively small, so that the confidence interval would be rather broad. However, given that you have 272k samples in total, this should not be a real concern. I would still suggest including them: you can use bootstrap resampling to construct them (https://acclab.github.io/bootstrap-confidence-intervals.html ). \n\nI agree that using different train seeds would be desirable but also highly computationally expensive. I appreciate that you cross-checked your results with the models from the timm's codebase.", " We thank the reviewer for the positive comments and the thorough review. We are happy that they appreciated the problem we study, the quality of our work, the originality of the paper and the writing. We also thank them for their insightful comments and feedback.\n\n**Add confidence intervals to figures (bootstrap resampling)** \\\nA strong suggestion. We re-plot all figures with error bars using bootstrap resampling (100 resamples). Error bars plotted are the standard deviation of all samples. We find almost identical curves to the original ones, with very small error bars for all experiments. We believe this is due to the large size of our simulated dataset. For example the occlusion experiment contains 92 main objects, 27 lighting environments and 3 occluder objects, which generates a total of ~7.4k object-occluder-lighting combinations for each occluder position (i.e. for each point in the graph). We observe that the conclusions of our work are conserved.\n\n**How large is NVD?** \\\nWe provided numbers for the amount of samples in the NVD dataset (272k total images) at the end of the Introduction and in the caption of Figure 1, but we did not re-iterate them in the Dataset section, which is important. We will add this and highlight it. Thank you.\n\n**Conclusion needs to be extended - it currently reads more like abstract. Make it informative summary of findings. 
Move discussion in related work to the conclusion.** \\\nWe have revamped the conclusion to follow this recommendation and have added the key findings of the work. We have also moved the Limitations section under the conclusion.\n\n**Will NVD be public?** \\\nYes, we will release NVD upon acceptance. We will also open-source the code used to generate the NVD dataset in order for other research teams to generate new variations or add object assets if they wish. We will highlight this in the introduction and method sections. Thank you for the remark.\n\n**Font size in figures is tiny** \\\nThank you for the suggestion. We will make them larger in the camera ready version. We saved the figures with high dpi using matplotlib, but we will use the suggested method for the final version of the work.\n\n**Figure y-axis** \\\nThank you for this suggestion. We have changed the figures in the main paper to have the same y-axis. This gives a lens into the differences between sizes of architectures, which we believe is very interesting. For example, we can clearly see increased performance and robustness as architecture size goes up. Also, importantly, when networks are trained on the ImageNet-22k dataset robustness in all categories goes up.\n\n**Try different training seeds for models** \\\nThis comment is on point. We had the same question early during our investigation. It is very expensive to retrain Swin and ConvNext models with limited resources. Even the tiny versions of the models can take >5 days depending on the hardware that is available. We were able to source ConvNext-Tiny and ConvNext-Base network trained using another codebase (timm codebase), with a different random seed and different hardware. We verified two things (1) ImageNet accuracies between the official checkpoints and the re-trained checkpoints differed only slightly (ConvNext-T: 81.9 vs reported acc 82.1 / ConvNext-B: 83.5 vs reported acc 83.8). (2) trends on NVD did not change.", " We thank the reviewer for the comments and the thorough review. We are happy that they found that we select good architectures for comparison, and that we have a careful selection of scene variations that are more comprehensive than related work. Finally, we thank the reviewer for their positive comments on our proposed metric and for the insightful feedback. We will incorporate all proposed changes.\n\n**The performance gap on several variations (e.g. Fig 4, 6) are not significant enough to make a conclusion** \\\nWe believe the variations that lead us to claim our main conclusions (e.g. Fig 4, 6) are significant enough. We follow R3's suggestion to include error bars using 100 bootstrap samples for all of our plots. We observe error bars in Fig 4,6 are very small and indicate that the performance gap is statistically significant. We don't see any change in our conclusions given the size of the uncertainty. We also highlight that the amount of images tested for these experiments is very high (88k for Fig 4 and 44k for Fig 6).\n\n**It would be better if further look into learned features similar to previous works on comparing ViT and ResNet to provide more insights on the difference, e.g. feature correlation, frequency, etc** \\\nWe agree that this would be a very interesting future work. Specifically, we would think an interesting avenue would be to zoom in on one specific experiment, let's say occlusion, and to study the variation of features. 
One key difficulty here, that is still an open problem, is to know which type of feature analysis makes sense and gives an interpretable lens into the inner workings of the networks. This problem is challenging enough that it spans different subfields of computer vision (interpretability, explainability, neuron level interpretability, causality, etc.). Another key difficulty is: how should features be compared across such different architectures? Our study abstracts from these two problems by directly looking at prediction results, which are a closer proxy to model generalization than model features. Finally, this research avenue does add a key difficulty in our situation, given that ConvNext generalizes better in the simulated setting. A method, in a similar vein to PCCP, would have to be invented in order to abstract from the better domain generalization of one architecture. We leave this interesting exploration for future work and we agree that it is a challenging but exciting problem. We have some initial thoughts on conducting this research, with planned explorations in the mapping of visual saliency.\n\n**Are classes in NVD representative of ImageNet?** \\\nAll objects included in our simulated dataset are included in the ImageNet label space. In this sense they are a strict subset of ImageNet classes. We were graciously given access to the full set of object assets by the ThreeDWorld owners for this work. We parsed the entire list of assets and found all objects that mapped to an ImageNet class. We then filtered objects that were not suitable due to extremely low recognition levels for both networks (e.g. modern iPods that did not exist when ImageNet was released) and anomalous objects with incorrect scale. We end up with 92 objects from 18 classes. It is important to note that this is small compared to the 1k classes in ImageNet and we will add more discussion about this in the camera ready version. We have done our best effort to include the maximum number of classes that were available to us, given the restricted amount of realistic assets that exist for this type of study. We have done a thorough online search for more object assets and we found no compatible asset package online that included (1) enough ImageNet class objects (2) that were realistic enough. We would also like to note that our work contains more ImageNet object classes, scene variations and lighting variations than related work in the same vein.\n\nFurther, it is hard to know what would make a representative sample of classes from ImageNet. First, we work in an indoor environment with inanimate objects - which is a realistic scenario. This does not fully align with ImageNet, since ImageNet has a very large amount of animal images with many represented subspecies. We believe our scenario is in some sense more aligned with modern applications of computer vision that deal with objects in households (e.g. robotics, home assistants, etc.). \n\nFinally, thanks to R3's suggestion, we include error bars for all of our plots via bootstrap resampling. This shows that the variance of the results is very small, even when some of the classes we use are under/oversampled in the bootstrap sample.", " **Dataset quality: although the dataset is called Natural variation object dataset, the synthetic images in Figure 1 do not look natural/realistic enough to me. Also scene background is relatively simple compared to real images. 
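For completeness, a minimal sketch of the bootstrap procedure behind these error bars is given below; the function and variable names are illustrative for this response and are not taken from our code release:

```python
import numpy as np

def bootstrap_accuracy_ci(correct, num_resamples=100, alpha=0.05, seed=0):
    """Bootstrap an accuracy estimate over a finite test set (no model re-training).
    correct: boolean array with one entry per test image for a fixed scene variation,
             True if the top-5 prediction contains the ground-truth class."""
    rng = np.random.default_rng(seed)
    correct = np.asarray(correct, dtype=float)
    n = len(correct)
    accs = np.array([rng.choice(correct, size=n, replace=True).mean()
                     for _ in range(num_resamples)])
    lower, upper = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return correct.mean(), accs.std(), lower, upper
```

Plotting the standard deviation of the resampled accuracies (or the quantile interval) for each point yields the error bars discussed above.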
The number of object classes is rather small.** \\\nWe call the dataset Naturalistic Variation Object Dataset, avoiding the word \"natural\" since we do agree that the dataset is not a natural dataset. Instead we opt for the word naturalistic since the variations that we propose (pose changes, occlusions, etc.) are found in the natural world and affect computer vision algorithms drastically. The word naturalistic is not supposed to reference the realism of the environment, but the plausibility of the variations included in the dataset. We do agree that there is a domain gap between real images and images in NVD - and we call out this fact several times in the work (l.68,118,122,etc.). In fact a large part of our contribution is the counterfactual metric that seeks to minimize the impact of the domain variation. We will call out this limitation further in the camera ready version. Further, we agree that scene variability is not as large as that seen in COCO for example - but we do this on purpose since we focus on a one-object classification task. We try to minimize the amount of first-order distractors in the scene while remaining realistic (instead of using a limited blank scene with only one object). We think the task is already challenging enough for the pre-trained networks, with low generalization accuracy overall and a more complicated setup would introduce much more noise to the signal. Finally, we point to our answer above (Are classes in NVD representative of ImageNet?) for a discussion on the number of object classes.\n\n**Why use top5 instead of top1? Please show top1 accuracies on table 1. What if you use top1 for the figures?** \\\nWe think that the top1 accuracy is an unreliable metric in our setting, and top5 is a much more appropriate metric to study. This is because our simulated images contain some first order distractors that are in the ImageNet label space. The primary distractor is the dining table where the objects are set. We decided to include these naturalistic distractors, in order to make the scene more realistic. An essentially blank scene without other objects would not make a convincing experiment. This means that analyzing top1 metrics is very noisy given that a network could output one of the distractors (e.g. the dining table) as its top1 prediction and this would not represent the overall power of the network which could be predicting both a distractor and the main object with high confidence. We have verified that this exact phenomenon occurs. Nevertheless, we have included the top1 accuracy on Table 1 and plotted figures for top1 accuracy that will be included in the supplementary material with this explanation on why we think these figures contain more noise than signal.\n\n**What is your guess/try to see if results in table 1 and figs are the same on real data?** \\\nThere are several considerations on why a real experiment is hard to control. The first consideration is scale. In our NVD dataset we have 272k images. Assuming a person has to manipulate an object or camera between each picture, and assuming the time spent between pictures is around 10 seconds, this would equate to around 755 labor hours without breaks. Assuming this is split into a group of 10 people, we would have 75.5 hours of labor per person at the low end. This is very hard to achieve. Alternatively, an automatic capture setup would have to be devised, which would probably be able to achieve this at a faster rate. 
A usual limitation of this type of approach is that the scene is not a natural environment, but instead a lab environment. Also, the conceptualization and realization costs are high. The next consideration is precision, it’s hard to move the camera or object in precise measurements. And some variations such as scale are not easily feasible. The last consideration is the consistency of some variations, such as controlling the lighting to be exactly the same for all images with the same lighting environment. This would require a room with no windows and artificial lighting that will be very different from ambient or natural lighting.\n\nNevertheless, we believe that given the scale of our experiments, if we set them up using the same scale of real data, we would have very similar results and the same conclusions. Unfortunately, currently there are no real dataset with realistic variations that have the sufficient scale to make strong conclusions. We believe this is interesting future work. Finally, we would like to note that we do have an experiment on real data for patch occlusions in Figure 3, and that this experiment echoes the findings in our simulated occlusion experiment, giving us validation that there is transfer between domains.", " **Does the top5 performance advantage of convnext translate to PCCP?** \\\nThis is a good question. First, it is very hard to compare two different networks, especially in a domain different from the source domain. Our main motivation by designing the PCCP metric was to try to abstract from this exact performance advantage of ConvNext in this specific domain. We believe that we have been successful in some measure, given that ConvNext is better in some tasks than Swin, but Swin is better than ConvNext in others (occlusion). Finally, we would like to note that we compare the two most comparable architectures in the literature - with very close top1 and top5 accuracies in ImageNet.\n\n**Can authors put main conclusions at the end of the abstract?** \\\nWe have added a concise explanation of the main differences that we found between ConvNext and Swin networks in the abstract. Thank you for this suggestion.\n\n**For maximal occlusion x=0, how much of the main object is visible?** \\\nGood question! The occluded percentage of the main object at x=0 is variable, between 80% and 100% - and we will add this to the dataset section of our work. In our simulated dataset we did not restrict objects to have the exact same volume, given that this would have reduced the realism overall (e.g. a computer mouse should be smaller than a laptop). We also decided against resizing the occluders mainly due to positioning issues between occluder/object/table. If the occluder were to be resized depending on the main object it would result in many instances of object clipping if the positioning schedule of the occluder was not independently designed for each main object. We sought the more general solution of having variable occluded percentages, with a static positioning schedule for the occluder. Finally, note that we have 92 main objects, 27 lighting environments and 3 occluder objects, we have a total of ~7.4k object-occluder-lighting combinations for each position of the occluder. 
We thank the reviewer for this comment and we will add all of these assumptions to the experiment section of our work.\n\n**Related work: add and discuss suggested.** \\\nThank you very much for bringing this work to our attention. We will gladly include this work in our related work section, with an appropriate discussion. We think this is very interesting work with some intriguing conclusions. Our work differs in several ways: (1) we study OOD generalization, while they purposefully study in-domain generalization with an unbiased training set; (2) we both show weaknesses in current SotA networks, but the weaknesses are different: they show that learning with these networks inevitably causes them to underperform for certain variations, while we show that current networks trained on vast amounts of real data are remarkably fragile with respect to all sorts of simple variations of the data; (3) they show that different types of networks fail in roughly the same ways; we also observe this in our case, but we make a special attempt to understand where different architectures fail in different ways by proposing NVD and our counterfactual metrics.\n\n**Add link to dataset generator + license** \\\nWe will add a link to the TDW simulator in the camera-ready version. We have also included our dataset generator code in the supplementary material and we will open-source this code upon acceptance. Finally, we will move the license information from the supplementary material to the main paper in the camera-ready version (when more space is allowed for the main submission). We will also include the full license of the TDW simulator and assets in the README of the code.", " We thank the reviewer for their comments. We are happy that the reviewer found our work enjoyable and interesting to read. We are encouraged that they found some of our results likely to be very compelling for the ML community. Finally, we thank them for their valuable suggestions.\n\n**Suggestion: Finetuning networks on simulated images.** \\\nWe believe this is a very promising study, although there are some considerations at hand: (1) our work tries to study out-of-the-box generalization performance, which is why we do not finetune and instead test on objects the networks have never seen before; (2) when finetuning, it is hard to determine how long the network should be finetuned so that it does not overfit the object+lighting+pose; (3) should the network be trained on some objects per class and try to generalize to others? In this case NVD has a limited number of objects and this could prove tricky.\n\nThat being said, we ran a version of this experiment and include it in the revised supplementary material. Specifically, we finetune all Swin and ConvNext networks on a dataset composed of all the objects in NVD, under bright lighting and in a canonical view. We use 30 epochs, with the same learning rates across architectures (5e-5 for Tiny, Small / 2e-5 for Base, Large). We then run the simulated object-rotation experiment on these networks. We find very similar conclusions to the original, non-finetuned experiment presented in Fig. 8: ConvNext networks are more robust to object pose changes on average than Swin networks. There is one peculiar difference.
It seems that smaller ConvNext networks have overfitted slightly to a canonical view of the object, with harsh drop rates for a specific rare pose (around 180 degrees, when the object is fully turned around). This is an interesting phenomenon that is worth investigating further and we thank the reviewer for their suggestion. For more details please find the experiment at the end of the supplementary material.\n\nWe believe this experiment deserves special care to design and we think future work can more thoroughly study this scenario. Nevertheless, we can run all the simulated experiments using this setup and include them in the supplementary material for the camera-ready with an analysis.\n\n**It is possible for PCCP to not marginalize domain shift effects. Suggestion: run occlusion experiment on ImageNet val.** \\\nWe thank the reviewer for their comment. It is true that there exists no metric that would be able to perfectly marginalize domain shift effects, nevertheless we believe it is an important problem to tackle and we propose an attempt that we believe is much better than naive comparison of accuracies. We also have evidence (thanks to the reviewer’s suggestion of finetuning, addressed above) that we retrieve the same conclusions even when the networks have been finetuned on simulated objects. Finally, we believe the scale of our dataset mitigates this problem given that there is lower likelihood that this would happen across 2.5k object/lighting combinations.\n\nWe believe such an occlusion experiment on ImageNet would be valuable, but there are some considerations: (1) If we use a segmentation network in order to occlude parts of an object in the ImageNet dataset we have to subject ourselves to the fail-rate of that specific segmentation network, which might bias results (2) Further, the segmentation network might have failures that are correlated to the failures of ImageNet-trained classifiers, thus filtering out important hard test cases (3) We have asked ourselves this same question, and have explored datasets to study occlusion on real images. To our surprise we found no dataset that was suited for this task, and most real studies for occlusion on data are handled in similar ways to the patch occlusion experiment. (4) In our work, we do the next best thing which is an occlusion experiment using patch occlusion on ImageNet, since this method does not depend on an auxiliary network - and we find corroborative evidence for our simulated occlusion experiment (i.e. Swin is more robust).\n\n**Figs: use the same y-axis. Maybe a new plot for that?** \\\nThank you for this strong suggestion. We have changed the figures in the main paper to have the same y-axis. This gives a lens into the differences between sizes of architectures, which we believe is very interesting. For example, we can clearly see increased performance and robustness as architecture size goes up. Also, importantly, when networks are trained on the ImageNet-22k dataset robustness in all categories goes up.", " **NVD: will it be released?** \\\nYes, we will release NVD upon acceptance. We will also open-source the code used to generate the NVD dataset in order for other research teams to generate new variations or add object assets if they wish. We will highlight this in the introduction and dataset sections. 
Thank you for the remark.\n\n**Authors discussed that they primarily study generalization differences between ConvNext and Swin architectures which is not necessarily indicative of generalization differences between general CNNs and ViTs.** \\\nWe agree with this statement of our limitation and point to it in our Limitations section. We will add a further specific callout in this same section in the camera ready version saying that the conclusions only apply for the architectures that we study in the work (ConvNext, Swin, Swin-v2).", " This paper studies the robustness of different Imagenet pre-trained classifiers to changes in object scale, object pose, scene lighting, and 3D occlusion. The authors generate a large dataset of 272k synthetic images (\"NVD\") to do so, which is the central contribution of the paper. The 2 architectures primarily under study are ConvNext architectures and Swin ViT architectures. The authors first show that ConvNext models generalize significantly better to NVD than Swin Transformer models. The authors then show that -- when accounting for the different affects of the real-to-synthetic domain gap -- Swin Transformers are more robust to occlusion, but ConvNext models are generally more robust to variation in object scale, changes in viewpoint pitch and changes in viewpoint yaw. I found this paper enjoyable and interesting to read. In my opinion, some of the results are very likely to be compelling to the ML community, such as the result that ConvNext architectures are much more robust to random patch drop and synthetic object occlusion than the older CNN architectures tested in prior work (Figures 3-4). The result that ConvNext architectures generalize significantly better than Swin architectures to the synthetic data is also an interesting finding, and counters prior work suggesting ViTs are better for OOD generalization (Table 1).\n\nI have a few suggestions that I think would further improve the impact of the paper:\n- In lines 124-149, the authors make the argument that their PCCP metric marginalizes out the differing effects of the real-to-synthetic domain gap in order to solely investigate performance under changes to object scale and viewpoint etc. I think this point is very important for the paper and some justification is necessary to support this claim. I'm not entirely convinced that this metric does enough to account for that gap - isn't it still feasible that how sensitive a model is to changes in viewpoint on the synthetic data does not correlate with the model's sensitivity on the real data? I'm wondering if the authors could maybe post-process the extent to which an object is occluded (maybe with a segmentation model?) in the Imagenet validation set and verify that the \"performance vs. occlusion\" curves on these natural images matches the curves in Figure 4.\n- To further improve the impact of the paper, I recommend that the authors additionally test how the curves in Figures 4-8 shift when models are fine-tuned on the synthetic data. In practice, if one finds a subclass of images where the model is performing poorly, the next step is to train on more images of that type. It's therefore very practically useful to understand if the difference in performance between the two architectures vanishes or widens when both architectures are fine-tuned for a small number of steps. - Figures 3-8 - to some extent, I feel the authors should keep the same y-axis limits across the subplots in each of these figures. 
Or at least provide an additional plot where all curves are all on the same plot? It would be good to understand better how differences between the two architectures scale with model size (e.g. are large CNNs and large ViTs more similar to each other than small CNNs and small ViTs?), which is hard to parse from the current plots.\n- NVD - the authors should make it clear whether or not they plan to release to the public this dataset along with their code for computing metrics. Yes, for instance the authors discussed that they primarily study generalization differences between ConvNext and Swin architectures which is not necessarily indicative of generalization differences between general CNNs and ViTs.", " This paper presents a comparative study of ConvNext and Swin Transformer (with comparable model size and GMACs, and design and training techniques) on synthetic images with controlled scene variations. To avoid real-to-synthetic domain gap, the authors propose to measure (relative) accuracy drop (or robustness) on synthetic images with varying scene parameters, i.e. the so called counterfactual simulation testing and the proposed metric called proportion of correct conserved predictions. Therefore another contribution is the scene variation dataset generated with ThreeDWorld simulator, where the detailed description of what type of scene variations are provided. The comparison results show the differences to object viewpoint, scale and occlusions. Strengths\n- Selection of ConvNext and Swin transformer is a fair choice for the comparative study than previous works, also tested SwinT v2 in the appendix.\n- The proposed metrics largely avoids the real-to-synthetic gap.\n- Careful choices of scene variations to occlusion, object pose, scale, camera poses, more comprehensive than previous studies. e.g. occlusion is better with occluder objects than random patch drop.\n\nWeaknesses\n- The performance gap on several variations (e.g. Fig 4, 6) are not significant enough to make a conclusion\n- It would be better if further look into *learned features* similar to previous works on comparing ViT and ResNet to provide more insights on the difference, e.g. feature correlation, frequency, etc\n- Dataset quality: although the dataset is called *Natural* variation object dataset, the synthetic images in Figure 1 do not look natural/realistic enough to me. Also scene background is relatively simple compared to real images. The number of object classes is rather small.\n- Writing can be improved to be more concise and clearer.\n\n 1. The last column in Table 1 shows the top-5 accuracy when test on NVD. How about top-1?\n2. Since all results on robustness metrics are based on top-5 predictions, and ConvNext is better than Swin Transformer on top-5, will most results on PCCP affected by this large difference or not? eg. Fig 4-7 almost all show ConvNext is better than SwinT with variations to those factors. How unstable the results would be if using Top-1?\n3. Are the selected classes in NVD dataset representative in ImageNet?\n4. Have you tried/what's your guess: will the results (table 1 and other Figs) transfer to real dataset with scene variations as well? (It would be much stronger if there is results on real validation data)\n5. To clarify: for maximal occlusion (x=0), how much visible is the object of interest?\n6. Can author put the concise main results/observations about the difference at the end of the abstract? 
- related work (disclaimer: not related to myself) also looks at effect of scene parameter variations (in-distribution): Madan, Spandan, et al. \"Small in-distribution changes in 3D perspective and lighting fool both CNNs and Transformers.\" arXiv preprint arXiv:2106.16198 (2021).\n- please add a link to dataset generator and license in the main paper.", " The paper compares the robustness of Vision Transformers to Convolutional Networks under different transformations such as scaling, object rotation, camera changes, occlusion, and random patch deletion. The authors resort to a synthetic dataset to perform these transformations. Specifically, they use the MIT ThreeDWorld scene generator. The main finding is that ConvNext is often more robust than Swin – except for the occlusion task. Overall this is a fine paper. The authors selected the two similar architectures ConvNex and Swin. The comparison using a synthetic photo-realistic dataset makes sense. I also liked that differently sized networks were compared. The paper is well written and original as it is the first study comparing Convolutional Networks to Vision Transformers on these transformations. Studying the robustness is also a significant problem. The approach to rely on a synthetic dataset gives the authors a high degree of control and therefore experimental validity. However, its results are a bit predictable when you know the \"Appendix B. Robustness Evaluation\" from the ConvNeXt paper. There, it is reported that ConvNeXt generalizes better to different datasets (ImageNet-A/R/C/Sketch) than Swin.\n\n\n\nA few improvements are:\n\n- Figures 3-8 lack confidence intervals; including them is necessary to judge how substantial the differences are. For example, they might invalidate the conclusion from in Figure 6 that \"ConvNext contain a higher proportion of conserved correct predictions\". I would recommend generating them using bootstrap resampling. \n- How large is the NVD dataset? 92 3D-models with 27 lighting conditions, thus 2484 images? Could you please include the number of samples in the paper? \n- The conclusion needs to be extended. It reads more like an abstract than an informative summary of the findings and main takeaways. Some parts of the Discussion are included in the Related Work section, which goes against the more common structure to combine it with the conclusion. \n\nMy main criticism is the missing confidence intervals. Adding them might change the conclusion slightly, but I suspect that the overall story (\"ConvNext models tend to be more robust than Swin models\") still holds. Please add them as this would strengthen your analysis.\n\nAdditionally, it is unclear to me if the authors plan to make the NVD dataset accessible to the public? As they claim the dataset as a major contribution, I would expect this, but it is not stated explicitly in the paper. \n\nAlthough, I generally liked this paper, I would like to vote for it only with a weak accept for now due to the missing confidence intervals. I would be willing to upgrade the score if my concern is address accordingly.\n\nMinor Points:\n\n- The font size of the figures is tiny and cannot be read when printed. Maybe, you could save the figures in the pgf format with the correct figure size (see https://jwalton.info/Matplotlib-latex-PGF/). \n- Additionally, the y-axis of each Figure 3-8 should be fixed. 
For Figure 6, the y-axis changes from Tiny to Small, making it hard to compare them visually.\n\nThese questions were also stated less explicitly in the previous section:\n\n- Do you plan to open source your NVD dataset and the corresponding code?\n- How do the confidence intervals look for Figures 3-8? The paper includes a limitation section, which I highly appreciate. The main limitations listed are the problem of how representative a synthetic dataset is and the difficulty of comparing different network architectures. It could also be mentioned that they only compare one specific training seed per network model, i.e., for each network architecture, only a single parameter set is used. However, I do not consider this a significant limitation as they included 5 different network architectures for Swin and ConvNext. Additionally, the authors might be excused as retraining them would be costly. Still, I wonder how much variation can be explained by the specific weight parameters.\n\n" ]
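The review above recommends adding confidence intervals via bootstrap resampling. The following is a minimal, hypothetical sketch of that procedure; the function name, the 95% level, and the toy data are our assumptions for illustration and are not taken from the paper or its code.

```python
# Hedged sketch of the percentile-bootstrap confidence intervals suggested
# in the review above, resampling images with replacement.
import numpy as np

def bootstrap_ci(per_image_correct, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for a per-image accuracy (e.g. top-5 correctness)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(per_image_correct, dtype=float)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        # resample the images with replacement and recompute the accuracy
        stats[b] = rng.choice(x, size=x.size, replace=True).mean()
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return x.mean(), (lo, hi)

# toy usage: 0/1 correctness of one network at one occlusion level
correct = np.random.default_rng(1).integers(0, 2, size=500)
acc, (lo, hi) = bootstrap_ci(correct)
print(f"accuracy={acc:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

The same resampling can be repeated per scene-variation level to draw shaded bands around the curves in Figures 3-8.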
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "BPCHAZC_hdM", "KrX1dIF8kOj", "qWRkqBKlcw", "meOucUMY8hw", "cQ-1ZZSOFx", "LiMxGPuSgw", "7q_fREyNQs3", "7q_fREyNQs3", "7q_fREyNQs3", "WlPvjYK4bcM", "WlPvjYK4bcM", "nips_2022_Aisi2oEq1sc", "nips_2022_Aisi2oEq1sc", "nips_2022_Aisi2oEq1sc" ]
nips_2022_juE5ErmZB61
Polynomial Neural Fields for Subband Decomposition and Manipulation
Neural fields have emerged as a new paradigm for representing signals, thanks to their ability to do it compactly while being easy to optimize. In most applications, however, neural fields are treated like a black box, which precludes many signal manipulation tasks. In this paper, we propose a new class of neural fields called basis-encoded polynomial neural fields (PNFs). The key advantage of a PNF is that it can represent a signal as a composition of a number of manipulable and interpretable components without losing the merits of neural fields representation. We develop a general theoretical framework to analyze and design PNFs. We use this framework to design Fourier PNFs, which match state-of-the-art performance in signal representation tasks that use neural fields. In addition, we empirically demonstrate that Fourier PNFs enable signal manipulation applications such as texture transfer and scale-space interpolation. Code is available at https://github.com/stevenygd/PNF.
Accept
There is a clear consensus for accepting the paper. The area chair agrees with the reviewer's comments and follows their recommendation.
train
[ "9lZTMfwB_94", "bCdTmPQEfom", "kMU9WB6vdgC", "qvfqQdvox6", "s0ahE_PGLub", "QIjksSHMGn", "GEfT_QIv5N3", "SBG4MLQc9gd", "tmHhF0_Dwj" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the overall positive and constructive feedback. \n\n**Efficiency analysis**\nPlease refer to our general comment for a detailed discussion of parameter and computation efficiency analysis. In brief, PNF achieves comparable performance with SOTA methods using a similar number of parameters. PNF takes longer training/inference time per step since it requires larger activation memory. Nonetheless, PNF still can achieve faster convergence time since it takes much fewer steps to converge.\n\n**NeRF experiments on Real-world Scene.** \nOur method is applicable to fitting NeRF for real-world scenes. We follow the prior work of BACON for the experiment set-ups and BACON focuses on Blender scenes. Note the main contribution of the paper is a novel neural fields architecture that allows provable decomposition and manipulation in the granularity of subbands. The NeRF experiment shows that our method is capable of achieving such decomposition and manipulation without sacrificing expressivity. Nevertheless, we will attempt to add real-world scene comparisons in the next revision.\n\n**Related works for Polynomial Neural Networks (PNNs).**\nWe discussed polynomial neural networks in L81-L84 and L48-50. Basis-encoded Polynomial neural fields can be thought as evaluating a polynomial neural network with a selected set of basis functions. This formulation is more general than the original PNNs, which can be thought of as using a polynomial function basis : $\\{b_i(x) = x^i\\}_{i=1}^{\\infty}$. Our work studies a variety of different basis functions including the Fourier basis (in the main paper) and the Gabor basis (in the supplementary). The understanding from PNNs can potentially be extended to PNFs and we believe that our analysis can also be helpful for the PNN community.\n\nIn the revision, we will include additional citations (for example, [A] and [B]) of recent PNNs and provide detailed discussion of polynomial neural networks in our related work section.\n\n[A] Kileel, Joe, Matthew Trager and Joan Bruna. “On the Expressive Power of Deep Polynomial Neural Networks.” NeurIPS (2019).\n[B]Choraria, M., Dadi, L., Chrysos, G.G., Mairal, J., & Cevher, V. (2022). The Spectral Bias of Polynomial Neural Networks. ICLR 2022.", " We thank the reviewer for the insightful review! We are glad that the review found our paper making a valuable contribution by adding subband manipulation capability to neural fields. \n\n\n**Predetermined subband decomposition.** \nWe agree with the reviewer that our method needs a subband decomposition defined before training. The use of a predetermined subband decomposition dates back to classical wavelet decomposition as well as Laplacian [8] and Steerable [51] pyramids. These subband decompositions have been instrumental in many applications such as texture analysis and synthesis. One of our main contributions is to enable such decomposition for neural fields and we demonstrate such a technique is useful in a variety of applications (Section 4). \n\nAlleviating the need of predeterministic subband decomposition is an interesting future direction. One potential way to achieve this is to design a basis function to enable the network to learn tunable subband decompositions. 
We believe that our work lays the theoretical groundwork for achieving this goal and we hope to see future research in this direction.\n\n**Memory complexity.**\nIt's true that we use one network to encode one subband series and the final output is an ensemble of the outputs from these networks. We mentioned in the limitation section that the activation memory will thus increase linearly with the number of subband series. \n\nPNF can still be very compact in terms of storage memory. This is due to the fact that each small network only needs to capture signals from a subband instead of the whole signal, which is arguably an easier task that requires far fewer network parameters to learn. Please refer to our general comments for a detailed discussion about parameter efficiency.\n\n**Clarifications.** Thanks for pointing out the potential confusion in the paper! \n\nLine 99: $\gamma$ is a function that takes an $\mathbb{R}^n$ coordinate and maps it to a $d$-dimensional vector. Each dimension of the output vector $\gamma(x)$ is denoted as $\gamma_i$.\n\nLine 110: $b_1, b_2$ are two basis functions from the family $\mathcal{B}$. $a_i$ are complex coefficients. The property requires that for each pair of basis functions, there exists a series of coefficients that can express their product as a weighted sum of the basis.\n\nWe will make the math notation clearer in the revision.\n", " We thank the reviewer for the positive feedback and for appreciating the strengths of our work.\n\n**Writing.** \nWe appreciate the clarity suggestion. We will add a subsection in Section 3 and introduce the related building blocks of coordinate-based neural networks (e.g. Random Fourier embedding, MFN, BACON) and how to use them to build networks like Figure 1(b).\n\n**Optimized parameters** \nOnly the network weights are optimized, corresponding to the $W$'s in Eq.4-Eq.6. Other parameters such as the initialization scheme of the basis encoding function $\gamma$ are considered hyper-parameters.\n\n**Scaling cost with respect to image resolution.** \nTo train Fourier PNF on images with 2x resolution, one can potentially use the same network architecture (i.e. the same number of parameters), but double the initialization frequency when constructing $\gamma$. However, we usually found that increasing the width of the network is beneficial when fitting more complicated signals. For example, increasing the width of the network by $10\%$ is sufficient to obtain comparable results for the cameraman image overfitting experiment when scaling from $256^2$ to $512^2$. Note that such an increase in the number of parameters is also required in prior work like BACON.\n\nThe forward and backward computation (i.e. time and memory) is linear in the number of input points, so we expect roughly 4x the original compute.\n\n**Limitations of metrics, confidence intervals, and significant digits.**\nWe follow the prior work (BACON) and use PSNR, SSIM, and CD to evaluate the neural field's ability to overfit a signal. These metrics are not always well correlated with human perception [A, B]. We will include confidence intervals and significant digits in the paper revision.\n\n[A] Zhang, R., Isola, P., Efros, A.A., Shechtman, E., & Wang, O. (2018). The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 586-595.\n[B] Smirnov, D., Fisher, M., Kim, V.G., Zhang, R., & Solomon, J.M. (2020). Deep Parametric Shape Predictions Using Distance Fields.
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 558-567.\n", " We thank the reviewer for the positive and supportive feedback and for appreciating the strengths of our work. \n\n**Parameter and Computational efficiency** \nPlease refer to the general comments for a detailed discussion of computational and parameter efficiency. In short, PNF can be as compact as the baselines (i.e. using about the same number of parameters) while achieving similar performance. The training and inference time is longer per step due to larger activation memory (as mentioned in the limitation section). We believe that this can be addressed by optimizing the implementation. Despite the slower per-step time, PNF still converges faster in wall-clock time.\n\n**Literature comment** \nWe thank the reviewer for noting the work of [R21], which we will gladly add to the paper. As noted by the reviewer, [R21] considers the ability to control frequency from above via MFNs of the Gabor type. Our framework handles a more general set of basis functions and network topologies and can band-limit a signal also from below, thus enabling decomposition of a signal's frequency bands. \n", " We would like to thank the reviewers for the unanimously positive reviews! We are glad that the reviewers found our paper “timely”, “sound”, and “well-written”. In this paper, we propose a novel neural field architecture called Basis-encoded Polynomial Neural Fields (PNF). To the best of our knowledge, our method is the first neural field that is capable of achieving analytical subband decomposition of signals. In addition, our paper also proposes a set of theories to generalize the PNF architecture to different basis functions and network topologies. We demonstrate the benefits of PNF on a variety of tasks including texture synthesis and scale-space interpolation. We hope that our paper can inspire new designs of neural fields that are both expressive and interpretable. Next we address the reviewers' questions regarding efficiency analysis. \n\n--------\n\n## Parameter efficiency\nReviewers 9C3L, kNCf, and pc1o inquire about the parameter efficiency of PNF. We choose the hyperparameters for our method (e.g. hidden layer size) so as to have a number of parameters comparable to the default configurations of all the baselines. With a comparable number of network parameters, we are able to build neural fields with expressivity on par with the SOTA methods while allowing analytical subband manipulation. The following table shows the number of parameters in our model in comparison with that of the baselines in the expressivity experiments. We will include this parameter analysis in the revision.\n\n| Experiment | PNF | BACON | RFF | SIREN | \n|-|-|-|-|-| \n| NeRF 1x | 0.46M | 0.54M | N/A | N/A | \n| NeRF 1/2x | 0.34M | 0.41M | N/A | N/A | \n| NeRF 1/4x | 0.23M | 0.27M | N/A | N/A | \n| NeRF 1/8x | 0.12M | 0.14M | N/A | N/A | \n| Image | 0.28M | 0.27M | 0.26M | 0.26M | \n| SDF | 0.59M | 0.54M | N/A | 0.53M |\n\n--------\n\n## Training and inference time.\nAs per reviewers 9C3L and kNCf's requests, we profile the training and inference time of the image experiment in the following table for the cameraman image. As mentioned in our limitation section (L305-307), PNF requires forwarding through an ensemble of subband networks, and thus requires more activation memory. This usually results in a longer training/inference time per step when implemented without any optimization.
We believe that this limitation can potentially be addressed by exploiting the hardware's parallelism (e.g. writing a customized CUDA kernel). Nonetheless, our model converges in very few steps, which leads to a faster overall convergence time, as shown in the following table. \n\n| - | Time(s)/Step | Time(s) to 36 PSNR | Final PSNR/SSIM (5k steps) |\n|-|-|-|-|\n| BACON | 0.16 | 177 | 37.45/97.33 |\n| PNF | 0.64 | 96 | 37.45/97.44 |\n| SIREN | 0.10 | 163 | 36.90/97.50 |\n| RFF | 0.08 | 275 | 36.23/95.05 |\n", " This work proposes to build a neural field network using a polynomial basis and a Fourier extension. Their goal is to provide sub-band frequency control for the output of the network, but for both upper and lower band frequencies, instead of only upper band frequencies per layer [1]. The authors propose to concatenate finite degree multivariate polynomials which facilitate sub-band decomposition. The final network operates with a controllable set of sub-band decompositions as it constitutes an ensemble across networks with different sub-bands. The authors show the performance of the network on multiple types of tasks where coordinate networks are used, including pixel reconstruction with different levels of noise and 3D shape reconstruction. The performance of the network is qualitatively better than previous work in these reconstructions for high frequency components, including [1], but quantitatively comparable.\n\n[1] Lindell et al. 2022. BACON This work leverages signal processing techniques for subband decomposition to build better coordinate networks. This in turn leads to an interpretable coordinate network where the layers are lower and upper banded. One interesting result from this work is Fig 4, where the results show the network can isolate the image content and apply different filters at different layers.\n\nThe paper reads as a story on how to build the network in Fig 1b; this in turn makes it hard to read unless you are familiar with all the components necessary for subband decomposition and coordinate networks. The paper would be accessible to a wider audience if it built up from the components/architecture standard for coordinate networks to the new components proposed by this work.\nThe results in Figs 2, 4 and 5 appear to be qualitatively superior to previous work including [1]; however, the quantitative results in Tables 1,2,3 do not provide any confidence intervals and are reported with only two decimal places (.##).\n\n[1] Lindell et al. 2022. BACON - It is unclear to me from the text which parameters beyond the network weights need to be optimized? \n- What is the cost of scaling this approach with the dimensions of the image?\n- The results in Figs 2, 4 and 5 appear to be qualitatively superior to previous work including [1]; however, the quantitative results in Tables 1,2,3 do not provide any confidence intervals and are reported with only two decimal places (.##). What are the limitations of the metrics used in Tables 1,2 and 3? N/A", " The authors propose basis-encoded polynomial neural fields with a basic theoretical framework for implicit signal manipulation. Specifically, they design a set of PNFs (e.g., a Fourier PNF) whose outputs can be decomposed in a fine-grained manner in the frequency domain. Experiments show that PNFs achieve comparable performance with other SOTA methods. 1. An interpretable way for signal manipulation.\n2. Multiple manipulation applications to verify the proposed ideas.\n3. With several theories to support the proposed ideas.\n 1. 
Lack of efficiency analysis, how about the training and inference time with compared methods.\n2. Why only perform experiments on Blender scenes for Neural Radiance Field?\n3. More related works for Polynomial Neural Networks. Please refer to the Questions section.", " The paper describes a novel class of neural fields, which have explicit control over their frequency content. Inspired by classical wavelet decomposition of signals, the approach allows to construct neural fields that covers the Fourier spectrum in separate non-overlapping sections. Each section is controlled by both a lower and upper band-limited, as well as an angular sensitivity. Until now, NFs where only controllable via an upper frequency band-limit. The presented NFs thus have nice properties which could benefit a wide range of applications for which frequency content control is important (of which several are addressed in this paper), the NFs also seem to have generally good approximation quality (both in terms of quality of fit as well as convergence speed). **Strengths**\n* The paper is well-written\n* The paper is timely\n* The paper solves a relevant and so far unsolved problem for controlling frequency content in NFs\n* The paper is sound and is thorough in its citations (even to classic works like Simoncelli-Freeman)\n\n**Weaknesses**\n* A discussion on parameter efficiency and computational efficiency would be appreciated\n\n**Literature comment**\nIn terms of literature that band-limits NFs from above it only makes mention of BACON, however [R21], which is on controlling frequency content (band-limiting) via MFNs of the Gabor type, predates BACON (probably independently solved the same problem?)\n[R21] Romero, David W., et al. \"FlexConv: Continuous Kernel Convolutions With Differentiable Kernel Sizes.\" ICLR. 2021.\n None I see no issues here.", " The paper proposes an approach to allow subband manipulation in neural fields. Specifically, subbands are decomposed in the Fourier space and there is one network for each subband and the final results are combined. Subband manipulation is achieved via manipulating the corresponding neural network. ## Strengths\n1. The idea is interesting and it adds a subband manipulation capability to neural fields. I believe this is a valuable contribution.\n2. The proposed approach yields marginally better results than existing approaches.\n3. The subband manipulation capability is demonstrated via applications such as texture transfer.\n\n## Weaknesses\n1. In this approach, the subband decomposition has to be predetermined and having one network per subband increases the memory complexity of the overall approach. Due to this, the manipulation capability might be limited. Furthermore, how does the number of parameters of the proposed method compares to an existing approach like BACON?\n\n## Post rebuttal\nThank you for providing the parameter efficiency analysis. I'm already positive and increasing my score. Line 99: how does $\\gamma_i$ relate to $f$? The statement is not clear.\nLine 110: What are $a_i$, $b_i$ and $I$? The statement is not clear. Memory limitation is mentioned and the societal impact is adequately addressed." ]
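The efficiency discussion above (parameter table and per-step timings) rests on PNF being an ensemble of small per-subband networks, so parameter count stays comparable to a single larger network while activation memory grows with the number of subbands. The sketch below is a hypothetical, simplified illustration of that structure and of counting its parameters; the band ranges, layer sizes, and encoding are our assumptions and not the paper's architecture.

```python
# Hedged sketch (not the authors' code): an ensemble of small subband
# networks whose outputs are summed, used only to illustrate the
# parameter-count vs. activation-memory trade-off discussed above.
import torch
import torch.nn as nn

class SubbandNet(nn.Module):
    """One small network responsible for a single frequency subband."""
    def __init__(self, in_dim=2, hidden=64, out_dim=1, f_lo=0.0, f_hi=8.0, n_feats=32):
        super().__init__()
        # Fourier-feature encoding restricted to [f_lo, f_hi] (assumed init scheme).
        freqs = torch.empty(n_feats, in_dim).uniform_(f_lo, f_hi)
        self.register_buffer("freqs", freqs)
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_feats, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        proj = 2 * torch.pi * x @ self.freqs.T
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.mlp(feats)

class EnsembleField(nn.Module):
    """Sum of per-subband networks; the final signal is the ensemble output."""
    def __init__(self, bands=((0, 2), (2, 4), (4, 8), (8, 16))):
        super().__init__()
        self.nets = nn.ModuleList(SubbandNet(f_lo=lo, f_hi=hi) for lo, hi in bands)

    def forward(self, x):
        # Every subband network is evaluated, so activation memory grows
        # roughly linearly with the number of subbands even though each
        # individual network is small.
        return sum(net(x) for net in self.nets)

model = EnsembleField()
n_params = sum(p.numel() for p in model.parameters())
print(f"total trainable parameters: {n_params}")
```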
[ -1, -1, -1, -1, -1, 7, 5, 8, 7 ]
[ -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "GEfT_QIv5N3", "tmHhF0_Dwj", "QIjksSHMGn", "SBG4MLQc9gd", "nips_2022_juE5ErmZB61", "nips_2022_juE5ErmZB61", "nips_2022_juE5ErmZB61", "nips_2022_juE5ErmZB61", "nips_2022_juE5ErmZB61" ]
nips_2022_80RnitDehg_
Anticipating Performativity by Predicting from Predictions
Predictions about people, such as their expected educational achievement or their credit risk, can be performative and shape the outcome that they are designed to predict. Understanding the causal effect of predictions on the eventual outcomes is crucial for foreseeing the implications of future predictive models and selecting which models to deploy. However, this causal estimation task poses unique challenges: model predictions are usually deterministic functions of input features and highly correlated with outcomes, which can make the causal effects of predictions on outcomes impossible to disentangle from the direct effect of the covariates. We study this problem through the lens of causal identifiability. Despite the hardness of this problem in full generality, we highlight three natural scenarios where the causal effect of predictions can be identified from observational data: randomization in predictions, overparameterization of the predictive model deployed during data collection, and discrete prediction outputs. Empirically we show that given our identifiability conditions hold, standard variants of supervised learning that predict from predictions by treating the prediction as an input feature can find transferable functional relationships that allow for conclusions about newly deployed predictive models. These positive results fundamentally rely on model predictions being recorded during data collection, bringing forward the importance of rethinking standard data collection practices to enable progress towards a better understanding of social outcomes and performative feedback loops.
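Purely as an illustration of the abstract's "predicting from predictions" strategy under the randomization condition it mentions, here is a hedged toy simulation. The data-generating process, the coefficient values, and all names are our assumptions and do not come from the paper or its experiments.

```python
# Illustrative-only sketch: when logged predictions hat_y carry independent
# randomness, regressing the outcome on (x, hat_y) can disentangle the
# effect of the prediction from the direct effect of the covariates.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)

# deployed model's prediction, with explicit independent randomization
hat_y = 0.8 * x + rng.normal(scale=0.5, size=n)

# performative outcome: depends on the covariate AND on the prediction itself
y = 1.0 * x + 0.6 * hat_y + rng.normal(scale=0.1, size=n)

# "predicting from predictions": treat the logged prediction as a feature
A = np.column_stack([x, hat_y])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # close to [1.0, 0.6]; the effect of hat_y is separated from x

# the fitted relationship can then be evaluated under a new predictive model
hat_y_new = np.clip(1.5 * x, -1.0, 1.0)
y_new_est = np.column_stack([x, hat_y_new]) @ coef
```

Without the independent noise on hat_y (a deterministic predictor), the two columns of A would be perfectly collinear and the decomposition would not be identifiable, which is the overlap issue the paper's identifiability results address.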
Accept
Strengths: * problem is well motivated and stated, writing is clear * interesting and important identification problem * useful results on sufficient conditions * causal assumptions are made explicit (see also weaknesses) Weaknesses: * very strict assumptions on causal structure (specifically that $\hat{Y}$ does not affect $X$) * assumptions made explicit, but writing conveys more general claims at the onset * theoretical results make interesting connections but lack clear novelty * current version missing a thorough discussion of limitations * concerns regarding using predictions as a feature Summary: The paper presents an interesting study of the effects of predictions on outcomes. The key contributions are a clean formulation of the problem, several identifiability results, and some corollaries regarding the use of predictions as input for learning. Reviewers were unanimous in their appreciation of the paper’s quality—but also in their concern regarding the strong assumptions made. While making assumptions is certainly adequate, some reviewers felt that this narrows the scope of how results should be interpreted. One reviewer was also worried that assumptions simplify the problem in a way that makes the paper’s theoretical contributions derive immediately from know results in causal inference. But the main concern was that, while assumptions were indeed clearly stated, earlier sections seem to present the paper as being more general than it is, thus creating false expectations and deferring a much needed discussion regarding the paper’s limitations that follow from its strong assumptions. In the discussion, reviewers were mostly satisfied with the authors’ responses as to how they plan to address the concerns raised. Unfortunately, the authors have only stated what changes they intend to apply, and did not provide reviewers with a revised version. This makes judging these anticipated changes difficult. Nonetheless, all reviewers consider the paper and its contributions favorably; the authors are therefore strongly urged to clearly and adequately frame their paper’s results and limitations, with full integrity, and as early in the paper as possible.
train
[ "EgHol-IbRp", "fbFPZPbTX-z", "GGFsuVhoV8k", "MI3Mrfetp5", "rkAWJyIRD5b", "fMtE2nsu6O", "2r6XCRmCrdL", "M3HWdw_ofkM", "bThVGo4IEJl", "J8RYKCmnFqK", "_vPlgrj9kI", "mr3potzWkBk", "jjh2upkWZGw", "lbksmBhdv_b", "M6EKD9caBcn", "Tl9dRd_sKc-", "-FMBHZ9ujnM", "QDE3XCY-_A", "iCdhSP9qf9T" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your positive assessment and the helpful comments. We will for sure implement all these changes in the camera-ready version to the best of our abilities. \n\nRegarding your last question: \nthe fact that $U$ is unobserved means that, for each individual, we do not observe their precise value of $U$, but only observe $X, \\hat Y, Y$. However, we may still have knowledge about the distribution of the measurement noise $P(U|X)$ which enables the application of our result. For example $P(U|X) = N(X, \\sigma^2)$ where $\\sigma$ indicates the scale of the noise. (Knowing the size of noise does not reveal the precise value of U either.) Depending on the application such external knowledge about the noise may be available for example from test–retest data, prior modeling, or a physical model for the measurement device.\nGiven such external knowledge about noise, we can calculate the distribution $P(U|\\hat Y, Y) = \\int P(U|X) P(X|\\hat Y,Y) dX$ even when $U$ is unobserved; $P(U|X)$ comes from the external knowledge about measurement noise; and $P(X|\\hat Y,Y)$ is a distribution about observed variables and may be estimated from data.\n\nWe hope this clarifies your question even if we agree it is less relevant for the updated manuscript.", " Thank you for your constructive feedback. We understand your concern and we make sure any potential for misunderstanding is eliminated.\n\nMore specifically, we will do the following to be more forthright about the limitations of our assumptions:\n- adjust the wording in abstract and introduction to more closely tie the data-driven strategy 'predicting from prediction' to the identifiability assumptions.\n- add a dedicated *limitation section* to discuss our assumptions together with potential violations (other ways $f_\\theta$ can be performative, anti-causal prediction, spill-over effects).\n- include a sentence along the lines of: *our results do not justify the use of supervised learning to address performativity in broader generality beyond the scope of our identifiability results.*\n- add more discussion of Figure 4(a)\n\nWe would also like to note that the paper already makes an effort in being explicit about assumptions: The assumptions are explicitly stated in all our formal results and we never tried to hide them. We explicitly bring up strategic classification (L83-87) to make clear our causal model covers only a special case of performativity. We have a dedicated section (Section 4) to point to the limitations of the SUTVA assumption underlying our analysis. And in the empirical section we included a result (Figure 4(a)) to demonstrate that supervised learning can go wrong if the identifiability assumptions are not satisfied. That said, we view these changes as an important improvement of existing efforts of being explicit about limitations, rather than an extensive change to the paper.\n\nWe hope that this discussion and the more specific descriptions of the changes offer you more confidence in us that we have the ability to incorporate these changes properly in the paper. We'd also be happy to incorporate any further feedback if there is something you would like to add.\n", " Thanks for the clarification!", " Thank you for your response. 
I have increased my score based on the changes promised for a camera-ready version of this paper.\n\nRegarding $P(U∣\\hat Y=\\hat y,Y=y)$, I still do not understand what this refers to (since $U$ is unobserved) but perhaps this does not matter if this result is removed from the manuscript.", " Thanks for the detailed response and the insightful comments. In order to achieve positive impacts on the community, discussion of limitations and alignment of expectations are crucial. As long as limitations are properly discussed, I agree with your position on causal models, related work, and completeness. \n\nAdditionally, I would like to thank you for your detailed response to the question of \"quantifying the necessity of observing $\\hat{Y}$\". Logging prediction data seems like a good practice in general. However, using predictions as a feature may raise concerns among practitioners due to the introduction of new dependency structures and feedback loops within the system. Although it may be the case that using predictions as features leads to positive outcomes when the causal structure is as shown in Figure 1, I am not sure whether it will always result in better results. In this regard, I am wondering whether the suggestion of incorporating predictions as features can be quantified in a way that is robust to the actual causal structure in some sense, and whether there are cases in which such a practice would be inappropriate.\n\nIn light of your concrete suggestions for improvement and your discussion of limitations, I have raised my rating to \"Borderline Reject\". It is slightly below the threshold for acceptance due to the fact that the changes, while clearly outlined, appear to be extensive, and will not be peer-reviewed if the paper gets accepted. In order to ensure that this formal rating is appropriate for this scenario, I will consult with the area chair and adjust accordingly. Once again, thank you for your comments! I am looking forward to seeing the eventual outcome of this work.\n", " Thank you for your feedback! We are happy to have Proposition 4 in the appendix where we have more space to elaborate on the details, we just felt it is hard to do justice to this result in the main body. We believe it serves as an interesting example of a possible identification strategy with side-information that goes beyond classical ERM.\n\nWe would like to emphasize that we will still have *four* identification results in the paper: identification through 1)* randomization in predictions and 2) randomization in down-stream decisions, and the two identification results without randomness through 3) incongruence in functional complexity (overparameterization) and 4) incongruence in modality (discrete prediction). We don't think the first two should be dismissed because they are 'technically' less evolved; they connect properties of predictions to overlap from where identification follows. The former connection has not previously been made in the context of performative prediction. We think understanding the limitations of learning performative effects from data collected under the deployment of a *deterministic predictions* and illustrating the need for sufficient randomness (as well as discussing natural settings where it arises) is very relevant in practice.\n\n*The first result is the refined randomness result we decided to add in response to the reviewer comments. It is outlined in the general response, bullet point 4. 
", " Dear Reviewer rWDi, \n\nThe discussion phase is ending soon and we will not be able to respond to your comments after that. \n\nWe would appreciate if you could let us know whether our reply to your review (including the general remark) and the discussion of related work addressed your concerns. If there is anything else you think we should do to further improve the paper, please let us know. \n\nThank you!", " Thanks for the detailed response. I still think the identification results are not novel enough and very specific to the causal structure. And it would be better to move proposition 4 to appendix rather than deleting it given that you only have two identification results now. But I think the extra experiments and the clarifications you made here as well as in other responses compensate this weakness so I tend to remain the evaluation I had. ", " I have carefully read the authors' responses, and my evaluation remains the same. ", " **Related work.** Thank you for the relevant pointer. The work by Wager et al paper is interesting because it is also motivated by the downstream effects of predictions, however the feedback effects studied are different from the focus of our work. More specifically, they assume repeated prediction, access to data from multiple consecutive time steps, and focus on detecting whether predictions in one time step impact predictions in the next time step. They show that for detecting such feedback loops, randomness in predictions is sufficient. But apart from the fact that independent variation in predictions helps reason about down-stream consequences of predictions, the insights of the paper are different from ours. In particular, detecting such feedback loops is not sufficient for concluding specifics of how predictions impact individuals, which is what we are interested in. We will elaborate on this. \n\n**Theoretical statements.** There are some inaccuracies in the statement of the theoretical results in the submitted version. We are aware of them and we thank the reviewer for carefully going through the claims and pointing them out. We have fixed them in the meantime. Let us comment on the specific questions:\n\n- *Extrapolation error.* Yes, the conditions are both assumed to hold in the second argument of the loss function. We will clarify this and also add a formal definition of the extrapolation error.\n- *Front-door-adjustment.* We agree that technically there is no confounding in our setting. We used the term font-door to highlight the conceptual approach of causal identification by tracing the path of the causal effect rather than by adjusting. We will clarify this. \n- *Overlap condition.* Thanks for spotting this. $P(T_{\\hat Y}=t∣X)>0$ should be replaced with a condition over measurable sets to make sure it is well defined for continuous t.\n- *Example 3.1* We restated the assumption in the example as follows: Any consistent parameter estimate $\\alpha',\\beta'$ needs to satisfy the following two equations: $ \\alpha'+\\beta'\\xi = \\beta\\xi$ and $\\beta'\\gamma=\\beta\\gamma$ matching the multipliers of the base polynomials. It is not hard to see that this system of equations has a unique solution as long as $\\gamma>0$ which corresponds to the case where $f_\\theta$ is overparameterized with respect to the class of linear functions. \n- *Proposition 4.* Knowledge of $P(U∣\\hat Y=\\hat y,Y=y)$ is required because samples need to be drawn from this distribution in order for the ERM statement to hold with the specified weights. 
This can directly be seen from Equation (23) in the appendix. So there is some additional adjustment that needs to be done to map the result to the ERM problem, it is not as straightforward as we presented it in the paper. We have decided to remove this result from the paper because the identification strategy is very different from the other results, it does not neatly fit with the ERM problem. In our eyes it does not add substantially to the story and we prefer to leave the development of more complex identification strategies for future work. Instead we have added a refined randomness result to make explicit the set of models we can extrapolate to even if global identifiability can not be established.\n- *Regression discontinuity*. We make an analogy to RDD because we believe many people are familiar with this quasi-experiment strategy and the reasoning behind it. However, Wang and Blei (2019) was the result that we relied on in the proof, thus the reference. Conceptually the results are very similar in spirit. Wang and Blei (2019) leverages a similar inductive bias as the regression discontinuity design to achieve causal identification; they both rely on the incongruence between the discrete treatment and continuous confounder for identification. The $x’$ in the proof refers to any $x’\\neq x$ such that $(\\hat y,x’)$ is observed, we will clarify this.\n\nWe hope this discussion clarifies the concerns with the theoretical statements. Please let us know if these answers are not satisfactory.\n", " **Novelty of the theoretical results.** We agree that, technically, the first two results are direct applications of existing results from causal inference to the performative prediction problem. However we believe there is value in singling out the causal question, transferring insights between these two fields and interpreting the standard assumptions in the context of prediction. This illuminates interesting connections to differential privacy and individual fairness. Furthermore, to better integrate the result Section 3.2 with the overall story we added a general identification results, stating the positivity assumption as an assumption on the predictive models, rather than the induced distribution directly. This builds on the unique property of the performative prediction problem where distributions are characterized by the predictions of the deployed model, and further puts it apart from standard causal identifiability results.\n\n**Overparameterization in neural networks.** Yes, it is possible to replace the overparameterized $f_\\theta$ in our experiments in Figure 4(c) with a neural network instead of just a degree-2 polynomial. That’s a good idea. We can also simultaneously increase the complexity of $g_1$ and $g_2$ while complying with the assumptions of Proposition 6. We will add this experiment to the appendix. \n\n**Proposition 4.** We need to know $P(U|\\hat Y)$ for computing the weights but we also need to know $P(U|Y,\\hat Y)$ because this is the distribution we need to sample from in order to achieve identification. This can be seen from equation (23) in the appendix. Because of this change in distribution, the correspondence with the ERM objective is a bit more involved than stated in the paper. We have decided to remove this result from the main body and focus on results that enable identification via the standard ERM framework. We leave the discussion of more complex identification strategies for future work. 
To us the most interesting results are Proposition 5&6 that show how two natural properties of prediction functions can help identification despite lack of overlap and we do not want technical details to distract from this clear story. Together with the refined randomness result we believe they can convey a complete picture.\n\nThanks for spotting the typos. \n", " **Causal model.** This work aims to draw attention to the direct causal link between predictions and outcomes. This link reflects that the deployment of a model can lead to changes in P(Y|X). We chose to single out this effect because it is often assumed away by making a covariate shift assumption in out-of-distribution generalization. Certainly, the causal link $f\\rightarrow\\hat Y\\rightarrow Y$ is not the only way performative effects in the joint distribution $P(X,Y)$ surface in practice. We aimed to make this explicit by contrasting with the strategic classification setup in the related work, but we can expand on this (e.g., actional recourse is also an interesting example that is concerned with the link between $X$ and $\\hat Y$). \n\nWhile causal identifiability claims are no longer valid out-of-the-box if additional causal arrows are added, the conceptual insights of our work apply more broadly. For example, imagine the distribution over covariates $D_X$ is not fixed but also depends on $f$. Then, the problem of identifying $h^*$ is further complicated because we also need to establish overlap on $X$. However, once overlap is established our assumption enables identifiability. Thus, the challenges of $X$ and $\\hat Y$ being coupled, the insights that incongruence can help tackle this, and the necessity of modeling $\\hat Y$ explicitly to learn transferable models remains. Thus, we believe it is an important dimension of performativity that brings forward unique challenges that merit investigation.\n\n**Related work.** We thank the reviewer for drawing our attention to related works. In particular, the line of work by Swaminathan and Joachims is interesting because they also emphasize the importance of logging information about policies to be able to do offline evaluation. In their case it is propensities that enable the mitigation of selection bias. In our case it is predictions that allow us to model concept shifts caused by predictions. \n\nHowever, in general, works on mitigating covariate shift by modeling test-to-training density ratios are complementary to our work. Only once identifiability through overlap is established and propensity scores are well defined, can these methods be applied. However, our focus is on understanding when the problem is tractable. Complementarily, we show that identifiability can be established through incongruence even in settings where propensity score weighting is not applicable. \n\nThe line of work on algorithmic recourse focuses on the relationship between covariates and predictions whereas we focus on down-stream effects of predictions on outcomes. Naturally, for learning which covariates would have led to a desired prediction (which is a counterfactual quantity) they need some randomness in the covariates. In our case it is useful to have randomness in the prediction for a fixed set of covariates because we want to understand the effect of the prediction on Y. This is a different problem that brings its own technical challenges. 
We will make this distinction more explicit in the related work.\n\n**Do the results provide a complete picture?** Our causal identifiability results pinpoint interesting properties of prediction functions that are sufficient to render causal effect estimation of predictions possible. The ability to separate the causal effect of $X$ and $\\hat Y$ on $Y$ is crucial for identifiability without overlap. There might be other special configurations/assumptions that achieve the required degree of separability, but it seems difficult to provide a complete picture of these assumptions.\n\n**Quantify necessity of observing $\\hat Y$.** Our causal model in Figure 1 elucidates that the distribution $P(X,Y)$ in the presence of performativity depends on $f$. Without observing $\\hat Y$, is it impossible to estimate $E[Y|do(\\hat Y), X]$, like without observing the treatment, one can estimate the effect of the treatment. By showing that without knowing $\\hat Y$ there is an inevitable extrapolation gap that one suffers (Proposition 1) we show that this infeasibility result is relevant for prediction. In turn if one is able to observe $\\hat Y$ we can hope to recover the causal effect of predictions to mitigate the extrapolation gap. In this sense, we see Proposition 3-6 as examples of positive results that are only possible because we had access to the predictions. In these settings the extrapolation gap between different models directly reflects the penalty for not having access to this information, quantifying the value of recording $\\hat Y$. \n\nWe hope this discussion clarifies your questions and the difference between the effect we study in our causal model and the type of distribution shifts studied in strategic classification or policy evaluation problems which thus bring forward different technical challenges. \n\nPlease do not hesitate to follow up if you feel some questions remain unanswered.\n", " **Overlap.** By stating that overlap is easier to satisfy we mean that for smaller set $|T|$ it is more reasonable to assume positive probability for every event. But this should not be interpreted as a formal claim, we can remove it if it is confusing.\n\n**Multiple environments.** This is an interesting point. If we have data from the deployment of multiple models (corresponding to different distributions) we can pool the data to increase overlap. However, for deterministic prediction functions we would still need an unbounded number of environments to achieve global identifiability. The reason is that for every $X$ every single model only provides an estimate for one pair $(X,\\hat Y)$. So it does not make the problem significantly less complex unless we are willing to make parametric assumptions.\n\n**More complex causal structures.** Unfortunately, causal inference on observational data is impossible without uncheckable causal assumptions on how the outcome is generated (Pearl, 2009). Thus it is unavoidable to focus on one particular graph. By studying the graph in Figure 1 we want to draw attention to the causal link between the prediction $\\hat Y$ and the outcome $Y$ and pinpoint the challenges of learning this causal effect due to the coupling between $X$ and $\\hat Y$. This is a general challenge that persists also in the presence of additional performative effects. 
\n\nWe see two valuable extensions of our graph: \n- First, the graph could be extended to allow $f$ to impact the distribution over covariates $D_X$ and thus embrace a broader class of performative effects (e.g., strategic behavior). In this case we would, in addition, need to deal with overlap in $X$. But, once overlap in $X$ is established, our assumptions would still ensure identifiability. \n- A second extension would be a graph where not all features in $X$ are necessarily causal for $Y$. However, if $\\hat Y$ induces changes in $Y$ that propagate to these anti-causal features this leads to cycles in the causal graph that demand additional caution and violate our assumptions. Prior work has demonstrated that anti-causal prediction should be avoided in the presence of strategic manipulations, which supports our idealistic assumption of causal prediction. However, such features might nevertheless be used for prediction, and an investigation of performativity in the context of anti-causal predictions would be an interesting direction for future work. \n\n**On the role of separability.** Since overlap is not satisfied for deterministic prediction functions we wish to see what is possible beyond assuming overlap. The challenge is that without overlap we need to predict unobserved counterfactual quantities. To achieve this we need to be able to disentangle the direct from the indirect link in order to extrapolate. Without some sort of separability in $g(X, \\hat Y)$, the function $g(.)$ and the interventional quantity $E[Y|X, do(\\hat Y)]$ is not identifiable: there exists $g’(.)$ different from $g(.)$ such that $g’(X, \\hat Y) = g(X, \\hat Y)$ under $\\hat Y = f_\\theta(X)$. One such example is the function $g’(.)$ that satisfies $g’(X, \\hat Y) = g’(X) \\stackrel{def}{=} g(X, f_\\theta(X))$. Such a g’(.) may be compatible with the observational data as $g(.)$ does but could imply different values of the interventional quantity $E[Y|X, do(\\hat Y)]$. That said, under separability, this issue of non-identifiable $g(.)$ does no longer exist, and can enable causal inference on $E[Y|X, do(\\hat Y)]$.\n\nIf only parts of $g$ are separable, then overparameterization will help us correctly identify these separable components. Similarly, if separability is violated discrete classification will allow for the effects to be identified locally but not globally (akin to an RDD). In any case, weak violations of our assumptions no longer allow for conclusions about *any* model $f_\\phi$, however they might still be useful to extrapolate to *some* models. To make explicit how violations of the assumptions shrink the set of models we can extrapolate to, we added a refined randomness result sketched in the general comment. From a practical perspective this implies that we should avoid drawing conclusions about models that are very different from $f_\\theta$ if we can not be sufficiently confident that our assumptions hold. \n", " \nIn response to the reviews we will make use of the additional content page as follows:\n\n1. We will add a discussion section to make the *limitations* of our assumptions clear and explain that our results do not justify predicting from predictions as a general purpose strategy for anticipating performativity. It is important to us that this is not misunderstood to ensure that this work has a positive impact on the community in the long run. \n2. 
We will expand the related work to discuss the papers pointed out by the reviewers and do some further literature search tracing the references therein.\n3. We will anticipate questions and add clarifications where needed.\n4. We will add a refined randomness result to Section 3.2 that makes explicit the set of models we can extrapolate to even if global identifiability can not be established. More formally; we can extrapolate to models $f_\\phi$ for which $\\forall x\\in\\mathcal X$ and subsets $\\mathcal Y'\\subseteq\\mathcal Y$ with positive measure it holds that $P[f_{\\phi}(x)\\in \\mathcal Y']/P[f_\\theta(x)\\in \\mathcal Y']>0$. \nThis assumption puts our result apart from classical overlap assumptions in that it takes into account that $\\hat Y$ is the output of a known prediction function, and the target ‘environment’ is fully specified by $f_\\phi$.\n", " Thank you to the reviewers for their valuable feedback and appreciation of our work. Before we respond to the questions by the reviewers individually, we would like to clarify one concern upfront.\n \n**Bigger picture.** We want to be clear that we are not claiming to ‘solve’ performative prediction or advocate for predicting from predictions as a sufficient solution to tackle performative distribution shifts in full generality. Our work focuses on one specific causal effect and we want our technical results to be understood as a proof of concept that shows how access to the predictions could enable positive results in the presence of performativity. We hope this contributes an additional perspective to recent discussions about data collection for machine learning, in particular emphasizing the importance of collecting information about predictions. We highlight this as a necessary step to enable fruitful research that builds better understanding of the down-stream consequences of predictions. \n\nBeyond this broader goal, we believe our causal identifiability results are of independent interest because they distill interesting challenges and particularities of performing causal inference when the treatment variable is not randomly assigned, but based on a prediction output by a machine learning model.\n", " The setting of the paper is about making predictions from predictions -- given a deployed model and decision subjects may best respond to it, how will their target variable Y change. The goal for the authors is to predict the impact of a new model deployment before actually deploying it. Instead of trying to come up with an optimal solution (e.g. performative optimality), the authors are interested in understanding the underlying causal mechanism of the distribution shifts. I find the problem of identifying $E[Y| do (\\hat{Y} = y), X]$ in the performative prediction setting interesting. Overall the paper makes a novel contribution by providing sufficient conditions to identify the causal effect of predictions under some assumptions. The relation to the literature on spillover effect/social network analysis is also interesting.\n\nTo me, the major weakness to me is that the paper seems to make heavy assumptions in order to have interesting and clean results. For example, the whole paper is built on assuming a particular causal structure (Figure 1), and the definition of the extrapolation error allows a clean separation between the influence of $X$ and $\\hat{Y}$, and in section 3.3, the author assumes the effect of $X$ on $Y$ and $\\hat{Y}$ is separable as well. 
Thus it wasn't clear to me how likely this work can be extended to more complicated settings. The authors provide some empirical justification, but I would like to see some theoretical insights on how the results would change/not hold without certain assumptions, and I believe it will greatly strengthen the paper. Can the author provide some insights on why overlapping is easier to achieve if $|\\mathcal{T}|$ is small? In general, can we boost the overlapping by including more classifiers? If the underlying causal model is more complicated (e.g. have more variables $X_1, X_2..$ or more complicated causal relationships among each other), what will be a good way to ensure the identifiability of the causal structure? Please see the strength and weaknesses section.", " This work investigates performative prediction through the lens of causal inference, aiming to identify causal structures which allow for counterfactual estimation of performative effects.\n\nInvestigation in this paper assumes a specific causal structure, in which the predicted value $\\hat{Y}$ acts as a mediator between the chosen prediction model $f_\\theta$ and the outcome $Y$, and $f_\\theta$ does not directly affect other variables (Figure 1). For the theoretical analysis, an infinite-sample setting is assumed, where it is assumed that the sample size is infinite. \nThree sets of theoretical results are presented:\n* The first set result assumes a specific performative structure, and quantifies the regret from naively training a model under performative behavior (Proposition 1,2). The result aims to illustrate the negative effects of neglecting a performative causal structure. \n* For the second set of theoretical results, authors identify “overlap/positivity” as the main limiting factor in performative extrapolation. Authors point out three avenues through which this concern can be alleviated - randomization of model outputs (Proposition 3), noisy measurement of covariates (Proposition 4), and incongruence between the effect of $X$ on $Y$ and the effect of $X$ on $\\hat{Y}$ in separable performative structures (“over-parameterization” in Proposition 5, discrete classification in Proposition 6).\n* For the third set of results, authors initiate investigation of spillover effects between users, identifying a spillover structure $G$ through which identifiability is possible.\n\nIn the experimental evaluation, authors present results which validate their claims, showing empirically that results extend beyond the theoretical guarantees.\n Strengths\n* Presentation and mathematical notations are very clear. Causal assumptions are made explicit.\n* Identifying “failure modes” and “success modes” of performative prediction is an approach which can establish strong theoretical foundations for this area of research, and help it further establish applicability in practice.\n* The investigation points out avenues of research which may be interesting for further inquiry, such as the relation between model over-parameterization and performative prediction, or the significance of spillover effects.\n\nWeaknesses\n* The paper claims to address the general problem of \"predicting from predictions\", but in practice assumes a very specific causal structure and does not sufficiently establish its applicability. In particular, the paper assumes a causal model in which $f_\\theta$ does not affect $X$. 
However, a direct causal path between $f_\\theta$ and $X$ exists in many practical cases, for example in the “Actionable Recourse” setting (Uston et al. 2018). In section 2.1, the authors mention many applications in which a performative structure may be present, but it’s not clear whether they can indeed be approximated using the causal structure presented in Figure 1.\n* The paper presents a collection of interesting insights, but it’s not made clear how they combine into an understanding of the “bigger picture”. Moreover, even though similar problems were investigated in previous literature (e.g “Counterfactual Risk Minimization: Learning from Logged Bandit Feedback” by Swaminathan and Joachims 2015, recent work on over-parameterization and OoD generalization in deep neural nets), these existing results are not discussed in the paper.\n* Theoretical results assume that sample size is infinite, and that predictors are minimizers of the expected risk (i.e $h^*=\\arg\\min_{h\\in\\mathcal{H}} \\mathbb{E}[w(h(x)-y)^2]$) - A quantity which cannot be directly optimized practice. Not clear how the formalism extends to the finite-sample setting. What would change when the dataset size is finite?\n\nA few minor typos: Line 263 (bet -> be), line 307 (inequality expression seems incomplete - missing a random variable next to the expectation operator?), line 374 (lesson -> lessons).\n * When do we expect the performative causal structure assumption (Figure 1) to hold? When do we expect it not to hold? What are the implications of making a wrong assumption about the causal structure?\n* What is the relation between the different propositions in the theoretical analysis? How do they combine into an understanding of a “bigger picture”? Are they \"complete\" in the sense that they cover all possible influence modes?\n* The discussion claims that “one of the most important lessons from this work is that there is high value to logging the state of the deployed predictor when collecting data for the purpose of supervised learning”. I agree with this claim very much, and am wondering if there is a way to support or quantify it.\n* What is the connection to existing work on learning under distribution shift? Literature that comes to mind is:\n * “Counterfactual Risk Minimization: Learning from Logged Bandit Feedback” by Swaminathan and Joachims 2015,\n * “Discriminative Learning Under Covariate Shift”, Bickel et al. 2009\n * “Actionable Recourse in Linear Classification”, Uston et al. 2018 \nLiterature on online learning and multi-armed bandits (e.g can we think of an “$\\varepsilon$-greedy” strategy as an inducer of identifiability?)\n * The paper makes a very strong causal assumption, but the current version does not properly contextualize it within the existing literature. I feel that the paper would benefit from discussing its limitations and implications of the causal structure assumption in more depth.\n* Extension of theoretical analysis to the finite-sample case, and the relation to existing machine learning literature on similar topics.", " The paper studies the problem of estimating counterfactual outcomes under a different predictive model, thus drawing a link between causal inference and the recent literature on performative prediction. The paper starts with hardness of performativity-agonostic learning and then describes identification results that enable the supervised learning approach to work. 
The authors also conduct experiments showing that the identification strategies work well in simulations and mild violations of assumptions are not of concern. Strengths: The writing is clear and the problem is well motivated and stated. The identification results are clearly stated with enough explanations and examples. Overall the paper is very clear in exposition. The problem studied is also having potential significance in real world examples as non-stationarity arises, which I particuarly like. The experiements are sound with code available for reproducibility.\n\nWeakness: The scope of the work is somewhat limited since the effect of the predictive model is only assumed to affect $\n\\hat{Y}$ (the authors acknowledge this in section 2.1). When restricting to this particular causal graph, the first two identification results are essentially direct application of existing results, thus limiting the originality of the work. Also, it would be nice to see some real examples in the experiment section. 1. In the overparametrization identification result, the author mentions neural network as one of the examples. I am curious about if there is any empirical results illustrating this identification result under this complex model as a lot of the real world models are indeed getting more and more complex.\n2. In proposition 4, is $P(U|Y, \\hat{Y})$ necessary to know or $P(U|\\hat{Y})$ is enough to enable identification?\n3. Could you say more about applicability of knowing $P(U|Y, \\hat{Y})$ in some real world examples, like which examples you mentioned seem more reasonable assuming this?\n\nSome possible typos: line 293 in main text, $f_\\theta$ we will, removing we? line 307, should there be $Y_j$ after the expectation? The author is very clear about the limitation of the work.", " In the traditional supervised setting, we seek to predict a response $Y$ based on features $X$, say as $\\hat{Y} = f_{\\theta}(X)$. The question this paper asks is the following: What happens when the prediction $\\hat{Y}$ itself influences $Y$? Can we then still rigorously reason about our predictive model? To answer the above question, the paper makes the following contributions:\n\n1) It introduces a causal framework to answer the above question. Instead of the traditional regression estimand $\\mathbb E[Y \\mid X=x]$, the estimand of interest in this setting is the following:\n$$ h(x, \\hat{y}) = \\mathbb E[ Y \\mid X=x, do(\\hat{Y}=\\hat{y})] $$\n\n2) It establishes three practical situations under which the above estimand is identified from observational data and may furthermore be estimated by supervised learning based on weighted empirical risk minimization. These three situations are the following: when noisy predictions are released (e.g., in differential privacy), when the features $X$ are noisy measurements of latent features $U$, and under separability (along with overparameterization or discrete classification).\n\n\n3) A semi-synthetic study is conducted based on census data and Kaggle credit scoring data.\n # Strengths\n\n* Motivation: The setting and problem tackled in this paper are exceptionally well-motivated. It is clear that rigorous reasoning about the causal effects of predictions is of great practical and theoretical importance. \n\n* Clarity: The prose of the paper (but not the formal results, see below) is a joy to read and very clear.\n\n\n# Weaknesses\n\n* Previous work: I believe that the paper misses important previous references. 
Rigorously thinking about causal effects of predictions and seeking to estimate them is not a new idea, even within the NeurIPS community. For example, the following paper (also see references therein) addresses such a problem, and the underlying method is based on adding noise to the predictors (i.e., very similar to one of the identification strategies in the present paper):\n\n> Wager, S., Chamandy, N., Muralidharan, O., & Najmi, A. (2014). Feedback detection for live predictors. Advances in Neural Information Processing Systems, 27.\n\n* Formal results: Several of the formal results of the paper are not well-stated, have errors (e.g., in the proofs), and make incorrect references to previous work. See points below for a more detailed listing of such issues.\n ### Extrapolation error\n\nPlease clearly define what extrapolation error is (I do not think it is ever defined). It may be helpful to state the theoretical result (Proposition 1) with the middle term in the inequality replaced by $R_{f_{\\phi}}(\\psi(f_{\\theta})) - R_{f_{\\phi}}(\\psi(f_{\\phi}))$. Even though the 2nd term is $0$, it may be more instructive to express it as above. In Proposition 1, it seems that smoothness and strong convexity refer to the second argument of the loss function. If so, this should be clarified/explicitly stated.\n\n### Overlap\n\n* Front door adjustment: I did not understand the reference to front door adjustment in both the main text and the proof of Proposition 3. For example, in the proof, it is written that the front door adjustment formula is applied to $\\hat{Y} \\to T_{\\hat{Y}} \\to Y$. But note that here $X$ is actually observed, and is conditioned upon, so front door adjustment does not seem necessary? Instead it seems that the argument boils down to the fact that in this setting:\n$$ \\mathbb E[ Y \\mid X=x, do(\\hat{Y}=\\hat{y})] = \\mathbb E[ Y \\mid X=x, \\hat{Y}=\\hat{y}],$$\nand the RHS is now estimable/identifiable (while is is not under the conditions of Proposition 2).\n\n\n* Page 6, Line 226: $P(T_{\\hat{Y}} = t \\mid X)>0$: This is not correct, this should be replaced by a statement about the conditional density of $T_{\\hat{Y}}$, or alternatively, stated as in Assumption 1.\n\n### Noisy measurement of covariates\n\n* \"information about the noise distribution of the measurement error $P(U \\mid X=x)$\": Is the measurement error distribution not specified instead by $p(x \\mid U=u)$?\n\n* I do not understand the statement of Proposition 4. What are these weights exactly? They are stated as:\n$w(x, \\hat{y}) = P(U | X=x) / P(U \\mid \\hat{Y} = \\hat{y}),$\nbut this does not make sense to me, since $U$ is a latent random variable (and so the above could not be the density at a fixed $u$). Also why does the beginning of the statement state that knowledge of $P(U \\mid \\hat{Y} = \\hat{y}, Y=y)$ is required even though only $P(U \\mid \\hat{Y} = \\hat{y})$ is included in the weight definition?\n\n* Proposition 4: In the proof of Proposition 4, the equality in line (19) is incorrect. 
\n\n### Separability:\n\n* Proposition 5: It would be very helpful if the specification of $\\mathcal{H}$ in this statement could be more explicit.\n\n* Example 3.1: \"inferring $\\beta = c_1$ and $\\alpha = c_1 - c_2$\": I think for $\\alpha$ one would infer $\\alpha=c_1$ instead of $\\alpha = c_1 - c_2$.\n\n* Proposition 6: Here the main text claims that results for the regression discontinuity design are used, but then the actual proof cites a result from Wang and Blei (2019) that is unrelated to the regression discontinuity design. Also in the proof, what is $x'$? Why does supervised learning recover the true effect?\n\n### Minor:\n\n* *The supervised learning approach*: This should be in bold instead of italics (to match the bold of other paragraphs in the manuscript).\n\n* In equation (8), why are the weights allowed to depend also on $y$? Would it suffice if they only depend on $(x, \\hat{y})$?\n\n* There is a typo in the statement of Assumption 2. Limitations of the framework are not discussed. It would be great if a paragraph could be added with possible directions for future work!" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "MI3Mrfetp5", "rkAWJyIRD5b", "fMtE2nsu6O", "J8RYKCmnFqK", "mr3potzWkBk", "M3HWdw_ofkM", "mr3potzWkBk", "_vPlgrj9kI", "jjh2upkWZGw", "iCdhSP9qf9T", "QDE3XCY-_A", "-FMBHZ9ujnM", "Tl9dRd_sKc-", "M6EKD9caBcn", "nips_2022_80RnitDehg_", "nips_2022_80RnitDehg_", "nips_2022_80RnitDehg_", "nips_2022_80RnitDehg_", "nips_2022_80RnitDehg_" ]
nips_2022_acKK8MQe2xc
Learning Invariant Graph Representations for Out-of-Distribution Generalization
Graph representation learning has shown effectiveness when testing and training graph data come from the same distribution, but most existing approaches fail to generalize under distribution shifts. Invariant learning, backed by the invariance principle from causality, can achieve guaranteed generalization under distribution shifts in theory and has shown great success in practice. However, invariant learning for graphs under distribution shifts remains unexplored and challenging. To solve this problem, we propose a Graph Invariant Learning (GIL) model capable of learning generalized graph representations under distribution shifts. Our proposed method can capture the invariant relationships between predictive graph structural information and labels in a mixture of latent environments through jointly optimizing three tailored modules. Specifically, we first design a GNN-based subgraph generator to identify invariant subgraphs. Then we use the variant subgraphs, i.e., complements of invariant subgraphs, to infer the latent environment labels. We further propose an invariant learning module to learn graph representations that can generalize to unknown test graphs. Theoretical justifications for our proposed method are also provided. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of our method against state-of-the-art baselines under distribution shifts for the graph classification task.
Accept
This paper focuses on a new research problem of learning invariant graph representations under distribution shifts, which considers the latent environment labels. The proposal is a joint learning framework called graph invariant learning (GIL), combining three different GNNs with various functions. The philosophy behind it sounds quite interesting to me, namely, learning a maximally invariant graph predictor, which is composed of an environment inference module and an invariant subgraph identification module. The proposed method GIL has good empirical results on several datasets and related theoretical analyses, which further justify its effectiveness. The clarity and novelty are clearly above the bar of NeurIPS. While the reviewers had some concerns about the significance and complexity, the authors did a particularly good job in their rebuttal. Thus, all of us have agreed to accept this paper for publication! Please include the additional experimental results in the next version.
train
[ "pUCMcPpMK2", "YGodvJhIrvJ", "PM2cQMf8HKZ", "ARzdGQHR9hm", "Mmwf99OCHD", "CdUmIRbJyV", "Y5HQPkq7EqV", "gOagjfmarRc", "Rg03B6e8LUZ", "xtuXSXIqWlv", "GaZr8YurMY7", "RznFYbDCwiF", "ARsuAsGzlXM", "GOfSbj-TIyh", "N15tmanQ9_b", "cqc-PU8UmAP", "koKszwiuI5", "J0CddhRKskH", "yRJNx8ygJ4w", "Z46UlsqZsy", "9RUXsIGR37", "AtIJ3WLKRzU", "848hHcGECpW", "S4ICvql11X6" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nWe sincerely appreciate your positive feedback and acknowledgement of our rebuttal. And thanks again for your time and efforts in reviewing our work.\n\nBest regards,\n\nThe Authors", " Thank you for the detailed responses to all points made in the review. \n\nWe have differing opinions on the validity of certain conditions in the OOD generalization and invariant learning problem literature, but your claim that these are not your own new assumptions, and rather shared with previous work is valid. The methodological novelty is of course also a slightly subjective matter but I will suggest that the specific language and four point enumeration used to state your rebuttal for Q2 below, is in my opinion slightly stronger than the original language, say, at the end of the introduction! Maybe you can incorporate parts of it into the revision. The additional experiments based on exploring the insensitivity to clustering algorithm and hyperparameters are also appreciated.", " Dear Reviewer,\n\nWe sincerely appreciate your quick response! And thanks again for your time and efforts in reviewing our work.\n\nBest regards,\n\nThe Authors", " Thanks for the authors' detailed responses. My concerns about the literature and experiments are resolved. I would like to raise my score to 5.", " Dear Reviewer,\n\nWe have provided detailed responses to your comments. We are wondering whether your concerns have been properly addressed.\n\nIf you have further questions after reading the responses and the revised paper, it would be great to let us know. We are happy to address them.\n\n\nBest regards,\n\nThe Authors", " Dear Reviewer,\n\nWe have provided detailed responses to your comments. We are wondering whether your concerns have been properly addressed.\n\nIf you have further questions after reading the responses and the revised paper, it would be great to let us know. We are happy to address them.\n\n\nBest regards,\n\nThe Authors", " Dear Reviewer,\n\nWe have provided detailed responses to your comments. We are wondering whether your concerns have been properly addressed.\n\nIf you have further questions after reading the responses and the revised paper, it would be great to let us know. We are happy to address them.\n\n\nBest regards,\n\nThe Authors", " Dear Reviewer,\n\nWe have provided detailed responses to your comments. We are wondering whether your concerns have been properly addressed.\n\nIf you have further questions after reading the responses and the revised paper, it would be great to let us know. We are happy to address them.\n\n\n\nBest regards,\n\nThe Authors", " We would like to thank all the reviewers for their thoughtful suggestions on our paper. We are glad that the reviewers have some positive impressions of our work, including focusing on the new/important problem (oeB7, uGwK), clear presentations (52ur, uGwK), good mathematical groundings (oeB7), well-scoped evaluations (52ur), and significant improvements (oeB7, PXQD). \n\nWe have provided detailed responses to all the comments/questions point-by-point and also added new empirical evaluations. The summary of our updates is as follows:\n\n- We further clarify the technical details and the applicability of the assumptions. 
\n- We add more empirical analyses to justify the effectiveness of inferring latent environments (in Appendix E.7), including the ablation studies, directly using the ground-truth environments, different choices for the clustering algorithm, the mutual promotion of the designed modules in the additional dataset, etc.\n- We add more baselines for comparisons and analyses (in Section 5).\n\nThe above updates are highlighted in the revision. We appreciate all reviewers’ time again. We are looking forward to your reply!", " + **Q8. Try different choices of READOUT functions and GNN architectures.** \n\n**R8**: Thank you for this suggestion. As present in Appendix E.2 GNN Configurations, the READOUT functions and GNN backbones are set the same as one representative baseline DIR [5] for a fair comparison. Considering the significant improvements against baselines under the same setting, we think the effectiveness of the proposed model is justified. Following your suggestion, we also conduct additional experiments on two real-world datasets with different GNNs and readout functions.\n\n| Backbone and READOUT | MOLSIDER | MOLHIV |\n| -------------------- | :--------------: | :--------------: |\n| GIN + add pooling | 63.50 ± 0.57 | 79.08 ± 0.54 |\n| GIN + max pooling | 63.37 ± 0.72 | **79.10 ± 0.42** |\n| GIN + mean pooling | 61.91 ± 0.75 | 78.16 ± 0.47 |\n| GCN + add pooling | 62.31 ± 1.12 | 77.23 ± 0.61 |\n| GCN + max pooling | **63.68 ± 0.91** | 77.61 ± 0.59 |\n| GCN + mean pooling | 61.33 ± 0.45 | 76.98 ± 0.36 |\n\nWe can observe that the choices of READOUT functions and GNN architectures have a slight influence on the performances. Overall, our model is not very sensitive to their choices and can be compatible with most common READOUT functions and backbones. We have added the analyses above into Appendix E.8.1 of the revised paper.\n\n\n\n**References:** \n\n[1] Arjovsky et al., Invariant risk minimization. Arxiv, 2019. \n\n[2] Koyama et al., When is invariance useful in an Out-of-Distribution Generalization problem? Arxiv, 2020. \n\n[3] Chang et al., Invariant Rationalization. ICML, 2020. \n\n[4] Krueger et al., Out-of-Distribution Generalization via Risk Extrapolation (REx). ICML, 2021. \n\n[5] Wu et al., Discovering Invariant Rationales for Graph Neural Networks. ICLR, 2022. \n\n[6] Hu et al., Open Graph Benchmark. NeurIPS, 2020.\n\n[7] Duvenaud et al., Convolutional networks on graphs for learning molecular fingerprints. NeurIPS, 2015.\n\n[8] Peter J Rousseeuw, Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of computational and applied mathematics, 1987.\n\n", " + **Q4. How do the $\\rm GNN^\\mathbf{M}$, $\\rm GNN^\\mathbf{V}$, and $\\rm GNN^\\mathbf{I}$ be jointly optimized?**\n\n**R4:** Thank you for this question. The pseudocode of our method in Appendix shows the details of the joint optimization process. Specifically, **(1)** we first decompose each input graph into an invariant and variant subgraph (i.e., $G_I$, $G_V$) by the mask matrix $\\rm \\mathbf{M}$ that is generated from the $\\rm GNN^\\mathbf{M}$ (line 3 in Algorithm 1). **(2)** Then we generate representations of the invariant and variant subgraph (i.e., $\\mathbf{h}_I$ and $\\mathbf{h}_V$) by $\\rm GNN^\\mathbf{I}$ and $\\rm GNN^\\mathbf{V}$ (line 5 and line 6 in Algorithm 1) respectively. Note that $\\rm GNN^\\mathbf{I}$ and $\\rm GNN^\\mathbf{V}$ adopt shared parameters (as presented in Appendix E.2 GNN Configurations). 
**(3)** Finally, we infer environments with clustering representations of variant subgraphs and calculate the invariant learning objective function (by Eq. (8)) to update all model parameters using back propagation. Therefore, the $\\rm GNN^\\mathbf{M}$, $\\rm GNN^\\mathbf{V}$, and $\\rm GNN^\\mathbf{I}$ can be jointly optimized. \n\n+ **Q5. Optimization of the adjacent matrixes $A_I$ and $A_V$.**\n\n**R5:** Thank you for this question. As stated in lines 135-138, directly optimizing a discrete matrix is indeed intractable in practice. Therefore, we follow DIR [5] to adopt a learnable GNN (denoted as $\\rm GNN^\\mathbf{M}$) to generate a **soft mask matrix** (Eq. (2)). And the soft mask value on an edge directly controls the message-passing strength between connected nodes (a very low strength means the edge barely passes any message). Finally, we can decompose the original graph into the invariant subgraph (i.e., $A_I$) and variant subgraph (i.e., $A_V$) (Eq. (3)) in an end-to-end manner.\n\n+ **Q6. More descriptions on the Silhouette score.**\n\n**R6:** Thank you for this suggestion. Silhouette score [8], a commonly used evaluation metric for clustering, is defined as the mean Silhouette coefficient over all samples. The Silhouette coefficient is calculated using the mean intra-cluster distance (denoted as $d_i$) and the mean nearest-cluster distance (denoted as $d_n$) for each sample. The Silhouette coefficient for a sample is $(d_n - d_i) / max(d_i, d_n)$. Therefore, **Silhouette score falls within the range $[-1, 1]$**. A silhouette score close to 1 means that the clusters become dense and nicely separated. The score close to 0 means that clusters are overlapping. And the score of smaller than 0 means that data belonging to clusters may be wrong/incorrect. We have added these details on the Silhouette score in Appendix E.5 of the revised paper.\n\nIn Figure 4, the Silhouette score in our experiments reaches approximately 0.75, which is consistent with the clustering performance in Figure 5. These results indicate that the environment inference module and invariant learning module can mutually enhance each other, leading to an accurate clustering performance and a promising OOD generalization ability.\n\n+ **Q7. The clustering cases without environment inference module.**\n\n**R7**: Thank you for this suggestion. Following your suggestion, we compare our original model (GIL) with an ablated version namely removing the environment inference module (termed as GIL w/o EI). So, for GIL w/o EI, the optimization objective in the invariant learning module (Eq. (8)) will use the randomly partitioned environments. 
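To make the comparison concrete, the following minimal sketch (illustrative only, not the implementation from the paper; it assumes NumPy and scikit-learn, and `h_variant` is a hypothetical array standing in for the variant-subgraph representations $\mathbf{h}_V$ produced by $\rm GNN^\mathbf{V}$) shows how GIL infers environment labels by k-means clustering, how the Silhouette score summarizes the resulting clustering quality, and how the GIL w/o EI ablation replaces those labels with a random partition.

```python
# Illustrative sketch (not the paper's code): environment inference by clustering
# variant-subgraph representations vs. the random partition used in GIL w/o EI.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
n_graphs, dim, n_envs = 1000, 64, 3            # hypothetical sizes
h_variant = rng.normal(size=(n_graphs, dim))   # placeholder for GNN^V outputs h_V

# GIL: infer latent environment labels by k-means over variant-subgraph embeddings.
env_inferred = KMeans(n_clusters=n_envs, n_init=10, random_state=0).fit_predict(h_variant)

# Silhouette score lies in [-1, 1]; values near 1 indicate dense, well-separated clusters.
quality = silhouette_score(h_variant, env_inferred)

# GIL w/o EI: a random environment partition fed to the invariance regularizer (Eq. (8)).
env_random = rng.integers(low=0, high=n_envs, size=n_graphs)

print(f"Silhouette of inferred environments: {quality:.3f}")
```

Either set of labels can then be plugged into the invariance regularizer of Eq. (8), so the comparison below isolates the effect of the environment inference step alone.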
\n\n The results on the synthetic dataset SP-Motif ($r_{test}=1/3$) are as follows.\n\n| $r_{train}$ | $r=1/3$ | $r=0.5$ | $r=0.6$ | $r=0.7$ | $r=0.8$ | $r=0.9$ |\n| ----------- | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: |\n| GIL | 55.44 ± 3.11 | 54.56 ± 3.02 | 53.60 ± 4.82 | 53.12 ± 2.18 | 51.24 ± 3.88 | 46.04 ± 3.51 |\n| GIL w/o EI | 53.15 ± 2.31 | 51.25 ± 3.86 | 48.98 ± 6.15 | 43.18 ± 5.93 | 42.65 ± 4.51 | 39.15 ± 4.14 |\n\n\nWe also conduct comparisons on the real-world datasets.\n\n| | MNIST-75sp | Graph-SST2 | MOLSIDER | MOLHIV |\n| ----- | :----------: | :----------: | :----------: | :----------: |\n| GIL | 21.94 ± 0.38 | 83.44 ± 0.37 | 63.50 ± 0.57 | 79.08 ± 0.54 |\n| GIL w/o EI | 19.75 ± 1.98 | 81.98 ± 1.10 | 57.78 ± 1.21 | 75.81 ± 1.23 |\n\nFrom the results above, we can observe a significant performance drop under all comparisons when removing the environment inference module. It well justifies the effectiveness of our designed module, demonstrating that inferring environments with variant subgraphs can benefit learning invariant graph representations under a mixture of latent environments. We have added the analyses above into Appendix E.7.1 of the revised paper.", " + **Q2. The technical novelty and contributions of the paper are neutral.**\n\n**R2:** Thank you for this comment. We would like to clarify the technical novelty and contributions of this work as follows. **(1)** We focus on a novel and challenging problem, i.e., learning invariant graph representations in a mixture of latent environments. To the best of our knowledge, this problem is not explored by the existing works and its formulation is also not well established in the literature due to the unique challenges for graphs under distribution shifts. **(2)** Since graph data usually comes from a mixture of latent environments and most existing invariance regularizers rely on accurate environment labels, they cannot be directly applied to graphs. It also remains a new research direction and is more challenging to identify the complex invariant patterns on graphs among latent environments. **(3)** We kindly disagree that our model is a simple combination of the k-means clustering and invariance regularizer. The environment inference module and invariant learning module focus on two complementary parts of the input graphs, i.e., variant subgraphs and invariant subgraphs, respectively. Therefore, the two modules can mutually promote each other to identify more accurate invariant and variant patterns during training process. This is also a new insight for graph OOD generalization. **(4)** We design a theoretically grounded learning scheme to find a maximal invariant subgraph generator for solving the graph OOD generalization problem. Our model achieves performance gains on both synthetic and real-world benchmarks with significant improvements against various baselines. We believe these technical contributions are non-trivial and have potential impacts to the community. \n\n+ **Q3.1. The difference between the invariance property in Assumption 3.1 and the last condition of Theorem 4.1.**\n\n**R3.1:** Thanks for this insightful comment. First, we would like to clarify Assumption 3.1 and the last condition of Theorem 4.1. The invariance property in Assumption 3.1 is first introduced to solve OOD generalization problem by the invariant risk minimization (IRM) [1], which is also widely adopted by follow-up works [2-4]. 
It assumes that the input instance consists of invariant features whose relation to the label is stable among different environments and variant features whose relation to the label is sensitive to environment changes. Therefore, Assumption 3.1 focuses on **the relationship between data and labels**. This assumption can be reasonably introduced into graph-structure data, which starts a new research direction for handling graph OOD generalization problems [5]. On the other hand, the last condition of Theorem 4.1 assumes there exists another environment $e^\\prime$ where the distribution of the invariant subgraphs is the same as that in environment $e$. Therefore, it focuses on **the data distribution and the diversity of environments (not involving labels)**. Considering the above differences, they are fundamentally different assumptions.\n\n+ **Q3.2. Whether these assumptions are hard to achieve.**\n\n**R3.2:** We believe **both assumptions can be easily satisfied for real-world graphs**. For Assumption 3.1, it was reasonably introduced in [5] and can be verified for real-world graphs. For example, as present in Figure 1 in Appendix, for molecule graphs labeled by specific properties [6, 7], the optimal invariant subgraphs represent the \"hydrophilic R-OH group\"/\"non-polar repeated ring structures\", whose relationship with the label solubility/anti-solubility is truly predictive and invariant across different environments. And the corresponding variant subgraphs denote the shared carbon structure or scaffold [6, 7], which could change across different environments. Similarly, for superpixel graphs [5], invariant and variant subgraphs represent the edges corresponding to the digit itself and other edges from the background, respectively. The label is only dependent on the digit and the background information could change while having no influence on the labels. And for the Graph-SST2 datasets (as shown in Figure 7 in the main paper), invariant subgraphs are the positive/negative words in the sentences that are salient for sentiment labels. For the assumption in Theorem 4.1, since we do not have ground-truth of environmental labels for graphs, we cannot easily visualize the results, but the diversity of environments is also commonly believed for graphs [5, 6]. Theorem 4.1 shows that theoretical OOD optimality can be achieved if the assumptions are satisfied.", " We thank the reviewer for the valuable feedback. We addressed all the comments. Please kindly find the detailed responses to the comments below.\n\n+ **Q1.1. The motivation and significance of inferring the latent environments.**\n\n**R1.1:** We would like to clarify the **motivations** for inferring the latent environment labels, which are in two folds. **(1)** Although invariant learning methods [1-4] have achieved satisfactory OOD generalization under distribution shifts, most existing methods cannot be directly applied to graphs. One of the main obstacles is that graph data usually comes from a mixture of latent environments without accurate environment labels [5, 6], while most existing invariant learning methods require multiple training environments with explicit environment labels. Inferring the latent environments is inevitable in bridging this gap. 
**(2)** Following the invariant learning literature [1-4] and recent work [5], we assume that the input graph consists of an invariant and variant subgraph, where the invariant subgraph captures invariant relationships between predictive graph structural information and labels. The variant subgraph in turn captures variant correlations under different distributions, which are environment-discriminative features, motivating us to adopt the variant subgraphs to infer the latent environments.\n\nThe **significance** of inferring the latent environment is reflected in two aspects. **(1)** Since our model can automatically infer the environment label of graphs without supervision, we can study invariant learning for graph representation learning under a mixture of latent environments. We also further propose a theoretically-guaranteed model and achieve substantial performance gains on several synthetic and real-world datasets. **(2)** The environment inference module utilizes the variant subgraphs, which can also promote the accurate identification of invariant subgraphs for better OOD generalization (as analyzed in Section 5.4). \n\n+ **Q1.2. There lacks instruction to extract, utilize, and analyze the latent environment labels.**\n\n**R1.2**: As discussed in Sections 3.2 and 3.3, we **extract** (infer) the latent environment labels by clustering the representations of all the variant subgraphs. After obtaining the inferred environment labels, we **utilize** the inferred environment labels to optimize the objective Eq. (8) which encourages the output graph representations to be truly predictive to the labels and invariant across different environments. \n\nWe also conduct some empirical **analyses** in the experiments. **(1)** We plot environment inference results on the synthetic dataset. Figure 5 shows that the variant subgraphs perfectly capture the environment-discriminate features and the latent environments behind graph data are accurately inferred. **(2)** We find that the environment inference module and invariant learning module can mutually enhance each other, reflected in Figure 4 which shows that the test accuracy and the clustering performance improve synchronously over training. **(3)** During rebuttal, **we further conduct experiments** to compare our original model with the ablated version namely removing the environment inference module. (Please kindly refer to the response to Q7 for the results.) We observe a significant performance drop of this ablated model, which well justifies the effectiveness of our designed environment inference module. Besides, **we also conduct more experiments on environment inference in Appendix E.7** of the revised paper to analyze this module in our model.", " **References:**\n\n[1] Arjovsky et al., Invariant risk minimization. Arxiv, 2019.\n\n[2] Koyama et al., When is invariance useful in an Out-of-Distribution Generalization problem? Arxiv, 2020. \n\n[3] Krueger et al., Out-of-Distribution Generalization via Risk Extrapolation (REx). ICML, 2021.\n\n[4] Chang et al., Invariant Rationalization. ICML, 2020.\n\n[5] Wu et al., Discovering Invariant Rationales for Graph Neural Networks. ICLR, 2022.\n\n[6] Hu et al., Open Graph Benchmark. NeurIPS, 2020.\n\n[7] Duvenaud et al., Convolutional networks on graphs for learning molecular fingerprints. NeurIPS, 2015.", " + **Q6. Explore different choices for the clustering algorithm.**\n\n**R6:** In this work, we use the k-means clustering algorithm to infer the environment labels. 
Following your suggestion, we compare with another popular clustering algorithm (termed as convex clustering) proposed in (Lashkari et al., Convex Clustering with Exemplar-Based Models. NeurIPS, 2007) to infer the environment labels.\n\nThe results on the synthetic SP-Motif dataset ($r_{test}=1/3$) are as follows: \n\n| $r_{train}$ | $r=1/3$ | $r=0.5$ | $r=0.6$ | $r=0.7$ | $r=0.8$ | $r=0.9$ |\n| --------------------------- | :----------: | :----------: | :----------: | :----------: |:----------: | :----------: |\n| GIL (k-means) | 55.44 ± 3.11 | 54.56 ± 3.02 | 53.60 ± 4.82 |53.12 ± 2.18| 51.24 ± 3.88 | 46.04 ± 3.51 |\n| GIL (convex clustering) | 55.21 ± 2.45 | 53.60 ± 4.74| 54.01 ± 5.13 | 53.43 ± 1.94 | 50.12 ± 4.15 |47.01 ± 2.54 |\n\n\nThe results on the real-world datasets are as follows: \n\n\n| | MNIST-75sp | Graph-SST2 | MOLSIDER | MOLHIV |\n| --------------------------- | :----------: | :----------: | :----------: | :----------: |\n| GIL (k-means) | 21.94 ± 0.38 | 83.44 ± 0.37 | 63.50 ± 0.57 | 79.08 ± 0.54 |\n| GIL (convex clustering) | 19.98 ± 0.57 | 82.89 ± 0.53 | 63.67 ± 0.43 | 79.01 ± 0.61 |\n\n\nThese results show that the clustering algorithm could have a slight influence on the model performance and overall our model is not sensitive to the choice for clustering algorithm. It means that our model does not rely on specific clustering algorithm to infer the environment labels and can also be compatible with other clustering algorithms. We have added the results and analyses in Appendix E.7.3 of the revised paper.\n\n+ **Q7. Clustering results beyond the synthetic dataset.** \n\n**R7:** Thank you for this suggestion. Following your suggestion, we add the Silhouette score during the training process on the MNIST-75sp dataset in Figure 2 in Appendix of the revised paper. We observe the similar pattern on MNIST-75sp dataset with the results on SP-Motif shown in Figure 4 in the main paper. The test accuracy and the clustering performance improve synchronously over training, indicating that the environment inference module and invariant learning module of our model can mutually enhance each other in both synthetic and real-world scenarios, which is one of the technical contributions of this paper. \n\n- **Q8. Hyperparameter sensitivity analysis beyond the synthetic dataset.**\n\n**R8:** Thank you for this comment. In Section 5.5, the hyperparameter sensitivity analyses are conducted on the synthetic dataset and one real-world dataset MNIST-75sp. 
Following your suggestion, we add more analyses on the real-world dataset MOLSIDER to study the sensitivity of hyper-parameters: the number of inferred environments $|\\mathcal{E}\\_{infer}|$, the regularizer coefficient $\\lambda$, and the invariant subgraph mask size $t$.\n\n| $\\|\\mathcal{E}\\_{infer}\\|$ | 2 | 3 | 4 | 5 | 6 |\n| ----------------------- | :-----------: | :----------: | :----------: | :----------: | :----------: |\n| GIL | 63.50 ± 0.57 | 63.65 ± 0.39 | 63.88 ± 0.51 | 63.71 ± 0.41 | 63.72 ± 0.33 |\n\n| $\\lambda$ | **$10^{-6}$** | $10^{-5}$ | $10^{-4}$ | $10^{-3}$ | $10^{-2}$ | $10^{-1}$ | $10^{0}$ |\n| --------- | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: |\n| GIL | 61.71 ± 0.29 | 62.24 ± 0.32 | 63.76 ± 0.38 | 63.50 ± 0.57 | 62.87 ± 0.43 | 62.30 ± 0.39 | 62.11 ± 0.53 |\n\n| $t$ | 0.75 | 0.80 | 0.85 | 0.90 | 0.95 |\n| ------- | :----------: | :----------: | :----------: | :----------: | :----------: |\n| GIL | 63.10 ± 0.63 | 63.50 ± 0.57 | 63.46 ± 0.41 | 62.81 ± 0.33 | 62.13 ± 0.54 |\n\nWe observe **similar patterns on this dataset with the results in Figure 8** of the main paper. **(1)** The number of environments has a slight impact on the model performance, indicating that our method is not sensitive to the number of inferred environments. **(2)** The coefficient $\\lambda$ also has a slight influence by balancing the classification loss and the invariance regularizer term. **(3)** A very large mask size $t$ will result in too many edges in the invariant subgraph and bring in variant structures, while a small $t$ may let the invariant subgraph become too small to capture enough structural information. Overall, our model can outperform the best baselines with a wide range of hyper-parameters choices. ", " + **Q3. Can the conditions for Thm 4.1 approximately hold?**\n\n**R3:** Thank you for this question. We would like to clarify that the conditions of Thm 4.1 can approximately hold in real world.\n\nThe first condition of Thm 4.1 means that **the optimal invariant subgraph and optimal variant subgraph (i.e., the complement of the optimal invariant subgraph) should be independent** in ideal situations (although we may observe correlations between them due to the bias in datasets). This condition is widely assumed in the invariant learning literature [1-5] and also commonly satisfied for real-world graphs. For example, as present in Figure 1 in Appendix, for molecule graphs labeled by specific properties [6, 7], the optimal invariant subgraphs represent the \"hydrophilic R-OH group\"/\"non-polar repeated ring structures\", whose relationship with the label solubility/anti-solubility is truly predictive and invariant across different environments. And the corresponding variant subgraphs denote the shared carbon structure or scaffold [6, 7], which are independent of the subgraphs reflecting the properties. For superpixel graphs from [5], invariant and variant subgraphs represent the edges corresponding to the digit itself and other edges from the background, respectively, where the digit and background information are also independent. Therefore, the first condition of Thm 4.1 commonly holds in real world.\n\nThe second condition of Thm 4.1 means that **the distribution of training graphs consists of enough diverse environments**. This condition is also widely adopted in the literature [1-5] for practical solutions. 
For example, as one of the representative baselines in our comparisons, DIR [5] conducts interventions on the training distribution to create multiple interventional distributions. Likewise, for the real-world molecule graphs, there exist some samples whose invariant subgraphs (i.e., the part reflecting specific properties) are the same but variant subgraphs (i.e., carbon structure or scaffold) are different, rather than one type of invariant subgraph only has unique and single type of variant subgraph. \n\n+ **Q4. Why the number of clusters for the k-means is chosen from [2, 4]?**\n\n**R4:** Thank you for this question. The number of clusters $|\\mathcal{E}\\_{infer}|$ in Eq. (5) is a hyper-parameter, whose sensitivity analysis **in a large range** is provided in Section 5.5. Overall, although it is an important hyper-parameter to the performance, our model is not very sensitive to the choice of cluster number. From Figure 8(a), we can observe that our model does not need to be specified the ground truth number of latent environments and can achieve promising results in a wide range of this hyperparameter. Although the performance can reach a peak when $|\\mathcal{E}\\_{infer}|$ matches the ground truth, our model still outperforms the most competitive baselines when $|\\mathcal{E}\\_{infer}|$ does not equal the ground truth or when the ground truth is unknown. We leave the automatic search for the best choice of this hyper-parameter as the future work.\n\n+ **Q5. Methodological focus on shared subgraph/scaffold structure between environments.**\n\n**R5:** Thank you for this comment. In this work, we capture environment-agnostic (i.e., invariant among environments) graph patterns to achieve good OOD generalization. We find that this assumption is widely adopted in the literature [1-5] and also commonly satisfied for real-world graphs. For example, as present in Figure 1 in Appendix, for molecule graphs labeled by specific properties, the invariant subgraphs represent the \"hydrophilic R-OH group\"/\"non-polar repeated ring structures\", whose relationship with the label solubility/anti-solubility is truly predictive and invariant across different environments. And variant subgraphs denote the shared carbon structure or scaffold [6, 7]. For superpixel graphs from [5], invariant and variant subgraphs represent the edges corresponding to the digit itself and other edges from the background, respectively.", " We thank the reviewer for the valuable positive feedback. We addressed all the comments. Please kindly find the detailed responses below.\n\n+ **Q1.1. Thm 4.2 can likely be moved to the Appendix.**\n\n**R1.1:** Thank you for this suggestion. We have moved Thm 4.2 into Appendix in the revised version. \n\n+ **Q1.2. Clarifications on Thm 4.1. It is stated generally rather than in the specific setting.**\n\n**R1.2:** We would like to explain Thm 4.1 for better clarification. The RHS of Eq. (10) in Thm 4.1 is the objective for OOD generalization via the invariance principle [1-4], which is also described in the problem formulation (Eq. (1) in Section 2). It means that we aim to solve the OOD problem by learning invariant predictors that can generalize across environments. In this paper, we study invariant learning for graph representation learning under a mixture of latent environments. 
Considering the challenges on graphs (as present in lines 45-53 in Section introduction), **we transform the graph OOD generalization problem into finding the optimal invariant subgraphs**, which is indicated by the LHS of Eq. (10) in Thm 4.1, showing that our proposed method effectively solves our targeted problem, i.e., LHS = RHS.\n\nIn addition, since the distribution shifts on graphs could exist in both feature-level and structure-level, we expect that our model through finding the optimal invariant subgraphs can handle general and diverse distribution shifts. The empirical experiments validate that our model indeed can achieve good performance on various types of distribution shifts. \n\n+ **Q2. The three components of the method are not particularly novel on their own.**\n\n**R2:** Thank you for this comment. We would like to clarify the technical novelty and contributions of this work as follows. **(1)** We focus on a novel and challenging problem, i.e., learning invariant graph representations in a mixture of latent environments. To the best of our knowledge, this problem is not explored by the existing works and its formulation is also not well established in the literature due to the unique challenges for graphs under distribution shifts. **(2)** Since graph data usually comes from a mixture of latent environments and most existing invariance regularizers rely on accurate environment labels, they cannot be directly applied to graphs. It also remains a new research direction and is more challenging to identify the complex invariant patterns on graphs among latent environments. **(3)** Our model non-trivially fuses the advantages of the three components rather than simply combining them through engineering. The environment inference module and invariant learning module focus on two complementary parts of the input graphs, i,e., variant subgraphs and invariant subgraphs, respectively. Therefore, the two modules can mutually promote each other to identify more accurate invariant and variant patterns during training process. This is also a new insight for graph OOD generalization. **(4)** We design a theoretically grounded learning scheme to find the maximal invariant subgraph generator for solving the graph OOD generalization problem. Our model achieves performance gains on both synthetic and real-world benchmarks with significant improvements against various baselines. We believe these technical contributions are non-trivial and have potential impacts to the community. ", " We thank the reviewer for the insightful comments. Please kindly find the detailed responses to the comments below.\n\n+ **Q1.1. The difference between $\\mathcal{E}$ and $\\mathcal{E}\\_{infer}$.** \n\n**R1.1:** Thank you for this comment. We would like to clarify the difference and connection between $\\mathcal{E}$ and $\\mathcal{E}\\_{infer}$. According to the definitions, $\\mathcal{E}$ is a random variable on indices of the ground-truth environments that are latent, and $\\mathcal{E}\\_{infer}$ is a random variable on indices of the inferred environments. We follow the invariant learning literature [1] to define invariant subgraph generator set $\\mathcal{I}$ with respect to the ground-truth environment $\\mathcal{E}$ and further derive the Theorem 3.2 (i.e., Equation (7)). However, it is often impossible to characterize such latent ground-truth environments in practice, which is the common issue in the invariant learning literature [1-4]. 
One common practical solution is to infer the latent environments from data and further assume that the model capable of generalization across the inferred environments $\\mathcal{E}\\_{infer}$ can also generalize to the ground-truth environments. For example, the work [3] studies to discover environment labels by maximally violating the invariance principle and the work [4] proposes to create interventional distributions for generating multiple environments. Following similar schemes, we infer the latent environments $\\mathcal{E}_{infer}$ with the representations of variant subgraph and propose the invariant learning module across the inferred environments for invariant and accurate predictions.\n\n+ **Q1.2. Does $\\mathcal{E}\\_{infer}$ approximate $\\mathcal{E}$? When does $\\mathcal{E}\\_{infer}$ achieve good OOD generalization?**\n\n**R1.2:** Since we do not have ground-truth for $\\mathcal{E}$, it is infeasible to directly constrain $\\mathcal{E}\\_{infer}$ to approximate $\\mathcal{E}$. Instead, we require $\\mathcal{E}\\_{infer}$ to be inferred by only capturing the ground-truth environment-discriminative features while leaving the environment-agnostic (invariant) features. The model can achieve good OOD generalization performance under this requirement, which is consistent with the invariant learning literature [1-4]. We also observe from Figure 4 in the experiments that when invariant subgraphs are accurately discovered, the inference of latent environments can also be promoted by better capturing the environment-discriminate features which further enhances learning invariant subgraphs. The two modules mutually enhance each other, leading to good OOD generalization performance in the experiments. These empirical results show the reasonableness and feasibility of this scheme. \n\n+ **Q2. Whether the given environment partition always leads to the best OOD generalization performance.**\n\n**R2:** Thanks for this insightful comment. We conduct empirical analyses to investigate this problem. Since the ground-truth environment labels are unavailable for the real-world datasets, we compare our original model (GIL with $\\mathcal{E}\\_{infer}$) with the model directly using the ground-truth environments (GIL with $\\mathcal{E}$) on the synthetic SP-Motif dataset ($r_{test}=1/3$). The results are as follows.\n\n| $r_{train}$ | $r=1/3$ | $r=0.5$ | $r=0.6$ | $r=0.7$ | $r=0.8$ | $r=0.9$ |\n| ------------------------------- | :----------: | :----------: | :----------: | :----------: | :----------: | :----------: |\n| GIL with $\\mathcal{E}\\_{infer}$ | 55.44 ± 3.11 | 54.56 ± 3.02 | 53.60 ± 4.82 | 53.12 ± 2.18 | 51.24 ± 3.88 | 46.04 ± 3.51 |\n| GIL with $\\mathcal{E}$ | 55.42 ± 2.98 | 54.63 ± 3.10 | 53.58 ± 4.67 | 53.18 ± 3.02 | 51.01± 4.02 | 46.23 ± 3.16 |\n\nWe can observe that **the performance of using the inferred environments by our model and the ground-truth environments are comparable**, even under different strengths of distribution shifts. The results are also expected since our inferred latent environments are largely aligned with the ground-truth labels, as shown in Figure 5 of the main paper. Nevertheless, we think it would be interesting to explore this problem further for real-world graphs when their environment labels are available. We have added the analyses above into the Appendix E.7.2 of the revised paper.\n\n\n\n**References:**\n\n[1] Koyama et al., When is invariance useful in an Out-of-Distribution Generalization problem? Arxiv, 2020. 
\n\n[2] Arjovsky et al., Invariant risk minimization. Arxiv, 2019.\n\n[3] Creager et al., Environment Inference for Invariant Learning. ICML, 2021.\n\n[4] Wu et al., Discovering Invariant Rationales for Graph Neural Networks. ICLR, 2022.\n", " + **Q3. Adding baselines including EERM, GSAT, and HRM to the experiments.**\n\n**R3:** Thank you for the comment. We have added comparisons with the methods mentioned above. Notice that EERM is not designed for graph-level OOD generalization problems, so we have modified its graph structure editers to create graph-level multiple environments. \n\nThe results on the synthetic dataset SP-Motif (Scenario 1: $r_{test}=1/3$) are as follows. \n\n| $r_{train}$ | $r=1/3$ | $r=0.5$ | $r=0.6$ | $r=0.7$ | $r=0.8$ | $r=0.9$ |\n| ---- | :--: | :--: | :--: | :--: | :--: | :--: |\n| HRM | 51.43 ± 4.08 | 50.34 ± 3.96 | 46.34 ± 5.91 | 38.94 ± 4.12 | 38.20 ± 3.71 | 37.10 ± 4.80 |\n| EERM | 52.89 ± 3.20 | 51.97 ± 2.87 | 50.87 ± 4.97 | 45.38 ± 2.90 | 42.98 ± 3.63 | 42.42 ± 3.67 |\n| GSAT | 53.67 ± 3.65 | 53.34 ± 4.08 | 51.54 ± 3.78 | 50.12 ± 3.29 | 45.83 ± 4.01 | 44.22 ± 5.57 |\n| GIL | 55.44 ± 3.11 | 54.56 ± 3.02 | 53.60 ± 4.82 | 53.12 ± 2.18 | 51.24 ± 3.88 | 46.04 ± 3.51 |\n\nThe results on the synthetic dataset SP-Motif (Scenario 2: $r_{test}=0.2$) are as follows. \n\n| $r_{train}$ | $r=1/3$ | $r=0.5$ | $r=0.6$ | $r=0.7$ | $r=0.8$ | $r=0.9$ |\n| ---- | :--: | :--: | :--: | :--: | :--: | :--: |\n| HRM | 51.79 ± 3.18 | 40.91 ± 4.06 | 35.89 ± 5.10 | 33.08 ± 4.11 | 28.18 ± 2.93 | 23.65 ± 4.12 |\n| EERM | 51.07 ± 2.75 | 48.65 ± 3.65 | 44.28 ± 5.02 | 38.39 ± 4.91 | 33.01 ± 3.89 | 31.08 ± 2.85 |\n| GSAT | 51.36 ± 4.21 | 50.48 ± 3.98 | 46.93 ± 5.03 | 43.55 ± 3.67 | 40.35 ± 4.21 | 33.87 ± 5.19 |\n| GIL | 54.80 ± 3.93 | 52.48 ± 4.41 | 50.08 ± 5.47 | 47.44 ± 2.87 | 46.36 ± 3.80 | 35.80 ± 5.03 |\n\n\nThe results on the real-world datasets are as follows. \n\n| | MNIST-75sp | Graph-SST2 | MOLSIDER | MOLHIV |\n| ---- | :----------: | :----------: | :----------: | :----------: |\n| EERM | 17.13 ± 1.89 | 81.39 ± 0.63 | 58.92 ± 1.03 | 76.27 ± 1.48 |\n| HRM | 17.05 ± 2.84 | 81.15 ± 0.71 | 57.41 ± 1.86 | 75.12 ± 1.17 |\n| GSAT | 20.12 ± 1.35 | 82.95 ± 0.58 | 60.82 ± 1.36 | 76.47 ± 1.53 |\n| GIL | 21.94 ± 0.38 | 83.44 ± 0.37 | 63.50 ± 0.57 | 79.08 ± 0.54 |\n\nBesides the results above, we **have also updated the results** of comparisons shown in Figures 3, 6 of the main paper and Figures 4, 5, 6 of Appendix in the revised paper.\n\nThese results show that: (1) Although outperforming ERM on most datasets, EERM [1], as one node-level OOD generalization method, cannot well handle graph-level distribution shifts for promising results. (2) Directly adopting HRM [3] that is proposed for raw feature data on more complex graph-structured data produces poor OOD generalization performance. (3) GSAT [2], one very recent baseline, achieves competitive results in most comparisons. Nevertheless, **our proposed method still consistently achieves the best performance**, demonstrating the effectiveness of the proposed method for learning invariant graph representations under a mixture of latent environments. We have added these comparisons to the revised paper.\n\n+ **Q4. The applicability of the assumptions.**\n\n**R4:** We believe the assumptions can be easily satisfied for various types of real-world graphs. For Assumption 3.1, it was reasonably introduced in [4] and can be verified for real-world graphs. 
For example, as present in Figure 1 in Appendix, for molecule graphs labeled by specific properties [5, 6], the optimal invariant subgraphs represent the \"hydrophilic R-OH group\"/\"non-polar repeated ring structures\", whose relationship with the label solubility/anti-solubility is truly predictive and invariant across different environments. And the corresponding variant subgraphs denote the shared carbon structure or scaffold [5, 6], which could change across different environments. Similarly, for superpixel graphs from [4], invariant and variant subgraphs represent the edges corresponding to the digit itself and other edges from the background, respectively. The label is only dependent on the digit and the background information could change while having no influence on the labels. And for the Graph-SST2 datasets (as shown in Figure 7 in the main paper), invariant subgraphs are the positive/negative words in the sentences that are salient for sentiment labels. Therefore, the assumptions commonly hold in real world.\n\n\n\n**References:**\n\n[1] Wu et al., Handling Distribution Shifts on Graphs: An Invariance Perspective. ICLR, 2022.\n\n[2] Miao et al., Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism. ICML, 2022.\n\n[3] Liu et al., Heterogeneous risk minimization. ICML, 2021.\n\n[4] Wu et al., Discovering Invariant Rationales for Graph Neural Networks. ICLR, 2022. \n\n[5] Hu et al., Open Graph Benchmark. NeurIPS, 2020.\n\n[6] Duvenaud et al., Convolutional networks on graphs for learning molecular fingerprints. NeurIPS, 2015.", " We thank the reviewer for the valuable comments. Please kindly find the detailed responses below.\n\n+ **Q1. Some literature is not discussed in this paper.** \n\n**R1:** Thank you for sharing with us these up-to-date works. We discuss the differences between these works and ours to clarify the novelty and significance of this paper as follows. EERM [1] focuses on **node-level** OOD prediction problem while our model studies graph-level OOD prediction problem, which are two different tasks (see Q3 for the experimental comparisons). GSAT [2] is a very recent **interpretable** graph learning method, which is just accepted by ICML2022. GSAT and our model have different basic assumptions and training schemes. Specifically, GSAT is mainly to build inherently interpretable GNNs and expect GNNs to be more generalizable by penalizing the amount of information from the input data. Our model, based on the invariance principle, targets directly learning invariant graph representations under a mixture of latent environments for OOD generalization. We have cited these works and added relevant discussions in the revised version.\n\nWe would also like to clarify the technical novelty and contributions of this work as follows. **(1)** We focus on a novel and challenging problem, i.e., learning invariant graph representations in a mixture of latent environments. To the best of our knowledge, this problem is not explored by the existing works and its formulation is also not well established in the literature due to the unique challenges for graphs under distribution shifts. **(2)** Since graph data usually comes from a mixture of latent environments, most existing invariance regularizers rely on accurate environment labels, so they cannot be directly applied to graphs. 
It also remains a new research direction to explore the environments on graphs based on the environment-discriminative (variant) graph patterns and encourage the graph encoder to capture environment-agnostic (invariant) graph patterns more accurately for OOD generalization. **(3)** The environment inference module and invariant learning module can mutually promote each other by focusing on two complementary parts of the input graphs, i.e., variant subgraphs and invariant subgraphs, respectively. Identifying the invariant graph patterns among latent environments raise unique and critical challenges. **(4)** We design a theoretically grounded learning scheme to find a maximal invariant subgraph generator for solving the graph OOD generalization problem. Our model achieves performance gains on both synthetic and real-world benchmarks with significant improvements against various baselines. We believe these technical contributions are non-trivial and have potential impacts to the graph community. \n\n+ **Q2. Missing the invariant learning literature [3].**\n\n **R2:** Thank you for this suggestion. The key difference between [3] and ours is that although the work [3] considers inferring latent environments during learning process, it is designed for dealing with the simple scenario where data is raw feature as that paper claims, while the problems of jointly conducting environment inference and invariant learning on graph-structured data remains unexplored. We further formulate the problem as finding maximal invariant subgraph in a mixture of latent environments and propose a corresponding algorithm for graphs whose effectiveness is demonstrated in theory and practice. Detailed experimental comparisons are provided in the response to the following question. ", " The paper investigates a new research problem of learning invariant graph representations under distribution shifts by considering the latent environment labels. The proposed method, graph invariant learning (GIL), is a joint learning framework combing three different GNNs of various functions. With good empirical results on several datasets and related theoretical analyses, the paper justifies the effectiveness of the proposed GIL. Pros\n\n- The investigated problem is new and challenging, especially for real-world graph learning scenarios. With the well-defined research problem, the paper proposed to take the latent environment labels into consideration, which sheds an interesting direction on the current graph learning area.\n- The gained empirical improvement is quite significant with the proposed GIL framework. Several ablation studies are shown.\n- The paper is with good mathematical groundings for its provided theoretical analysis. Complexity analysis is also presented, including every single module of the proposed GIL framework.\n\nCons\n\n- The importance of the latent environment labels is not fully justified, which should be the major contribution of the paper. It seems unclear for its motivation and significance and what benefits it can bring. In addition to the hyper-parameter study, there lacks instruction to extract, utilize, and analyze the latent environment labels.\n- The technical novelty and contributions of the paper are neutral. The designed yet complicated framework seems to be the combination of several existing methods like the k-means and the invariance regularizer without presenting more insights into the new problem. 1. 
It might be somehow confusing for the invariance property in Assumption 3.1 for its connection with the true labels of graphs. Two graphs of different environments are likely to be classified correctly even though they are not perfectly samples. So, what is the main difference between the invariance property and $P^e(\\Phi^*(G))=P^{e^{\\prime}}(\\Phi^*(G))$ as in the last condition of Theorem 4.1? Is it harder to be achieved? Or, can it provide a better guarantee?\n2. The optimization process of the three used GNNs is unclear, which could be an essential design of the proposed system. I.e., how do the GNN$^M$, GNN$^V$, and GNN$^I$ be jointly optimized? \n3. Besides, since the adjacent matrixes $A_I$ and $A_V$ are discrete, how are the gradients calculated and back propagated among these GNNs in an end-to-end manner?\n4. The clustering effectiveness in Figure 5 seems quite perfect for its accurate partition, but the Score in Figure 4 is lower than 0.75. So, where is the gap come from? It would be better for the paper to introduce more about the Silhouette score, e.g., its basic calculation and range of values.\n5. Regarding the prediction of latent environment labels are one of the main contributions, I would suggest the paper show the clustering cases without environment inference to better justify the effectiveness of such a designed module. The READOUT functions, which shall be important for the proposed method, are not fully discussed and compared in the paper, including in the appendix part. Since the paper has mentioned the usage and the desired properties of READOUT functions many times, it would be better for the paper to provide a further discussion and related ablations. Besides, the influence of different GNN architectures, which served as backbones in the paper, is also missing.", " This paper studies OOD generalization with a mixture of graph environments without environment labels. The method first generates the invariant subgraph and the variant subgraph based using a graph generator model. It then infers the environment labels by conducting k-means clustering on the variant subgraphs. This step assumes that the environment labels are correlated with variant subgraphs and are irrelevant to the invariant subgraph. The method then conducts invariant learning optimization across the different inferred environments. This paper firstly studies OOD generalization for graphs without environment labels. The problem is clearly defined and discussed in the paper. Based on the assumptions, the author proposes a method that aligns with the identified challenges. The method later shows significant improvement over the baselines.\n\nAs for the weakness, I find gaps between the steps in theoretical analysis. In Equation (7), the optimization uses the ground truth environment $\\mathcal{E}$, while Equation (8) uses the inferred environments $\\mathcal{E}\\_{infer} $ . The difference between $\\mathcal{E}$ and $\\mathcal{E}\\_{infer} $ should be discussed to understand the proposed algorithm. The discussion should answer questions like \"does $\\mathcal{E}\\_{infer}$ approximate $\\mathcal{E}$?\" and \"what are the properties of $\\mathcal{E}_{infer}$ for good OOD generalization performance?\" As shown in [1], the ground truth environment partition does not always lead to the best OOD generalization performance. Is this also true in the graph domain? If yes, how does the proposed kmeans clustering lead to better environment partitions for OOD generalization? 
\n\nReference:\n[1] Environment Inference for Invariant Learning. In ICML 2021. The authors have well addressed the limitations in the paper.", " In this work the authors propose a three part method for learning graph representations that are robust to distribution shifts. First their method identifies _invariant_ subgraphs, then clusters the _variant_ complements to the invariant subgraphs to infer latent environment codes, and finally learns a representation function and linear predictor over the extracted invariant subgraphs and latent environment codes. This system is optimized end to end and they perform an empirical evaluation over synthetic and real world data to demonstrate the effectiveness of the approach. ### Strengths\n\n**Quality**: The empirical evaluation is well scoped in its choice of datasets as well as the diversity of the shift parameter settings and types of analysis showcased. In particular the SP-Motif and OGB selections are well suited to their method's focus on shared subgraph structures. The agreement between the performance rankings of GIL/DIR on SP-Motif and MOLHIV is a good sign since shared subgraphs and scaffolds are similar concepts. The finding that ERM generally outperforms most methods matches prior work for MOLHIV and this then underscores the performance of GIL (and the competitor DIR). Finally, the fact that the performance of the method is also demonstrated on datasets derived from different underlying signals (MNIST and SST2) that are not molecule based, serves as a good generality sanity check.\n\n**Clarity**: The combination of Figure 2 and Section 3 describing their method and training procedures is well presented and easy to understand. Section 5 is broken down well into its subsections and the figures 6 and 7 are appreciated as visual references of the in/variant structures being identified.\n\n**Significance**: Performance suggests that the approach is minimally competitive, and potentially SOTA pushing on the OGB data, though the way in which SP-Motif is used at the sweep of parameter settings makes it slightly harder to directly compare to other work. Method is simpler than some competitor approaches for OOD generalization, which is preferred.\n\n### Weaknesses\n\n**Clarity**: Theorems in Section 4, especially Thm 4.2 can likely be moved to the Appendix, as they don't add very much to the impact of the work compared to the empirical results. Further, Thm 4.1, especially the RHS, is stated very generally rather than in the specific setting of subgraph-based inter-environment invariance, which is not the _only_ kind of environment differentiation one expects in real world data (i.e. feature shift or label imbalance etc. are also options - these are in fact explored in the empirical results).\n\nWork would generally benefit from a close editing pass from a native english speaker, _but this does not factor into my assessment_.\n\n**Originality/Significance**: The three components of the method are not particularly novel on their own - in particular, the invariant regularized learner is very derivative of IRM in its objective. As such the work overall feels like a DL engineering solution for an E2E system, which is not without value by any means, but overall not very methodologically novel.\n\n 1. Relating to the comment above about environments sharing subgraph structure, this is very much a broad assumption in the entire work. It is _fine_ because though it is a narrow focus it is a fundamental type of distribution shift for graphs. 
However, it feels like conditions such as $P^{e}(\\phi(G)) = P^{e'}(\\phi(G))$ would generally be too strong in practical settings. Especially for the real world data, did you attempt to empirically verify that some of the conditions for Thm 4.1 approximately hold?\n\n2. Can you elaborate on why the number of clusters for the k-means component was limited to $[2,4]$? Having to explicitly choose this parameter is one of the obvious points of difficulty for the model with datasets comprising an unknown number of latent environments. The few limitations are relatively minor:\n\n1. Methodological focus on shared subgraph/scaffold structure between environments\n2. System bottleneck on the effectiveness of the unsupervised environment identification module. Ideally they would explore different choices for the clustering algorithm.\n3. The analysis in Figures 4 and 5 of Silhouette score and the TSNE projection of clusters should be extended beyond the synthetic dataset where the environments are perfectly separable, as the claims they were used to support may not be very generalizable.\n4. Hyperparameter sensitivity analysis should be extended beyond the synthetic dataset, even though it is plausible that the trends generalize.", " This paper studies the out-of-distribution (OOD) generalization problem on graphs under a mixture of environments. \n\nIt proposes a graph invariant learning (GIL) solution to learn a maximally invariant graph predictor, which composes an environment inference module and an invariant subgraph identification module. Strengths:\n\n1. The problem this paper studied is important, i.e., OOD generalization of GNNs, and it proposes an out-of-distribution generalization framework for GNNs, which composes environment inference and invariant subgraph identification/generalization.\n2. The presentation logic flow is clear.\n\nWeaknesses:\n\n1. The novelty and significance of this paper are a concern as some important literature is not discussed in this paper. There are several works also studying OOD generalization on graphs but corresponding discussions are missing in the paper. \n\t1. Wu et al. 2022 [1] study the OOD generalization in node classification, and takes similar assumptions and solutions in this paper.\n\t2. Miao et al., 2022 [2] also discussed the application of graph information bottleneck criteria for OOD generalization.\n2. Especially, in the literature on invariant learning, the key method in this paper shares many similarities with HRM[3], which should be discussed and compared substantially.\n3. In experiments, [1,2] should all be included as baselines, and direct applying [3] to graph data should also be included as a baseline.\n\n\n[1] Qitian Wu, Hengrui Zhang, Junchi Yan, David Wipf. Handling Distribution Shifts on Graphs: An Invariance Perspective. ICLR 2022.\n\n[2] Siqi Miao, Miaoyuan Liu and Pan Li. Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism. ICML 2022.\n\n[3] Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, and Zheyan Shen. Heterogeneous risk minimization. ICML 2021. As mentioned above, the main concern is about the novelty and lack of adequate literature discussions. No obvious potential negative societal impact. This paper should also discuss a bit about the applicability of the assumptions made in this paper." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "YGodvJhIrvJ", "848hHcGECpW", "ARzdGQHR9hm", "S4ICvql11X6", "S4ICvql11X6", "848hHcGECpW", "AtIJ3WLKRzU", "9RUXsIGR37", "nips_2022_acKK8MQe2xc", "GaZr8YurMY7", "RznFYbDCwiF", "ARsuAsGzlXM", "9RUXsIGR37", "N15tmanQ9_b", "cqc-PU8UmAP", "koKszwiuI5", "848hHcGECpW", "AtIJ3WLKRzU", "Z46UlsqZsy", "S4ICvql11X6", "nips_2022_acKK8MQe2xc", "nips_2022_acKK8MQe2xc", "nips_2022_acKK8MQe2xc", "nips_2022_acKK8MQe2xc" ]
nips_2022_VgOw1pUPh97
SegNeXt: Rethinking Convolutional Attention Design for Semantic Segmentation
We present SegNeXt, a simple convolutional network architecture for semantic segmentation. Recent transformer-based models have dominated the field of semantic segmentation due to the efficiency of self-attention in encoding spatial information. In this paper, we show that convolutional attention is a more efficient and effective way to encode contextual information than the self-attention mechanism in transformers. By re-examining the characteristics owned by successful segmentation models, we discover several key components leading to the performance improvement of segmentation models. This motivates us to design a novel convolutional attention network that uses cheap convolutional operations. Without bells and whistles, our SegNeXt significantly improves the performance of previous state-of-the-art methods on popular benchmarks, including ADE20K, Cityscapes, COCO-Stuff, Pascal VOC, Pascal Context, and iSAID. Notably, SegNeXt outperforms EfficientNet-L2 w/ NAS-FPN and achieves 90.6% mIoU on the Pascal VOC 2012 test leaderboard using only 1/10 parameters of it. On average, SegNeXt achieves about 2.0% mIoU improvements compared to the state-of-the-art methods on the ADE20K datasets with the same or fewer computations.
Accept
Four knowledgeable referees reviewed this submission. The reviews raised concerns about the novelty of the proposed approach (rY1T, S4w5), the motivation of the model design and properties (mij9, r2Bq), and the empirical evidence to support some of the effectiveness and efficiency claims (rY1T, r2Bq, S4w5). The rebuttal addresses the reviewers' concerns by (1) highlighting the differences of the proposed approach with ConvNext, (2) providing additional comparisons with state-of-the-art methods as suggested by the reviewers, (3) performing ablations of the MSCA design which empirically emphasize its advantages, and (4) partially clarifying the motivation. The authors engage in discussion with the reviewers and provide additional clarifications (e.g. what will be introduced in the main body of the paper, and whether the code will be released). During the discussion phase, the reviewers show some hesitations wrt novelty of the proposed approach which is perceived as incremental wrt ConvNext. However, the reviewers agree that the paper is well written, the approach is simple and appears effective, and the experimental evidence is extensive and supports the claims made in the manuscript. The reviewers appreciate the benchmarking efforts of this work and lean towards acceptance. The AC agrees with the reviewers' assessment that the strength of this paper lies in its extensive experimental validation, and recommends to accept.
train
[ "Jah_yH_Ep8V", "HzXXm-JpQg", "aAiqJ6W9pGD", "dSjLhFb2lcA", "hHebw5_hwR", "7alClLbYRpd", "1_X16sO8p3A", "Hh7sdxOekoB", "IQfBgAE7AEr0", "8GnrZubWs85", "0gRf9j85oje", "bc4_lyvRrjr", "-bw-925kVgV", "i-kDMUUMvDI", "TaROyTlWYj", "nHFfaH-IhX6", "8Aqlk1BEqCk", "TxMqvoogAjt", "__Q5h7420zi" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " In fact, we have already organized the related code following SegFormer [1], and it can be released at any time.\n\nDue to the anonymity rules of NeurIPS 2022, we cannot provide it now.\n\nRest assured, the code will be made public as mentioned in the abstract. \n\n[1]: Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J. M., & Luo, P. (2021). SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34, 12077-12090.\n\n", " Thanks for your positive response and for raising your rating. \n\n\nWe will make sure that the extensibility of the proposed operation is discussed in the discussion section, and the code will be made public as mentioned in the abstract.\n\nSincerely,\n\nAuthors\n\n", " * I would like to express my appreciation to the authors for taking the time to address my questions. Most of my concerns were resolved in the rebuttal phase, mainly those related to the novelty and the experimental improvement. Therefore, I increase my rating from four to six through the discussion phase. In addition, I still consider the proposed operation to be quite similar to the previous study (ConvNeXt). However, since the experimental results convey the novelty of the proposed operator, I agree that it should not be dismissed merely because of its similarity to the previous study; the empirical analysis can be an alternative way to exhibit the novelty of the paper. My remaining concern is about the extensibility of the proposed network as a generalized operation for improving CNNs relative to transformers. This could be discussed in the Discussion section.\n\n* Furthermore, for reproducibility and for the benefit of the deep learning community, I strongly urge that the code be released publicly. With this expectation, I adjust my rating in good faith.", " Dear Reviewer rY1T:\n\nWe sincerely thank you for the review and comments. We have provided corresponding responses and results, which we believe have covered your concerns. \n\nAs you mentioned in the limitations, you expected that the limitations could be addressed during the discussion period. We also hope to discuss further with you to resolve any misunderstanding of our work. Since less than 12 hours remain before the end of the discussion, could we have a discussion during this period? \n\n\nBest,\n\nAuthors", " Dear Reviewer rY1T:\n\nWe are sorry to bother you. Firstly, thank you for your effort in reviewing our paper and giving some valuable suggestions. We fully understand and appreciate reviewers' selfless contributions to the community. Similarly, we hope reviewers can understand paper authors.\n\nOur team has made great efforts and spent lots of time and resources to conduct qualitative and quantitative experiments, answer related questions, and refine the paper in the rebuttal stage. \n\nSince the author-reviewer discussion is close to its deadline, we hope we can get your response before then. If you have other concerns, we are always willing to discuss them.\n\nSincerely,\n\nAuthors", " Thanks for approving our design and raising your rating. ", " In the rebuttal stage, the authors addressed most of my concerns. I approve of designing a simple yet effective network for the segmentation task in this paper. Hence, I raise my score. ", " Thanks for appreciating our paper and raising your rating.\nWe will add the above results (Q2, Q4) and cite the related papers in the main paper.", " The rebuttal solves most of my concerns. I raise my rating to weak accept. 
\n\nDespite the novelty of paper is a little weak, I appreciate the benchmarking effort of this work for various segmentation datasets. \n\nAlso, make sure to put these results(Q2. Q4) on the main paper rather than reporting benchmark results. \n\nMoreover, several paper on multi-scale fusion should also be cited. \n\n[1] Context contrasted feature and gated multi-scale aggregation for scene segmentation CVPR-2018\n \n[2] Gated Fully Fusion for Semantic Segmentation, AAAI-2020\n\n[3] Hierarchical Multi-Scale Attention for Semantic Segmentation Arxiv-2020", " * For Question3 \"In Tab10. Why not list the FPS of other method for better comparison.\"\n\n There are three main reasons.\n\n * The first reason is the difference in hardware. Different hardware will produce different throughputs.\n * The second one is that different methods adopt different software optimization such as SFNet[7] using TensorRT to accelerate algorithm. The above reasons cause an unfair comparison if we listed FPS. \n * The last reason is that we consider 25FPS a threshold for real-time applications. Thus, we compared methods that can achieve above 25 FPS.\n\n* For Question4 \"Missing ablation on MSCA design on both ImageNet or other segmentation datasets.\"\n\n We add ablation study on MSCA design on both ImageNet and ADE20K dataset. K x K branch contains a depth-wise 1xk convolution and a kx1 depth-wise convolution. 1x1 conv means the channel mixing operation. Attention means the element-wise product, which makes the network obtain the adaptive ability. \n\n | 7x7 branch | 11x11 branch | 21x21 branch | 1x1 conv | Attention | Top-1 Acc. (%) | mIoU |\n | :--------: | :----------: | :----------: | :------: | :-------: | :------------: | :--: |\n | ✔ | **X** | **X** | ✔ | ✔ | 74.7 | 39.6 |\n | **X** | ✔ | **X** | ✔ | ✔ | 75.2 | 39.7 |\n | **X** | **X** | ✔ | ✔ | ✔ | 75.3 | 40.0 |\n | ✔ | ✔ | ✔ | **X** | ✔ | 74.8 | 39.1 |\n | ✔ | ✔ | ✔ | ✔ | **X** | 75.5 | 40.5 |\n | ✔ | ✔ | ✔ | ✔ | ✔ | 75.9 | 41.1 |\n\n​\t\n\n[1]: Liu, Z., Mao, H., Wu, C. Y., Feichtenhofer, C., Darrell, T., & Xie, S. (2022). A convnet for the 2020s. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition* (pp. 11976-11986).\n\n[2]: Peng, C., Zhang, X., Yu, G., Luo, G., & Sun, J. (2017). Large kernel matters--improve semantic segmentation by global convolutional network. In *Proceedings of the IEEE conference on computer vision and pattern recognition* (pp. 4353-4361).\n\n[3]: Geng, Z., Guo, M. H., Chen, H., Li, X., Wei, K., & Lin, Z. (2021). Is attention better than matrix decomposition?. *arXiv preprint arXiv:2109.04553*.\n\n[4]: Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J. M., & Luo, P. (2021). SegFormer: Simple and efficient design for semantic segmentation with transformers. *Advances in Neural Information Processing Systems*, *34*, 12077-12090.\n\n[5]: Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., ... & Zhang, L. (2021). Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition* (pp. 6881-6890).\n\n[6]: Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. In *Proceedings of the IEEE international conference on computer vision* (pp. 618-626). \n\n[7]: Li, X., You, A., Zhu, Z., Zhao, H., Yang, M., Yang, K., ... & Tong, Y. (2020, August). 
Semantic flow for fast and accurate scene parsing. In *European Conference on Computer Vision* (pp. 775-793). Springer, Cham.\n", " Dear Reviewer S4w5:\n\nThanks for your effort for reviewing our paper and giving some kind suggestions. We are happy to see your positive comments on writting, experiments and method. We hope the following responses could solve your concerns.\n\n* For Question1: \"The technical novelty is very limited.\"\n\n This paper is not a simple combination of ConvNeXt[1] and segmentation task. We do not deny we are inspired from ConvNeXt[1], large kernel matters[2] and HamNet[3], which are included in our reference list. Here, we illustrate the difference between SegNeXt and others from three aspects: analysis, visualization and experimental results.\n\n * Analysis: The goal of this paper is to find the simple and effective operation / network for segmentation task. To achieve the target, we analyze and summarize the expected properties of segmentation task such as multi-scale information, strong encoder, attention mechanism (dynamic process) and low complexity. For common CNNs such as ConvNeXt[1] and Large Kernel matters[2], they only satisfy the strong encoder and low complexity, but ignores the multi-scale information and attention mechanism, which are critical for segmentation task. For common transformers such as SegFormer[4] and SETR[5], they also ignore multi-scale information and low complexity especially when dealing with high resolution images. As for our MSCAN, it satisfies all listed properties for segmentation, which is a simple, suitable and new network for semantic segmentation. Besides, we also present visualizations and numerical experiments to support our claims in the following response. Besides, as we known, CNN-based encoder is lacking in global receptive field and we hope our architecture can avoid it in the decoder stage. So, we choose one of global module Ham[3] to solve this problem. A simple and suitable encoder for segmentation task is what we claim for our main contribution.\n * Visualization: We adopt Grad-CAM[6] to conduct visualization to prove the effectiveness. Due to the limitation of openreview, we add visualization results in the supplementary material. From the visualization results, we can easily find two shortcomings of convnext: Insufficient receptive field and lack of multi-scale information.\n * Experimental results: We replace our backbone with ConvNeXt and fairly compare ConvNeXt with our MSCAN on ADE20K dataset. As shown in following table, experiments also indicate the superiority of our method.\n\n | Method | Params.(M) | mIoU(SS) | mIoU(MS) |\n | ------------------- | ---------- | -------- | -------- |\n | SegNeXt w/ ConvNeXt | 28.4 | 43.2 | 44.5 |\n | SegNeXt w/ MSCAN | 27.6 | 48.5 | 49.9 |\n | SegNeXt w/ ConvNeXt | 50.0 | 46.2 | 47.6 |\n | SegNeXt w/ MSCAN | 48.9 | 51.0 | 52.1 |\n\n* For Question2 \"A better comparison with HRFormer or HRnet is needed to show the effectiveness of spatial attention.\"\n\n We conduct experiments to compare HRNet and HRFormer with our MSCAN on ADE20K dataset. Results are shown in the following table. 
Our method significantly surpasses HRNet and HRFormer with similar size, which demonstrate the superiority of our method.\n\n | Method | Params.(M) | mIoU(SS) | mIoU(MS) |\n | ---------------- | ---------- | -------- | -------- |\n | SegNeXt w/ HRNet | 9.9 | 38.1 | 39.4 |\n | SegNeXt w/ MSCAN | 4.3 | 41.1 | 42.2 |\n | HRFormer-B | 56.2 | 48.7 | 50.0 |\n | SegNeXt w/ HRNet | 65.7 | 43.0 | 44.6 |\n | SegNeXt w/ MSCAN | 48.9 | 51.0 | 52.1 |\n\n\nTo be continued.", " Dear Reviewer *r2Bq*:\n\nThanks for your effort for reviewing our paper and giving some kind suggestions. We are happy to see your positive comments on writting and experiments. We hope the following responses could solve your concerns.\n\nWe will try to solve your concerns from four aspects: motivation, advantages, visualization and experimental results.\n\n* Motivation: The goal of this paper is to find the simple and effective operation / network for segmentation task. The original idea comes from recent convolution neural network ConvNeXt[1]. How to design a successful CNN-sytle network for segmentation task ? We find three shortcomings of ConvNeXt: (1) Insufficient receptive field, especially for processing high-resolution segmentation images; (2) without multi-scale information; (3) no adaptability. To achieve above properties, we adopt three simple strategies to achieve them. (1) using larger kernel convolutions to enlarge the receptive field; (2) Introducing a multi-branch structure to obtain multi-scale information; (3) Introducing self-multiplication to achieve adaptability. After the above three improvements, our method outperforms the previous method by a large margin.\n* Advantages: Compared with the self-attention mechanism, we believe that our method has the following advantages:\n * lower complexity: self-attention has quadratic complexity, which limits its applications for processing high-resolution segmentation images such as 2,048 x 1,024 images in cityscapes dataset. For our MSCA, it has linear complexity, which is more suitable for semantic segmentation task. In our paper, figure1(left) clearly demonstrates the advantages when dealing with high-resolution images.\n * Multi-scale Information aggregation: For semantic segmentation task, it requires to process multi objects with various scale at the same time. Thus, it is critical to achieve multi-scale Information aggregation in this case. The self-attention mechanism ignores this, while our MSCA takes it into account.\n * Channel attention: Channel attention has been proven important for vision tasks[2],[3]. For self-attention, it only considers the spatial attention. For our MSCA, it considers the self-adaptive property in both channel and spatial dimensions and achieves spatial and channel attention[3],[4] in a simple yet effective way. \n* Visualization: We adopt Grad-CAM[5] to conduct visualization to prove the effectiveness. Due to the limitation of openreview, we add visualization results in the supplementary material. From the visualization results, we can easily find two shortcomings of ConvNext: Insufficient receptive field and lack of multi-scale information.\n\n* Experimental results: SegFormer[6] and SETR[7] are common transformer-based models. Due to the above superiority, SegNeXt significantly surpasses them in various datasets including ADE20K, Cityscapes and COCO-Stuff, which is shown in our Table 8.\n\n\n[1]: Liu, Z., Mao, H., Wu, C. Y., Feichtenhofer, C., Darrell, T., & Xie, S. (2022). A convnet for the 2020s. 
In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition* (pp. 11976-11986).\n\n[2]: Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In *Proceedings of the IEEE conference on computer vision and pattern recognition* (pp. 7132-7141).\n\n[3]: Woo, S., Park, J., Lee, J. Y., & Kweon, I. S. (2018). Cbam: Convolutional block attention module. In *Proceedings of the European conference on computer vision (ECCV)* (pp. 3-19).\n\n[4]: Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., ... & Tang, X. (2017). Residual attention network for image classification. In *Proceedings of the IEEE conference on computer vision and pattern recognition* (pp. 3156-3164).\n\n[5]: Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. In *Proceedings of the IEEE international conference on computer vision* (pp. 618-626). \n\n[6]: Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J. M., & Luo, P. (2021). SegFormer: Simple and efficient design for semantic segmentation with transformers. *Advances in Neural Information Processing Systems*, *34*, 12077-12090.\n\n[7]: Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., ... & Zhang, L. (2021). Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition* (pp. 6881-6890).\n", " Dear Reviewer mij9:\n\nThanks for your effort for reviewing our paper and giving some kind suggestions. We are happy to see your positive comments and score. We hope the following responses could solve your concerns.\n\n* For Q1: \"Why does the model need to use spatial attention? \"\n\n We believe that the attention mechanism is effective for semantic segmentation mainly because of two aspects.\n\n * Firstly, attention mechanism can make the network focus on critical features and ignore noisy features automatically, which is an important property for vision tasks such as semantic segmentation [1]. It is also a key part of the human visual system [2],[3].\n * The second point is our own understanding for attention mechanism. Attention mechanism is a dynamic process, which means it can adjust its output features according to its input features. Compared with traditional convolutional neural networks, which is a static process, it demonstrates better transfer learning ability [4],[5]. For example, We have trained a model on the ImageNet dataset, and we are going to transfer it to downstream tasks. During transfer learning, the network needs to process lots of unseen objects/scenes in ImageNet. For a static process, we learn the capability to deal with seen objects, which can not handle unseen objects/scenes well. For a dynamic process, we can learn the self-adaptive capability. It means the network can adjust output features based on input features. This allows the network to quickly adjust output when it deals with unseen objects, which is an important property for transfer ImageNet pre-trained weight to new benchmarks.\n\n* For Q2: \"What is the “advanced training strategy”? Does this paper use it?\"\n\n The advanced training strategy is proposed by DeiT[6], which mainly contains more data augmentation, longer training epochs and strong regularization. It can improve the performance of the network on ImageNet and downstream tasks. 
Of course, we adopted it when training models on ImageNet.\n\n\n\n[1]: Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., ... & Bengio, Y. (2015, June). Show, attend and tell: Neural image caption generation with visual attention. In *International conference on machine learning* (pp. 2048-2057). PMLR.\n\n[2]: Rensink, R. A. (2000). The dynamic representation of scenes. *Visual cognition*, *7*(1-3), 17-42. \n\n[3]: Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. *Nature reviews neuroscience*, *3*(3), 201-215.\n\n[4]: Guo, M. H., Xu, T. X., Liu, J. J., Liu, Z. N., Jiang, P. T., Mu, T. J., ... & Hu, S. M. (2022). Attention mechanisms in computer vision: A survey. *Computational Visual Media*, 1-38.\n\n[5]: Wang, K., Gao, X., Zhao, Y., Li, X., Dou, D., & Xu, C. Z. (2019, September). Pay attention to features, transfer learn faster CNNs. In *International conference on learning representations*.\n\n[6]: Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., & Jégou, H. (2021, July). Training data-efficient image transformers & distillation through attention. In *International Conference on Machine Learning* (pp. 10347-10357). PMLR.\n", " * For Question3: \"The experiments should be improved.\"\n\n We improve the experiments according to the comments.\n\n * We adopt boundary-oriented intersection over union (B-IoU)[5] as evaluation metric to compare our method with SegFormer on cityscapes dataset. As shown in following Table, we also achieve better performance than SegFormer[1], which is a common transformer-based method.\n\n | Method | Params.(M) | GFLOPs | BIoU (SS) |\n | ------------ | ---------- | ------ | --------- |\n | SegFormer-B0 | 3.8 | 126.6 | 20.1 |\n | SegNeXt-T | 4.3 | 50.5 | 21.3 |\n | SegFormer-B1 | 13.7 | 243.7 | 21.7 |\n | SegNeXt-S | 13.9 | 124.6 | 22.7 |\n | SegFormer-B2 | 27.5 | 717.7 | 23.3 |\n | SegNeXt-B | 27.6 | 275.7 | 24.2 |\n | SegFormer-B3 | 47.3 | 962.9 | 23.6 |\n | SegNeXt-L | 48.9 | 577.5 | 25.1 |\n\n \n\n * As shown in the response for Question1, visualization results based on Grad-CAM[2] are added in the supplementary material. \n\n* For Question4 \"about paper writing\":\n\n We will check our writing about definition of different concept, detailed description about figures and tables, grammar and typo errors carefully for camera ready version.\n\n[1]: Liu, Z., Mao, H., Wu, C. Y., Feichtenhofer, C., Darrell, T., & Xie, S. (2022). A convnet for the 2020s. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition* (pp. 11976-11986).\n\n[2]: Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. In *Proceedings of the IEEE international conference on computer vision* (pp. 618-626). \n\n[3]: Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J. M., & Luo, P. (2021). SegFormer: Simple and efficient design for semantic segmentation with transformers. *Advances in Neural Information Processing Systems*, *34*, 12077-12090.\n\n[4]: Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., ... & Zhang, L. (2021). Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition* (pp. 6881-6890).\n\n[5]: Lee, K., Kim, J. H., Lee, H., Park, J., Choi, J. P., & Hwang, J. Y. (2021). 
Boundary-oriented binary building segmentation model with two scheme learning for aerial images. *IEEE Transactions on Geoscience and Remote Sensing*, *60*, 1-17.", " Thanks for your effort for reviewing our paper and giving some kind suggestions. We are happy to be affirmed in performance, reproducibility and writing. We hope the following responses could solve your concerns. \n\n* For Question1: \"The novelty and the contribution of this paper.\"\n\n This paper is not a simple combination of ConvNeXt[1] and segmentation task. We do not deny we learn something from ConvNeXt, which is included in our reference list. Here, we illustrate the difference between SegNeXt and ConvNeXt from three aspects: analysis, visualization and experimental results.\n\n * Analysis: The goal of this paper is to find simple and suitable operation / network for segmentation task. To achieve the target, we analyze and summarize the expected properties of segmentation task such as multi-scale information, strong encoder, attention mechanism (dynamic process) and low complexity. For ConvNeXt, it only satisfies the strong encoder and low complexity, but ignores the multi-scale information and attention mechanism, which are critical for segmentation task. \n\n * Visualization: We adopt Grad-CAM[2] to conduct visualization to prove the effectiveness. Due to the limitation of openreview, we add visualization results in the supplementary material. From the visualization results, we can easily find two shortcomings of ConvNext: Insufficient receptive field and lack of multi-scale information. Meanwhile, our method makes up for its shortcomings.\n\n * Experimental results: We replace our backbone with ConvNeXt and fairly compare ConvNeXt with our MSCAN on ADE20K dataset. The results is shown in following table. Experiments also indicate the superiority of our method and confirm our view. \n\n | Method | Params.(M) | mIoU(SS) | mIoU(MS) |\n | ------------------- | ---------- | -------- | -------- |\n | SegNeXt w/ ConvNeXt | 28.4 | 43.2 | 44.5 |\n | SegNeXt w/ MSCAN | 27.6 | 48.5 | 49.9 |\n | SegNeXt w/ ConvNeXt | 50.0 | 46.2 | 47.6 |\n | SegNeXt w/ MSCAN | 48.9 | 51.0 | 52.1 |\n\n \n\n* For Question2: \"Effectiveness of the proposed operation.\"\n\n In this paper, we compare our method with common transformer-based method such as SegFormer[3] and SETR[4]. As summarized in Table 1, transformer-based methods lack of multi-scale information and have a high computational complexity. These two drawbacks cause two problems: low performance and high computation. \n\n * On the one hand, lacking of multi-scale information causes performance reduction. It is demonstrated in Table 8, which SegNeXt outperforms SegFormer and SETR significantly. \n * On the other hand, quadratic complexity causes high computing cost especially for high resolution images. As shown in Figure 1 left, SegNeXt significantly surpasses SegFormer when processing 2,048 x 1,024 images in cityscapes dataset.\n\nTo be continued.\n", " This paper proposes a novel convolution-based architecture that could utilize convolutional attention in an effective way to encode contextual information for the segmentation task. 
The main contributions of this paper are (1) a new tailored network architecture (SegNeXt) that envokes spatial attention via multi-scale convolution features, (2) the illustration that the encoder with simple and cheap convolutions can exhibit improved performance than the vision transformer, and (3) the proposed network exhibit the state-of-the-art semantic segmentation performance. \n1. Strengths\n- The reviewer significantly understands the significance and importance of the task proposed in this paper. Segmenting various objects in real-world images is significantly important for many applications. Recent studies have researched the Vision-Transformer to improve the segmentation performance in vision fields, but it exhibits many limitations such as requirements a large number of images and datasets, and even heavy cost. However, this paper address that still the convolution and attention-based operations could exhibit significantly improved performance at a low cost. \n\n- Reproducibility of the manuscript. The manuscript is well organized in terms of exhibiting the hyper-parameters and model architecture. \n\n- The manuscript is well organized and well written.\n\n2. Weakness\n- More detailed descriptions are illustrated in the “Question” and “Limitation” sections. Please see below.\n\n * Questions & Discussions\n1. The novelty and the contribution of this paper should be more discussed. In recent years, the new methods related to vision transformer and ConvNext are popularly studied, and they showed extremely improved accuracy in terms of classification and segmentation. At the first gland, this paper just addresses the application of the ConvNext to the segmentation task. At this point, the reviewer is curious that the combination of the ConvNext to the segmentation task could be a novel contribution to this Neurips society. \n\n2. Effectiveness of the proposed operation. The authors addressed in the abstract that “convolutional attention is a more efficient and effective way to encode contextual information than the self-attention mechanism in transformers”. However, the manuscript could not effectively exhibit this property which is the main contribution of this paper. The mathematical proof/illustration or experimental analysis for contextual information should be discussed to improve the quality of the manuscript.\n 1. Major issues\n- The limited novelty should be discussed. As illustrated above (Question), the proposed operation exhibits limited novelty. The reviewer expects that the limitations could be improved in the discussion periods.\n\n- The experiments should be improved. As the authors commented in the contribution 2, the proposed module can perform better than vision transformers, “especially when processing object details”. As the reviewer already understand, the evaluation metric of “mean Intersection over Union (mIoU)” can qualitatively measure the object details. However, recent studies [1-3] proposed new evaluation metrics to measure the object detail (especially boundaries of the target objects) quantitatively. Otherwise, the visualization of the feature map could illustrate the novel feature extraction when processing object details. Please refer the activation maps [4]. To clear the authors’ addresses, more experimental or mathematical evidence should be justified.\n\n\n[1] Fernandez-Moral, Eduardo, et al. \"A new metric for evaluating semantic segmentation: leveraging global and contour accuracy.\" 2018 IEEE intelligent vehicles symposium (iv). 
IEEE, 2018.\n\n[2] Lee, Kyungsu, et al. \"Boundary-oriented binary building segmentation model with two scheme learning for aerial images.\" IEEE Transactions on Geoscience and Remote Sensing 60 (2021): 1-17.\n\n[3] Cheng, Bowen, et al. \"Boundary IoU: Improving object-centric image segmentation evaluation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n[4] Selvaraju, Ramprasaath R., et al. \"Grad-cam: Visual explanations from deep networks via gradient-based localization.\" Proceedings of the IEEE international conference on computer vision. 2017.\n\n\n2. Minor issues\n- Definition of the contextual information\n\n- The detailed description should be improved. For instance, what does the arrow indicate in figure 3? The reviewer recommends authors to review the figures to improve the explanation details for all readers.\n\n- The reviewer recommends reviewing the grammar and typo errors to improve the quality of the manuscript.\n", " This paper presents SegNeXt, a new convolution-based model for semantic segmentation. The authors first propose 4 properties that constitute a successful semantic segmentation model: 1) A strong encoder backbone, 2) Multi-scale information interaction, 3) Spatial attention, and 4) low computational complexity. Based on these requirements, the authors propose a convolutional encoder based on a new multi-scale convolutional attention module (MSCA). This module uses multi-branch convolutions of different sizes to generate an attention mask, which is then used to weight the input (via element-wise multiplication) to generate the output of the module.\n\nAs a decoder, the authors propose to concatenate the convolutional features from the last input stages and feed them as input to a Hamburger module.\n\nThe authors evaluate their proposed architecture on several datasets like ADE20K, Cityscapes, Pascal VOC, Pascal context, COCO-Stuff and iSAID. The results of the experiments show that SegNeXt matches or outperforms other models with similar number of parameters while requiring less computation. ## Strengths\n+ The paper is well written and the new concepts and ideas are explained well and with enough detail. The paper is well structured and the content is presented in a clear and logical way.\n\n+ The paper proposes an extensive evaluation to show the performance of the proposed model and how it compares against previous state-of-the art approaches. Additionally, the authors perform some ablation experiments to backup their choices of multi-scale in the MSCA module, Hamburger as a decoder and the decoder structure.\n\n+ The proposed model outperforms previous state of the art approaches while also requiring less computation.\n\n## Weaknesses\n- A weakness of this paper is that it doesn’t justify enough the four properties argued to be necessary for a successful semantic segmentation model. The authors mention that by looking at previous work (DeepLabV3+, HRNet, SETR and SegFormer) they are able to conclude that a good model needs these four properties (strong backbone, multi-scale, attention and low complexity), but that might just indicate recent trends, not necessarily important properties. 
For example, a stronger justification to claim that the model needs to use spatial attention would be to design an experiment (or a citation to another paper) that shows why this property is needed, instead of assuming that because recent papers used attention, this property becomes necessary.\n\n- The authors explicitly mention that \n> “taking the aforementioned **analysis** into account, … we propose an efficient yet effective encoder-decoder architecture of semantic segmentation”. \n \n However, as I mentioned before, a more detailed analysis of the different properties of recent models would make the claims more significant.\n\n- Just so the authors are aware, there is already another model called SegNext [1].\n\nTo summarize, the paper proposes a novel architecture for semantic segmentation which outperforms previous approaches. The authors perform a thorough evalution to empirically show the superior performance. However, some of the justifications are a bit weak or not supported enough with data.\n\n[1] T. Forbes and C. Poullis, \"Deep Autoencoders with Aggregated Residual Transformations for Urban Reconstruction from Remote Sensing Data,\" 2018 15th Conference on Computer and Robot Vision (CRV), 2018, pp. 23-30, doi: 10.1109/CRV.2018.00014.\n 1) Why does the model need to use spatial attention? What is the property of spatial attention that makes it a key component for semantic segmentation models? (Be it self-attention or computed attention masks like in this paper)\n2) In Section 1, Table 1, the authors mention that “Strong encoder denotes strong backbones, and adopts the advanced training strategy”. What is the “advanced training strategy”? Does this paper use it? Yes", " This paper presents SegNeXt, a simple convolutional network architecture for semantic segmentation. The main contribution of this paper is replacing the standard convolutions and self-attention with the spatial attention proposed in this paper. Experimental results demonstrate that SegNeXt surpasses current state-of-the-art transformer-based methods by a considerable margin. **Strengths**: \n\n1) This paper is well written, and I can easily understand the overall flowchart of this paper.\n\n2) The experimental results of this paper are good.\n\n**Weaknesses**: \n\nAlthough the idea of this work seems interesting, the good performance is not sufficiently documented. \n\nFirstly, the motivation of this paper is not clearly explained. In Line.112 to Line.137, the authors directly tell us that the pipeline proposed MSCA, while they do not tell us why to design such architecture. This makes me confused that would multi-scale convolutional attention be really better than the self-attention mechanism?\n\nSecondly, the authors do not provide any visualization results to show the shows the superiority of MSCA compared with self-attention.\n\nIn conclusion, I think the main drawback of this paper is the authors do not tell us that what defects of self-attention are solved by MSCA? Please see the weaknesses. This work has described some limitations and potential negative social impact.", " \nThis paper proposes a simple convolution network SegNeXt for semantic segmentation which include three important properties : strong encoder, multiscale interaction and spatial attention. It shows convolutional attention is a more efficient and effective way to encode contextual information than the self-attention mechanism in transformers. 
SegNeXt improves the performance of previous state-of-the-art methods on popular benchmarks, including ADE20K, Cityscapes, COCO-Stuff, Pascal VOC, Pascal Context, and iSAID.\n\nStrengths:\n1. The paper is well written and easy to follow for both the method and experiment parts. The summary in Table 1 is interesting. \n\n2. The proposed approach achieves new state-of-the-art results on iSAID, ADE20K, COCO-Stuff and Pascal Context compared with recent transformer-based approaches. The result in Figure 1 looks good. \n\n3. The proposed convolutional attention is simple yet effective. MSCA is an improved version of the large-kernel design for segmentation. \n\n4. The results on the Pascal VOC dataset rank first. \n\nWeaknesses:\n\n1. The technical novelty is very limited. For example, although MSCA is simple, it follows the large-kernel design proposed in previous works [1][2]. Moreover, the decoder adopts multi-scale feature fusion via Hamburger, which is a previous work. Thus the limited technical novelty makes the submission read like a report. \n\n2. I could not find new insights. The ablation studies lack a design analysis of each component. A better comparison with HRFormer or HRNet is needed to show the effectiveness of spatial attention.\n\n3. In Table 10, why not list the FPS of the other methods for a better comparison? \n\n4. Missing ablation of the MSCA design on ImageNet or other segmentation datasets.\n\n[1] Large kernel matters–improve semantic segmentation by global convolutional network. CVPR-2017. \n\n[2] ConvNeXt. CVPR-2022.\n\n[3] Is attention better than matrix decomposition? ICLR-2021.\n See the weakness part. None " ]
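For readers trying to follow the MSCA discussion in the reviews and rebuttal above, the block below is a rough PyTorch reconstruction of the multi-scale convolutional attention idea as it is described in this thread: parallel depth-wise convolutions of several sizes whose summed output, after a 1x1 projection, re-weights the input element-wise. It is only a sketch inferred from the discussion — the class name, channel count, and kernel sizes (7, 11, 21) are assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MultiScaleConvAttention(nn.Module):
    """Illustrative multi-scale convolutional attention block.

    A 5x5 depth-wise conv is followed by parallel depth-wise strip
    convolutions of different sizes; their sum is projected by a 1x1
    conv and used as an attention map that re-weights the input.
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        self.base = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        # Parallel branches with increasing receptive fields (placeholder sizes).
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels),
                nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels),
            )
            for k in (7, 11, 21)
        ])
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.base(x)
        attn = attn + sum(branch(attn) for branch in self.branches)
        attn = self.proj(attn)
        return attn * x  # element-wise re-weighting of the input features


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    out = MultiScaleConvAttention(64)(feats)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```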
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "aAiqJ6W9pGD", "aAiqJ6W9pGD", "i-kDMUUMvDI", "hHebw5_hwR", "nHFfaH-IhX6", "1_X16sO8p3A", "bc4_lyvRrjr", "IQfBgAE7AEr0", "8GnrZubWs85", "0gRf9j85oje", "__Q5h7420zi", "TxMqvoogAjt", "8Aqlk1BEqCk", "TaROyTlWYj", "nHFfaH-IhX6", "nips_2022_VgOw1pUPh97", "nips_2022_VgOw1pUPh97", "nips_2022_VgOw1pUPh97", "nips_2022_VgOw1pUPh97" ]
nips_2022_Iqm6AiHPs_z
Active Labeling: Streaming Stochastic Gradients
The workhorse of machine learning is stochastic gradient descent. To access stochastic gradients, it is common to iterate over input/output pairs of a training dataset. Interestingly, it appears that one does not need full supervision to access stochastic gradients, which is the main motivation of this paper. After formalizing the "active labeling" problem, which focuses on active learning with partial supervision, we provide a streaming technique that provably minimizes the ratio of generalization error over the number of samples. We illustrate our technique in depth for robust regression.
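As a concrete illustration of the idea in this abstract — running SGD from partial supervision only — the toy NumPy sketch below performs robust (median) regression while only ever asking one-bit half-space questions of the form "is u^T y above a threshold?". Everything here (the linear model, the noise level, the 1/sqrt(t) step size, the `oracle` helper) is a hypothetical reconstruction for illustration and not the paper's exact algorithm; the key point is that the answer bit, multiplied by the random query direction, is in expectation proportional to the gradient of the median loss.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 3                       # input and output dimensions
W_true = rng.normal(size=(m, d))  # ground-truth linear model
W = np.zeros((m, d))              # model being learned

def oracle(x, u, c):
    """One-bit query: does the (never revealed) label y fall in the half-space {y : u^T y > c}?"""
    y = W_true @ x + 0.1 * rng.normal(size=m)
    return float(u @ y > c)

for t in range(1, 20001):
    x = rng.normal(size=d)
    u = rng.normal(size=m)
    u /= np.linalg.norm(u)        # query direction uniform on the unit sphere
    c = u @ (W @ x)               # threshold centred at the current prediction
    s = 2.0 * oracle(x, u, c) - 1.0            # recovers sign(u^T (y - W x)) from a single bit
    # In expectation over u, s*u is proportional to (y - Wx)/||y - Wx||, so this is a
    # (rescaled) stochastic gradient step on the median-regression loss ||y - W x||.
    W += (1.0 / np.sqrt(t)) * np.outer(s * u, x)

print("parameter error:", np.linalg.norm(W - W_true))
```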
Accept
This paper studies “active labeling”, which can be seen as active learning with weak supervision, and proposes an active labeling algorithm based on SGD. The reviewers found the idea of the paper innovative. After the author response and reviewer discussion, the paper received generally unanimous support from the reviewers. Thus, I recommend acceptance.
val
[ "NWoMrpbyBP1", "FedXKjQM_fJ", "nxuvBw3JqD", "7jwYjcs81H", "XK4H1LW998d", "SFPzDkNMZFx", "lItw4QUXmR3K", "6OYDHlwaCA-", "sgPwtBGF6L5", "rofJrX8bm2K", "07kDQVmWBS4", "i5dt1_rcr7wX", "5DKHTxflsyt7", "k1GIGXfH-L", "Pqf8oY1mEaY", "WkD60q7HR35", "3aSiRauBid" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your productive comments, we highly appreciate your inputs.\nBe sure that we will do our best to integrate them in order to help future readers to get the most of our work.\nIn particular, we will make section 7 more precise, discuss the ranking example, and give more motivations for the vector-valued regression setting.\nThinking back of this setting, we thought that beyond compress sensing, pricing products through bundles could be another interesting way to flesh out practical use cases of our algorithm.\n\nTo be more concrete, $x$ could be some features characterizing a consumer, and $m$ a number of products.\nSuppose that you want to sell them some baskets of products represented by weight vectors $w \\in \\mathbb{N}^m$ where $w_i$ represent the number of instances of product $i$ in the basket (similarly, $x$ could be some context for some advertising company and web page, and $m$ could represent the number of advertising spots on this web page; knowing that this web page will be displayed by several anonymous users, you can sell lots with $w_i$ times the advertising spot number $i\\in[m]$).\nNow, assume that $w_i$ could be fractional (*e.g.* you can buy fractional shares in order to replicate the S\\&P500 index; or you sell a probability of being displayed at spot $i$), and could also be negative (*e.g.* you are trading derivatives, and can be both short or long).\nThe consumer $x$ is associated with a value $y \\in \\mathbb{R}^m$ where $y_i$ corresponds to the price they are ready to pay for product $i$.\nWhen pricing the basket $w$ at $c$, you observe if the consumer buys it or not, *i.e.* $1_{w^\\top y > c}$.\n\nBest regards", " You are right, we will happily add a formal proposition about exponential convergence rates, and it is true that the renaming \"active labeling\" into \"active weakly supervised learning\" might help people interested in active weakly supervised learning to more easily find our work. Thank you for this suggestion.", " I agree that ''active weakly supervised learning'' delivers the message better (even if a bit wordy)", " .", " Thank you for your response. The exponential speedup under Massart noise certainly makes sense; however, I still think it's better to formally add a proposition/corollary in the paper regarding the exponential speedups. \n\nAlso, I wonder if it's better to formally introduce the problem as ''active weakly supervised learning'' rather than the ambiguous term ''active labeling'' (as discussed in footnote 1; I also see other reviewers using wordings like ''re-coin the term 'active labeling''', which can lead to confusions).", " The compressed sensing example is a good one.\n\nI like proposition 4 and think this is a good example, though I agree with Reviewer Byvs that Section 7 could use more precision.\n\nI hope that for the next version of the paper, the authors will incorporate some of our discussion above. In particular, fleshing out more examples, connecting sections to a central story, and making sections (especially Section 7) more precise.\n\nThank you for answering my questions. 
I still think this paper is borderline, but I will change my score to \"borderline accept\".\n\n", " Thank you for your response.", " Thank you for your responsiveness, let us try one more time to convince you of the usefulness of our work.\n\n> I'd be surprised if all \"linear cuts of those polytopes\" are interpretable to humans.\n\nSince a linear cut of a polytope is a union of faces of different dimensions, linear cuts of the permutohedron are made of unions of weak orderings.\nNote also that our imaging strategy Eq. (5) can be adapted to cases where $U$ is not uniform on the sphere but satisfies some isotropic properties.\nAlthough out of scope of this paper, we believe that one can leverage isotropy to reduce queries to small unions of weak orderings that are easy to interpret for humans.\nWe indeed give some concrete details in our answer to Reviewer 1mW2 on one way to approach ranking with our algorithm in order to learn based on pairwise orderings.\n\n> I cannot easily think of any cases where one can measure if a weighted combination of outputs meets a threshold.\n\nWe understand that the example of oxygen concentration plus saturation we built from the CalCOFI dataset might sound a bit superficial: of course, no one will measure oxygen concentration level with bacteria since there are excellent cheap ways to measure oxygen with titration methods.\nSadly, we are not knowledgeable enough in chemistry and biology to know of an analogical sensor that can measure any weighted sum of two (or more) quantities like on our oxygen concentration plus saturation example.\nYet, compressed sensing photography gives a \"reasonable\" example where one can measure if any weighted combination of outputs meets a threshold (*e.g.* Dadkhah *et al.* (Sensors, 2013) *Compressive Sensing Image Sensors-Hardware Implementation*).\nThis setting consists in acquiring an image made of $m$ pixels with a unique optical sensor.\nUsually a camera is made of $m$ optical sensors that measure the light intensity of each pixel.\nInstead, one can put small reflectors where the pixel sensors were, that all lead to a unique optical sensor.\nThe sensor will then only collect information about the summation of light intensity of pixels whose reflectors were on.\nBy switching on and off the reflectors, or by adding some opacity filter (*e.g.* mirrors that only cast a part of the light beam on the sensor), one can easily measure any weighted sum of light intensity by pixels. \nReplacing the optical sensor that measures a value, by a threshold-triggered metal–oxide–semiconductor gives an example for our vector-valued regression setting.\n\n> I would like to point out that the optimality of the algorithm is only in terms of the dependence on T.\n\nYou are completely right, \"minimax optimality\" tends to focus only on the number of samples and forget about constants. We were keen in this work to avoid results in $O(T^{-1/2})$ without explicit constants since it is a usual drawback in learning to hide some curse of dimensionality in constants. We were keen to show that our constants do not explode as the output dimension grows.\n\n> It seems to me that perhaps the \"active labeling\" general problem lacks enough structure to create effective algorithms, and thus might not be the right abstraction. For example, the algorithms and methods presented require rather general query sets while the practical use cases have strong restrictions on the query sets.\n\nWe understand your concern. 
\nAs said above, we try different formalization and the active labeling one seems to us to be the right one (as opposed to *e.g.* formalization were measurements are cast as sigma-algebra, and convergence is studied with tools as in *A Note on the Strong Convergence of $\\Sigma$-Algebras* by Hirokichi Kudo (1974)). \nIt is true that it is quite generic, but we do not think it is too abstract to be hopeless to try to tackle it in a generic fashion - at a similar level of abstraction lies research on structured prediction which has been recognized as useful by the community.\nOnce again, it is true that the SGD solution we suggest do not easily integrate query constraints, but work-around as in Proposition 4 seem reachable.\n\n> I concede that the setting of linear models and one-dimensional outputs (observing thresholds) is reasonable, but this is quite narrow.\n\nAlthough we hope that our paper will reach beyond the one-dimensional problem, we do not see this problem as narrow since it has had important applications in pricing (see the work of Maxime Cohen or Renato Paes Leme).\n\n> I still find the story/scope of the paper scattered without a clear, useful message.\n\nIn our view, the message is \"there is this interesting data collection problem that we call \"active labeling\" and we have a consistent algorithm to tackle it when there is no query restrictions. It leverages the fact that we do not need full information to do SGD\".\n\nThanks for your implication during this discussion period, and for your comments that help us better understand your concerns.", " Thank you for the example of ranking problems. Although I didn't work it out, I'd be surprised if all \"linear cuts of those polytopes\" are interpretable to humans (such as an ordering of a few items as mentioned as Example 2 in the paper).\n\nThank you for the concrete examples of tissue samples, oxygen levels in water, and apparent temperature. After thinking about it a bit, I think if the output space is one-dimensional, then thresholding by a value is a reasonable practical setting. However, I think that observing a halfspace for a multi-dimensional output space is not reasonable. I cannot easily think of any cases where one can measure if a weighted combination of outputs meets a threshold.\n\nFor the tissue samples, it is written: \"Suppose that they can proceed for this measurement by cutting the tissue in a few pieces and putting them into different levels of reactive solvent that would turn to a specific color if a threshold is met. This exactly fits into our regression framework with the observation of half spaces.\" I'm not sure I quite understand the setup, but my best interpretation either yields observing thresholds on each output (axis-aligned half spaces) or some sort of weighted sum chemistry that seems implausible.\n\nFor the oxygen levels in water, it is written: \"We assume that we can measure if any weighted sum of oxygen concentration and saturation is above a threshold by letting some population of bacteria evolves in the water sample and checking if it survives after a day.\" Is there a species of bacteria that has a clean linear classifier for death as a function of oxygen concentration and oxygen saturation? Furthermore, is there a species of bacteria for every linear classifier (or at least an epsilon-net or something)? 
I find this somewhat implausible.\n\nI liked the apparent temperature setting is a good use case, but perhaps only because it involves a one-dimensional output space.\n\n-------------------------------------\n\nI would like to point out that the optimality of the algorithm is only in terms of the dependence on T. It looks like the dependence on the dimension is worse, which I think is expected because of the less informative types of queries.\n\n-------------------------------------\n\nIt seems to me that perhaps the \"active labeling\" general problem lacks enough structure to create effective algorithms, and thus might not be the right abstraction. For example, the algorithms and methods presented require rather general query sets while the practical use cases have strong restrictions on the query sets. I concede that the setting of linear models and one-dimensional outputs (observing thresholds) is reasonable, but this is quite narrow.\n\n--------------------------------------\n\nI still find the story/scope of the paper scattered without a clear, useful message.\n", " Thank you for the time you took to read, understand and review this paper, and for spotting typos.\nWe are happy that you have appreciated it.\n\nWe would like to point out that our scope is not restricted to the noiseless case, indeed SGD deals really well with noise, which can be seen as an advantage in comparison to zeroth-order methods such as the work of Cohen *et al.* (2020).\n\nWe are glad you asked us about the variances of our SGD. \nThey are to relate with the inverse of the constants $c_1$ and $c_2$. \nMaking sure to have small variance was crucial as convergence rates strongly depend on it: the higher the variance, the slower the convergence.\nWe were keen to come up with strategies where those variances do not suffer from the curse of dimensionality (which is a standard drawback for practical applications). They typically scale in $M m^{3/2}$ and not in $M^m$ or $2^m$.\nAn exact computation of those constants is given in Appendix B.2.\n\nThe difference on Figure 2 between passive and active is due to the fact that the passive and active strategy behave differently with respect to the step size for SGD, we optimize this step size for the last iterate and not for the whole trajectory.\nOn Figure 2, the step size seems to be too conservative for the active baseline for the first 100 steps.\n\nRegarding other formats of questions, we did not extend on the ranking example in the paper, we benefit from your last question to do it.\nRanking is the setting where the output space is the space of permutation over $m$ elements ${\\cal Y} = \\mathfrak{S}(m)$.\nOne way to approach ranking is through the Kendall loss $\\ell(y, z) = -\\phi(y)^\\top \\phi(z)$ with $\\phi(y) = (1_{y(i) > y(j)})$ for $i < j\\leq m$ (the permutation $y$ can be understood as a function from {$1, \\cdots, m$} to itself).\nIn this setting, the least-square surrogate of Ciliberto *et al.* (2020) consists in learning $g(x) = \\mathbb{E}[\\phi(Y)\\vert X=x]$ as a least-squares problem.\nHence, our half-spaces translate into the questions $\\sum_{i < j\\leq m} w(i,j) 1_{y(i) > y(j)} > c$ for some $(w(i,j)), c$ in $\\mathbb{R}$. 
In particular, if we choose $U$ to be uniform on the canonical basis (and not on the sphere), those questions translate into pairwise preferences (*e.g.* does user $x$ prefer movie $i$ or movie $j$?).\nIn terms of guarantee akin to Theorem 1, retaking the calibration inequality (Theorem 7) of Ciliberto *et al.* (2020), we get convergence rates of the form $m^{3/2} T^{-1/4}$.\nIn terms of guarantee akin to Theorem 2, since we need as least $\\log_2(m!) \\simeq m\\log(m)$ binary queries to discriminate between $m!$ permutations, we can expect a lower bound in $m^{1/2} \\log(m)^{1/2} T^{-1/2}$.\n\nWe hope to have answers positively to your concerns and stay at your disposal for further questions!", " We would like to warmly thank you for your constructive comments and your detailed concerns that will help us improve our draft.\nWe hope that we will succeed in convincing you of the quality and usefulness of our work.\n\nWe are happy to read that we succeed in presenting our message in a clear and easy-to-understand manner.\nFollowing your comment as well as the one of reviewer yumH, we have rephrased the technical equations of Section 3 with words in the rebuttal file.\nYou are right that we do not generalize active learning *stricto sensu*, we rather provide a variant based on \"weak supervision\". We have modified our wording in the abstract.\n\nIt is true that our SGD procedure could have strong applications for privacy preserving issues, yet our main motivation was to define the \"active labeling\" problem (which has been on our mind for a while).\nHence, we would be keen to keep the paper as it is, that is a presentation of this \"active labeling\" setup, that we found abstract enough for theory to be generic, but concrete enough to have a clear impact (plus a consistent algorithm to tackle it).\nWe indeed believe that the definition of the \"active labeling\" problem is already a fair contribution.\n\nAmong the different approaches we have investigated, the SGD solution has the advantages of being robust to noise, easy to implement, quite generic and to satisfy some minimax optimality properties, but you are right that it does not easily deal with restriction on the sets to query (although Proposition 4 provides an option for classification with attributes). We have made this point clearer in the rebuttal file.\nTo extend on the \"arbitrary queries\", let us stress out that our SGD procedure leads to queries that follow the output geometry induced by the loss. 
To take a concrete example, ranking problems can be approached with correlation losses (Kendall's tau, Spearman's rho, Hamming loss) and tackled through surrogate regression problems where the output spaces are the convex hulls of some well-known polytopes (see the work of Nir Ailon or Anna Korba for example), such as the Birkhoff polytope or the permutohedron.\nAlthough their descriptions is out-of-scope out this paper, linear cut of those polytopes are not completely arbitrary set.\nFor example, the faces of all dimensions of the permutohedron correspond, in a one-to-one fashion, to strict weak orderings (*e.g.* Thompson (1993) *Generalized Permutation Polytopes and Exploratory Graphical Methods for Ranked Data*).\n\nWe understand that, since we mainly focus on the regression case, and in order to provide realistic use cases of our algorithmic contributions, we could have motivated it with more real-world examples beside the pricing example.\nTo do so, let us give a practical example from real life : our setting could be useful in a situation where one acquire tissues through an invasive biopsy in order to quantify the concentration of some specific elements (such that the amount of connective tissue in parenchymal tissue to check for fibrosis scarring).\nSuppose that they can proceed for this measurement by cutting the tissue in a few pieces and putting them into different levels of reactive solvent that would turn to a specific color if a threshold is met.\nThis exactly fits into our regression framework with the observation of half spaces.\nYou can generalize this example to any situations where you have sensors that only give a binary answer with respect to the measure of a continuous quantity.\nThere is many other examples where one has to measure continuous values and can only acquire partial information, this has been a long-standing problem in economy following the seminal work of James Tobin (one can also excavate works in physics that date back to the 19th century such as the paper of William Sheppard *On the calculation of the most probable values of frequency constants, for data arranged according to equidistant division of a scale.*).\n\nWe also hear that the experimental part seems too much like a proof-of-concept and not enough like real experimental validation.\nTherefore, we provide empirical validation on two real-world datasets found on Kaggle (without curating dataset, only looking for the first results on the Web).\nThe results have been added in the Appendix E.4 of the rebuttal revision.\n\nWe would like to thank you again for having committed to review our work, and we stay at your disposal to discuss or answer any question you might have.", " We are deeply thankful for the valuable time you took to review our work, we are happy to read that you have appreciated it.\n\nWe have happily conducted an experiment to illustrate the difference of Algorithm 1 with SGD. 
It is in the rebuttal file (in Appendix E.1).\nIn theory, this difference can be read in the constant in front of the convergence rates in Theorem 1 (in particular our constant reads $\\kappa M m^{3/2}$ when it reads $\\kappa M$ for full SGD).\n\nWe are glad you asked for precision about the exponential convergence rates.\nSuch rates of convergence can be proven under the following low-noise assumption (known as Massart noise condition, or Tsybakov hard margin): there exists a threshold $\\delta > 0$ such that for almost all $x\\in{\\cal X}$ and $z\\in {\\cal Y}$, and $\\mathbb{P}(Y = f(x) \\vert X=x) - \\mathbb{P}(Y = z\\vert X=x) \\notin (0, \\delta)$.\nFor example, this assumption is met when for each input $X$ the most probable class has always more than 60\\% of chance to be the target $Y$. Arguably, this is true for well-curated images dataset such as ImageNet or CIFAR10.\nTogether with Assumption 1, that is if the surrogate target $g^*$ belongs to the RKHS and the kernel is bounded, then the right hand-side of equation (6) can be replaced by $\\exp(-cT)$ for some constant $c$.\nThe proof would be a simple adaptation of the work of Pillaud-Vivien *et al,* *Exponential Convergence of Testing Error for Stochastic Gradient Methods* (2018) to our case.\n\nWe stay at your disposal for any further concerns or questions.", " Thank you very much for your appreciation of our work, as well as for the time you took to review and understand our work.\n\nWe are sorry if some passages were a bit too dry: stated with words, lines 121-123 states that if you have a way to image a vector $\\theta$ from partial measurement $1_{\\theta\\in T}$ such that you can reconstruct this vector in a linear fashion, this is equation (3), then it provides you a generic strategy to get an unbiased stochastic estimate of this vector from a partial measurement, this is equation (2). \nTo do so, we voluntarily introduce randomness over our measurements.\nWe have integrated this comment into our manuscript. \nThank you for spotting this passage and helping us improve the readability of our work.\nPlease do not hesitate to point us to other parts of the manuscript that could be hard to parse for the reader.\n\nThe main reason why we did not want to extend too much on discrete output problems is because we hope to design future algorithms that would better leverage the discrete output structure more smartly (retaking ideas from combinatorial bandit for classification and from active ranking in the context of preference learning).\nHowever, nothing prevents us from using our algorithm on those problems.\nWe understand that this paper would have a greater impact had we shown that this is not only a proof-of-concept but that it does work on real-world data.\nAs a consequence, we provided real world experiments in the rebuttal file in order to convince you that this work is not only theory that sounds good, but also a practical algorithm that can be helpful in the real world.\nYou will find those in Appendix E.\n\nAs a conclusion, we would like to thank you for your appreciation of this work, and we stay available for further discussion or precision.", " This paper generalizes the standard active learning setup to \"active weakly supervised learning,\" where partial supervision is available instead of full supervision. This paper provides a way to estimate the stochastic gradient when a query of any half-space is given. 
Statistical analysis and numerical simulation are conducted for simple regression and classification problems to demonstrate the superiority of active strategy over passive strategy. - Originality: This paper is well-motivated and focuses on an interesting problem: how to query the label in a weakly-supervised scenario.\n- Quality: The proposed method is novel and supported by extensive theoretical analysis and proof-of-concept numerical simulation.\n- Clarity: This paper is well-written with minor flaws.\n- Significance: This paper's scope is narrow, mainly focusing on the noiseless median regression problem in a stream setting. But it depicts a potentially new way for the annotator to provide a label, which may lower the bar for the task where an expert is needed. - The weak information can give an unbiased gradient estimation. How about the variance?\n- Figure 2 (Left) shows that the passive strategy is better than the active strategy for the first 100 steps. Does it mean using a passive strategy is better at the beginning?\n- From a practical point of view, the half-space query can be transformed into a binary question, and \"Example 1 (Classification with attributes)\" can be transformed into multiple-choice questions. What other formats of questions can you think of to facilitate the data annotation process?\n\nMisc\n- Line 115: Typo, \"we aim at to minimizing\" -> \"we aim to minimize\"\n- Line 253: Typo: \"ot\" -> \"to\" The authors discuss their limitations in the paper and point out several future directions.", " This paper has two main contributions. First, the paper defines a problem setting known as \"active labeling\", that is, given a large output space $\\mathcal{Y}$, we can query an index $i \\in [n]$ of a dataset with a subset $S \\subset \\mathcal{Y}$ and receive the binary response of whether $y_i \\in S$ or $y_i \\not\\in S$. The paper gives a few compelling examples that fit in this setting and a generic way to produce sets $S$ to enable SGD. Second, the paper investigates two specific settings, least-squares regression and median regression, and provides an algorithm with analysis and a matching (in terms of dependence on number of samples) lower bound. Strengths:\n - This paper provides a nice, meaningful analysis of median regression in the \"active labeling\" setting.\n - The paper is overall clear and easy to understand (perhaps with the exception of Section 3 which I feel could be expanded).\n\nWeaknesses:\n - While the motivation for \"active labeling\" is compelling, the algorithmic contributions seem to require arbitrary sets $S$ while the compelling examples hinge on \"a specified set of subsets of $\\mathcal{Y}$ \". For example, the classification with attributes example does not allow all 2^{number of attributes} subsets to be queried. It appears that the construction of stochastic gradients in Section 3 almost requires arbitrary queries.\n - It seems to me that a concrete use case of the algorithm is not fleshed out. In most papers, the experiments section ensures this, but the experiments are quite synthetic (there is \"real-world\" data in appendix E but perhaps not a realistic query model).\n - I'm not sure the setting \"generalizes active learning based on partial supervision\" and furthermore, the \"streaming technique\" is in a rather different setting (requiring information from each point in the stream) compared to standard streaming active learning.\n I understand the \"active labeling\" setting is general and well-motivated. 
Can the authors provide a realistic use case of the algorithmic contributions?\n\nI wonder if this paper would be better written as a privacy paper without mention of \"acquiring the most informative dataset\". In the privacy preserving setting, the \"streaming setting\" is very realistic (as the authors point out) and the arbitrary sets $S$ is realistic as a user's device can evaluate if $y_i \\in S$ (no need for human computation). As written, the paper's area and scope seem a bit scattered. Yes", " This paper introduces the ``active labeling'' problem, which aims at learning with weakly-supervised labeling information. An specific focus of the paper is to conduct SGD but without the full gradient. The authors show that how to query weakly-supervised information in the cases of least-squares and the median regression. The authors also provide minimax results regarding the convergence rate (in the online setting). Empirical evaluations show the efficacy of the proposed method compared to the passive counterpart. Strengths:\n1. A newly proposed ``active labeling'' problem looks interesting and important.\n2. The authors provide an algorithm to handle the proposed problem under median regression, and prove its minimax optimality in the kernel/linear case.\n\nWeakness:\n1. While I found the discussion in Section 7 quite interesting, I believe it will be better if precise statements are provided as theorems/propositions, e.g., on line 267, the authors claim exponential convergence rates can be achieved under margin conditions but no formal theorems are provided. Since theorem 2 essentially says that the provided Algorithm 1 works as good as the SGD with full gradient information, I wonder if the authors can conduct an experiment to compare the performance of Algorithm 1 and standard SGD with full gradient information? The authors discussed the limitation in Section 8.", " The authors (re-)coins the term \"active labeling\" as a general case of Active-Learning where to the case of weak/partial supervision.\nDuring training, the proposed method requires access only to the (stochastic) gradients deduced from the annotation, which allows some degree of privacy preserving training if used in a streaming regime.\nSince the problem statement might seem too abstract, the authors provide three concrete real-life examples for partial annotation in lines 78-98.\nThe authors choose, however, to exemplify the approach for the specific case of (robust) median regression where the supervision partiality is half-planes. The paper is rigorous and very well written, even if some parts are too math-intensive than needed (e.g. lines 121-123 might be easier to understand with some context or more verbosity). \nThe theoretic section seems sound - I read through some of the proofs and did not find any corrections or mistakes.\n\nThis is a good paper, imho, so any weaknesses I will point our might seem superficial:\nAs mentioned, the examples in lines 78-98 really give motivation. However, the particular case that the authors chose to address (median regression) is a bit underwhelming in comparison, and a bit harder to apply to real-life. Even more so, I think any one of the three examples are easy to test by holding out data from open datasets (like imagenet). The median regression problem is indeed a worthy one.\nJust to gain some sense on the next steps - what is the gap to present results on one of the three examples (or any other similar real-life problem)? 
Even though the authors claim they state the limitations of their method in the discussion section, I did not find any that apply to the specific method in the text, but rather to the lager problem of active learning or continuous gradient based optimization." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 3 ]
[ "SFPzDkNMZFx", "nxuvBw3JqD", "XK4H1LW998d", "5DKHTxflsyt7", "i5dt1_rcr7wX", "6OYDHlwaCA-", "rofJrX8bm2K", "sgPwtBGF6L5", "07kDQVmWBS4", "k1GIGXfH-L", "Pqf8oY1mEaY", "WkD60q7HR35", "3aSiRauBid", "nips_2022_Iqm6AiHPs_z", "nips_2022_Iqm6AiHPs_z", "nips_2022_Iqm6AiHPs_z", "nips_2022_Iqm6AiHPs_z" ]
nips_2022_lTKXh991Ayv
Practical Adversarial Attacks on Spatiotemporal Traffic Forecasting Models
Machine learning based traffic forecasting models leverage sophisticated spatiotemporal auto-correlations to provide accurate predictions of city-wide traffic states. However, existing methods assume a reliable and unbiased forecasting environment, which is not always available in the wild. In this work, we investigate the vulnerability of spatiotemporal traffic forecasting models and propose a practical adversarial spatiotemporal attack framework. Specifically, instead of simultaneously attacking all geo-distributed data sources, an iterative gradient guided node saliency method is proposed to identify the time-dependent set of victim nodes. Furthermore, we devise a spatiotemporal gradient descent based scheme to generate real-valued adversarial traffic states under a perturbation constraint. Meanwhile, we theoretically demonstrate the worst performance bound of adversarial traffic forecasting attacks. Extensive experiments on two real-world datasets show that the proposed two-step framework achieves up to 67.8% performance degradation on various advanced spatiotemporal forecasting models. Remarkably, we also show that adversarial training with our proposed attacks can significantly improve the robustness of spatiotemporal traffic forecasting models.
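The two-step recipe summarized in this abstract (gradient-guided node saliency to pick a small victim set, then projected gradient descent restricted to those nodes under a perturbation budget) can be sketched generically as below. This is an illustrative PyTorch outline with a stand-in linear "forecasting model" and made-up tensor shapes, not the authors' implementation; in particular, the real method re-identifies the victim set iteratively and in a time-dependent way, whereas the sketch does a single saliency pass.

```python
import torch
import torch.nn.functional as F

def spatiotemporal_attack(model, x, y_target, k=10, eps=0.1, alpha=0.02, steps=10):
    """Illustrative two-step attack: pick the top-k salient nodes, then run PGD on them only.

    x        : clean traffic states, shape (batch, nodes, time, features)
    y_target : (estimated) future states used to compute the adversarial loss
    """
    # Step 1: gradient-guided node saliency on the clean input.
    x0 = x.clone().detach().requires_grad_(True)
    F.l1_loss(model(x0), y_target).backward()
    saliency = x0.grad.abs().sum(dim=(0, 2, 3))     # one score per node
    victim = torch.topk(saliency, k).indices        # victim node set

    mask = torch.zeros_like(x)
    mask[:, victim] = 1.0                           # perturbations restricted to victim nodes

    # Step 2: PGD on the victim nodes under an L-infinity budget eps.
    delta = torch.zeros_like(x)
    for _ in range(steps):
        adv = (x + delta * mask).detach().requires_grad_(True)
        F.l1_loss(model(adv), y_target).backward()
        delta = (delta + alpha * adv.grad.sign()).clamp(-eps, eps).detach()
    return x + delta * mask


if __name__ == "__main__":
    # Stand-in "forecasting model": a linear map from the flattened history to 1-step-ahead states.
    n_nodes, t_in, feats = 20, 12, 1
    model = torch.nn.Sequential(torch.nn.Flatten(1),
                                torch.nn.Linear(n_nodes * t_in * feats, n_nodes))
    x = torch.randn(4, n_nodes, t_in, feats)
    y = torch.randn(4, n_nodes)
    x_adv = spatiotemporal_attack(model, x, y, k=5)
    print("max perturbation:", (x_adv - x).abs().max().item())   # stays within eps
```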
Accept
The authors have made a significant effort to address reviewer concerns in their rebuttal. They are strongly encouraged to include these additional results and observations in either the main body of the paper or the supplement.
train
[ "M-MxVjuosoD", "_lSpVkYHyDd", "u9GhkcyuUY", "GxFdze3ideb", "FrJn64E5k5u", "L9vcLSPSDV3", "L_QEX6OyCxu", "aX2Q9Fnc2WQ", "8JED3JT3ga1", "wlI11gp_bv9", "xUMTdnYaFTD", "vL_4Mf6QpgA", "pCsdlz5zQG", "ApKtVd6oTyV", "fdEpIPjCWaY", "YctMpq3Nzgf" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer rQBx:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest,\n\nNeurIPS 2022 Conference Paper312 Authors", " Dear reviewer 3JHx:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest,\n\nNeurIPS 2022 Conference Paper312 Authors", " Dear reviewer 7aqp:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest,\n\nNeurIPS 2022 Conference Paper312 Authors", " I thank the authors for their clarification. My overall score has not changed (it is still a 7), but I am more confident of the paper's contribution so I will take their response into account for any future discussion.", " The full access to the system refers to the attackers can read the model including the whole model architecture, model parameters, gradients, model outputs, and the attackers can access the traffic forecasting input data as well as the corresponding labels in the testing stage. We agree with the reviewer that an attacker with read and write access can apply more dangerous attacks. We have revised the description of accessibility in the manuscript from Line 88 to 96.\n\nThe system refers to the entire ML pipeline of the traffic forecasting model after deployed, including the geo-distributed traffic data, the feature constructor, and the well trained forecasting model.\n\nThe input data refers to the data collected from geo-distributed sensors in the testing stage. For a time step t, the input data including the traffic states of all sensors from t-T+1 to t-1 time steps.\n\nFor the threat model, we have added a more precise definition in Section 2.2 and 2.3.\n\n_Three types attack._\n“\nNote the adversarial attack happened in the testing stage, and the attackers cannot manipulate the forecasting model or its output. On the benign testing set, the forecasting model can perform well. Based on the amount of information the attacker can access in the testing stage, the adversarial attack can be categorized into three classes. \n\nWhite-box attack. The attacker can fully access the target model, including the model architecture, the model parameters, gradients, model outputs, the input traffic states, and the corresponding labels. \n\n Grey-box attack. The attacker can partially access the system, including the target model and the input traffic states, but without the labels.\n\nBlack-box attack. The attacker can only access the input traffic states, query the outputs of the target model or leverage a surrogate model to craft the adversarial examples.\n“\n\n\n_Attack goal._\n” The attacker aims to craft adversarial traffic states to fool the spatiotemporal forecasting model to derive biased predictions. 
” Formally, given a spatiotemporal forecasting model $f_\\theta(\\cdot)$, the adversarial attack against spatiotemporal traffic forecasting is defined as in Equation 4. More detailed definition of the attack goal, please refer to line 106-111", " The rebuttal has addressed some of my concerns, but the most important one still remains.\n\nSpecifically, I invite the authors to clearly state what they imply by \"full access to the system\" and, specifically, \"the input data\". \n\nThis is because \"full access\" implies both \"read and write\" access -- and having \"write access\" also to the \"label\" means that an attacker can technically just change the output of the ML model after a prediction has been made. In other words: an attacker with such a 'power' would launch a different a much more dangerous attack than the one described in the paper. Finally, what is the \"system\"? Is it just the ML model, or the entire pipeline?\n\nWith regards to \"input data\", do the authors imply the training dataset or the input data sent after the model has been deployed?\n\nThese details are crucial to gauge the realistic value of the envisioned scenario, and I endorse the authors to be extremely precise in defining the considered threat model. In fact, I strongly oppose using the \"white/black-box\" terminology to define a threat model, as such terms only focus on the knowledge of the attacker and can be misleading.", " We highly appreciate your high-quality review and valuable suggestions. We are pleased that the reviewer recognized our contributions to the adversarial attack on the traffic forecasting domain. \n\n\n\n***\n> [Q1] “Unclear definitions of White/Black-box attacks”\n\n[Response] \nWe have presented the detailed definition of white-box, grey-box, and Black-box attacks in section 2.2.\n“Based on the amount of information the attacker can access, the adversarial attack can\nbe categorized into three classes. (1) White-box attack. The attacker has full access to the system, including parameters and gradients of the target model, the input data, and the label. (2) Grey-box attack. The attacker can partially access the system, including the target model and training input data, except the labels. (3) Black-box attack. The attacker can only access training input data but cannot access the target model and labels.”\n***\n\n> [Q2] ”Realistic Feasibility.”\n\n[Response] \nIn real-world traffic systems, the traffic data is generated from geo-distributed data sources (e.g., sensors). Perturbing different geo-located data requires hacking different sensors, which may be expensive. Therefore, we consider the attack budget as a critical constraint in adversarial attacks on traffic forecasting models. For the attack strategy, our method achieves (15.80%, 15.39%) global performance improvement and (23.35%, 17.19%) local performance improvement on the PeMS-BAY dataset compared to the random attack baseline. By carefully selecting victim nodes, the attacker can achieve more effective attack performance with less attack budget. 
We have clarified the above concerns in the Appendix F .1.\n\n***\n> [Q3] “The following statement in the Introduction requires additional back-up: “Machine learned spatiotemporal forecasting models have been widely adopted in modern Intelligent Transportation Systems (ITS) to provide accurate and timely prediction of traffic dynamics, e.g., traffic flow [1], traffic speed [2], and the estimated time of arrival [3].” The problem is that [1,2,3] are research papers, and cannot be used to substantiate the claim that ML-related proposals are “widely adopted in modern ITS”. At best, they are well-studied in research.”\n\n[Response] \nThanks for your suggestions. We introduced more spatiotemporal forecasting models deployed in the industry, including Google Maps [2] and Baidu Maps [1] in Section 1. As reported by the company development team, the described models have been deployed in the production environment. Besides, Google Maps also applied the graph-based model to predict the estimated arrival times (DeepMind Blog: https://www.deepmind.com/blog/traffic-prediction-with-advanced-graph-neural-networks).\n\n\n***\n> [Q4] ”Training/testing time? The paper reports that “All experiments are implemented with PyTorch and performed on a Linux server with 4 RTX 3090 GPUs.”. I am genuinely curious of how long it took to train the corresponding models on such hardware. Were the GPUs used in parallel, or did the experiments consider a single GPU “per run”?”\n\n[Response] \nFor the training, each epoch takes about 27 minutes. For testing, it will take about 5 minutes to test an attack method. The experiments are conducted on a single GPU.\n***\n\nReferences:\n\n[1] Liao, Binbing, et al. \"Deep sequence learning with auxiliary information for traffic prediction.\" Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018.\n\n[2] Derrow-Pinion, Austin, et al. \"Eta prediction with graph neural networks in google maps.\" Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 2021.\n\n***\n", " We highly appreciate your positive opinions on the methodology and the insightful comments.\n\n\n\n***\n> [W1] ”When the attacks are extended to black-box setting, they employ a surrogate model, which is trained via querying the threat models. This will lead to a large number of queris, which reduces the efficiency. The authors should discuss this point.”\n\n[Response]\nThanks for the insightful comment. In practice, the gradient of the models might be estimated using a surrogate model (transfer-based attack) [1] or zeroth-order optimization approaches (query-based attack) [2]. It is true that it may require a massive number of queries in the query-based attack. However, in the training stage, we train the surrogate model in a similar way with the threat model without querying the threat models in the transfer-based attack setting. Furthermore, in the inference stage, we use a surrogate model to estimate the gradient of the threat models to generate the adversarial examples. We left the query-based attack overhead under the black-box setting as future work.\n***\n\nReferences:\n\n[1] \tYanpei Liu, et al. “Delving into Transferable Adversarial Examples and Black-box Attacks”. In International Conference on Learning Representations. 2017.\n\n[2] Cheng, Shuyu, et al. \"Improving black-box adversarial attacks with a transfer-based prior.\" Advances in neural information processing systems. 
2019.\n***", " We appreciate the reviewer recognized that our problem is well-motivated and the paper is well-written. We also thank the reviewer’s detailed questions. Please find a point-to-point response to the reviewer’s comments below.\n\n\n***\n> [W1] ”This paper lacks originality in that it proposes no novel or new concepts but just simply modify the existing adversarial attack methods little to deal with spatiotemporal forecasting models. The only novel part of the paper is the newly proposed method of TDNS, but it is still a mere extension of PGD algorithm and taking top k nodes as the victim nodes. Other than the lack of novelty, the paper was nicely written and easy to follow. I personally do not think the methods proposed in this paper are either groundbreaking or significant for future research.\n”\n\n[Response] Spatiotemporal traffic forecasting models have become a cornerstone of Intelligent Transportation Systems (ITS) and modern online maps, e.g., Google Maps [1] and Baidu Maps [2]. However, existing methods assume a reliable and unbiased forecasting environment. As also supported by Reviewer EYep, in this paper, we considered a completely different but still realistic setting compared with adversarial attacks, and proposed a generic framework of adversarial attacks against such spatiotemporal systems. Empirically analysis successfully proved the vulnerability of existing traffic forecasting models.\n\nMore in detail, the proposed framework is not a simple extension of PGD, but a generic framework that other gradient-based methods such as MIM [3] , DIM [4] can also be integrated. In this paper, we use PGD to demonstrate the effectiveness of our framework. Furthermore, how identifying the time-dependent Top-k victim node set is a non-trivial problem and rarely considered by previous models. In this paper, we proposed an iterative gradient-guided node saliency method that incorporates both spatial and temporal information to identify the time-dependent set of victim nodes dynamically.\n***\n> [W2] \"I find there is no special attack designed for the time-series graph data. If we consider every time step as an observation, the attack design is very similar to PGD. And selecting the most sensitive node is similar to other data types.”\n\n[Response] \nWe agree with the reviewer that one major drawback of most adversarial attack methods such as PGD, MIM, and DIM are static. In this work, we introduced the iterative gradient-guided node saliency method to identify the set of victim nodes in a time-dependent way, which implicitly incorporates the temporal dynamics in networked traffic data.\n\nMoreover, the adversarial attack tasks on other data types such as images are mostly classification tasks, where choosing the sensitive pixel has a clear classification function and target. However, traffic forecasting is naturally a regression problem, which requires a newly designed function (adversarial loss function) and target (adversarial nodes ).\n***\n> [W3.1] \"The proposed method is only applicable for traffic forecasting models.\"\n\n[Response] \nOur framework can be generalized to attack other spatiotemporal tasks, such as air quality prediction [5] and weather prediction [6]. Please refer to the response to Reviewer 7aqp [W1].\n\n***\nReferences:\n\n[1] Derrow-Pinion, Austin, et al. \"Eta prediction with graph neural networks in google maps.\" Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 2021.\n\n[2] Liao, Binbing, et al. 
\"Deep sequence learning with auxiliary information for traffic prediction.\" Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018.\n\n\n\n[3] Dong, Yinpeng, et al. \"Boosting adversarial attacks with momentum.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.\n\n[4] Xie, Cihang, et al. \"Improving transferability of adversarial examples with input diversity.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.\n\n[5] Cheng, Weiyu, et al. \"A neural attention model for urban air quality inference: Learning the weights of monitoring stations.\" Proceedings of the AAAI Conference on Artificial Intelligence. 2018. \n\n[6] Li, Yuanpeng, et al. \"Weather forecasting using ensemble of spatial-temporal attention network and multi-layer perceptron.\" Asia-Pacific Journal of Atmospheric Sciences. 2021.\n", " ***\n\n\n> [W3.2] \"I cannot find the explicit expression of L_{adv}.\"\n\n[Response] Thanks for your detailed comments to help us improve the quality of the paper. The L_{adv} is calculated similarly to the loss function L (such as MAE and RMSE ). Specifically, the input will be updated by gradient-based attack methods [7] after multi-step updates. The final adversarial loss is the distance between the predictions of adversarial examples $f_{\\theta }(\\mathbfcal{\\tilde{H}}\\_{t-T+1:t})$ and the target $\\mathbf{\\tilde{Y}}\\_{t+1:t+\\tau}$. Besides, we found that the subscription ‘adv’, which is used to emphasize that this is the adversarial loss, is indeed not coordinated with $\\mathcal{L}$ in the previous sections, so we have updated the paper accordingly by changing $\\mathcal{L}_{adv}$ to $\\mathcal{L}$. The explicit expression is now discussed from line 133 to line 140.\n\n***\n\n> [W4] “The deviation of \\phi from the pre-trained forecasting model was not well described. Suppose \\phi is exactly the same with \\theta^{\\asterisk}. In that case, the derivative of the loss w.r.t its input is just the derivative of random noise (\\delta) at the output, making the prediction far away from its first prediction after some iterations, which is not an effective attack approach.”\n\n[Response] Thanks for your detailed question. The clean examples will be added with adversarial perturbation with multi-steps. In addition, random noise (\\delta) is used to increase the diversity of the attack direction. So its input is always the derivative of adversarial examples at the output. Besides, the explicit process is discussed from line 139 to line 143. \n\n\n***\n\n> [W5] & [W7] & [Q2] & [L4] “There is no exploration of the edge perturbation, or at least compared with the method that perturbation the edge of graph data.”, “ I wonder why they kept the topology (adjacency matrix, connectivity of the road network) immutable,” and “Why do we leave the topology of G_{t} immutable, when its adjacency relationship can be learned by the model parameter, which is mutable? (line 102)\n”\n\n[Response] Thanks for your insightful questions. The reason we don’t consider edge perturbation in this work is two-fold. \n\nFirst, the connectivity of the traffic network in the physical world is usually considered static, while the time-varying traffic dynamic features are collected from geo-distributed sensors. For generating adversarial examples, we argue modifying the traffic network topology in the physical world is difficult and easy to be detected. 
It is much more practical and meaningful to perturb time-dependent node features.\n\nSecond, for state-of-the-art spatiotemporal forecasting models such as Graph WaveNet [9], the adjacency matrix of the graph is regarded as a part of the model parameters and learned by the model in an end-to-end way [9]. In such a scenario, based on the definition of adversarial attack, the graph topology is a part of the model parameters and is fixed at the inference stage [10]. Attackers cannot perturb the model parameters; they can only craft adversarial examples to fool the model [8].\n\n\n\n\n\n***\n> [W6] “Estimating the effectiveness of the attack method on more target models should be brought to the main table to show its generality.”\n\n[Response] Thanks for your suggestion. Due to the page limit, we have reported the experimental results on other target models in Appendix F.1. We have added an explanation in lines 514 to 517.\n***\n\n> [Q1] “Why do we only keep the negative saliency score (line 143)? What is different if we take the absolute values?”\n\n[Response] \nThe saliency score in line 143 is defined as non-negative, where a larger score indicates a more salient node. \n***\nReferences:\n\n[7] Madry, Aleksander, et al. \"Towards Deep Learning Models Resistant to Adversarial Attacks.\" International Conference on Learning Representations. 2018.\n\n[8] Xu, Han, et al. \"Adversarial attacks and defenses in images, graphs and text: A review.\" International Journal of Automation and Computing. 2020.\n\n[9] Wu, Zonghan, et al. \"Graph WaveNet for Deep Spatial-Temporal Graph Modeling.\" International Joint Conference on Artificial Intelligence. 2019.\n\n[10] Yu, Bing, et al. \"Spatio-temporal graph convolutional networks: a deep learning framework for traffic forecasting.\" Proceedings of the 27th International Joint Conference on Artificial Intelligence. 2018.\n", " ***\n> [Q3] “I am wondering if there is any difference in performance depending on the time interval (e.g., 5 minutes, 10 minutes, 1 hour, … ), can you explain more?”\n\n[Response] Thanks for the insightful question. We have conducted more experiments at different time intervals, ranging from 5 minutes to 60 minutes. The performance of the traffic forecasting model under different time intervals in terms of G-MAE is reported below. Overall, as the time interval increases, both the forecasting performance and the adversarial attack performance decrease. \n\nFor example, the G-MAE under attack increases from 3.9458 to 6.1329 as the time interval grows from 5 minutes to 60 minutes, while the attack-induced performance degradation drops from 75.93% to 67.80%. One possible reason is that as the time interval increases, the forecasting error of the spatiotemporal model increases, making it more challenging for adversarial attack methods to estimate the target label and generate effective adversarial examples.\n\n\n| | 5 minutes | 10 minutes | 15 minutes | 30 minutes | 45 minutes | 60 minutes |\n| :----: | :----: | :----: | :----: | :----: | :----: | :----: |\n| non-attack | 0.9496 | 1.1367 | 1.2747 | 1.6154 | 1.8872 | 1.9750 |\n| STPGD-TDNS (ours) | 3.9458 | 4.2924 | 3.6028 | 4.6629 | 5.2931 | 6.1329 |\n| performance degradation | 75.93 % | 73.46 % | 64.62 % | 65.36 % | 64.34 % | 67.80 % |\n\nMoreover, the experimental results under the 5-minute interval compared to other baselines are shown as follows. 
The results on more time intervals are reported in Appendix F.\n\n| Attack methods | G-MAE | L-MAE | G-RMSE | L-RMSE |\n| :----: | :----: | :----: | :----: | :----: |\n| non-attack | 0.9496 | - | 1.7694 | - |\n| PGD-Random | 3.7926 | 3.0507 | 10.1258 | 9.9924 |\n| PGD-PR | 3.8226 | 3.0885 | 10.1880 | 10.0526 |\n| PGD-Centrality | 3.7901 | 3.0586 | 10.1208 | 9.9950 |\n| PGD-Degree | 3.8302 | 3.0839 | 10.1733 | 10.0395 |\n| STPGD-TDNS | 3.9458 | 3.2351 | 10.7429 | 10.6116 |\n\n***\n\n> [L1] “There is no significant difference between AT and AT-TDNS, which is less than 0.03 G-MAE score. Multiple runs to produce mean+-std should be examined. (Table 4)”\n\n[Response] In traditional adversarial training (AT), all the features (nodes in our setting) are perturbed with adversarial perturbations. However, adding adversarial perturbations to all features leads to severe overfitting. To solve this problem, we only choose a few nodes as the target nodes (to which adversarial perturbations are added), selected by our method TDNS. The experiments in Section 4.6 demonstrate that using our strategy to choose nodes achieves the best performance compared to randomly selecting nodes.\n\nAccording to the reviewer's suggestion, we have re-run the experiment 10 times and calculated the mean and std, as reported below. The results demonstrate that AT-TDNS is more stable.\n\n\n| Attack methods | Non-attack | PGD-Random | PGD-PR | PGD-Centrality | PGD-Degree |\n| :----: | :----: | :----: | :----: | :----: | :----: |\n| Non-defense | 2.0288 | 6.1477 | 6.1586 | 6.1723 | 6.1507 |\n| AT | 2.1156 | 2.5436 (0.0249) | 2.5539 (0.0375) | 2.5660 (0.0281) | 2.5394 (0.0279) |\n| Mixup | 2.3090 | 2.7482 (0.0126) | 2.7573 (0.0241) | 2.7501 (0.0088) | 2.7788 (0.0234) |\n| AT-TDNS | 2.0935 | 2.4695 (0.0036) | 2.4463 (0.0075) | 2.4549 (0.0023) | 2.4474 (0.0069) |\n\n\n***\n> [L2] ”Minor suggestion: Legends of Figure 1 b-c should be brought outside, as they interfere with the contents.”\n\n[Response] \nThanks for the suggestion. We have reorganized the image in the manuscript accordingly.\n***\n> [L3] ”I don’t think the white-box attack is realistic, since the target model is a time-series model that predicts the upcoming states”\n\n[Response] \nWe strongly agree with the reviewer! Since the ground truth (i.e., future traffic states) under the spatiotemporal traffic forecasting setting is unavailable at run-time, the practical adversarial spatiotemporal attack primarily falls into the grey-box attack setting. However, little research has discussed adversarial attacks on traffic forecasting models. To this end, we began with the white-box attack setting and discussed how to apply adversarial attacks under the grey-box and black-box settings in Section 3.2. Moreover, investigating white-box attacks is still beneficial to help us understand how adversarial attacks work and can help improve the robustness of spatiotemporal traffic forecasting models (e.g., applying adversarial training).\n\n\n\n\n***", " We are encouraged that the reviewer found our motivation and idea to be interesting and novel. Thanks for your positive opinions on the methodology and the insightful comments.\n\n\n***\n> [W1] “I don't see clear weakness of the paper, although I wonder the generalizability of the method to other spatiotemporal-based applications beyond traffic forecasting models.”\n\n\n[Response] Thanks for the insightful comments. 
Indeed, our framework can be generalized to attack other spatiotemporal tasks based on geo-distributed data sources, such as air quality prediction [1] and weather prediction [2]. Similar to attacking traffic forecasting systems, we can fool the whole system by adding time-dependent adversarial perturbations to a few geospatially distributed monitoring stations. We will try to extend our framework to more spatiotemporal applications in the future.\n\nReferences:\n\n[1] Cheng, Weiyu, et al. \"A neural attention model for urban air quality inference: Learning the weights of monitoring stations.\" Proceedings of the AAAI Conference on Artificial Intelligence. 2018.\n\n[2] Li, Yuanpeng, et al. \"Weather forecasting using ensemble of spatial-temporal attention network and multi-layer perceptron.\" Asia-Pacific Journal of Atmospheric Sciences. 2021.\n***\n", " This paper presents a novel adversarial attack for spatiotemporal traffic forecasting models. Moreover, theoretical analyses are conducted to demonstrate the worst-case performance bound of the attack. Comprehensive experiments on real-world datasets show the effectiveness of the attack, and how the robustness of the models is enhanced when combined with the corresponding adversarial training. Strengths:\n- The paper is well-written and easy to understand.\n- Introducing adversarial robustness to spatiotemporal models is interesting and novel.\n- The theoretical analysis backs up the proposed attack well.\n- The experiments are comprehensive.\n\nWeakness:\n- I don't see clear weakness of the paper, although I wonder the generalizability of the method to other spatiotemporal-based applications beyond traffic forecasting models. See weakness. Yes for limitations and no for potential negative societal impact.", " The paper proposes a gradient-based adversarial attack method on traffic forecasting models. They propose a two-stage framework for practical adversarial spatiotemporal attacks, demonstrating the resulting performance degradation. The proposed method can selectively attack the most sensitive nodes in the graph based on the magnitude of the gradient to generate more effective attack results. Strengths:\n- The problem is well motivated and described, and the paper is well written. \n- Extensive experiments are conducted to demonstrate the effectiveness of the attack. \n\n\n\nWeaknesses:\n\n- This paper lacks originality in that it proposes no novel or new concepts but just simply modify the existing adversarial attack methods little to deal with spatiotemporal forecasting models. The only novel part of the paper is the newly proposed method of TDNS, but it is still a mere extension of PGD algorithm and taking top k nodes as the victim nodes. Other than the lack of novelty, the paper was nicely written and easy to follow. I personally do not think the methods proposed in this paper are either groundbreaking or significant for future research.\n\n- I find there is no special attack designed for the time-series graph data. If we consider every time step as an observation, the attack design is very similar to PGD. And selecting the most sensitive node is similar to other data types.\n\n- The proposed method is only applicable for traffic forecasting models; And, I cannot find the explicit expression of L_{adv}.\n\n- The deviation of \\phi from the pre-trained forecasting model was not well described. Suppose \\phi is exactly the same with \\theta^{\\asterisk}. 
In that case, the derivative of the loss w.r.t its input is just the derivative of random noise (\\delta) at the output, making the prediction far away from its first prediction after some iterations, which is not an effective attack approach.\n\n- There is no exploration of the edge perturbation, or at least compared with the method that perturbation the edge of graph data.\n\n- Estimating the effectiveness of the attack method on more target models should be brought to the main table to show its generality.\n\n- I wonder why they kept the topology (adjacency matrix, connectivity of the road network) immutable, when perturbing the test data to generate adversarial samples. I acknowledge the point made in the paper that since a graph-based network diffuses the node features across the network, making perturbations to a subset of nodes will be sufficient for successful adversarial attack. As far as I know, graph construction is a major part of the GNN-based spatiotemporal forecasting. So the authors should include the generation of adversarial samples perturbing the original adjacency relation, at least provide an acceptable reason for dropping out that approach.\n\n\n - Why do we only keep the negative saliency score (line 143)? What is different if we take the absolute values?\n\n- Why do we leave the topology of G_{t} immutable, when its adjacency relationship can be learned by the model parameter, which is mutable? (line 102)\n\n- I am wondering if there is any difference in performance depending on the time interval (e.g., 5 minutes, 10 minutes, 1 hour, … ), can you explain more?\n - There is no significant difference between AT and AT-TDNS, which is less than 0.03 G-MAE score. Multiple runs to produce mean+-std should be examined. (Table 4)\n\n- Minor suggestion: Legends of Figure 1 b-c should be brought outside, as they interfere with the contents. \n\n- I don’t think the white-box attack is realistic, since the target model is a time-series model that predicts the upcoming states.\n\n- The authors didn’t explicitly address the limitations of the work. As said previously, the authors could include connectivity-based adversarial samples to improve the work. Also, the authors are encouraged to propose more novel ideas for acceptance for NeurIPS.\n", " This paper explores the vulnerability of spatiotemporal traffic forecasting models. They proposed a practical adversarial spatiotemporal attack framework. Experiments are conducted to verify the effectiveness. \nStrengths:\n1. The first attemp to attack spatiotemporal traffic forecasting models.\n2. The code is released, and thus readers can reproduce the results. \n3. The demonstration is given from empirical and theoretical view. \n\nWeakness:\n1. When the attacks are extended to black-box setting, they employ a surrogate model, which is trained via querying the threat models. This will lead to a large number of queris, which reduces the efficiency. The authors should discuss this point. please see the weakness please see the weakness", " The paper tackles the problem of adversarial attacks against “traffic forecasting models”, i.e., models that must predict the conditions of “physical traffic” (e.g., cars) in real world settings. The main contribution is a generic framework of adversarial attacks against such systems, which can be declined to cover both white- and black-box adversarial settings. A comprehensive experimental evaluation on real world data validates the proposal. 
The paper is also enriched by an ablation study, and by strong theoretical analyses that support the overall findings **Originality:** high. I agree that there is limited work done in this specific domain, and the strong analysis and comprehensive evaluation are novel contributions.\n\n**Quality:** high. Despite lacking in some minor details, the arguments are well-supported.\n\n**Clarity:** average. The English text is good enough to allow a reader to understand the paper, but not exceptional. Presentation-wise, the paper is also appreciable.\n\n**Significance:** high. Albeit the findings are not counterintuitive (ultimately, the paper shows that “yet another application of ML can be thwarted via adversarial examples”), the considerations of a under-investigated deployment scenario of ML is commendable, and future work can greatly benefit from this paper.\n I thank the authors for their paper, of which I particularly appreciated the research direction: instead of focusing on the (overused) attacks against image classifiers, the paper considers a completely different (but still realistic) setting. I have some comments that the authors can address in a rebuttal, which are reported below.\n\n*Unclear definitions of White/Black-box attacks.* In Section 3.2 the authors describe the potential “variations” of the proposed attack by using the well-known “white/black-box” terminology. However, it is not clear what are the actual assumptions of the considered attacker. For example, the white-box scenario is described as follows: “White-box attack. Since the adversaries can fully access the data and labels under the white-box setting, we directly use the real ground truth traffic states to guide the generation of adversarial traffic states.”. Does this mean that the “white-box” attack assumes an attacker with access to the training data and all labels? Or does this also imply that the attacker knows everything about the target model (i.e., learned parameters, weights and architecture)? Similarly, for black-box attacks the paper states that: “The most restrictive black-box setting assumes limited accessibility to the target model and labels. Therefore, we first employ a surrogate model, which can be learned from the training data or by querying the traffic forecasting service [15, 16]. Then we generate adversarial traffic states based on the surrogate model to attack the targeted traffic forecasting model.” The authors should elucidate if they assume an attacker who can “query” the target model (and whether such queries are constrained or not) or who has no access whatsoever to the targeted model, but only to a subset of the training data (in which case, it is more like a “no-box” attack [A]). For completeness, I have also checked the appendix and there is no mentioning of such details.\n\n*Realistic Feasibility.* I would be delighted if the authors could integrate (perhaps in an appendix) an analysis of the realistic feasibility of the proposed attacks. For example, consider the following statement in the introduction: “How to identify the subset of salient victim nodes with a limited attack budget to maximize the attack effect is the first challenge.” Why would a realistic attacker do that? If, as shown in Figure 1, just by injecting perturbations on “randomly selected” nodes results in a successful attack, then why going further? Would an attacker really opt for such a strategy? 
What is the cost and potential benefit of such a strategy?\nIndeed, I stress that real attacker operate with a cost/benefit mindset. Considered that the paper tackles a new application domain, it would be impactful to show that some attacks may require little preparation, but can lead to significant performance degradation. Of course, it is still valuable to analyze “worst-case” scenarios, but it’s important to differentiate from attacks that are more likely to occur (e.g., because they are cheaper to stage) from those who are less likely to appear (e.g., because they require a huge resource investment, which can potentially be superior than what the attacker stands to gain).\n\nSome additional issues:\n\n•\tThe following statement in the Introduction requires additional back-up: “Machine learned spatiotemporal forecasting models have been widely adopted in modern Intelligent Transportation Systems (ITS) to provide accurate and timely prediction of traffic dynamics, e.g., traffic flow [1], traffic speed [2], and the estimated time of arrival [3].” The problem is that [1,2,3] are research papers, and cannot be used to substantiate the claim that ML-related proposals are “widely adopted in modern ITS”. At best, they are well-studied in research.\n\n•\t*Training/testing time?* The paper reports that “All experiments are implemented with PyTorch and performed on a Linux server with 4 RTX 3090 GPUs.”. I am genuinely curious of how long it took to train the corresponding models on such hardware. Were the GPUs used in parallel, or did the experiments consider a single GPU “per run”? \n\nEXTERNAL REFERENCES\n\n[A]: Li, Qizhang, Yiwen Guo, and Hao Chen. \"Practical no-box adversarial attacks against dnns.\" Advances in Neural Information Processing Systems 33 (2020): 12849-12860.\n I did not see any limitation mentioned in the main paper, but I **do** have one comment on this matter: the suitability of the considered datasets. I acknowledge that finding proper data is difficult in this domain, and I also acknowledge that the theoretical arguments are well-founded. However, the two considered datasets are either 5 or 10 years old. Such \"old-age\" can (slightly) impair the real world implications of the paper findings." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 4 ]
[ "fdEpIPjCWaY", "ApKtVd6oTyV", "pCsdlz5zQG", "FrJn64E5k5u", "L9vcLSPSDV3", "L_QEX6OyCxu", "YctMpq3Nzgf", "fdEpIPjCWaY", "ApKtVd6oTyV", "ApKtVd6oTyV", "ApKtVd6oTyV", "pCsdlz5zQG", "nips_2022_lTKXh991Ayv", "nips_2022_lTKXh991Ayv", "nips_2022_lTKXh991Ayv", "nips_2022_lTKXh991Ayv" ]
nips_2022_Pyd6Rh9r1OT
Fast Vision Transformers with HiLo Attention
Vision Transformers (ViTs) have triggered the most recent and significant breakthroughs in computer vision. Their efficient designs are mostly guided by the indirect metric of computational complexity, i.e., FLOPs, which however has a clear gap with the direct metric such as throughput. Thus, we propose to use the direct speed evaluation on the target platform as the design principle for efficient ViTs. Particularly, we introduce LITv2, a simple and effective ViT which performs favourably against the existing state-of-the-art methods across a spectrum of different model sizes with faster speed. At the core of LITv2 is a novel self-attention mechanism, which we dub HiLo. HiLo is inspired by the insight that high frequencies in an image capture local fine details and low frequencies focus on global structures, whereas a multi-head self-attention layer neglects the characteristic of different frequencies. Therefore, we propose to disentangle the high/low frequency patterns in an attention layer by separating the heads into two groups, where one group encodes high frequencies via self-attention within each local window, and another group performs the attention to model the global relationship between the average-pooled low-frequency keys from each window and each query position in the input feature map. Benefiting from the efficient design for both groups, we show that HiLo is superior to the existing attention mechanisms by comprehensively benchmarking FLOPs, speed and memory consumption on GPUs and CPUs. For example, HiLo is 1.4× faster than spatial reduction attention and 1.6× faster than local window attention on CPUs. Powered by HiLo, LITv2 serves as a strong backbone for mainstream vision tasks including image classification, dense detection and segmentation. Code is available at https://github.com/ziplab/LITv2.
Accept
Initially, this paper received diverging reviews. The authors did a good job addressing the reviewers' concerns, by adding additional comparisons to more SOTA ViT backbones and benchmarking throughput on a variety of GPU platforms. The AC agrees with the reviewer that the concerns have been sufficiently addressed and recommends acceptance.
train
[ "xqsZ-aWVZK", "zlAsVLwaqB4", "eVKnRobml6E", "d6nmErtW3nq", "5Fg8D_Qw81M", "AdK3j7luoXf", "gzNUNuC10cK", "hQ01F0k3wu-", "893K1zwx0ds", "oDdvL-jUdWE", "qAsfQuasBpi", "sUA9DglLox-", "b-NSXlEZCBi", "wZipyx_eCfk", "HJG86cW29Nj", "ceDxNLbuWa3" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your valuable feedback and suggestions! We are glad to address your questions and appreciate your constructive reviews for improving our work. ", " Dear authors,\n\nThank you for your comments, added experiments (table IV) and explanations. You have addressed my main concerns as I wrote in detail in the edit of the official review. \n\nBest regards, \n\nReviewer xTew", " Thanks for your very detailed reviews and for appreciating our work! Please feel free to ask any further questions.", " Thank the authors for the response. \n\nGiven the clear novelty of the HiLo attention and the strong results of the proposed architecture, I initially voted to accept this paper. After the rebuttal, I think the authors have addressed my main concerns. The speed-accuracy trade-off of window size on COCO object detection is reasonable (not significantly hurting the performance under different window sizes). The explanation of the difference in RPE between Swin and LITv1 is also evident. \n\nI have also carefully read the comments from other reviewers and the corresponding author responses: \n\n- Reviewer zE7q mainly questioned the importance of high-frequency attention in HiLo, for which the authors have provided additional results and shown more performance gain of Hi-Fi on ADE20K. The results on both ImageNet-1K and ADE20K have shown a clear advantage of HiLo over other attention mechanisms in ViTs.\n\n- Reviewer xTew mainly criticized the novelty and the additional comparisons with more ViTs, for which I think the main novelty of this paper is a new efficient co-design of local and global self-attention from a frequency perspective (disentangle frequency into low/high components and then customize redundancy draining respectively in a self-attention layer), as well as a fast and accurate SOTA ViT backbone for mainstream CV tasks. The research challenge of aligning the theoretical FLOPs with real speed is fundamental. In sharp contrast, existing attention mechanisms either separately process the local and global self-attention in ViTs or are much slower on GPUs than HiLo. For example, Focal and Quadtree have a clear gap between theoretical complexity and the direct speed metric (i.e. throughput). The advantage of the proposed architecture over other existing ViTs is more pronounced with additional comparisons (e.g. XCiT, CaiT). \n\n- Reviewer Rjxy is mainly concerned about the relation between high/low frequency and local/global features in HiLo as well as the visualization of Figure 5. From my point of view, the provided discussion is helpful and the explanation regarding Figure 5 is also clear.\n\nOverall, I think this paper is well-written, with a strong novelty and promising experimental results, solving a critical research gap. The revision has also reflected the changes. Therefore, I decided to keep my original rating.", " Dear Reviewer Rjxy,\n\nWe sincerely thank you again for your great efforts in reviewing this paper. We have addressed your main concerns regarding more discussions on the connection between high/low frequency and local/global features in the proposed HiLo, as well as the further explanation of Figure 5 and the importance of high-frequency attention on ADE20K. Please don’t hesitate to let us know if there are still some concerns/questions.\n\nBest regards,\n\nAuthors of #310", " Dear Reviewer xTew,\n\nWe sincerely thank you again for your great efforts in reviewing this paper. 
We have addressed your main concerns regarding our technical novelty and more comparisons with recent ViTs. Please don’t hesitate to let us know if there are still some concerns/questions.\n\nBest regards,\n\nAuthors of #310", " \n**Q3. The throughput of a certain kind of GPU (RTX 3090) doesn't lead to a strong conclusion.**\n\nThanks for your advice. Following the same settings of Table 1, we compare the inference speed with more models and on more types of GPUs. The table below reports the results. It shows that LITv2-S still achieves consistent faster throughput (images/s) than many ViTs on NVIDIA A100, Tesla V100, RTX 6000, and RTX 3090. It is also worth noting that under similar performance (~82.0%), LITv2-S is $2.1\\times$ faster than PVTv2-B2, $1.7\\times$ faster than XCiT-S12 and ConvNext-Ti, and $3.5\\times$ faster than Focal-Tiny on V100, which is another common GPU version for speed test in previous works [30,29,41] ([25,26,36] in the initial submission). We have added this comparison into the revised supplementary file.\n\n| Model | Params (M) | FLOPs (G) | A100 | V100 | RTX 6000 | RTX 3090 | Top-1 (%) |\n| ------------- | ---------- | --------- | --------- | --------- | -------- | --------- | --------- |\n| ResNet-50 | 26 | 4.1 | 1,424 | 1,123 | 877 | 1,279 | 80.4 |\n| PVT-S | 25 | 3.8 | 1,460 | 798 | 548 | 1,007 | 79.8 |\n| Twins-PCPVT-S | 24 | 3.8 | 1,455 | 792 | 529 | 998 | 81.2 |\n| Swin-Ti | 28 | 4.5 | 1,564 | 1,039 | 710 | 961 | 81.3 |\n| TNT-S | 24 | 5.2 | 802 | 431 | 298 | 534 | 81.3 |\n| CvT-13 | 20 | 4.5 | 1,595 | 716 | 379 | 947 | 81.6 |\n| CoAtNet-0 | 25 | 4.2 | 1,538 | 962 | 643 | 1,151 | 81.6 |\n| CaiT-XS24 | 27 | 5.4 | 991 | 484 | 299 | 623 | 81.8 |\n| PVTv2-B2 | 25 | 4.0 | 1,175 | 670 | 451 | 854 | 82.0 |\n| XCiT-S12 | 26 | 4.8 | 1,727 | 761 | 504 | 1,068 | 82.0 |\n| ConvNext-Ti | 28 | 4.5 | 1,654 | 762 | 571 | 1,079 | 82.1 |\n| Focal-Tiny | 29 | 4.9 | 471 | 372 | 261 | 384 | **82.2** |\n| LITv2-S | 28 | **3.7** | **1,874** | **1,304** | **928** | **1,471** | 82.0 |\n\n\n**Q5. Compared with CSwin.**\n\nCSwin Transformer proposes a Cross-Shaped Window Self-Attention (CSwin), which divides the feature maps by horizontal and vertical stripes and uses self-attention to capture local dependencies. Compared to it, the proposed HiLo-based LITv2 is much easier to train and more scalable in terms of both the throughput and the training time memory consumption on GPUs.\n\nFor example, the table below compares the training time memory consumption (MB) under different input image resolutions with a total batch size of 64 on one RTX 3090. \"OOM\" means out of memory.\n\n| Model | Params (M) | FLOPs (G) | 224x224 | 256x256 | 288x288 | 320*320 | 352x352 | 384x384 |\n| ------- | ---------- | --------- | --------- | --------- | --------- | ---------- | --------- | ---------- |\n| CSwin-T | 23 | 4.3 | 9,224 | 13,207 | 16,375 | OOM | OOM | OOM |\n| LITv2-S | 28 | 3.7 | **5,211** | **6,671** | **8,390** | **10,350** | **12564** | **15,045** |\n\nThe table below compares the throughput (images/s) under different input image resolutions with a total batch size of 64 on one RTX 3090. 
Results are averaged over 30 runs.\n\n| Model | Params (M) | FLOPs (G) | 224x224 | 256x256 | 288x288 | 320*320 | 352x352 | 384x384 |\n| ------- | ---------- | --------- | --------- | --------- | ------- | ------- | ------- | ------- |\n| CSwin-T | 23 | 4.3 | 814 | 591 | 481 | 376 | 297 | 262 |\n| LITv2-S | 28 | 3.7 | **1,471** | **1,128** | **895** | **718** | **591** | **493** |\n\nAlso note that CSwin adopts a different training strategy compared with common practice [45,29,11] ([40,25,9] in the initial submission), e.g. more training epochs (310 v.s. 300 epochs) and larger batch size training (2,048 v.s. 1,024), as shown in their official released code. Therefore, directly comparing the model performance with CSwin can be inappropriate.\n", " \nWe thank you for your valuable feedback and address your questions as follows.\n\n**Q1. Connection between low/high frequency and local/global features. Fig. 5 is confusing.**\n\nEssentially, the low-frequency attention branch (Lo-Fi) is to capture the global dependencies of the input (image/features), which does not need a high-resolution feature map but requires global attention. On the other hand, the high-frequency attention branch (Hi-Fi) is to capture the fine detailed local dependency, which requires a high-resolution feature map but can be done via local attention. Our idea at a high level is generally similar to the low/high frequency concepts in the classic digital image processing, where low-frequency components that captures global structure tend to have long-range correlations while high-frequency components that captures local sharp changes such as edges tend to have more short-range correlations. We have added more discussions in Section 4.1 in the revision.\n\nFig. 5 is obtained by simply applying Fast Fourier Transform (FFT) on both the Hi-Fi branch output and the Lo-Fi branch output and visualising the magnitudes of their frequency components. As stated in Line 314 (Line 307 of the initial submission), from Fig. 5 we can see that the Hi-Fi output contains more high frequencies (local features) while the Lo-Fi output contains more low frequencies (global features). We have added the visualisation code for Fig. 5 in the revised supplementary file.\n\n\n**Q2. Is global attention (or low-frequency attention) useless?**\n\nWe believe the reviewer misunderstood the results in Fig. 4. To clarify, alpha = 1.0 means only the Lo-Fi branch is left (Lines 165-167, or Lines 163-164 in the initial submission). Thus, we assume the question should be \"Is the local attention or high-frequency attention useless?\". To answer this question, we conducted an experiment with alpha=1.0 on ADE20K. We show that comparing with alpha=0.9, simply using Lo-Fi results in more performance drop (0.6%), as shown in our response to Reviewer zE7q Q2. This experiment demonstrates that both high frequencies and low frequencies are essential in CV tasks. In particular, we speculate that image classification mainly focuses on the global information of the entire image, and thus low frequencies perform favourably as they capture global structure. However, dense prediction tasks usually require fine object details, for which high frequencies play an important role.", " \nThanks for your very positive comments! We address your questions as follows.\n\n**Q1. Directly evaluating a trained model (e.g. trained with window size 2) with a different window size?**\n\nWe agree that the window size still needs to be manually determined at the current stage. 
In this case, future work may consider automatically searching for a better window size by NAS. Moreover, we directly test the trained model of LITv2-S with RetinaNet under different window sizes. In this setting, the model is pretrained on COCO with a window size of 2. As shown in the table below, directly evaluating our LITv2-S based RetinaNet with different window sizes does not significantly hurt the performance. Instead, it brings different speed and accuracy trade-offs. This implies that in practice, one can set different window sizes for different speed requirements under a single trained model.\n\n| Window Size | FLOPs (G) | FPS | AP | AP_50 | AP_75 | AP_S | AP_M | AP_L |\n| ----------- | --------- | ---- | ---- | ----- | ----- | ---- | ---- | ---- |\n| 2 | 242 | 18.2 | 44.0 | 65.2 | 47.1 | 27.2 | 48.1 | 58.0 |\n| 3 | 234 | 19.6 | 43.7 | 64.7 | 47.1 | 27.0 | 47.5 | 58.0 |\n| 4 | 230 | 19.8 | 43.4 | 64.3 | 46.5 | 26.6 | 47.1 | 57.9 |\n| 5 | 228 | 20.0 | 43.0 | 63.9 | 46.0 | 27.0 | 46.7 | 57.3 |\n| 6 | 229 | 20.3 | 42.6 | 63.4 | 45.2 | 26.0 | 46.3 | 56.5 |\n| 7 | 229 | 20.4 | 42.4 | 63.0 | 45.1 | 25.9 | 46.1 | 56.4 |\n\n\n\n**Q2. Why removing RPE can significantly improve the speed of LITv1 on COCO.**\n\nThe main reason is that Swin uses fixed-size local windows (e.g. 7$\\times$7). Thus it does not need to interpolate the parameters of the fixed-size relative positional embedding. However, the global self-attention in LITv1 requires frequently interpolating the fixed-size relative positional embedding to adapt different sizes of the attention maps, as indicated in Lines 135-137 ( 132-134 in the initial submission). Therefore, removing RPE can significantly improve the speed of LITv1 on dense prediction tasks.\n\n**Q3. LITv2-B COCO results can be moved into the main manuscript.**\n\nThanks for the suggestion. We have moved this result into Table 2 in the revised main manuscript.", " \nThanks for taking the time to review our paper and we address your questions as follows.\n\n**Q1. Questions about the technical contribution.**\n\n**1)** As discussed in Lines 90-98 (Lines 90-96 of the initial submission), existing attention mechanisms in ViTs suffer from different problems, e.g. lacking either local or global attention, or having slow throughput on GPUs. Therefore, designing a fast attention mechanism that simultaneously captures local and global dependencies for ViTs is non-trivial. To this end, the main novelty of this paper is **a new efficient co-design of local and global self-attention, along with a novel fast ViT backbone**, as recognized by Reviewer VgYF, rather than improving local window attention or global attention separately. Besides, as mentioned in the strength, we also provide an architecture design with \"hardware friendliness\" in mind, which “is an important contribution of the paper”. \n**2)** Furthermore, the motivation and design of HiLo come from a novel perspective where it efficiently disentangles low/high frequencies in a self-attention layer (Lines 48-54). We have conducted comprehensive benchmarking and shown that HiLo outperforms representative attention mechanisms on both the ImageNet pretraining and the downstream semantic segmentation task (Table 4 in the revised submission).\n\n\n**Q2. Compared with more ViT architectures.**\n\nThanks for your advice. We agree that there are many ViTs in the literature. 
However, we would like to point out that XCiT, CaiT, and CoAtNet do not achieve better performance than ours **under a similar model size and the same experimental setting (i.e. image resolution of 224 $\\times$ 224)**. For example, as shown in the table below, LITv2 performs favorably against others while achieving faster speed and lower model complexity. Note that we also believe further improvement can be achieved via model scaling, pretraining on larger datasets (e.g. ImageNet-22K), and finetuning on higher image resolution (e.g. 384, 512). However, such experiments are computationally expensive which is beyond our current hardware capacity. At this stage, we have adopted fair and standard experimental settings and compared LITv2 with many recent strong ViTs across different model sizes and tasks. In this case, we believe our experiments are quite comprehensive, which is recognized by Reviewer zE7q and Reviewer VgYF. We have added comparisons with more ViTs in the revised supplementary file.\n\n\n| Model | Params (M) | FLOPs (G) | Throughput (images/s) | Top-1 (%) |\n| ----------- | ---------- | --------- | --------------------- | --------- |\n| XCiT-S12 | 26 | 4.8 | 1,068 | 82.0 |\n| CaiT-XS24 | 27 | 5.4 | 623 | 81.8 |\n| CoAtNet-0 | 25 | 4.2 | 1,151 | 81.6 |\n| **LITv2-S** | 28 | **3.7** | **1,471** | **82.0** |\n| XCiT-S24 | 48 | 9.1 | 612 | 82.6 |\n| CaiT-S24 | 47 | 9.4 | 454 | 82.7 |\n| CoAtNet-1 | 42 | 8.4 | 582 | 83.3 |\n| **LITv2-M** | 49 | **7.5** | **812** | **83.3** |\n\n\n \n**Q3. Directly computing queries from pooled feature maps.**\n\nIn self-attention, the number of queries determines the spatial size of the output feature maps. When computing the queries from pooled feature maps, the spatial size of the output feature maps is inconsistent with that of the original input. One solution is to use interpolation (e.g. bilinear) and concatenate the interpolated feature maps with the outputs from Hi-Fi. However, as shown in the table below, this approach (denoted as \"pooled queries\") brings inferior performance and much slower throughput than our proposed design. Note that although computing queries from pooled feature maps can slightly achieve a lower theoretical model complexity, frequently applying interpolation on GPUs results in a high memory access cost (MAC). Therefore, it instead slows down the inference speed on GPUs. We have added this ablation study in the revised supplementary file. \n\n| Model | Params (M) | FLOPs (G) | Throughput (images/s) | Top-1 (%) |\n| ----------------------- | ---------- | --------- | --------------------- | --------- |\n| LITv2-S, this paper | 28 | 3.7 | **1,471** | **82.0** |\n| LITv2-S, pooled queries | 28 | 3.5 | 1,084 | 81.9 |", " **Q3. Some references are missing.**\n\nThanks for pointing it out. We have included the following discussions in Section 2 in the revision. Specifically,\n\n- MixFormer [a] mixes local window attention with depthwise convolution, while the proposed HiLo is a novel attention mechanism that simultaneously captures local and global dependencies without introducing additional model parameters.\n- TNT [b] relies on additional tokens in the architecture to achieve global interaction, while the proposed HiLo directly captures both local and global dependencies of the original feature map in a single self-attention layer as a drop-in-replacement module. 
We also show in our response to Reviewer Rjxy Q3 that LITv2-S beats TNT-S [b] in terms of faster throughput on GPUs and better performance.\n- Similar to shifted window attention (Swin) which depends on window shifting to mix tokens among different windows, Shuffle Transformer [c] applies token shuffling among windows. However, both methods focus on local attention at the same self-attention layer, unlike HiLo which simultaneously captures local and global dependencies.\n- Octave convolution [d] shares a similar motivation with HiLo, i.e. disentangling different frequencies in a feature map. However, we are different in both the underlying problem target and the concrete approach. Specifically, Octave convolution targets convolutional layers, which is a type of convolution that applies locally on high/low-resolution feature maps separately. On the other hand, HiLo aims to boost the efficiency of self-attention in ViTs, which is a novel attention mechanism that captures both local and global relationships with self-attention.\n\n**Q4. How to design the number of multi-head in attention when splitting the channels.**\n\nIn our implementation, we use the hyperparameter of alpha to control the number of heads in Hi-Fi and Lo-Fi. For example, with `alpha = 0.9` and `num_heads = 12` , we allocate `round(12 * 0.9) = 10 ` heads for Lo-Fi and another `12 - round(12 * 0.9) = 2` heads for Hi-Fi. We have provided this code in our supplementary material (Lines 39-40 in supp_code/models/attentions.py).", " Thanks for your constructive comments and we address your questions as follows.\n\n**Q1. Compared to adding ConvFFN, the performance improvements HiLo attention brings are minor.**\n\nWe would like to point out that ConvFFN and HiLo are proposed to address different bottlenecks in the proposed architecture, as described in Lines 133-138 (Lines 130-135 in the initial submission). Specifically, ConvFFN improves the **efficiency of positional encoding** in LITv1 while simultaneously enlarging the early receptive fields, thus improving the performance. On the other hand, HiLo mainly focuses on improving the **efficiency of self-attention**, especially when handling high-resolution dense prediction tasks, and thus it brings a better speed and accuracy trade-off. For example, in Table 5, we have shown that HiLo helps to reduce 27% FLOPs and achieve a $1.4\\times$ speedup on COCO object detection. Moreover, in our response to Q2 below we show that HiLo achieves more performance gain than other attention mechanisms on semantic segmentation.\n\n**Q2. HiLo brings weak improvement over SRA on ImageNet-1K.**\n\nIt is worth noting that HiLo is different from SRA in both the motivation and the concrete approach, even with alpha = 1.0. Specifically, SRA applies **conv-layernorm** to reduce the spatial size of keys and values for complexity reduction, which introduces more model parameters and neglects the importance of local fine details as well as the different frequencies in natural images. Compared to it, HiLo is motivated from a novel perspective on the frequency domain, as clearly indicated in Lines 48-54. Moreover, the Lo-Fi branch in HiLo applies **average-pooling** in order to obtain the low-frequency signals in each window, which is parameter-free and complementary to the Hi-Fi branch which captures high-frequency information in each window.\n\nFurthermore, we agree that the pure Lo-Fi branch can achieve competitive results on ImageNet-1K. 
However, **we would like to point out that high-frequency signals play an important role in capturing fine object details, which is particularly important for dense prediction tasks such as semantic segmentation**. For example, we train LITv2-S with Semantic FPN under different attention mechanisms on ADE20K. As the table below shows, HiLo with alpha=0.9 achieves superior performance compared to SRA (+1.5%) as well as other attention mechanisms. It indicates that both high/low frequencies are important in CV tasks and the effect of high frequencies is more significant in dense prediction tasks. We have added this comparison in Section 5.4 in the revision.\n\n\n\n| Backbone Attention in LITv2-S | Params (M) | FLOPs (G) | mIoU (%) |\n| ----------------------------- | ---------- | -------- | -------- |\n| MSA | 32 | 46.5 | 43.7 |\n| SRA (PVT) | 35 | 42.4 | 42.8 |\n| W-MSA (Swin) | 32 | 42.7 | 41.9 |\n| T-MSA (Twins) | 33 | 42.5 | 44.0 |\n| HiLo w/ Alpha = 1.0 | 32 | 42.5 | 43.7 |\n| HiLo w/ Alpha = 0.9 | **31** | 42.6 | **44.3** |", " This paper studies the efficient Vision Transformers. The authors proposed the HiLo attention which combines the window-based self-attention and spatial reduction self-attention. The experimental results on serval datasets demonstrate the effectiveness of the proposed method.\n + The proposed HiLo attention is somehow reasonable.\n+ Compared to the baselines, the proposed methods could bring constant improvements \n+ The abundant experiments are introduced to prove the effectiveness of the proposed method.\n\nThe main concerns are listed below. \n- The core contribution is HiLo attention, however, as shown in Table 5, Compared to adding ConvFFN, the improvements HiLo attention brings are minor. \n- In Figure 4, the HiLo attention is equivalent to spatial reduction attention (SRA) when the alpha=1.0. SRA could achieve ~81.9 Top-1 Accuracy on ImageNet with 3.7G FLOPs. Table 4 shows that HiLo (alpha=0.9) attention achieves 82.0 with 3.7G FLOPs. The improvement is just 0.1%, which is too weak.\n- some references are missing.\n For window-based self-attention, there are some methods [a][b][c] are not included.\n To study high/low frequencies in images, Octave Convolution[d] is heavily related to the proposed method.\n\n[a] MixFormer: Mixing Features across Windows and Dimensions, CVPR 2022.\n\n[b] Transformer in Transformer, NeurIPS 2021.\n\n[c] Rethinking Spatial Shuffle for Vision Transformer, arXiv 2021.\n\n[d] Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution, ICCV 2019. Please refer to the Questions in the Weaknesses.\n\nHow do you design the number of multi-head in attention when splitting the channels? Because the split features may not be divisible by the head_dim.\n Yes.", " The paper addresses efficient Vision Transformers (ViTs) design. The paper argues that while previous works on designing efficient ViTs have considered the theoretical asymptotic computational complexity and computational complexity measured in floating point operations (FLOPS) and memory, those metrics do not capture the actual running time and throughput. Specifically, the paper argues that previous methods might require low number of FLOPs (or lower asymptotic complexity), but in practise their implementation is not hardware friendly thus slow when running on GPU. 
The paper proposes to benchmark FLOPS, memory consumption and actual running time (on GPU) and further proposes a ViT design that performs favourably in those metrics while providing high accuracy when used as a backbone in various vision tasks, namely: image classification, object detection, instance segmentation and semantic segmentation. \n\nThe proposed ViT architecture is based on separating the MultiHead Self Attention (MSA) heads into 2 groups - one group performs local window self attention to capture local fine grained details characterised by high frequencies and the second group performs global self attention on a downscaled (in practice - average pooling in each high res window) version of the feature map to capture global structures characterised by low frequencies. The total number of MSA heads are divided between the groups such that 1-alpha of the heads belong to the first group (local windowed self attention on the full resolution feature map) and alpha of the heads belong to the second group (global attention on the downscaled feature map). Their method is thus dubbed HiLo to denote the different attention branches working on High and Low frequencies. Regarding the value of alpha - the authors provide an experiment to measuring the effect of different choices of alpha and when measuring on the various benchmarks alpha is set to 0.9, so in practice 10% and 90% of the MSA heads belong to the high and low frequencies branch, respectively. Note that in the low frequency brach, keys and values are computed on the downscaled feature map, but the queries still come from the high frequency branch. Also, to further speed up the method, the authors replace explicit positional encoding by adding a layer of 3x3 depth-wise convolution in each Feed Forward block. \n\nFinally, to demonstrate the effectiveness of their approach, the authors compare their methods to other ViT architecture in classification on ImageNet 1K as well as when using their architecture as a backbone (weights are initialised from the ImageNet trained model) in object detection and instance segmentation (measured on COCO) and semantic segmentation (measured on ADE20K). The experiments demonstrate relatively high speed, low number of FLOPS and high accuracy of the proposed method compared to other ViT architectures and efficient attention mechanisms. \n Strength: \n1. The paper highlights an important limitation of previous methods - that using previously proposed metrics such as memory footprint and number of FLOPs (and theoretic asymptotic complexity) are only proxies to the metric of running time (or throughput) and in practice methods that perform favourably in those metrics (FLOPs, memory) might actually have slow running time due to not being \"hardware friendly\". To the best of my knowledge, very few papers report throughput (an exception is the Swin paper) and addressing this limitation by providing an architecture design with\"hardware friendliness\" in mind is an important contribution of the paper. \n2. The proposed method is simple and tackles efficient ViT design from a viewpoint that has not been considered before. The architecture separates handling of local and global information is simple, yet novel and sound.\n3. The paper is well written and easy to follow. \n4. The paper provide extensive experiments and visualisations. \n\nWeaknesses:\n1. In terms of technical contribution - while the proposed architecture is new, all the ideas presented were previously explored. 
For example, computing local windowed self attention have been previously used in Swin and Focal transformer (in different ways). Computing global self attention on pooled versions of the feature map have been previously explored in PVT and Focal transformer (this is also true for the removal of positional encoding and replacing with depth-wise convolutions). The paper does combine those ideas in a way that was not previously suggested, but in my opinion this doesn't suffice for a Neurips paper (in terms of technical novelty). \n2. Regarding the experiments - the methods compared against the proposed architecture are relatively old and are missing several strong recent papers, for example: XCiT, CaiT and CoAtNet to name a few. While HiLo performs favourably agains older methods, more recent works (as mentioned above) report higher accuracy than HiLo. Of course, their speed might be lower, but this needs to be tested and compared against HiLo to support the authors claims. \n\n**Edit**: The rebuttal addresses my main concerns and the weaknesses I described above. I accept the authors claims that the proposed design principals have not been considered before and specially IMO designing an efficient architecture with high throughput is a significant contribution. Also, the experiments do demonstrate that the proposed method is on par with previous contributions (in my comment I was addressing performance when fine-tuning on 324x324) while being more efficient. I have changed the final rating accordingly, I would like to thank the authors again for the detailed rebuttal. 1. Suggestion - I would suggest to compare the proposed method against more recent ViT architectures. \n2. Question - In the low frequency branch, why do the queries (Q matrix) still computed from the high resolution feature map? did you try to completely separate the two branches such that the queries will also be computed from the pooled feature map, in addition to K and V? The authors have adequately addressed the limitations and potential negative societal impact of their work.", " The paper proposes a novel vision Transformer, LITv2, which directly targets the speed on GPUs, instead of theoretical FLOPs. To achieve this, the authors improve the efficiency of ViT by targeting the architecture design, attention mechanism and positional encoding. Specifically, they first adopt the same design principle of LIT [29] to get rid of the early MSAs, then propose a novel efficient self-attention mechanism, HiLo, which disentangles the high/low frequency patterns in an attention layer with two groups of heads, where one group captures high frequencies via local window self-attention and another group of heads model the global relationship between the average-pooled low-frequency keys from each window and each query position in the input feature map. Comprehensive benchmarking has shown the advantage of the proposed HiLo over other attention schemes. Furthermore, the authors also propose to replace the time-consuming relative positional encoding (RPE) with zero-padding positional information from convolutions for further speedup on dense prediction tasks. Extensive experiments on both ImageNet and downstream tasks have shown that LITv2 achieves a better speed-accuracy trade-off than previous SoTA ViTs. ### Strength\n\n1. The problem that the paper tackles for the current ViT design is significant. As most recent works focus only on theoretical FLOPs, the direct speed metric (e.g. 
throughput) on hardware is more important and informative for the community, especially for the efficient design of self-attention.\n\n2. The proposed HiLo attention is well-motivated. The idea of disentangling different frequencies is technically sound and has been well validated by visualisations. The complexity analysis of HiLo is clear and the comparison with other efficient attention mechanisms is also comprehensive.\n\n3. Simple architecture but works. The proposed model is backed by impressive results on both ImageNet and downstream tasks. The experiments are quite strong and the comparisons with other methods are comprehensive. Surprisingly, the proposed model achieves faster speed and uses less memory footprint than representative CNNs, with competitive performance on ImageNet. This is a significant step for general ViT backbones.\n\n4. The structure of this paper is well-written and easy to follow. Nice figures, which clearly present the idea and claims.\n\n### Weakness\n\n1. The authors have indicated that by training a slightly larger window size the model can achieve better efficiency. However, it could be difficult to determine the optimal window size on dense prediction tasks. How would the speed-accuracy trade-off change if directly evaluating a trained model (e.g. trained with window size 2) with a different window size? \n\n2. Swin Transformer also adopts RPE, but it is not slow on GPUs. So why removing RPE can significantly improve the speed of LITv1 on COCO?\n\n3. It would be better if the COCO results based on LITv2-B can be moved into the main manuscript. See the weakness. Yes.", " This work introduces HiLo Attention to vision transformers on top of LIT[1] for 2D images. The proposed HiLo is composed of High-frequency attention (Hi-Fi) and Low-frequency attention (Lo-Fi). By splitting two branches in every transformer block and combining the output of the two branches, these two attention modules process the high-frequency information and low-frequency information in the image. This work achieves competitive results in Image Classification on ImageNet-1K, Object Detection and Instance Segmentation on COCO, and Semantic Segmentation on ADE20K. The work also shows the visualization of low-frequency features and high-frequency features.\n\n\n\n\n[1] Pan, Zizheng, et al. \"Less is more: Pay less attention in vision transformers.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 2. 2022. Strengths:\n\n1. This work addresses the problem of modeling local and global features in vision transformers, which is good motivation from the perspective of Fourier transform.\n\n2. This work achieves state-of-the-art performance on different tasks e.g. Image Classification on ImageNet-1K, Object Detection and Instance Segmentation on COCO, and Semantic Segmentation on ADE20K.\n\n3. Many implementation details are mentioned in the paper e.g. positional encoding, and low-FLOP models not always being fast models\n\nWeaknesses:\n\n1. There is little discussion or proof to show the connection between low/high frequency and local/global features. The visualization of low/high frequency feature (fig 5) is confusing for reading to understand.\n\n2. Looking at the Figure 4, the best model with alpha=0.8 is only roughly higher than the model with alpha=1.0. In the case of alpha=1.0, there is no global attention (or low-frequency feature) is involved. This ablation study looks very confusing. Does it mean global attention (or low-frequency attention) is useless?\n\n3. 
[Minor concern] Although comparing FLOPs is not a fair speed comparison, the throughput of a certain kind of GPU (RTX 3090) doesn't lead to a strong conclusion. 1. What is the strict definition of low-frequency/high-frequency attention? How do you define the relationship between high/low attentions with local/global attentions\n\n2. How do you generate the figure 5 ? (visualization of high/low feature)\n\n3. [Minor] How do you compare your method and results with C-SWin[1]?\n\n\n\n\n[1] Dong, Xiaoyi, et al. \"Cswin transformer: A general vision transformer backbone with cross-shaped windows.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n None" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 9, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 5 ]
[ "zlAsVLwaqB4", "oDdvL-jUdWE", "d6nmErtW3nq", "893K1zwx0ds", "ceDxNLbuWa3", "wZipyx_eCfk", "ceDxNLbuWa3", "ceDxNLbuWa3", "HJG86cW29Nj", "wZipyx_eCfk", "b-NSXlEZCBi", "b-NSXlEZCBi", "nips_2022_Pyd6Rh9r1OT", "nips_2022_Pyd6Rh9r1OT", "nips_2022_Pyd6Rh9r1OT", "nips_2022_Pyd6Rh9r1OT" ]
nips_2022_rA2tItoRUth
LGDN: Language-Guided Denoising Network for Video-Language Modeling
Video-language modeling has attracted much attention with the rapid growth of web videos. Most existing methods assume that the video frames and text description are semantically correlated, and focus on video-language modeling at video level. However, this hypothesis often fails for two reasons: (1) With the rich semantics of video contents, it is difficult to cover all frames with a single video-level description; (2) A raw video typically has noisy/meaningless information (e.g., scenery shot, transition or teaser). Although a number of recent works deploy attention mechanism to alleviate this problem, the irrelevant/noisy information still makes it very difficult to address. To overcome such challenge, we thus propose an efficient and effective model, termed Language-Guided Denoising Network (LGDN), for video-language modeling. Different from most existing methods that utilize all extracted video frames, LGDN dynamically filters out the misaligned or redundant frames under the language supervision and obtains only 2--4 salient frames per video for cross-modal token-level alignment. Extensive experiments on five public datasets show that our LGDN outperforms the state-of-the-arts by large margins. We also provide detailed ablation study to reveal the critical importance of solving the noise issue, in hope of inspiring future video-language work.
Accept
This paper proposes the Language-Guided Denoising Network (LGDN) to address the redundancy and noisy-alignment issues in video-language modeling by dynamically filtering out misaligned or redundant frames under language supervision and obtaining only 2-4 salient frames per video for cross-modal token-level alignment. Experiments on multiple benchmarks show good results. Reviewers appreciated the intuitive salient frame proposal mechanism and the well-written paper, but also raised several questions about additional ablations, efficiency details, and fair comparisons. Some of these were answered by the authors, while others were left unanswered for some reviewers and were promised for the final version (e.g., the diversity loss and comparisons with SOTA methods using CC12M), so it will be good to add these.
train
[ "17IPPG0Yp09", "Gzn5HqHDjK4", "RTqRCtwsvCI", "3eAAsEcMk_", "PaEG3FkewT", "4zwRQcSJo5d", "S8dhD7BFubk", "leRUHlfatRi", "jOvjAJSk7y2", "VZGEnni_fpqC", "BkDwzSybzPfP", "ea59EvfnAby", "w2ncheR8BHM", "xGrXlYFsG7A", "d9Q10_TUdMR", "pCgvuv_uFgJ", "c6WO4PtpRYv", "CfmlGTnpaDU" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for addressing all of my comments. And, the added limitations and potential negative societal impacts are reasonable to me. Thank you!", " Dear Reviewer HH2K:\n\nThanks again for spending a huge amount of time on our paper, which has helped us improve the quality and clarity of the paper! We are glad to see that our response has addressed most of your concerns. We will carefully revise our paper according to your suggestions.\n\nThanks for your time and efforts again!\n\nBest, \\\nAuthors", " Thanks for your reply. \n\nI still have some concerns about this redundancy issue, such as (1) choosing the highest relevance score does not equal easing the redundancy issue, and (2) subsampling or mean pooling is not the main contribution of this work even though they can ease redundancy. I think the diversity loss can address my question, so look forward to your results on this. Even though I still have the question, the current form of this work is worth being accepted in my opinion after most of my concerns are addressed. Therefore, I increase the soundness to 3, presentation to 3 (somewhere between 2 and 3), and the overall score to 6.\n\nThanks again for your time to discuss my questions.", " Thanks for your feedback again, which will help us further improve our final paper!\n\n**Q. [This is just an example. My intention was to try to show that reducing frames does not lead to reducing redundancy ... and the redundancy rate will still be 50%. I thought the model only computed loss over those selected frames, so the redundant frames, which might hurt the training quality, might have more portion if fewer frames are selected.]**\n\n**A:** Thanks for pointing it out. We agree with your definition of the redundancy rate. This \"relative\" redundancy rate may increase when selecting fewer frames in a simple video. To make it clearer, we reiterate our contributions: \n1. removing ambiguous frames to achieve higher accuracy; \n2. reducing some redundant frames for better efficiency. \n\nWe appreciate that you recognize our first contribution about removing ambiguous frames. We clarify your concerns on redundant frames as follows.\n- Considering the salient frame scores are aggregated by mean pooling for the video-level prediction (see Line 199 in Section 3.5), redundant frames with similar scores may have a slight impact on the model performance. In other words, it will not hinder the performance. \n- Now we again take Figure 1 as an example, choosing frame-sets (2, 3), (2, 4), or (2, 3, 4) does not have much impact on the training of the model as the average scores of these different frame-sets are very similar. Thus an increase in the relative redundancy rate is acceptable.\n- However, the *de facto* paradigm utilizes all frames (including all relevant frames) in the training/inference stage, which significantly affects the speed and memory efficiency. As our SFP chooses a few salient frames with the highest relevance scores for MFCL/LSFM, we say that our SFP is capable to alleviate this problem (in other words, the \"redundancy\" issue).\n- We also agree that your suggested diversity loss may further help reduce redundancy in this case. We will update the discussions in our paper. \n\nThanks for your time again! Please don’t hesitate to let us know if there are any additional clarifications we can offer, looking forward to your post-rebuttal rating!\n\nBest, \\\nAuthors", " **Q2**\n\n> Take Figure 1 as an example (selected from MSRVTT). 
In our practice, we select salient frames out of 16 frames from this video and show only 7 frames in Figure 1 for conciseness. The relevance scores of Frame 1-7 in Figure 1 are [0.051, 0.356, 0.372, 0.341, 0.097, 0.082, 0.313], respectively. As we only select salient frames, the two redundant frames will not be selected in this example.\n\nThis is just an example. My intention was to try to show that reducing frames does not lead to reducing redundancy. As you can see, the scores are very high for those redundant frames, so it is very possible that the model selects one salient frame and one redundant frame for some cases, and the redundancy rate will still be 50% even when $N_{salient}=2$.\n\n> Meanwhile, as redundancy significantly affects the speed and memory efficiency (instead of performance), the redundancy rate should be computed over the entire video frames. Then assuming the SFP selects 4 salient frames out of 7 frames (consistent with the reviewer), the redundancy rate is still 2/7. Moreover, in our case (only 2 salient frames are selected), the redundancy rate is actually reduced from 2/7 to 0.\n\nWhy should the redundancy rate be computed over the whole video? I thought the model only computed loss over those selected frames, so the redundant frames, which might hurt the training quality, might have more portion if fewer frames are selected.\n", " Dear Reviewer UdJY,\n\nThanks again for your constructive suggestions, which have helped us improve the quality and clarity of the paper!\n\nIn our previous response, we have carefully studied your comments and made detailed responses. As the discussion period will end soon in 2 days, we are happy to provide any additional clarifications that you may need. Please do not hesitate to contact us if there are other clarifications we can offer. We appreciate your suggestions.\n\nThanks for your time and efforts!\n\nBest, \\\nAuthors", " Thanks for your detailed feedback. We are glad to see that our response address most of your concerns. For the newly-raised questions, our answers are as follows:\n\n**Q1. [However, I was trying to say, in Figure 2, only the multi-modal Fusion Transformer is trained with denoise data, and the vision and language encoders are still trained with noisy data. Given that 2 out of 3 sub-networks (and they are big and deep), I think the whole model still highly relies on the training on noisy data.]**\n\n**A:** Yes, as videos are naturally noisy, the noisy issue in the training data is inevitable. The key idea of our LGDN is to handle the noisy problem when only noisy training data is available rather than to clean the original data at the very beginning. Therefore, we try to alleviate the negative effect of noise in the training process. Although our MVCL is still trained on noisy data (full video), we apply the frame-level MIL-contrastive loss and SFP mechanism to reduce the impact of noise as much as possible for MFCL and LSFM. Note that the vision and language encoders are also trained with denoise data (i.e., filtered salient frames) in frame-level MIL-contrastive loss.\n\n\n**Q2. [I think reducing frames does not necessarily lead to reduce redundancy...Does this mean redundancy is actually not an issue for the video-language tasks?]**\n\n**A:** 1. On reducing frames and redundancy, there seems to exist some confusion. Our explanations are: \n* Take Figure 1 as an example (selected from MSRVTT). 
In our practice, we select $N_{salient} = 2$ salient frames out of 16 frames from this video and show only 7 frames in Figure 1 for conciseness. The relevance scores of Frame 1-7 in Figure 1 are [0.051, 0.356, 0.372, 0.341, 0.097, 0.082, 0.313], respectively. As we only select $N_{salient} = 2$ salient frames, the two redundant frames will not be selected in this example. \n* Meanwhile, as redundancy significantly affects **the speed and memory efficiency** (instead of performance), the redundancy rate should be computed over the entire video frames. Then assuming the SFP selects 4 salient frames out of 7 frames (consistent with the reviewer), the redundancy rate is still 2/7. Moreover, in our case (only 2 salient frames are selected), the redundancy rate is actually reduced from 2/7 to 0. \n\n2. Since reducing the redundancy of video frames leads to high speed and memory efficiency in real-world applications, redundancy is still a key issue for video-language tasks.", " Thank you for the replies. I have some follow-up thoughts for some questions.\n\n**W2: [… Please see Section 3.4 or Figure 2 in the main paper for more details.]**\n\nThank you for the clarification. However, I was trying to say, in Figure 2, only the multi-modal Fusion Transformer is trained with denoise data, and the vision and language encoders are still trained with noisy data. Given that 2 out of 3 sub-networks (and they are big and deep), I think the whole model still highly relies on the training on noisy data.\n\nBut I actually found you have mentioned “obtains only 2–4 salient 14 frames per video for **cross-modal token-level alignment**” in the L13, so only using denoise data on the multi-modal Fusion Transformer makes sense to me now.\n\n**Q1: To clarify the two stages of our sampling strategy, we have added a schematic figure (i.e., Figure 1) in the supplementary material...\nWe have added these results in Table 1 of the supplementary material.**\n\nThanks. The added figure helps a lot.\n\n**Q2: How does SFP help to ease the redundancy in the video? … loss might be necessary to address this problem).**\n\n> Most recent works utilize sparse sampling to sample 16 frames per video for video-language modeling. As we have mentioned in our response to Q1, our SFP further selects a few salient frames (e.g., 2 ones) out of 16. The use of much fewer frames by two-stage frame sampling indeed contributes to easing the redundancy in the video.\n\nI think reducing frames does not necessarily lead to reduce redundancy. Take Figure 1 as an example. There are two redundant frames out of 7 frames, so the redundancy rate is 28.6%. Assuming the SFP selects 4 salient frames, model may select the two salient frames (the second and the third frame) and the two redundant ones (because the two redundant frames both contain the witch and lady, and the similar score to the third frame might be high). Then the redundancy rate will increase to 2/4 = 50%.\n\n> In this work, the problem of sampling similar salient frames can almost be ignored.\n\nDoes this mean redundancy is actually not an issue for the video-language tasks? \n\n**Q3: … We have added these results in Table 2 of the supplementary material.**\n\nThank you for the table.\n\n**Q4: Thanks for pointing this out. In the inference phase, …,**\n\nThank you for the clarification.\n\n**Q5: …​​This indicates that the semantic correlation is as important as temporal information for most video tasks. 
Therefore, it is essential to exploit both semantic correlation and temporal information for video-language modeling…**\n\nThank you for the answer!\n\n", " Dear Reviewer HH2K,\n\nThanks again for your insightful suggestions and comments. As the deadline for discussion is approaching, we are happy to provide any additional clarifications that you may need.\n\nIn our previous response, we have carefully studied your comments and made detailed responses summarized below:\n\n1. Clarified that learning the relevance between language and frames is based on the salient frames instead of the full video.\n2. Clarified our two-stage sampling strategy and conducted additional experiments to show the superiority of our salient sampling strategy in terms of speed and memory.\n3. Discussed how our SFP helped to ease the redundancy issue.\n4. Provided the model capacity as well as the model performance of each method.\n5. Explained how global and local alignments are used in the inference phase and why using global inference generates lower performance.\n6. Clarified the complementarity of our salient frame sapling strategy and temporal infromation. \n\nWe hope that the provided new experiments and additional explanations have convinced you of the merits of our submission.\n\nPlease do not hesitate to contact us if there are other clarifications or experiments we can offer. Thanks!\n\nThank you for your time!\n\nBest, \\\nAuthors", " Dear ACs and reviewers:\n\n\nThanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of the paper!\n\n\nThe discussion period will end soon in 4 days. We appreciate it if you take the time to read our rebuttal and give us some feedback. Please don’t hesitate to let us know if there are any additional clarifications or experiments that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. \n\n\nThanks for your time and efforts.\n\n\nBest, \\\nAuthors", " We sincerely appreciate all reviewers’ time and efforts in reviewing our paper. We are glad to find that reviewers generally recognized our contributions: \n\n* **Model.** Introducing a new way and cool idea to address the noisy problem in video [kV1h, HH2K, UdJY]; the proposed salient frame proposal mechanism is simple and effective [kV1h, HH2K, UdJY]; the impact can be extended to other domains [HH2K].\n* **Experiment.** Extensive experiments are performed on video-text datasets and the experimental results are promising [kV1h, HH2K, UdJY].\n* **Writing.** The paper is well organized and easy to read [kV1h, UdJY]; the Related Work section is clearly written [kV1h].\n\nAnd we also thank all reviewers for their insightful and constructive suggestions, which help a lot in further improving our paper. In addition to the pointwise responses below, we summarize supporting experiments added in the rebuttal according to reviewers’ suggestions.\n\n**New Experiments** \n\n* The speed and memory cost of different sampling strategies [HH2K].\n* Comparision among different methods in terms of the model capacity [HH2K].\n* More results by applying our SFP mechanism to other frame sampling techniques [UdJY].\n\nWe hope that our pointwise responses below could clarify all reviewers’ confusion and alleviate all of their concerns. We'd like to thank all reviewers’ time again.", " \nThank you for the positive comments and insightful suggestions.\n\n**Q1. 
[Provide more comparison between salient frame proposal (SFP) mechanism and sparse sampling or other frame sampling techniques.]** \\\n**A:** Thanks for the suggestion. Note that our SFP mechanism must be combined with a frame sampling technique since we adopt a two-stage sampling strategy in this paper (see our **response to Q1 of Reviewer HH2K**). We have already presented the comparison between sparse sampling (used in ClipBERT) and sparse sampling + SFP in Figure 3 of the main paper. Here, we apply our SFP mechanism to another two frame sampling techniques: Random sampling and Dense Uniform (equally interval sampling). The obtained results on the MSR-VTT 1kA test set are provided in the following table. It can be observed that our SFP significantly boosts different frame sampling strategies, further demonstrating the general applicability of our SFP mechanism. In addition, we have added these results in **Table 3 of the supplementary material**. We also provide speedup and memory efficiency results compared with sparse sampling strategy in our **Response to Reviewer HH2K Q1**.\n\n| | R@SUM | R@SUM | R@SUM | R@SUM | R@SUM | R@SUM |\n|-----------------|:---:|:---:|:---:|:---:|:---:|:---:|\n| #Frames | 4 | 8 | 16 | 4 | 8 | 16 |\n| + SFP | no | no | no | yes | yes | yes |\n| Random sampling | 164.7 | 169.5 | 172.8 | 168.0 | 174.3 | 179.4 |\n| Dense Uniform | 166.5 | 171.3 | 174.3 | 173.1 | 179.4 | 180.6 |\n| Sparse sampling | 168.0 | 171.9 | 174.0 | 179.1 | 180.3 | 181.1 |\n\n**Q2. [Some experimental results (e.g., Effect of SFP mechanism) in supplementary material can be included in the paper.]** \\\n**A:** Thanks for pointing this out. Due to the limited space, we have to put them in the supplementary material. We are glad to place them (including the above comparison results) in the main paper in any other form of publication (e.g., Arxiv).\n\n**Q3. [Table 2 shows the improvement of MVCL is marginal. Is it necessary for the LGDN model? Can MIL-NCE be used to compute the MVCL?]** \\\n**A:** Good question! \n1. MVCL is mainly designed for capturing global temporal information, and thus it only leads to marginal improvement for local alignment (**335.2 vs. 336.1** in Table 2 of the main paper). However, in our downstream tasks, MVCL is found to be complementary to the local alignment and helps our LGDN achieve much higher performance (**352.9 vs. 360.4** in Table 2 of the main paper). \n2. In our downstream datasets, each video annotated carefully by human is holistically (semantically) consistent with the paired caption, and thus a traditional NCE loss is suitable. However, if we pre-train our LGDN on web datasets with noisy narration (e.g., HowTo100M), a MIL-NCE loss is thus needed.\n\n**Q4. [Why LGDN (in Table 3) performs slightly worse than Clip4CLIP (in Table 8).]** \\\n**A:** Sorry for the confusion. Our LGDN is pre-trained on 5.2M/15.2M image-text pairs, while Clip4CLIP applies OpenAI's CLIP [2] as the backbone pre-trained on 400M image-text pairs (**75.9x/25.3x larger** than ours). Nevertheless, even with much less pre-training data, our LGDN (in Table 3 of the main paper) still performs comparably w.r.t. Clip4CLIP (in Table 8 of the main paper), indicating the effectiveness of our LGDN. Moreover, as shown in Table 8 of the main paper, when we apply CLIP as our backbone, our LGDN significantly outperforms Clip4CLIP, which further demonstrates the general applicability of our LGDN.\n\n**Q5. [It would be nice to compare some SOTA methods with CC12M.]** \\\n**A:** Good suggestion. 
To the best of our knowledge, we have for the first time applied CC12M to video-language pre-training (and also achieved great success) in this paper. Due to the limited computing resources and rebuttal time, we could not reproduce other competitors with CC12M as the pre-training data. We will do it in our future works.\n\n**Q6. [Lack of limitations and potential negative societal impacts.]** \\\n**A:** Thanks. We have added them in Section 1 of the supplementary material. \n\n[2] Alec Radford, et al. \"Learning Transferable Visual Models From Natural Language Supervision.\" ICML 2021.\n\n\nThanks for your time and effort again! For any other questions, please feel free to let us know during the rebuttal window.", " Thank you for the constructive comments and suggestions.\n\n**Weaknesses:**\n\n**W1. [It would be better if the authors can report the model capacity of each method (see Questions 3).]** \\\n**A:** Good suggestion. Please see our response to Q3 for detailed information.\n\n**W2. [LGDN still highly relies on learning on noisy data as the vision and language encoder still use full video to learn how to match the relevance between language and frames.]** \\\n**A:** Sorry for the confusion. Our MVCL (utilizing full video) is mainly designed for capturing global temporal information. As for matching the relevance between language and frames, we introduce MFCL which utilizes the salient frames filtered by the SFP Mechanism in each video (instead of utilizing full video) to form a set of positive candidate pairs for frame-level MIL-contrastive learning. Please see Section 3.4 or Figure 2 in the main paper for more details.\n\n**W3. [The redundancy issue has not been addressed by LGDN (see Question 2), and the salient sampling might introduce another problem (see Question 5).]** \\\n**A:** (1) The redundancy issue is discussed in our response to Q2. (2) Good question! And this is the reason why we propose MVCL for our LGDN in Section 3.2. Please see our response to Q5 for detailed information.\n\n**Questions:**\n\n**Q1. [ClipBERT samples frames before feeding them into the model while LGDN does the sampling only before fusion layers, which may cost more memory or be slower than other methods (e.g., ClipBERT, Frozen in Time).]** \\\n**A:** Sorry for the confusion. The sampling strategy of our LGDN includes two stages: \n1. We first adopt sparse sampling to sample 16 frames from each video before feeding them into the LGDN as introduced in Section 4.1 Implementation Details, which is the same as ClipBERT and Frozen in Time. \n2. We further utilize salient sampling (SFP) to select a few salient frames (from 16 frames per video) before fusion layers.\n\nTo clarify the two stages of our sampling strategy, we have added a schematic figure (i.e., **Figure 1**) in the supplementary material. Further, we present the speed and memory cost on the Didemo test set in the following table. For fair comparison, all experiments are conducted on 8 Tesla-V100 GPUs with mini-batch size 24. It can be seen that our salient sampling strategy is obviously faster and costs less memory, as compared with sparse sampling (utilizing all 16 frames for feature extraction and multi-modal fusion). 
We have added these results in **Table 1 of the supplementary material**.\n\n\n| | Speedup | Memory Cost | R@SUM |\n| :- | :-: | :-: |:-:|\n| Sparse sampling | 1.0x | 1.0x | 183.0 |\n| Salient sampling ($N_{salient}=1$) | 10.4x | 0.60x | 193.5 |\n| Salient sampling ($N_{salient}=2$) | 6.5x | 0.62x | **198.3** |\n| Salient sampling ($N_{salient}=4$) | 3.6x | 0.68x | 195.6 |\n\nIn fact, we have already presented the speedup of our salient sampling strategy in Figure 3 (a-b) of the main paper, which is computed w.r.t. the slowest case $N_{salient} = 16$ (i.e., sparse sampling).\n\n**Q2. [How does SFP help to ease the redundancy in the video? Meanwhile, the other frames that are similar to the salient frame may also be selected by the SFP mechanism (some diversity loss might be necessary to address this problem).]** \\\n**A:** Good questions! \n1. Most recent works utilize sparse sampling to sample 16 frames per video for video-language modeling. As we have mentioned in our **response to Q1**, our SFP further selects a few salient frames (e.g., 2 ones) out of 16. The use of much fewer frames by two-stage frame sampling indeed contributes to easing the redundancy in the video. \n2. In this work, the problem of sampling similar salient frames can **almost be ignored**. The sparse sampling strategy (16 frames per video) has alleviated the frame redundancy at the very beginning. Therefore, this problem is not necessary to be addressed in our current work. However, we agree that a well-designed diversity loss may help LGDN to handle more complex scenarios. Thanks for the valuable suggestion. We are working on it, but due to the limited time and the computing resources, we are expected to include the loss and experiments in our next revision.", " **Q3. [The model size compared with other methods. For better comparison, LGDN can use lesser layers in vision and language encoders.]** \\\n**A:** Sorry for the confusion. Our full LGDN model consists of ViT-Base (with 12 Transformer layers as the visual encoder) and BERT-Base (with the first 6 Transformer layers as the lingual encoder and the last 6 as multi-modal fusion layers). We have clarified this network structure in Section 2 of the supplementary material. Moreover, we also provide detailed comparison to other methods in terms of model capacity and R@SUM (on the MSR-VTT 1kA test set) in the following table. We have added these results in **Table 2 of the supplementary material**.\n| Methods | Visual Encoder | Lingual Encoder | Fusion Layer | Total | R@SUM |\n| :- | :-: | :-: | :-: | :-: | :-: |\n| UniVL | 110M | 110M | 46M | 266M | 133.9 |\n| TACo | 155M | 110M | 14M | 279M | 157.4 |\n| Support Set | 136M | 220M | - | 356M | 157.9 |\n| Frozen in Time | 114M | 66M | - | 180M | 161.0 |\n| LGDN (global) | 93M | 55M | - | **148M** | 164.6 |\n| LGDN (ours) | 93M | 55M | 68M | 215M | **181.1** |\n\nIt can be observed that:\n- When fusion layers are not used (i.e., only global alignment is adopted), our LGDN (global) outperforms the state-of-the-art method Frozen in Time, but with fewer model parameters. \n- Our full LGDN performs much better than all the competitors, but its parameter number (215M) is still comparable to that of Frozen in Time (180M) and even significantly smaller than those of the other competitors. These observations suggest that the performance gains obtained by our LGDN are not due to utilizing more model parameters. \n\n\n\n**Q4. [What are global and local alignments used in the inference phase? 
Why does using global inference generates such low performance?]** \\\n**A:**\n1. Thanks for pointing this out. In the inference phase, the global similarity scores are first obtained by computing the dot product between text embeddings and video embeddings from MVCL (i.e., global alignment). Further, the token-level similarity scores are extracted from the LSFM module as described **in Section 3.5** (i.e., local alignment). We simply add these two scores (i.e., global+local alignment) for the final prediction.\n2. The low performance of using global inference alone (see Table 2 of the main paper) can be explained as follows: \n * to maintain temporal information (which is needed for video tasks), the global alignment directly takes all 16 sampled frames of each video as input, which does not consider the noise issue; \n * the global alignment only adopts dot product as similarity scores before fusion layers, while the local alignment exploits token-level interaction for better performance through multi-modal fusion layers. \nHowever, as shown in Table 2 of the main paper, the global alignment still leads to significant improvements when it is combined with the local alignment (i.e., Ensemble).\n\n**Q5. [The salience sampling breaks the temporal dependency of the video (compared to uniform sampling), which is an important characteristic to solve most video tasks. Do you think this sampling will prevent LGDN from applying to other video tasks that highly depend on temporal information?]** \\\n**A:** Good question! We agree that the salience sampling breaks the temporal dependency of the video. However, from Table 2 of the main paper, we can observe that even without considering the temporal information (video-level MVCL is not used), our LGDN still largely outperforms the state-of-the-arts (in Table 3 of the main paper). This indicates that the semantic correlation is as important as temporal information for most video tasks. Therefore, it is essential to exploit both semantic correlation and temporal information for video-language modeling. Inspired by this, we thus propose SFP for fine-grained semantic alignment and adopt MVCL in our LGDN for capturing global temporal information. In our downstream tasks, these two modules are found to be complementary to each other, and their fusion leads to better performance (see Table 2 of the main paper). This suggests that our salience sampling does not prevent LGDN from applying to other video tasks that highly depend on temporal information. More importantly, we can adjust the weights of these two modules to pay more attention to temporal information in this case.\n\n\nWe wish that our response has addressed your concerns, and turns your assessment to the positive side. If you have any questions, please feel free to let us know during the rebuttal window. We appreciate your suggestions and comments! Thank you!\n", " Thank you for the positive comments and insightful suggestions.\n\n**Q1. [Lack of limitations and potential negative societal impacts.]** \\\n**A:** Thanks. We have added them in Section 1 of the supplementary material. We also show it below.\n- **Limitations.** The key idea of our LGDN is to propose the SFP mechanism to filter out noisy/redundant frames for fine-grained semantic alignment, along with MVCL for capturing global temporal information. In most downstream tasks, these two modules are complementary to each other. 
And we also observe that only a few salient frames (e.g., 2 ones) are enough for most downstream tasks, and thus we do not consider aggregating temporal information across salient frames.\nHowever, the SFP mechanism may need to be slightly changed when facing specified scenarios (e.g., long-term complicated videos over 30 minutes that highly rely on temporal information). \nOn the one hand, we could adjust the weights between the two modules (MVCL and SFP) according to the situation. On the other hand, we can split the full video into several clips (e.g., 3 minutes per clip), apply our SFP mechanism on each clip, and obtain the salient frames from all clips. In this way, we could consider aggregating temporal information across salient frames.\n- **Potential Negative Societal Impacts.** Video-language learning, especially large-scale video-language modeling, has developed rapidly over the past few years and led to the greatest advance in search engines, video recommendation, and multimedia data management. Despite its effectiveness, existing video-language pre-training models still face possible risks. As these models often rely on a large amount of web data, they may acquire biases or prejudices (especially in search engines and recommendation systems), which must be properly addressed before model deploying.\n\n**Q2. [Explain the pre-training setup alternative if more computation resources are available.]** \\\n**A:** Good question! If more computation resources are available, the pre-training setup alternative could be: \n- Larger pre-training datasets (e.g., HowTo100M and LAION-400M [1]) are utilized to obtain better performance. \n- A larger encoder (e.g., ViT-H) is used as our backbone. \n- The larger batch size and more modality data (e.g., video, image, and motion) are also good choices, which are beneficial for video-language learning.\n\n[1] Christoph Schuhmann, et al. \"LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs.\" NeurIPS Workshop 2021.\n\nThanks for your time and effort again! For any other questions, please feel free to let us know during the rebuttal window.\n\n", " This paper tackles one important issue in video-language modeling, that is, previous works made assumptions that video frames and the text descriptions for the video are semantically correlated. However, this assumption is not valid for the real world <video,text> pairs, since first, the video-level text descriptions may not cover all information in the video, and second, a raw video often contains noisy or meaningless information that does not appear in the text descriptions. The previous correlation assumption did not take account of these noises, and the self-attention mechanism still leaves mis-aligned frames which harm cross-modal alignment. \n\nThis paper proposed an E2E language-guided denoising network, LGDN, for video-language modeling. LGDN includes (1) a salient frame proposal (SFP) mechanism to dynamically filter out irrelevant or redundant frames and sparsely sample salient frames to improve video-language modeling, (2) cross-modal interactions at three levels, i.e., salient frame matching at the token-level guided by language, momentum frame-level MSL-contrastive learning (MFCL), and momentum video-level contrastive learning (MVCL). Experimental results showed that LGDN outperforms the current SOTA. Ablation studies further demonstrated the importance of tackling the noise issue.\n Strengths:\n\n1. 
This paper addressed an important limitation in previous works of video-language modeling, that is, the impractical assumption that video frames and the text descriptions for the video are semantically correlated. There is no sufficient tackling of noises in raw videos. The existing employment of self-attention mechanism cannot fully address this problem. These irrelevant or redundant information still harms cross-modal alignment. Hence, this work is valuable to the research community.\n\n2. The SFP mechanism and the three-level cross-modal interactions are sound, well motivated, and have good novelty. SFP is achieved through exploring MVCL and MFCL. After obtaining the language-guided salient frames, LGDN also has a language-guided salient frame matching (LSFM) module to conduct token-level semantic alignment between visual patches and words for improved performance. The final loss of LGDN is the combined loss of MVCL, MFCL, and LSFM.\n\n3. The Related Work section is clearly written. It summarizes significant works in the past and their limitations, and clearly explained the choice of momentum frame-level MSL-contrastive learning to help address the mis-aligned frames.\n\n4. The paper also conduct the interesting investigations of the four strategies for estimating relevance scores.\n\n5. The evaluations are comprehensive, on four public text-video retrieval datasets and one VQA dataset. The evaluation settings support a fair comparison to previous models. Performance gains from LGDN over existing approaches, including SOTA, are quite strong. Experimental results showed that LGDN outperforms previous methods, including the current SOTA, by a large margin on text<->video retrieval on MSR-VTT. LGDN also outperforms methods exploiting extra modalities or those pre-trained on very large video data. LGDN also shows promising model capacity, as its performance is significantly improved when trained on much larger pre-training data. The extensive evaluations on other text-video retrieval datasets and VQA also demonstrated that LGDN outperforms existing approaches including the current SOTA. \n\n6. Ablation studies verified the contributions from the proposed SFP, MVCL, MFCL, and LSFM. Last but not least, the visualization results are interesting and helpful for directly illustrating the importance of addressing the noisy frame issue and showing that the SFP mechanism helps LGDN to improve video-language modeling.\n\n7. Although the code is not released, the main body and Appendix provide enough details to help reproducibility. The Appendix also includes more detailed experimental results on analyzing effect of SFP mechanism, different relevance score estimations, and memory bank sizes, and useful additional visualization results.\n\nWeaknesses:\n\n1. The paper missed the important section of Limitations (and potential negative societal impacts). It is highly desired that the authors add discussions on limitations and potential negative societal impacts of this work.\n The paper is clear written. However, the authors mentioned the pre-training is set up as described in the paper due to restricted computation resources. It would be useful to explain the experimental setup alternative if more computation resources are available, that is, how to scale up the pre-training setup. The paper missed the important section of Limitations (and potential negative societal impacts). 
It is highly desired that the authors add discussions on limitations and potential negative societal impacts of this work.", " This paper introduces LGDN, which utilizes a self-taught mechanism to select salient frames to train the fusion layers. The vision and language encoder still use all frames and texts to train, and their goal is to learn representations while computing relevance scores for the frame selection. The whole mechanism by contrastive learning helps to filter out noisy information in the last few layers, boosting performance while reducing computation costs. They also demonstrate better empirical results over the baselines. 1. Pros:\n\n* The paper proposes an effective method to select frames from noisy video, leading to an improvement in both fine-tuned accuracy and computation efficiency.\n\n* To address the noisy problem in video is novel to me, it might become a new approach to constructing multi-modal models. There is no perfect and exact matching between any kinds of pair data so that the impact can be extended to other domains.\n\n2. Cons:\n\n* The empirical results are not fully convincing to me. I think it would be better if the authors can report the model capacity of each method (see Questions 3).\n\n* The central claim of this paper is that noisy video might harm the training, but the model still highly relies on learning on noisy data as the vision and language encoder still use full video to learn how to match the relevance between language and frames.\n\n* It seems to me that the redundancy issue has not been addressed by LGDN (see Question 2), and the sampling might introduce another problem (see Question 3). 1. ClipBERT samples frames before feeding them into the model while your approach does the sampling only before fusion layers, so I wonder about the memory efficiency and speed of LGDN compared to ClipBERT, Frozen in Time, and other methods.\n2. I understand that the SFP might address the misalignment issue, but how it can help to ease the redundancy in the video. If you compute the relevance score between text and frames, it's likely that similar frames have similar scores, and I am not sure what can prevent LGDN to select the other frames that are similar to the salient frame. I feel like some diversity loss might be necessary to address this problem.\n3. What's the model size of other baselines? LGDN is composed of vision, language, and fusion Transformers while Frozen in Time doesn't have a fusion Transformer, making me concerned if the performance gain somewhat comes from the bigger models. For better comparison, you can try to use lesser layers in vision and language encoders.\n4. What are global and local alignments used in the inference phase? I didn't find an explanation in the paper. Why does using global inference generates such low performance?\n5. The salience sampling breaks the temporal dependency of the video (compared to uniform sampling), which is an important characteristic to solve most video tasks. Do you think this sampling will prevent LGDN from applying to other video tasks that highly depend on temporal information? The paper's writing can be improved. The method has many components, and I was confused about which part addresses what problem at the beginning. But in general, I still understand the idea and contribution of the paper. My main concern is about the fairness of the experiments and some questions about the technical approach (refer to Questions). 
I will raise the score if those problems are addressed.", " The paper proposes a Language-Guided Denoising Network (LGDN) for video-language modeling, which can deal with the noisy information in video frames. LGDN dynamically filters out the misaligned or redundant frames under the language supervision and obtains only 2–4 salient frames per video for cross-modal token-level alignment. Extensive experiments on five public datasets show that LGDN outperforms the state-of-the-arts by large margins. Strengths: \nThe paper is well organized and easy to read. The noise issue in video-text retrieval task studied by this work is practical, and the proposed salient frame proposal mechanism seems simple and effective. Extensive experiments are performed on video-text datasets and the experimental results are promising.\n\nWeaknesses:\nThe paper needs more analysis and ablation studies to support its network architecture and proposals. For example, in-depth comparison between salient frame proposal (SFP) mechanism and sparse sampling or other frame sampling techniques. Also, some experimental results (e.g., Effect of SFP mechanism) in supplementary material can be included in the paper.\n 1. Table 2 shows the improvement of $\\mathcal L_{MVCL}$ is marginal. Is it necessary to the LGDN model? Can MIL-NCE be used to compute the $\\mathcal L_{MVCL}$?\n2. The authors claim that the proposed LGDN outperforms the state-of-the-arts by large margins. However, as shown in Table 8, LGDN performs worse than Clip4CLIP. This point needs further discussion. \n3. Since the pre-training with CC12M brings significant improvements, it would be nice to compare some SOTA methods under the same pre-training settings.\n The authors did not discuss the limitations and potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "d9Q10_TUdMR", "RTqRCtwsvCI", "3eAAsEcMk_", "PaEG3FkewT", "S8dhD7BFubk", "CfmlGTnpaDU", "leRUHlfatRi", "jOvjAJSk7y2", "c6WO4PtpRYv", "nips_2022_rA2tItoRUth", "nips_2022_rA2tItoRUth", "CfmlGTnpaDU", "c6WO4PtpRYv", "c6WO4PtpRYv", "pCgvuv_uFgJ", "nips_2022_rA2tItoRUth", "nips_2022_rA2tItoRUth", "nips_2022_rA2tItoRUth" ]
nips_2022_7YTh6S8HIY
PyramidCLIP: Hierarchical Feature Alignment for Vision-language Model Pretraining
Large-scale vision-language pre-training has achieved promising results on downstream tasks. Existing methods highly rely on the assumption that the image-text pairs crawled from the Internet are in perfect one-to-one correspondence. However, in real scenarios, this assumption can be difficult to hold: the text description, obtained by crawling the affiliated metadata of the image, often suffers from the semantic mismatch and the mutual compatibility. To address these issues, we introduce PyramidCLIP, which constructs an input pyramid with different semantic levels for each modality, and aligns visual elements and linguistic elements in the form of hierarchy via peer-level semantics alignment and cross-level relation alignment. Furthermore, we soften the loss of negative samples (unpaired samples) so as to weaken the strict constraint during the pre-training stage, thus mitigating the risk of forcing the model to distinguish compatible negative pairs. Experiments on five downstream tasks demonstrate the effectiveness of the proposed PyramidCLIP. In particular, with the same amount of 15 million pre-training image-text pairs, PyramidCLIP exceeds CLIP on ImageNet zero-shot classification top-1 accuracy by 10.6%/13.2%/10.0% with ResNet50/ViT-B32/ViT-B16 based image encoder respectively. When scaling to larger datasets, PyramidCLIP achieves the state-of-the-art results on several downstream tasks. In particular, the results of PyramidCLIP-ResNet50 trained on 143M image-text pairs surpass that of CLIP using 400M data on ImageNet zero-shot classification task, significantly improving the data efficiency of CLIP.
Accept
This paper proposes PyramidCLIP. It improves the contrastive learning method CLIP with more fine-grained information to produce multiple views of both the image and the text during training. During inference/evaluation, only the standard view is used. The empirical results with different network architectures at different pretraining data scales show that the proposed PyramidCLIP achieves clear gains over the baseline methods. The paper is comprehensively discussed and receives unanimous acceptance from all reviewers, leading to an ``Accept'' decision. The authors are highly encouraged to revise the paper accordingly. The authors reported results on a customized benchmark and showed improvements over their own baseline. In the future, the authors are highly encouraged to report results on the common benchmark below [*], so that readers can clearly see the position of PyramidCLIP in the context of all other similar papers in the literature. [*] https://computer-vision-in-the-wild.github.io/eccv-2022/
train
[ "_HxXP2hX8m", "qK-T1AhgHB", "eTzZUaOe6ST", "lnKxcTG1058", "haZ5FrWdSrG", "k7nj5tX9GBu", "FZm2TPvUyB0", "6uxKcYaFXQy", "VEj5bsfnO6e", "229MPfXBWE6", "yc7NdZbq2uv", "HQg4dqN9DD", "1b2Sx1dcIF", "4iG29vcdk3Y", "S-KKhEPq7g8", "npDczYLTMFd", "QIC4i5R3sUc", "Ch9vFhjwDMg", "Ib-op6cQ7lM", "K2IfD7o8HY" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response!\n\nThe qualitative examples do address my concerns partially. I am still a bit uncomfortable with using summarization models that are pre-trained for summarizing long documents (>400 words) in the news domain in summarizing short captions (10~30 words) in another domain, especially considering that the summarizers can have the hallucination problem. It would be good to include these qualitative examples and discussions in the revised version.\n\nAlso, I assume that you mean \"if the output length of the summarization model is **greater** than the original caption, we will directly use the original caption\"? If the summary length can indeed be greater than the caption length, it confirms my hypothesis that using pre-trained summarizers is not suitable in this setting.\n\nGiven the additional examples and considering that this is one of the minor contributions of the paper, I would increase my score by 1, but I do hope the paper can be more explicit about the potential issue of applying summarization models in this setting.\n\n", " Thank you for the response. My concerns are well-addressed now. It would be nice to see the additional results and discussions in the next version of the paper. I will raise my rating to 7.", " Hi, Reviewer BusK, did our reply address your concerns? If you have other concerns, please feel free to let us know.", " Q1.1: About Random Crop.\n\nHere we use salient ROI instances as evaluation metric to quantitatively analyze the quality of Local Crops.\n\nThe ROI instances are salient objects detected by a powerful pre-trained detector. The more ROI instances the Local Crop contains, the more information of the original image the Local Crop contains.\n\nWe define that if area(intersection(ROI, Local Crop))/area(ROI) > 0.5, the ROI is in the Local Crop.\n\nAnd the amount of information contained in Local Crop = #(ROIs in local crops)/#(All ROIs), term as InfoiLC.\n\nIn the table below, we list the amount of information contained in the Local Crop (InfoiLC) under different random crop ratios, as well as the corresponding zero-shot ImageNet top-1 accuracy. All are the results on the cc3m dataset.\n\n| Random Crop Ratio | InfoiLC | IN ZS Top-1 |\n| --- | --- | --- |\n| 1.0-1.0 | 100% | 23.1 |\n| 0.9-1.0 | 99.1% | 22.8 |\n| 0.8-1.0 | 97.8% | 23.5 |\n| 0.7-1.0 | 96.2% | 23.6 |\n| 0.6-1.0 | 94.0% | 24.4 |\n| 0.5-1.0 | 93.0% | 24.5 |\n| 0.4-1.0 | 90.1% | 24.1 |\n| 0.3-1.0 | 87.1% | 24.0 |\n| 0.2-1.0 | 80.9% | 23.2 |\n\nIt can be seen that when the the random crop ratio is 0.5-1, the Local Crop still contains 93% of the original image information, and the result is improved by (24.5%-22.8%)=1.7%, compared to the random crop ratio 0.9-1.0 that contains 99.1% of the original image information. We hold the opinion that this improvement is brought by discarding the part of the image that does not match the caption.\n\nIn addition, it can also be seen that when the random crop ratio is too small, the zero-shot ImageNet Top-1 Acc will drop, so 0.5-1.0 is a reasonable value.\n\nQ1.2 and Q2: About the advantage of pure dual-stream models. \n\n1)The parallelization of the dual-stream model is better. 
\n\nOn the one hand, in practice, for some tasks, such as zero-shot classification, text features can be extracted offline in advance, which is more flexible and friendly in actual deployment.\n\nOn the other hand, on some tasks that require one image and multiple captions to predict, such as zero-shot multi-label classification, dual-stream shows great flexibility. For the dual-stream model, an image and multiple captions are forwarded separately, and then compared at the end, while the single-stream model requires image-text(concatenate) to be forwarded multiple times.\n\n2)The computational cost of the dual-stream model is smaller than that of the single-stream, since the number of tokens input to the fusion transformer is the sum of visual tokens and language tokens. (Transformer performs Self-Attention on N tokens, with a complexity of O(N2).)\n", " Thanks for your prompt reply.\n\nWe would like to stress once more about the soundness of text summarization.\n\nCOCO dataset is a manually annotated dataset with relatively concise captions and has less noise in captions, while the datasets used in large-scale image-text pre-training are usually crawled from the Internet, and the quality of captions is far inferior to that of manually annotated ones.\n\nWe randomly sample one caption and the corresponding text summarization from the several public datasets we used, and list them in the table below.\n\n\n| Dataset | Original Caption | Text Summarization |\n| --- | --- | --- |\n| CC3M | look an interesting photo , animal is sitting on child | animal is sitting on child.|\n| CC12M | A lot of <PERSON> plastic Lego blocks. A lot of <PERSON> plastic Lego blocks stock photography | plastic Lego blocks stock photography.|\n| YFCC15M | Alaska cruise day 1, boarding/setting sail: All aboard. 1 Sep 2008: Coral Princess setting sail from Vancouver, Canada. | coral princess set sail from Vancouver, canada. |\n| YFCC15MV2 | Hungry Boy Dendrocopos major - Great Spotted Woodpecker - I heard the unmistakalbe peck-peck-peck coming from the top of a pine tree and saw a young woodpecker sitting on a branch Moments later its tired parent landed on the same branch to feed him and then the two of them flew away Not the best picture technically but I liked the moment | a young woodpecker and its tired parent landed on the same branch to feed him |\n| LAION99M | Small kid's meal - spaghetti with cherry tomatoes and basil. Colorful italian dinner on white wooden table. Plate captured from above (top view, flat lay). Layout with free copy (text) space. | colorful italian dinner on white wooden table. |\n\nIt can be seen that the captions of these datasets are relatively noisy and contain more redundant information, but after the summarizaion model, captions with relatively compact and concise semantics can be obtained.\n\nFinally, it is worth mentioning that when we extract the text summarization, if the output length of the summarization model is less than the original caption, we will directly use the original caption, so the length of text summarization must be less than or equal to the caption.", " Thank you for the response!\n\n**Q2&Q4**\n\nI'm still not convinced that applying pre-trained text summarization models to image captions is a valid approach. Text summarization datasets typically have quite long inputs (e.g. 
the average number of tokens of data instances for popular summarization datasets are\n\n| | Reddit| XSum| CNN/DM | WikiHow | NYT | PubMed |\n| ----- | ----- | -----| ----- | -----| ----- | -----|\n| Input |482| 430 | 766 | 580 | 1183 | 444| \n| Output | 28 | 23 | 58 | 62 | 119 | 210 |\n\nAs you have shown in this table, the image captions are quite short (even shorter than an output summary of a typical instance) and I cannot imagine how sentences such as 'a street sign sits on top of a stop sign.' (which I randomly sample from COCO) can be summarized. Therefore, I do not believe applying summarization models to image captions is technically sound. Not including this augmentation method or using keyword extractors (e.g. TextRank) and sentence simplification models would be more plausible to me.\n\n**Q1&Q3**\n\nThank you for the additional experiments! I'm not sure if training the models with more data would be more costly than using an object detector and text summarizer to first create instances and then training the model with the augmented data. Also, the performance gap between CLIP and PyramidCLIP indeed seems to narrow as more data are included. However, I think this is a general problem for data augmentation methods and I do not have many concerns in this aspect.", " Thanks for taking the time to respond to my comments.\n\n**Q1.1: About Random Crop.**\n\nI agree that producing the local view using random crop is an acceptable solution, and, intuitively, the [0.5,1] can indeed cover the relevant regions of the image in most cases. However, the possibility that \"the local view is irrelevant to the textual description\" is still a potential problem because there is a trade-off in terms of the ratio of the random crop. If the ratio is too small, it is likely that the local view is an irrelevant region. If the ratio is too large, it degenerates into the whole image. Therefore, I think it would be nice to further analyze this trade-off.\n\n**Q1.2 and Q2**\n\nMy concern regarding these two questions is generally addressed. Just one more question: What are the advantages of pure dual-stream models over concatenating tokens from two modalities? Less computation?", " Hi, Reviewer jqKi, did our reply address your concerns? If you have other concerns, please feel free to let us know.", " **Q1.1: About Random Crop.**\n\nWe assume that the information at the edge of the image usually does not appear in the caption, or is not the main description of the caption. When constructing the local view, the ratio of random crop is 0.5-1, and its expectation is 0.75. Therefore, the discarded area is usually the bounding rectangle, which can eliminate the irrelevant information of the image.\n\nWe supplement an experiment in the table below, adjusting the scale of the ramdom crop ratio for the local view from 0.5-1 to using the whole image. It can be seen that the improvement brought by the random crop operation is quite obvious, which further verifies that the random crop operation can indeed discard parts of the image that are not related to the captions.\n\n| Method | YFCC5M |\n| --- | :-: |\n| PyramidCLIP (random crop=0.5-1) | 31.8 | \n| PyramidCLIP (random crop disabled)| 30.2(-1.6) |\n\nNote that YFCC5M are randomly sampled from YFCC15Mv1.\n\n\n**Q1.2: About Cross-level Relation Alignment.**\n\nOur ROI features are position-sensitive, and its dimension is 2052D, of which 2048D is for appearance, 4D is the position coordinate information. 
After MHSA aggregates ROI features, the class token contains the relationship between ROIs, both appearance and location. Then we contrast the relation class token with the textual summary and the original text. Taking \"a table is next to a chair\" as an example, we detect the visual feature and position information of \"table\" and \"chair\" from the image, and obtain the visual and location relationship of the two after MHSA integration, then contrast the relation-containing visual cls token with the caption (\"a table is next to a chair\"), which can enhance the text encoder's modeling of the relationship between \"table\" and \"chair\" and the learning of the preposition \"next\".\n\n**Q2: About the difference between PyramidCLIP and existing VLP methods that introduce multi-level semantics.**\n\nHere we mainly discuss the differences between PyramidCLIP and OSCAR/VinVL/MVPTR/X-VLM from five perspectives, which are listed in the table below.\n\n| Method | Paradigm | Encoder Type | Auxiliary Pre-trained Model | Semantics Level | Pre-training Objective |\n| --- | :-: | :-: | :-: | :-: | :-: |\n| OSCAR/VinVL | Single-stream | Transformer-based | object(-attribute) detector | 2 levels (level concatenation) | 1) Masked Token Loss 2) Contrastive Loss |\n| MVPTR | Dual-stream + Cross-modal fusion | Transformer-based | 1) object detector 2) scene graph parser | 2 levels (level concatenation) | 1) Masked Concept Recovering 2) Contrastive Loss 3) Weakly-supervised Phrase Grounding 4) Masked Language Modeling 5) Image Text Matching |\n| X-VLM | Dual-stream + Cross-modal fusion | Transformer-based | None | several levels (each level processed in parallel) | 1) Bounding Box Prediction 2) Contrastive Loss 3) Matching Prediction 4) Masked Language Modeling |\n| PyramidCLIP | Dual-stream | Visual: CNN-based or ViT-based, Text: Transformer-based | 1) object-attribute detector 2) summary extractor | 3 levels (each level processed in parallel) | Contrastive Loss |\n\nIt can be seen that PyramidCLIP is a pure dual-stream network that does not need to concatenate tokens from the two modalities, and only requires a contrastive loss for training, which is succinct. Moreover, PyramidCLIP can support more kinds of visual encoders, both CNN and ViT, which is more flexible. In addition, compared to OSCAR/VinVL/MVPTR, PyramidCLIP can support more granular semantics; compared to X-VLM, PyramidCLIP is more flexible and does not rely on manually annotated bounding boxes and corresponding text descriptions.\n\nNote that X-VLM is similar to PyramidCLIP to some extent regardless of the final cross-modal encoder. However, the several levels of X-VLM only make sense for datasets containing labeled bounding boxes and corresponding annotations like COCO and Visual Genome. For datasets of image-caption pairs, it degenerates into only one level. Besides, the contrastive loss of X-VLM is employed at each level. Compared with X-VLM, PyramidCLIP also introduces cross-level alignment (contrastive loss) to provide more supervision in addition to the peer-level alignment at each level. And the three levels of PyramidCLIP make sense for any dataset containing image-caption pairs.", " **Q3: About why not compare PyramidCLIP with MVPTR and X-VLM?**\n\nThe targeted downstream tasks of PyramidCLIP are quite different from those of MVPTR/X-VLM (see the table below). The common task is only zero-shot image-text retrieval. However, the model parameters are not comparable since MVPTR/X-VLM includes an additional cross-modal encoder. 
Besides, the pre-training datasets are also different. Therefore, comparison with MVPTR/X-VLM does not make much sense. PyramidCLIP is a further extension of CLIP, so it is mainly compared with CLIP-like methods (CLIP, SLIP, DeCLIP, FILIP and DeFILIP) in this paper.\n\n| Method | Targeted Tasks | Model Params | Datasets (Web Crawled) | Datasets (Human Annotated) |\n| --- | :-: | :-: | :-: | :-: |\n| PyramidCLIP | image classification, image-text retrieval (zero-shot), object detection and instance segmentation | 100M (RN50) | SBU, CC3M, CC12M, YFCC15M-V1, YFCC15M-V2, LAION99M | None |\n| MVPTR | image-text retrieval (finetune), VQA, visual entailment, visual grounding | / | Conceptual Captions (CC), SBU | MSCOCO, Flickr30k, GQA, OpenImages |\n| X-VLM | image-text retrieval (zero-shot and finetune), VQA, NLVR2, visual grounding, image captioning | 216M | SBU, CC3M, CC12M | COCO, VG, Objects365, OpenImages |", " **Q1: About the amount of pre-training data.**\n\nWe want to emphasize that instead of creating more image-caption pairs, we are mining more information from the existing image-text pairs and improving the information utilization of image-caption pairs. Text augmentation and extracting more fine-grained object information with a pre-trained object detector are two very common approaches used in image-text pre-training. For example, in FILIP, the text modality uses a back-translation operation (please kindly refer to section 3.2 in FILIP [8], https://arxiv.org/pdf/2111.07783v1.pdf). OSCAR/VinVL/MVPTR all use a pre-trained object detector to extract object features, thereby introducing fine-grained information.\n\nAt present, it is far from enough to use only the original data for image-text pre-training. Basically, all mainstream methods resort to data augmentation, either in the visual modality or the linguistic modality, so as to improve data utilization, which is reasonable.\n\n**Q2: About text summarization.**\n\nWe have counted the average caption length for each dataset we used and the corresponding average summarization length, which is listed in the table below.\n\n1) It can be seen that the average text length of some datasets is quite long (and may contain redundancy), but the text summarization length is significantly shorter. For example, for the YFCC15M dataset, the average length of the original text is 33.8, and the average length of the text summarization is 8.5, which is shortened by 75%.\n\n2) When the original text itself is relatively short, the extracted text summarization still summarizes the original caption, but the magnitude is relatively small.\n\nIn conclusion, text summarization works for captions of any length.\n\n| Datasets | Caption Length (avg) | Summarization Length (avg) |\n| --- | :-: | :-: |\n| CC3M | 10.3 | 10.1 |\n| CC12M | 17.7 | 10.8 |\n| YFCC15M | 33.8 | 8.5 |\n| YFCC15M-V2 | 16.7 | 8.8 |\n| LAION99M | 9.7 | 8.4 |\n\n**Q3.1: About \"the noisy problem may be alleviated if sufficiently large data are used\".**\n\n1) Increasing the amount of data may lead to improved performance, but it will also increase the training time and bring more computation.\n\n2) As the dataset grows, the marginal return diminishes, and a large amount of data is required to bring about a significant improvement. 
As shown in the Figure 1(left) of DeCLIP[10] (https://arxiv.org/pdf/2110.05208.pdf), with the pre-training data of CLIP increasing from 15M to 88M, the ImageNet Top-1 accuracy increases by 21%, but with increasing from 88M to 400M, the accuracy only increases by 2.7%. Similar trend can be found in Figure 2 of CLIP[6]. The computational cost of training 400M data is 4.5 times that of 88M, and the cost-performance ratio is relatively low.\n\n\n**Q3.2: About \"performance-data size curve.\"**\n\nIn the table below, we record the performance of the baseline and PyramidCLIP with the increase in the amount of data. It can be seen that in the case of a small amount of data, the improvement brought by PyramidCLIP is significant, but when using 83% of the data volume of YFCC15-V1(i.e. 12.5M), PyramidCLIP can still bring a gain of 8.6%, which is considerable.\n\n| Data Volume | CLIP (baseline) | PyramidCLIP |\n| --- | :-: | :-: |\n| 2.5M | 6.9 | 24.8(+17.9) |\n| 5M | 21.7 | 31.8(+10.1) |\n| 7.5M | 25.1 | 35.4(+10.3) |\n| 10M | 28.2 | 37.3(+9.1) |\n| 12.5M | 30.6 | 39.2(+8.6) |\n\nThe results in the table above are zero-shot ImageNet top-1 accuracy, pre-trained with YFCC15M-V1 subsets of different sizes. Note, the subsets are sampled randomly.\n\nFurthermore, as shown in the Table 3 of the main text, when we increase the pre-training data volume to 143M, PyramidCLIP can still bring significant improvement compared to the CLIP baseline trained on 143M. Specifically, when the visual encoder is ResNet-50, PyramidCLIP improves zero-shot ImageNet classification by 6.1%, and on the Flickr30K retrieval task, the R@1 of I2T/T2I improves by 5.7%/8% respectively.\n\nFinally, we would like to emphasize that PyramidCLIP solves the common problem existing in vision-language pre-training, that is, the semantic mismatch of image-text and the mutual compatibility between pairs, making PyramidCLIP effective regardless of the size of the pre-training dataset.", " **Q4: About replacing text summarization with keywords.**\n\nWe try to use keywords to replace text summarization, but the performance is not as good as text summarization. We analyze it from both qualitative perspective and quantitative perspective.\n\n- Qualitative perspective. Most of the extracted keywords are nouns, and there are relatively few adjectives and prepositions, which will lack some visual patterns and position information between objects. For example, the keyswords for \"Soccer player competes for the ball during day of the training camp.\" is \"Soccer, training camp\", while the summarization is \"Soccer player competes for the ball.\", the semantic information of summarization is better. \n\n- Quantitative perspective. We replace the text summarization by keywords and conduct experiments on CC3M. The results are shown in the table below. It can be seen that the results of keywords are not as good as text summarization.\n\n| Method | IN ZS Top-1 Acc | \n| --- | :-: |\n| PyramidCLIP(Summarization) | 24.8 | \n| PyramidCLIP(Keywords) | 24.2(-0.6) |\n\nNote that we use hugging face's open sourced keyword extraction model (https://huggingface.co/yanekyuk/bert-keyword-extractor) to extract keywords, which is finetuned from bert. ", " **Q1: About training set.**\n\nOur training set is a collection of publicly available datasets, such as CC3M, YFCC15M, and a subset of 99M data in LAION400M. All these datasets are image-text pairs crawled from the Internet, which may contain a lot of noise. 
As for the 99M data selected from the 400M, we use the image-text similarities provided by LAION400M as the metric and select the largest 99M. More details can be found in section 4.1 of the main text and A.1 in the supplementary material.\n\n**Q2: Results on CC3M.**\n\nWe conduct experiments on CC3M and compare PyramidCLIP with the CLIP baseline, and the results are shown in the table below. For a fair comparison, the following experiments are all trained for 32ep with the same hyperparameters, e.g. batch size and learning rate.\n\n| Method | Model Structure | ImageNet Top-1 Acc | ImageNet Top-5 Acc |\n| --- | :-: | :-: | :-: |\n| CLIP | RN50 | 18.9 | 36.3 |\n| PyramidCLIP | RN50 | 27.4 (**+8.5**) | 47.4 (**+11.1**) |\n| CLIP | ViT-B/32 | 12.4 | 26.4 |\n| PyramidCLIP | ViT-B/32 | 24.0 (**+11.6**) | 42.5 (**+16.1**) |\n\nSince in PSD (Robust Cross-Modal Representation Learning with Progressive Self-Distillation) both the baseline and the method are trained for a longer time, i.e. 100ep, and the hyperparameters used are also quite different, we think the comparison is unfair.\n\n**Q3: About different joint forms of categories.**\n\nWe have tried different joint forms, such as splicing with spaces, but they have little effect on the results, and the fluctuation is around 0.2, so we adopt the most intuitive and easy-to-understand joint approach, namely \"adj adj adj n, adj adj adj n, ....\".", " During the training process, the computation cost of PyramidCLIP is about 2.3 times that of the baseline. Therefore, we compare the baseline (with longer training time and larger batch size) and PyramidCLIP (with soften target and LeFF disabled), and the results are shown in the table below. For a fair comparison, all of these experiments are conducted on CC3M, using 16 V100 GPUs and the same hyperparameters.\n\nIt can be seen that when we extend the training of the baseline from 8ep to 2 times (16ep) or even 3 times (24ep), the improvement is not obvious, and it still underperforms PyramidCLIP trained with 8ep.\n\n| Batch Size | Epoch | Model | ImageNet Zero-shot Top1 | Model | ImageNet Zero-shot Top1 |\n| :-: | :-: | :-: | :-: | :-: | :-: |\n| 2048 | 8 | CLIP-ResNet50 | 18.9 | CLIP-ViT-B/32 | 10.6 |\n| 2048 | 16 | CLIP-ResNet50 | 19.7 | CLIP-ViT-B/32 | 11.7 |\n| 2048 | 24 | CLIP-ResNet50 | 20.2 | CLIP-ViT-B/32 | 13.6 |\n| 4096 | 8 | CLIP-ResNet50 | 18.3 | CLIP-ViT-B/32 | 9.7 |\n| 4096 | 16 | CLIP-ResNet50 | 18.9 | CLIP-ViT-B/32 | 10.9 |\n| 2048 | 8 | PyramidCLIP-ResNet50 | 23.8 | PyramidCLIP-ViT-B/32 | 16.8 |\n\nFurthermore, we would like to emphasize the importance of the soften target proposed in PyramidCLIP. As shown in the table above, when we increase the batch size of the baseline from 2048 to 4096, the top-1 acc drops instead. We attribute this phenomenon to the high repetition rate of texts in existing image-text paired datasets: since they are constructed by crawling images for a given text on the Internet, it is very likely that one text corresponds to multiple images. (We calculated the text repetition ratio in the CC3M dataset, and it is as high as 30.24%. The same phenomenon also occurs in other pre-training datasets.) Therefore, when the batch size increases, the probability of false negative samples will also increase. 
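Before the results table below, here is a minimal sketch of what a label-smoothing-style softened contrastive target could look like: part of each row's target mass is spread over the other in-batch pairs so that likely false negatives are penalized less. This is only an illustration of the general idea under that assumption, not PyramidCLIP's exact loss, and the smoothing weight `eps` is a hypothetical parameter.

```python
import torch
import torch.nn.functional as F

def soft_infonce(image_emb, text_emb, temperature=0.07, eps=0.1):
    """Symmetric InfoNCE with label-smoothed (softened) targets."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature        # (B, B)
    n = logits.size(0)
    # Soft targets: 1 - eps on the paired sample, eps spread elsewhere.
    target = torch.full_like(logits, eps / (n - 1))
    target.fill_diagonal_(1.0 - eps)
    loss_i2t = -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    loss_t2i = -(target * F.log_softmax(logits.t(), dim=1)).sum(dim=1).mean()
    return 0.5 * (loss_i2t + loss_t2i)
```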
\n\n| Batch Size | Epoch | Model | ImageNet Zero-shot Top1 |\n| :-: | :-: | :-: | :-: |\n| 2048 | 8 | PyramidCLIP-ResNet50 (w/o soften target) | 23.8 |\n| 2048 | 8 | PyramidCLIP-ResNet50 (w/ soften target) | 24.5 |\n| 4096 | 8 | PyramidCLIP-ResNet50 (w/o soften target) | 23.4 |\n| 4096 | 8 | PyramidCLIP-ResNet50 (w/ soften target) | 24.3 |\n\nAs shown in the table above, the soften target can significantly alleviate the problem of false negatives, which brings insights for follow-up research in this field.", " **Q1: About the computational cost brought by ROI features.**\n\n1) The ROI features are extracted offline in advance, so during the training of all the models, they are extracted once and for all.\n\n2) Our ROI features are only used in the training phase and do not affect testing and deployment, i.e., no extra computational burden is introduced in the inference stage, which is practical.\n\n3) In the training process, ROI features are input to the latter layers of the visual model (specifically, for ViT, the 9th of the 12 blocks; for ResNet, only the last attention pooling), which requires less computation. Furthermore, the Transformer performs Self-Attention on N tokens with a complexity of O($N^{2}$). For input images of 224 resolution, ViT-B/16 has a total of 197 tokens (patch tokens plus the cls token), and for ResNet-50, the final attention pooling is calculated on 50 tokens. However, we only extract 10 ROI features for each image, so there are only 11 tokens for one image, and the amount of computation introduced is much less than that of the images.\n\nWe list the FLOPs of the visual and textual models in the following table. Taking ResNet-50 as an example, when the input image is 224x224, the total computational cost of the visual and textual models is 6.26+2.98=9.24 GMacs. When the input is ROI features, the computation of the visual model is very small (0.185G), and the total is only 3.165 GMacs, which is only 34% of that of the 224x224 image input.\n\n| Input | PyramidCLIP-RN50 (V / L) | PyramidCLIP-ViT-B/16 (V / L) |\n| --- | :-: | :-: |\n| 224x224 image | 6.26G / 2.98G | 17.61G / 2.98G |\n| ROI features | 0.185G / 2.98G | 0.229G / 2.98G |\n\nNote that V and L in the above table denote the computation (in GMacs) of the visual and linguistic modalities, respectively.\n\n4) In order to show that simply increasing the amount of computation cannot bring significant performance improvement, we increase the number of training epochs of the CLIP baseline to 2 or 3 times. It can be seen that the results of the CLIP baseline trained for a longer time still cannot exceed PyramidCLIP. \n\n| Model | Epoch | IN ZS Top-1 |\n| --- | :-: | :-: |\n| CLIP-RN50 | 8 | 18.9 |\n| CLIP-RN50 | 16 | 19.7 |\n| CLIP-RN50 | 24 | 20.2 |\n| PyramidCLIP-RN50 | 8 | 24.5 |\n\nNote that the above experiments are performed on CC3M, and for fairness, the same hyperparameters are used.\n\n**Q2: About the ablation of peer-level alignment.**\n\nDue to the page limitation of the paper, we put more ablation studies in the supplementary material, including the granular ablation of peer-level alignment. Please refer to Section F.2 (especially Fig 2 and Table 8) in the supplementary material for more details.\n\n**Q3: About the cross-level relation alignment.**\n\nHere, we re-introduce the cross-level relation alignment briefly. 
\n\n**Motivation:** \n\n&ensp;&ensp; Cross-level relation alignment aims at introducing object-level information, thereby enhancing the modeling of the relations between objects, in both appearance and position.\n\n**Method:**\n\n1) First Step (preparation): \n\nPyramidCLIP uses a pre-trained object detector to extract salient object features (termed ROI features) and the corresponding category information in the image. ROI features are position-sensitive: in each 2052D ROI feature, 2048D is for appearance and 4D for position information. \n\n2) Second Step (encoding): \n\n- Visual Modelling: We input the 2052D ROI feature sequence into one or several transformer blocks, using MHSA to aggregate the ROI features. After MHSA integration, the cls token contains the relationship between the ROI features, in both appearance and position. \n \n (When the image encoder is a ViT, the ROI feature sequence can be directly input to the later layers of the visual encoder for modelling, as shown in Fig 3(b). When the visual model is a CNN, we replace the traditional global average pooling with attention pooling and input the ROI feature sequence into the attention pooling for modelling, as shown in Fig 3(a).)\n \n- Linguistic Modelling: The category information together with the descriptions of all objects is concatenated and input to the text encoder to capture the relationship between objects.\n\n3) Third Step (contrasting): \n- Contrasting the visual relation-containing cls token with the original caption or text summarization can promote the text encoder to model the relationship between the nouns and learn the prepositions in the caption. \n- Contrasting the textual relation-containing cls token with the global view or local view can make up for the problem that the original caption lacks descriptions of the salient objects.\n\nFor the effectiveness of cross-level relation alignment, please kindly refer to Table 5 in the main text.\n\nAs for Fig 3(c), this is the Locally-enhanced Feed-Forward (LeFF) module we use for ViT, which aims to improve the patch-level local perception and interaction. For the effectiveness of LeFF, please kindly refer to the last two rows of Table 5 in the main text.", " This work introduces the construction of an input pyramid with different semantic levels for each modality. It then aligns visual elements and linguistic elements in a hierarchical way. The proposed PyramidCLIP outperforms CLIP by a large margin. \n\n Strength:\n- The results are very promising. The model outperforms the sota methods across many datasets.\n- The overall framework is easy to follow.\n\nWeakness:\n- The idea of introducing ROI features may increase the computational cost significantly. \n- In peer-level semantics alignment, the authors introduced coarse-grained global contrast learning and fine-grained local contrast learning. However, I didn’t find more studies to validate that combining the two contrast alignments helps model training.\n- The authors described the cross-level relation alignment in Sec 3.3. Three structures are proposed, but I suggest the authors provide more analysis to prove their effectiveness. Please see my comments in the weaknesses part. Yes.", " Under the contrastive CLIP learning framework, the work proposes to utilize more fine-grained information to produce multiple views of both the image and text during training, and hence constructs more contrastive loss terms across different views. 
During inference/evaluation, only the standard view is used.\n\nEmpirically, with 3 different architectures (ResNet50/ViT-B32/ViT-B16), different pretraining data scales and several down-stream datasets, authors show that the proposed approach achieves clear gain over the baseline systems. The work follows a natural motivation and achieves very good empirical gain over some strong baseline systems. Overall, the paper is well written and the empirical study is solid.\n\nA key concern I have is whether the comparison is fair enough. If I understand correctly, for each view of either the image or the text, we need to feed the view into the model once, which leads to roughly 2x - 3x additional computation cost. Again, if my understanding is correct, despite using the same batch size, the actual pretraining cost might be much higher for the proposed method than baselines, making the comparison less information. A better comparison seems to be make the batch size of baselines larger until the training cost is comparable. If my understanding above is correct, how would the comparison look like if similar training cost is invested in training the baselines, particularly with soften and LeFF disabled? (NOTE: In my opinion, this is very critical for judging whether the proposed method is really useful. Hence, my final review will largely depend on this.) I don't see any particular problem here.", " This paper proposes hierarchical feature alignment for vision language pre-training, called PyramidCLIP, which alleviates semantic mismatch as well as mutual compatibility problems, i.e. false positives and false negatives. PyramidCLIP constructs inputs with three levels of semantics in visual and language modalities respectively and then resolves semantic mismatch through peer-level semantic alignment and cross-level relation alignment. In addition, PyramidCLIP adopts a soft form of InfoNCE to deal with mutual compatibility. Strengths:\n\n1. This paper is well written and easy to follow.\n\n2. The motivation and solution of the article are clear. More precise hierarchical feature alignment for tackling semantic mismatch, and softening InfoNCE for mutual compatibility.\n\n3. The experiments are well designed, and the results are excellent.\n\n\nLimitations:\n\nHave the authors adequately addressed the limitations and potential negative social impact of their work? If not, please include constructive suggestions for improvement. Authors should be rewarded rather than punished for being upfront about the limitations of their work and any potential negative societal impact.\n\n1. Needs more explanation about how the training set is constructed.\n\n2. In order to compare with some recently published works(e.g. [1]), it is recommended that the author can supplement the results on smaller scale datasets, such as cc3m.\n\n3. The categories obtained by object detection are simply joined by commas. Whether different joint forms have an impact on the results (e.g. splicing with spaces)? \n\n[1] Robust Cross-Modal Representation Learning with Progressive Self-Distillation. CVPR 2022. See the weaknesses. Yes", " This paper proposes to improve CLIP by adding contrastive losses at different levels. Specifically, for each image in the original data, they construct image views at global, local, and region levels, and also create their corresponding text captions. 
Therefore, for each image-caption pair in the original dataset, they can create two additional image-caption pairs and the model is trained with additional contrastive losses given the newly constructed data. They also soften the original contrastive loss with a label-smoothing-like technique. Experiments demonstrate improvements over several baselines on zero-shot retrieval, linear probing on image classification, object detection, and semantic segmentation tasks. Strengths:\n1. The empirical results are strong as they can outperform several popular baselines.\n2. The paper is well-structured.\n3. I like the idea of softening the contrastive loss objective, which makes sense and the paper shows it works well.\n\nWeaknesses:\n1. The comparisons between their model and baselines may not be fair. The paper uses multiple pre-trained models for their model (e.g. text summarization and object detection models) which can introduce more supervision. Also, because their method will create more image-caption pairs for their model, they would have more pretraining data than the baselines.\n2. Some designs are questionable. For example, the text captions in image-caption datasets are typically short, whereas they use a pre-trained text summarization model, which is designed for summarizing long documents, to further shorten the captions, which does not make sense to me.\n3. While it is true that the pre-training datasets can be noisy, the problem may be alleviated if sufficiently large data are used. It would be good to see if they can keep the performance gain as more data are included. For example, they can plot performance-data size curves for both their model and baselines and see the performance gap when the data size varies. Would it make more sense to use a keyword extraction model than a summarization model? I am not sure if their method can be scalable as when more data is included, many of the problems mentioned in the paper may be alleviated.", " This paper proposes a new framework called PyramidCLIP for vision-language pre-training (VLP). PyramidCLIP first extracts multi-level features from both the visual and linguistic domains and then conducts contrastive learning by aligning visual and linguistic features in both peer-level and cross-level ways. In this way, the vision and language representations learned by PyramidCLIP encode better image-text alignment, which alleviates the semantic mismatch problem that exists in the image-text pairs for pre-training. Moreover, the authors also replace the loss term of the negative samples in contrastive learning with a softened version, to tackle the problem that different image-text pairs may have potential correlation. The empirical results show that PyramidCLIP clearly outperforms the CLIP baseline in a variety of downstream tasks and also achieves SoTA performance on several downstream tasks, as compared with other VLP models. ### Strengths\n* This paper addresses the problem of the quality of image-text pairs (which is an important problem in VLP) using hierarchical feature alignment. 
The core idea and specific designs of the proposed PyramidCLIP are generally reasonable.\n* The authors conduct extensive experiments to demonstrate the effectiveness of PyramidCLIP, covering different backbone architectures (ResNet50 and ViT-B), pre-training datasets of varying sizes and a variety of downstream tasks (zero-shot image classification, zero-shot image-text retrieval, linear probe, object detection and instance segmentation).\n* The ablation study is comprehensive, showing that all the components of the proposed method are conducive.\n* Qualitative analyses are conducted to intuitively show that PyramidCLIP learns better vision and linguistic representation than CLIP.\n* The paper is generally well-written and easy to follow.\n\n\n### Weaknesses\n\n1. Some specific designs of PyramidCLIP seem counter-intuitive.\n* Random crop cannot guarantee the quality, especially for the local view. For example, if the local view is irrelevant to the textual description, minimizing the distance of the corresponding features may confuse the model.\n* It is unclear how the cross-level alignment helps the modelling of relations between salient objects. For example, how does the model know that \"a table is next to a chair\", by contrasting the ROI features to the textual summary or the original text? It is better to provide an intuitive explanation.\n\n2. The difference between PyramidCLIP and existing VLP methods that also introduce multi-level semantics is not clearly discussed. In the related work, the authors state that \"Different from methods mentioned above, each level is input to the corresponding encoder individually, without concatenating.\" Such a difference seems trivial and incremental.\n\n3. Many methods described in the related work are not compared in the experiments, especially the methods that also introduce multi-level semantics (e.g., MVPTR and X-VLM).\n * Could you provide an explanation about how the cross-level alignment helps the modelling of relations between salient objects?\n* Could provide a more detailed discussion on the difference between PyramidCLIP and the VLP methods that also introduce multi-level semantics?\n* Why not compare PyramidCLIP with MVPTR and X-VLM? * The soften objective function treats all the negative samples equally using label smoothing, which is suboptimal considering different image-text pairs should have different degrees of correlation." ]
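The record above repeatedly refers to peer-level alignment, in which each visual view (global crop, local crop, ROI sequence) is contrasted with a corresponding text view (caption, summary, concatenated categories). The sketch below only illustrates how several such view pairs could share one symmetric contrastive loss; the pairing by position and all names are assumptions made for illustration, and the actual model details are in the paper.

```python
import torch
import torch.nn.functional as F

def clip_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of embeddings."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    labels = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

def peer_level_loss(visual_views, text_views, temperature=0.07):
    """Sum one contrastive loss per paired (visual view, text view) level.

    visual_views / text_views: dicts of (B, D) embeddings keyed by level,
    e.g. {'global': ..., 'local': ..., 'roi': ...} and
    {'caption': ..., 'summary': ..., 'categories': ...}; pairing the
    levels by insertion order is only for illustration.
    """
    total = 0.0
    for v, t in zip(visual_views.values(), text_views.values()):
        total = total + clip_loss(v, t, temperature)
    return total
```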
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 8, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4, 3 ]
[ "haZ5FrWdSrG", "lnKxcTG1058", "QIC4i5R3sUc", "FZm2TPvUyB0", "k7nj5tX9GBu", "HQg4dqN9DD", "VEj5bsfnO6e", "Ib-op6cQ7lM", "K2IfD7o8HY", "K2IfD7o8HY", "Ib-op6cQ7lM", "Ib-op6cQ7lM", "Ch9vFhjwDMg", "QIC4i5R3sUc", "npDczYLTMFd", "nips_2022_7YTh6S8HIY", "nips_2022_7YTh6S8HIY", "nips_2022_7YTh6S8HIY", "nips_2022_7YTh6S8HIY", "nips_2022_7YTh6S8HIY" ]
nips_2022_NjImFaBEHl
Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning
We investigate a practical domain adaptation task, called source-free domain adaptation (SFUDA), where the source pretrained model is adapted to the target domain without access to the source data. Existing techniques mainly leverage self-supervised pseudo-labeling to achieve class-wise global alignment [1] or rely on local structure extraction that encourages the feature consistency among neighborhoods [2]. While impressive progress has been made, both lines of methods have their own drawbacks – the “global” approach is sensitive to noisy labels while the “local” counterpart suffers from the source bias. In this paper, we present Divide and Contrast (DaC), a new paradigm for SFUDA that strives to connect the good ends of both worlds while bypassing their limitations. Based on the prediction confidence of the source model, DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals under an adaptive contrastive learning framework. Specifically, the source-like samples are utilized for learning global class clustering thanks to their relatively clean labels. The more noisy target-specific data are harnessed at the instance level for learning the intrinsic local structures. We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch. Extensive experiments on VisDA, Office-Home, and the more challenging DomainNet have verified the superior performance of DaC over current state-of-the-art approaches. The code is available at https://github.com/ZyeZhang/DaC.git.
Accept
This paper proposes a relatively complicated method for source-free unsupervised domain adaptation, which integrates several techniques into a divide and contrast framework. The idea of dividing the target data into source-like subset and target-specific subset and employing global alignment and feature consistency for each subset is novel when the source data is inaccessible. The contrastive learning and memory-based MMD are novel in the context of source-free domain adaptation and introduce theoretical benefits in terms of the expansion theory and domain alignment theory, respectively. Reviewers were on the positive side while holding some concerns on the marginal improvement over the SoTA methods, which were addressed in the author rebuttal. AC generally agreed that the paper has introduced a novel and solid contribution to the field, with a nice connection between algorithmic methods and theoretical insights, and recommended the paper for acceptance. Authors are suggested to incorporate all rebuttal material in the revision and if possible, to work out a recipe for easing the adoption of their relatively complicated framework that comes with many modules and loss terms.
val
[ "fxIUL2OxPjM", "uLx_EzivY-Q", "mHxVheo7uG", "aMi-dDPorsh", "ovB5_dNXmA8", "t7HHbk5V-NN", "EB8QgLya_H8", "b73UJPbAfGX", "lPffnudFu1g", "Cp36eEwy2HX", "Fq9YKma9Hgt" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Authors have addressed the error bars issue that I have raised. Other reviewers have raised some concerns about comparisons with SOTA approaches, but it appears that the authors have adequately addressed most of the concerns.", " Thanks for the reply. My concerns are addressed.", " We thank reviewer xRX9 for the detailed review! We reply point-by-point here.\n\n> A more recent work that outperforms proposed approach ([a]) is not compared against.\n> \n\nSHOT++ [a] is a two-stage extension of SHOT [1]. After adding the rotation prediction auxiliary task [b] to SHOT in the first stage, the second stage of [a] is trained in a semi-supervised manner (MixMatch [c]). Since we aim to propose a simple and scalable paradigm for source-free unsupervised domain adaptation, we believe it is a bit unfair to compare our end-to-end framework with the two-stage extension of [1]. For a more thorough comparison, we first compare our method with the result of the end-to-end version of [a] (denoted as SHOT+ here). We then add the second stage training to compare with a full version of [a].\n\n| VisDA | Avg acc |\n| :-: | :-: |\n| SHOT+ [a] | 85.5 |\n| DaC++ | 87.3 |\n| Target Supervised | 89.6 |\n\n| Office-Home | Avg | AC | AP | AR | CA | CP | CR | PA | PC | PR | RA | RC | RP |\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\n| SHOT+ [a] | 72.0 | 57.7 | 79.1 | 81.5 | 67.6 | 77.9 | 77.8 | 68.1 | 55.8 | 82 | 72.8 | 59.7 | 84.4 |\n| DaC (Ours) | 72.8 | 59.1 | 79.5 | 81.2 | 69.3 | 78.9 | 79.2 | 67.4 | 56.4 | 82.4 | 74.0 | 61.4 | 84.4 |\n\n| DomainNet | Avg | Rw→Cl | Rw→Pt | Pt→Cl | Cl→Sk | Sk→Pt | Rw→Sk | Pt→Rw |\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\n| SHOT+ [a] | 66.4 | 67.7 | 65.6 | 69.3 | 62.1 | 64.9 | 57.6 | 77.7 |\n| DaC (Ours) | 68.3 | 70.0 | 68.8 | 70.9 | 62.4 | 66.8 | 60.3 | 78.6 |\n\nFrom the results above, even without the rotation prediction technique, DaC is comparable with SHOT+. After extending DaC to a two-stage version DaC++, we make another comparison with the full version of SHOT++ on the VisDA dataset. \n\n| VisDA | Avg Acc |\n| :-: | :-: |\n| SHOT++ [a] | 87.3 |\n| DaC++ | 88.6 |\n| Target Supervised | 89.6 |\n\nNote that “target-supervised” represents that all the target samples are supervised with ground-truth labels, and this oracle result is copied from [a]. As seen from the table, DaC++ outperforms SHOT++ by more than 1 percent in terms of the average accuracy while achieving performance very close to the oracle result. This validates the superiority and scalability of our framework. \n\n[a] Liang, Jian, et al. \"Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer.\" In TPAMI, 2021.\n\n[b] S. Gidaris, et al, “Unsupervised representation learning by predicting image rotations,” in ICLR, 2018.\n\n[c] D. Berthelot, et al, “Mixmatch: A holistic approach to semisupervised learning,” in NeurIPS, 2019.", " > Eqn. 4 and Eqn. 5 share the same parameter $\\tau$ but they are different from each other. The threshold $\\tau$ in Eqn. 4 is set to 0.95 according to the supplementary material following [47], however, I could not find the reference [47]. Besides, the choice of the threshold $\\tau$ is not included in the ablation study, it would be necessary to see the ablation of choosing the threshold $\\tau$.\n> \n\nWe first apologize for the abused use of $\\tau$ (in Eqn. 4 and Eqn. 5), and the missing reference [47] due to the submission error of supplementary material. 
We will include all missing references in the revised version. We add the ablation study of $\\tau$ on VisDA, and the results are shown in the following table.\n\n| $\\tau$ | Avg acc |\n| :-: | :-: |\n| 0.91 | 87.06 |\n| 0.93 | 87.27 |\n| 0.95 | 87.34 |\n| 0.97 | 87.39 |\n| 0.98 | 87.19 |\n\nAs can be seen, the performance is not sensitive to the choice of $\\tau$.\n\n[44] Shai Ben-David, et al. \"A theory of learning from different domains.\" In Machine learning, 2010.\n\n[45] Jian Liang, et al. \"Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer.\" In TPAMI, 2021.\n\n[46] Qizhe Xie, et al. \"Unsupervised data augmentation for consistency training.\" In NeurIPS, 2020.\n\n[47] Alex Kurakin, et al. \"Fixmatch: Simplifying semi-supervised learning with consistency and confidence.\" In NeurIPS, 2020\n\n\n> Unlike previous method [3], Eqn. 5 jointly achieves class-wise adaptation and instance-wise adaptation. It would be interesting if the authors could compare the proposed loss with separate class-wise adaptation and instance-wise adaptation losses.\n> \n\nEqn. 5 is the proposed contrastive loss that enhances both global and local structures. We have already included the suggested comparison **in our ablation analysis** (Section 5.3. Role of *Divide and Contrast* paradigm), in which *Scheme-S* only achieves class-wise adaptation like method [3] while *Scheme-T* regards all samples as target-specific and only conducts instance-wise adaptation. The results in Table 4 demonstrate the effectiveness of our framework.\n\n> The section of preliminaries and analysis is a bit unclear to introduce the task.\n> \n\nWe agree that Section 3 is theory-heavy and a bit difficult to follow. We intended to provide the theoretical insight behind our proposed method in Section 3. In particular, *Claim 3.1* shows that the source-like set is consistency-robust under a sufficiently large threshold of prediction probability. This lays the foundation of our data segmentation strategy. *Theorem 3.2* states that the target risk is bounded by three parts that are later formulated as the three losses of our method. In the revision, we will polish the writing to make it more informative and better connected to our task. \n\n\n> When do the method update the source-like set and class centroids, after one batch or one epoch?\n> \n\nWe update the source-like set and class centroids **after one batch**. (see Supplementary Material C. Algorithm 1. the line commented with “divide”)", " We thank reviewer 4tiB for the thorough review. The two main concerns are addressed below.\n\n> It shows that the results of the proposed method are just marginally good compared to [2] and [3] in 2 out of 3 datasets (table 1 and table 3).\n> \n\nWe have **significantly outperformed the best source-free baseline (i.e. SHOT) on DomainNet**, the most difficult benchmark that we have experimented with in this paper, by **more than 3 percent** (68.3 v.s. 65.1). In fact, **our approach can achieve a bigger leap in the performance boost in a more difficult and larger dataset with a greater domain gap**. Intuitively, we divide the target samples into source-like and target-specific ones by the source classifier. If the domain gap is small, most of the target samples would be regarded as source-like in the perspective of the source model. In this case, most of the domain adaptation methods can easily exploit their advantages and achieve good classification accuracy, leaving little room for improvement. 
If the domain gap is large, most of the existing methods would fail to generate accurate pseudo-labeling results. In contrast, our divide-and-contrast framework can take full advantage of both the global and local structures of the target data via data segmentation and customized learning strategies for the data subsets. Hence, we are able to significantly improve the performance in difficult and noisy settings.\n\nOffice-Home is a relatively small benchmark. DaC can still achieve more than 1.2 points of average accuracy improvement (over 12 transfer scenarios) compared with most of our baselines. \n\nFor the VisDA dataset, we have achieved more than 1 percent advantage in average accuracy over the best source-free baseline approach (87.3 v.s. 86.0). The numerical improvement is less significant than that of DomainNet as the performance gain in VisDA is close to saturation. We provide more numerical comparisons between DaC, DaC++ (an extension of our model), CPGA [3], NRC [2], and the oracle (target-supervised) results as follows:\n\n| VisDA | NRC [2] | CPGA [3] | DaC | DaC++ | target-supervised |\n| :-: | :-: | :-: | :-: | :-: | :-: |\n| Average Acc | 85.9 | 86.0 | 87.3 | 88.6 | 89.6 |\n\nNote that “target-supervised” means that all target samples are supervised with ground-truth labels, and this oracle result is copied from [a]. DaC++ is our two-stage extension using the same second-stage training as [a]. Both DaC and DaC++ achieve better accuracy that is much closer to the target-supervised result than [2][3]. Hence, we believe the improvement on the benchmarks is substantial.\n\nBesides the numerical results, we offer the **accuracy curve comparison** on VisDA in Figure 3. It shows that DaC can achieve **more stable training, faster convergence speed, and better classification performance than the other candidate approaches**.\n\n> The performance is good in table 2. Can the authors explain more about the baseline results? For example, how the official code was used and how the authors ensure the hyperparameters are reasonably well tuned?\n> \n\nAs discussed above, the performance gain of DaC is more significant for the more challenging DomainNet dataset. \n\nIn terms of implementation details, to ensure fair comparisons with all SFUDA baselines, we first trained the source model by supervised learning, and then conducted model adaptation on the target domain using the same batch size, learning rate, and training epochs as our approach. For the source-present domain adaptation baselines, we copy the results of MME and CDAN from the original papers and implement VDA and GVB with the same learning rate and training epochs. We directly use the hyper-parameters provided in their released codes.\n\n\n[a] Liang, Jian, et al. \"Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer.\" In TPAMI, 2021.", " Thank you for the thorough review. The two main concerns are addressed below.\n\n> Compared to existing SOTA methods (NRC and CPGA), the performance improvement is quite limited.\n> \n\nWe have **significantly outperformed the best source-free baseline (i.e. SHOT) on DomainNet**, the most difficult benchmark that we have experimented with in this paper, by **more than 3 percent** (68.3 v.s. 65.1). In fact, **our approach achieves a bigger performance boost on a more difficult and larger dataset with a greater domain gap**. 
Intuitively, we divide the target samples into source-like and target-specific ones by the source classifier. If the domain gap is small, most of the target samples would be regarded as source-like in the perspective of the source model. In this case, most of the domain adaptation methods can easily exploit their advantages and achieve good classification accuracy, leaving little room for improvement. If the domain gap is large, most of the existing methods would fail to generate accurate pseudo-labeling results. In contrast, our divide-and-contrast framework can take full advantage of both the global and local structures of target data via data segmentation and customized learning strategies for data subsets. Hence, we are able to significantly improve the performance in difficult and noisy settings.\n\nOffice-Home is a relatively small benchmark. DaC can still achieve more than 1.2 points average accuracy improvement (over 12 transfer scenarios) compared with most of our baselines. \n\nFor the VisDA dataset, we have achieved more than 1 percent advantage in average accuracy over the best source-free baseline approach (87.3 v.s. 86.0). The numerical improvement is less significant than that of DomainNet as the performance gain in VisDA is close to saturation. We provide more numerical comparisons between DaC, DaC++(extension of our model), CPGA [3], NRC [2], and the oracle (target supervised) results as follows:\n\n| VisDA | NRC[2] | CPGA [3] | DaC | DaC++ | target-supervised |\n| :-: | :-: | :-: | :-: | :-: | :-: |\n| Average Acc | 85.9 | 86.0 | 87.3 | 88.6 | 89.6 |\n\nNote that “target-supervised” represents that all target samples are supervised with ground-truth labels, and this oracle result is copied from [a]. DaC++ is our two-stage extension by the same second stage training with [a]. Both DaC and DaC++ can achieve better accuracy that is much closer to the target-supervised result than [2][3]. Hence, we believe the improvement on the benchmarks is substantial.\n\nBesides the numerical results, we offer the **accuracy curve comparison** on VisDA in Figure 3. It shows that DaC can achieve **more** **stable training, faster convergence speed, and better classification performance than the other candidate approaches**. \n\n[a] Liang, Jian, et al. \"Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer.\" In TPAMI, 2021.\n\n> The proposed method is not evaluated on Digit datasets (MNIST, SVHN, and USPS) and the Office31 dataset which are two important benchmarks. Any reason behind this?\n> \n\nAs for Digit datasets, we do not evaluate DaC on them since they are relatively simple benchmarks. The average result of SHOT [1] is 98.3, which is quite close to the target supervised result 98.4. In addition, other most recent baselines [2][3][5] are not tested in Digit datasets. \n\nDue to the limited space, we only chose the more challenging Office-Home as representative of the Office series datasets (Office31, Office-Home, and Office-Caltech). For more thorough comparisons, we add the experimental results on Office31 as follows. 
In Office31, our method surpasses both source-free and source-available baselines.\n\n| Office31 | Source free | AD | AW | DA | DW | WA | WD | Average |\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\n| GVB [40] | No | 95 | 94.8 | 76.8 | 98.7 | 73.7 | 100.0 | 89.3 |\n| SHOT [1] | Yes | 94 | 90.1 | 74.7 | 98.4 | 74.3 | 99.9 | 88.6 |\n| DaC (Ours) | Yes | 94.2 | 91.7 | 76.8 | 98.1 | 75.7 | 99.8 | 89.4 |", " Thank you for appreciating our work! Specific concerns are addressed below.\n\n> Line 51: Provide a brief explanation for the \"memory bank\".\n\nConsidering your constructive suggestion and the coherence of the reading, we will add the following brief description of the memory bank in Line 47.\n\n‘’…with memory bank. Specifically, memory bank consists of representations of all target samples, and momentum updated in the training stage [3][15][16]. Thanks to the high…’’\n\n>Line 260: What is SHOT? Cite a reference for that.\n\nSHOT [1], which leverages self-training to achieve class-wise adaptation (as described in the Abstract), is one of our most related baselines. We will include the corresponding reference in the revision. \n\n>No error bars are provided for the numerical results.\n\nWe conducted our method three times and found error bars are relatively small. Taking the average accuracy on VisDA as an example, the mean results of multiple runs is 87.3, while the deviation between the best and worst runs is 0.027. Thus, we follow all of our baselines [1][2][3][5] which do not provide an error bar in numerical results. We will include it in the revision.\n\n>Future work discussion.\n\nThe semi-supervised SFUDA can obtain a few labels for each category, and the labeled samples can assist DaC to generate more robust class-wise prototypes. How to make better use of annotation information under our proposed DaC framework is an interesting open question in this setting.\n\nThe open-set DA setting contains some out-of-distribution samples, which may lead to negative transfer that is detrimental to the performance. Under the framework of DaC, these OOD samples could be treated as target-specific samples and get better exploited in our contrastive learning pipeline to achieve more discriminative local features. This could be another interesting future avenue of extending our work.\n\n", " The authors proposed a method for source-free unsupervised domain adaptation (SFUDA) task. The key idea is to combine the advantages of global alignment [1] and feature consistency [2]. The authors divided the target data into source-like and target-specific samples and \ntreat them by different learning methods. The authors demonstrate extensive experiments on three datasets and verify the performance of the proposed method Strengths\n1.The paper is well-organized and easy to follow.\n2. The \"divide and contrast\" strategy is simple but effective; it can exploit both the global and local structures of target data.\n3. The proposed Exponential-MMD loss is novel, it also makes sense to align the source-like and target-specific samples to reduce distribution mismatch.\n4. The experiment and ablations are sufficient to support the conclusion.\n\nWeaknesses\n1. The section of preliminaries and analysis is a bit unclear to introduce the task.\n2. A more recent work that outperforms proposed approach ([a]) is not compared against.\n[a] Liang, Jian, et al. 
\"Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer.\" IEEE Transactions on Pattern Analysis and Machine Intelligence (2021).\n3. Eqn. 4 and Eqn. 5 share the same parameter \\tau but they are different from each other. The threshold \\tau in Eqn. 4 is set to 0.95 according to supplementary material following [47], however, I could not find the reference [47]. Besides, the choice of the threshold \\tau is not included in the ablation study, it would be necessary to see the ablation of choosing the threshold \\tau.\n4. As described in Discussion(L200-L207), unlike previous method [3], Eqn. 5 jointly achieves class-wise adaptation and instance-wise adaptation. It would be interesting if the authors could compare the proposed loss with separate class-wise adaptation and instance-wise adaptation losses.\n5. When do the method update the source-like set and class centroids, after one batch or one epoch. 1. When do the method update the source-like set and class centroids, after one batch or one epoch? Yes", " This paper presents a source-free domain adaptation method. Previous methods either use self-supervised pseudo labeling to conduct class-wise global alignment or leverage local structure to enforce feature consistency. This work combines the idea of both. The proposed method divides target samples into source-like and target-specific ones. Source-like samples are used for global class clustering and target-specific samples are used for learning local structures. The two are further aligned using maximum mean discrepancy loss. Strengths are as follows.\n\nThe idea is interesting. The target samples are divided based on confidence output of source classifier. Different groups are treated differently, either globally in class-level or locally in instance-level. Two different groups are aligned to encourage consistency that is also interesting. The presentation is generally good. Ablation study is conducted for each part.\n\nWeaknesses are as follows.\n\nThe major weakness is the performance compared to prior work. It shows that the results of the proposed method are just marginally good compared to [2] and [3] in 2 out of 3 datasets (table 1 and table 3). \n\nThe performance is good in table 2. Can the authors explain more about the baseline results? For example, how the official code was used and how the authors ensure the hyper parameters are reasonably well tuned? \n See weaknesses above. No", " This paper proposes a new source-free unsupervised domain adaptation method named DaC. The key idea is to leverage the advantages of existing “global” methods and “local” counterparts. Specifically, DaC uses the source model to split the target data into source-like and target-specific samples. After that, an adaptive contrastive learning strategy is used to achieve class-wise adaptation (global) and local consistency (local). Finally, MMD is used to minimize the distribution mismatch between source-like samples and target-specific samples. The proposed method achieves the best performance on widely used benchmarks.\n ---\n\nOriginality: the proposed method is a combination of well-known techniques, but achieves good performance. Integrating contrastive learning into source-free domain adaptation is novel and brings new insights into this community.\n\n---\n\nQuality: \n\nStrengths: this work is technically sound with theoretical proof and empirical evaluation. 
The effectiveness of the proposed regularizations are evaluated on various downstream tasks. The work is complete.\n\nWeakness: (1) the major concern is the performance. As shown in Table 1 and Table 3, compared to existing SOTA methods (NRC and CPGA), the performance improvement is quite limited. Although it outperforms the baseline SHOT by a large margin, it is still hard to convince readers. (2) The proposed method is not evaluated on Digit datasets (MNIST, SVHN, and USPS) and the Office31 dataset which are two important benchmarks. Any reason behind this?\n\n---\n\nClarity: this paper is well written and well organized. It is easy to follow. Detailed implementation details are also provided in the supplementary.\n\n---\n\nSignificance: compared to the baseline SHOT, the result is important. Specifically, the proposed method brings a larger improvement than SHOT. However, it is not clear to me whether other methods can borrow the same idea and get performance improvement.\n\n---\n 1. Line 262: change “trains” to “train”\n2. Any reason for missing the experiments on Digit datasets and the Office31 dataset?\n Not applicable.\n", " This paper is about unsupervised domain adaptation when the source data is unavailable at the time of adaptation. Authors term this problem as source-free unsupervised domain adaptation (SFUDA). Authors make the insightful observation that existing SFUDA techniques use all pseudo labels on target data including noisy labels or enforce local feature consistency at the expense of being source-biased. Motivated by these observations, authors propose a new SFUDA approach that they call Divide and Contrast (DaC) which divides the target data into two disjoint groups: source-like samples and target-specific samples. An adaptive learning framework is presented that treats each of these target sample groups differently during the training process. Authors present both theoretical results and numerical results that illustrate the superiority of DaC over state of the art methods for SFUDA. Strengths\nAuthors illustrate well (e.g., Fig. 1) the tradeoffs involved in the global and local approaches to SFUDA.\n\nThe proposed DaC approach for SFUDA is a novel approach to address the limitations of prior methods to SFUDA.\n\nAuthors present theoretical derivations to justify the DaC approach.\n\nNumerical results shown on multiple datasets are pretty convincing of the superiority of DaC approach.\n\nWeaknesses\n\"Memory bank\" is invoked early on without sufficient explanation. It is explained in Section 4.2.2, but may make the paper more readable if this introduction could be provide earlier in the manuscript.\n\nNo error bars are provided for the numerical results. 1. Line 51: Provide a brief explanation for the \"memory bank\".\n2. Line 260: What is SHOT? Cite a reference for that.\n Authors do not explicitly discuss the limitations of the proposed method, but mention semi-supervised SFUDA and source-free open-set DA as possible research extension topics. If space permits, it may be interesting to provide a few more sentences about each of these topics and how DaC might fare in those cases." ]
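The DaC record above (abstract, rebuttal, and reviews) describes the first step of the method as splitting the target samples into source-like and target-specific subsets according to the source model's prediction confidence, with a threshold of 0.95 and updates after every batch. The snippet below is a minimal, hypothetical sketch of that division step only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def divide_target_batch(logits, tau=0.95):
    """Split a target batch by source-classifier confidence.

    logits: (B, C) source-model outputs on target samples.
    tau:    confidence threshold (0.95 in the paper's supplementary).
    Returns (source_like_idx, target_specific_idx, pseudo_labels).
    """
    probs = F.softmax(logits, dim=-1)
    conf, pseudo_labels = probs.max(dim=-1)
    source_like_idx = torch.nonzero(conf >= tau, as_tuple=True)[0]
    target_specific_idx = torch.nonzero(conf < tau, as_tuple=True)[0]
    return source_like_idx, target_specific_idx, pseudo_labels

# Source-like samples would then feed the class-wise (global) objective,
# while target-specific samples feed the instance-wise (local) objective.
```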
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 3 ]
[ "EB8QgLya_H8", "aMi-dDPorsh", "b73UJPbAfGX", "b73UJPbAfGX", "lPffnudFu1g", "Cp36eEwy2HX", "Fq9YKma9Hgt", "nips_2022_NjImFaBEHl", "nips_2022_NjImFaBEHl", "nips_2022_NjImFaBEHl", "nips_2022_NjImFaBEHl" ]
nips_2022_S7Evzt9uit3
Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning
We present modality gap, an intriguing geometric phenomenon of the representation space of multi-modal models. Specifically, we show that different data modalities (e.g. images and text) are embedded at arm's length in their shared representation in multi-modal models such as CLIP. Our systematic analysis demonstrates that this gap is caused by a combination of model initialization and contrastive learning optimization. In model initialization, we show empirically and theoretically that the representation of a common deep neural network is restricted to a narrow cone. As a consequence, in a multi-modal model with two encoders, the representations of the two modalities are clearly apart when the model is initialized. During optimization, contrastive learning keeps the different modalities separate by a certain distance, which is influenced by the temperature parameter in the loss function. Our experiments further demonstrate that varying the modality gap distance has a significant impact in improving the model's downstream zero-shot classification performance and fairness.
Accept
This paper investigates the gap between representations when training with a contrastive objective, through the characterisation of the gap in various settings and the construction of a theoretical analysis of this gap. The reviewers mostly agree that the paper tackles an interesting problem through investigation and characterisation of the inductive biases provided by CLIP-style models, and the experiments appear to cover a good number of cases. The primary issues with the work, however, appear to be with some of the framing---it comes across as an investigation into something a bit more generic than the title suggests, and the claims to novelty, while reasonable, are also a bit too strong given the existence of the heterogeneity gap. The authors argue that finding that multi-modal data project to separate subspaces is somewhat reasonable, but I still don't think that supports as strong a claim as given. On balance, though, the paper appears to have more merits than issues, and most of the issues raised could be addressed with a bit of work. I would strongly urge the authors to make the edits for framing and clarity, and to incorporate the additional experiments from the rebuttal into the manuscript, as requested by the reviewers.
train
[ "aSO1uIdCtwN", "n3pxQC95RQL", "7TeXh8t-4yN", "rrrdrDfWDtT", "EQSHdKkACf9", "NmW0pPUeEUj9", "y4rxL0il36G", "0wJY-Sg6her", "xeNHouujcM9", "ZvNm1vLb5S0", "vkZfDby3uwi", "yKhOe6RPl8C", "nrMw71JSD1", "33eEUPW5oob", "1pDmowld46R", "OtLjxYKggP2", "KhmazHX8O-N", "8uVCttIGer", "i-Ozc_cWG0R", "5NQDLW6Q9r3" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer QxpT,\n\nWe would like to follow up to see if our response addresses your concerns or if you have any further questions. We would really appreciate the opportunity to discuss this further if our response has not already addressed your concerns. Thank you very much!", " Dear reviewer nReN,\n\nWe would like to follow up to see if our response addresses your concerns or if you have any further questions. We would really appreciate the opportunity to discuss this further if our response has not already addressed your concerns. Thank you very much!\n", " Dear reviewer 1atQ,\n\nWe would like to follow up to see if our response addresses your concerns or if you have any further questions. We would really appreciate the opportunity to discuss this further if our response has not already addressed your concerns. Thank you very much!", " **References**\n\n[10] C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning (still) requires rethinking generalization. Commun. ACM, 64(3):107–115, 2021.\n\n[11] N. S. Keskar, D. Mudigere, J. Nocedal, M. Smelyanskiy, and P. T. P. Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In ICLR, 2017.\n\n[12] B. Neyshabur, Z. Li, S. Bhojanapalli, Y. LeCun, and N. Srebro. The role of over-parameterization in generalization of neural networks. In ICLR, 2019.\n\n[13] R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, and W. Brendel. Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In ICLR, 2019.\n\n[14] B. Kim, E. Reif, M. Wattenberg, S. Bengio, and M. C. Mozer. Neural networks trained on natural scenes exhibit gestalt closure. Computational Brain & Behavior, 4(3):251–263, 2021.\n\n[15] R. Geirhos, J. Jacobsen, C. Michaelis, R. S. Zemel, W. Brendel, M. Bethge, and F. A. Wichmann. Shortcut learning in deep neural networks. Nat. Mach. Intell., 2(11):665–673, 2020\n\n[16] D. Arpit, S. Jastrzebski, N. Ballas, D. Krueger, E. Bengio, M. S. Kanwal, T. Maharaj, A. Fischer, A. C. Courville, Y. Bengio, and S. Lacoste-Julien. A closer look at memorization in deep networks. In ICML, 2017.\n\n[17] J. Frankle and M. Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In ICLR, 2019.\n\n[18] Z. Allen-Zhu, Y. Li, and Z. Song. A convergence theory for deep learning via over-parameterization. In ICML, 2019. \n\n[19] K. Ethayarajh. How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In EMNLP, 2019.\n\n[20] J. Gao, D. He, X. Tan, T. Qin, L. Wang, and T. Liu. Representation degeneration problem in training natural language generation models. In ICLR, 2019.\n\n[21] B. Li, H. Zhou, J. He, M. Wang, Y. Yang, and L. Li. On the sentence embeddings from pre-trained language models. In EMNLP, 2020.\n\n\n", " Thank you very much for your comments! We really appreciate your time. \n\n**About the contrastive loss and its temperature**\n> If your claim \"Contrastive learning preserves modality gap\" does not hold for higher temperatures, as shown in Figure 8, it sounds overclaimed. Specifically, Figure 8 shows that fine-tuning CLIP with the contrastive loss under highly changed temperatures (t=1/10 and 1) does not preserve the gaps anymore.\n\nThank you for this question. We clarify that there is still a gap with the higher temperature during fine-tuning in Figure 8 right part. 
Fine-tuning with temperature=1 still leads to a *significant gap with a distance of 0.24*. Moreover, in Figure 8 left part, we clarify that while there seems to be no gap in the PCA view for temperature=1, there is a gap in the *high dimensional space*. We can think of a simple example where two spheres with a radius of 100 are in z=0.1 and z=-0.1. If we use PCA to reduce to 2D, there will be no gap from the 2D PCA view but the actual gap=0.2 in 3D. Therefore, our finding that \"contrastive learning preserves modality gap\" *does* hold for higher temperatures. Simply changing temperatures cannot eliminate the gap.\n\n\n**The bigger picture: Why studying the gap is important**\n> I hope the authors can add more detailed discussions for the above.\n\nThank you for the suggestion. There has been tremendous recent interest and excitement in studying the **inductive bias of neural networks** mathematically and empirically [10-21]. For example, an influential line of research shows that neural networks can easily fit random labels [10], and SGD provides an inductive bias of “implicit regularization” by favoring minima that are flatter [11] and closer to the initialization [12]. Another impactful line of research shows that neural networks trained on natural scenes are biased towards texture [13], and exhibit gestalt closure similar to human perception, which is an inductive bias long-studied in the psychology literature [14]. Researchers have also shown that neural networks favor “shortcut learning”, which may be a common characteristic of learning systems, biological and artificial alike, as known in Comparative Psychology, Education and Linguistics [15,16]. \nOur paper contributes to this broad and exciting trend of studying the inductive bias of neural networks by analyzing the modality gap phenomenon which occurs consistently in multi-modal contrastive representation learning. \n\nBy studying the modality gap, our analyses also provide new insights into the cone effect, which we show is a general inductive bias of deep neural networks. In the recent literature, the cone effect has been observed in the language representations from language models such as BERT and GPT-2 [19,20,21]. A common explanation is that the *unbalanced* distribution of word frequencies biased the optimization [20,21]. However, we found that the cone effect still exists in models with random weights (Figure 2(c)). In fact, the average cosine similarity there is even higher than in trained models. For example, any two embeddings from a randomly initialized ResNet have on average an almost perfect (0.99) cosine similarity. Interestingly, the cone effect still holds when the input data is random noise, indicating that the unbalanced data distribution suggested in previous works is not necessary for the cone effect. Together these experiments suggest that the cone effect reflects a more general inductive bias of deep networks than might be previously appreciated. We rigorously analyzed why it happens in Theorem 1 and further examined how the cone effect leads to the modality gap when there are multi-modality data (Theorem 2 and Figures 2 and 3). \n\nTo sum up, there has been a long-established line of influential research in studying the **inductive bias of neural networks** mathematically and empirically including the cone effect, and our research makes novel contributions to and pushes the boundaries of knowledge significantly forward in this impactful research topic for multimodal models. 
\n\n**Please let us know if you have further questions and we are happy to further respond!** If our responses (this and the previous one) have addressed some of your questions, we would very much appreciate it if you would consider increasing your score. \n", " Thanks for the authors' comments. However, there are still parts I am not convinced of.\n- About the contrastive loss and its temperature: If your claim \"Contrastive learning preserves modality gap\" does not hold for higher temperatures, as shown in Figure 8, it sounds overclaimed. Specifically, Figure 8 shows that fine-tuning CLIP with the contrastive loss under highly changed temperatures (t=1/10 and 1) does not preserve the gaps anymore. Overall, I still think the gaps in pre-trained models are not from the contrastive loss, just surmountable issues from their training recipes.\n- Why studying the gap is important: As the authors pointed out in the discussion section (``...why studying the gap is important, i.e., it can affect the downstream task performance and fairness...``), it would be one of the potential readers' concerns to understand the importance of the gap. Although the authors remarked that ``...The goal of our paper is not to propose a method to close the gap or to improve downstream performance...`` in the general response, I expect to hear another explanation if proposing a method to close the gap or improving downstream performance was not one of the main contributions.\n\nI hope the authors can add more detailed discussions for the above.", " Dear reviewer 2Ge4, \n\nWe would like to follow up to see if our response addresses your concerns or if you have any further questions. We would really appreciate the opportunity to discuss this further if our response has not already addressed your concerns. Thank you very much!", " **References**\n\n[1] So, Junhyuk, Chang-Seok Oh, Minchul Shin and Kyungwoo Song. Multi-Modal Mixup for Robust Fine-tuning. arXiv:2203.03897 [cs.CV], Mar 2022\n\n[2] Cohen, Niv, Rinon Gal, Eli A. Meirom, Gal Chechik and Yuval Atzmon. This is my unicorn, Fluffy: Personalizing frozen vision-language representations. arXiv:2204.01694 [cs.CV], April 2022.\n\n[3] Ramesh, Aditya, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. \"Hierarchical Text-Conditional Image Generation with CLIP Latents.\" arXiv:2204.06125 [cs.CV], April 2022.\n\n[4] W. Guo, J. Wang and S. Wang, \"Deep Multimodal Representation Learning: A Survey,\" in IEEE Access, vol. 7, pp. 63373-63394, 2019, doi: 10.1109/ACCESS.2019.2916887.\n\n[5] Girdhar, Rohit, et al. \"Omnivore: A single model for many visual modalities.\" CVPR (2022).\n\n[6] Tarvainen, Antti, and Harri Valpola. \"Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results.\" NIPS (2017).\n\n[7] Wainwright, Martin J. High-dimensional statistics: A non-asymptotic viewpoint. Vol. 48. Cambridge University Press, 2019.\n\n[8] Poklukar, Petra, et al. \"Delaunay component analysis for evaluation of data representations.\" ICLR (2022). \n\n[9] Kynkäänniemi, Tuomas, et al. \"Improved precision and recall metric for assessing generative models.\" NeurIPS (2019).\n\n\n ", " We thank the reviewers for their thoughtful and constructive review of our manuscript. 
We were encouraged to hear that the reviewers found the modality gap phenomenon we present to be interesting (2Ge4, nReN, QxpT), original (QxpT), and insightful (nReN), and that they view our analysis as extensive (nReN, 1atQ), solid and well-supported (QxpT), and that all reviewers found our paper well-written and clearly organized (2Ge4,1atQ, nReN, QxpT). We have carefully updated the paper (*PDF uploaded as Supplementary Material*) based on the reviewers’ suggestions. In response to feedback, we provide a general response here to points raised by multiple reviewers, and individual responses below to address each reviewer’s concerns.\n \n\nIn response to the general comments about **the main objective and the contributions of our paper**, we reiterate that the main objective of our paper is to understand the modality gap phenomenon, a general inductive bias that holds across various data modalities and NN architectures. The goal of our paper is not to propose a method to close the gap or to improve downstream performance, which is an important direction of follow-up work. In summary, our paper makes the following contributions:\n1. **Demonstrating a general modality gap phenomenon**: To the best of our knowledge, we demonstrate a general modality gap phenomenon for the first time. We show that this phenomenon holds across a large class of networks and multi-modal problems and hence is likely to be broadly applicable to the entire field of multi-model learning. In the revision, we added more experiments supporting our findings using the ImageNet dataset. \n2. **Explaining why the gap occurs**: To explain the modality gap, we provide a three-part explanation supported by extensive theoretical and empirical analyses.\n - **Cone Effect**: Our analyses also provide new insights on the cone effect, which we show is a general phenomenon for deep neural networks. Our findings and analyses on the cone effect contradict previously held notions and advance scientific understanding.\n3. **Theoretical analyses**: We mathematically characterize the contraction mapping induced by linear layers with ReLU non-linearities to explain the cone effect. Our theory matches well with experiments and provides insights for understanding the general inductive biases of deep neural networks.\n \nRegarding **the improvements shown in Table 1**: We thank the reviewers for asking about this. In this revision, we have further investigated the significance of the improvements in Table 1, showing that they are *statistically significant*. Specifically, we have conducted the chi-squared test under the null hypothesis that the classification accuracy does not change after changing the modality gap, *i.e.*, $H_0 : p_{\\mathrm{before}} = p_{\\mathrm{after}}$. Our results show that the p-values are less than $0.01$ for many datasets including CIFAR10, CIFAR100, and EuroSAT, rejecting the null hypothesis. We use the whole dataset instead of only the validation set to make our results more robust because our embedding shifting experiments involve no fine-tuning. We have added the statistical testing to page 19 of the revised paper. 
\n \n\n| Dataset | Original Acc | Modified Acc | Direction | p-value |\n|----------|:--------:|:--------:|:---------:|:---------:|\n| CIFAR10 | 0.9026 | 0.9104 | ↑ | 3.476e-06 |\n| CIFAR100 | 0.6705 | 0.6776 | ↓ | 8.701e-03 |\n| EuroSAT | 0.5494 | 0.5686 | ↓ | 7.020e-06 |\n \n\n \nWe would again like to thank all reviewers for their time and feedback, and we hope that our changes adequately address all concerns.\n \n \n \n", " **Modality Gap in unimodal contrastive learning**\n> Is the proposed modality gap specific to multi-modal learning? Does the gap exist for unimodal contrastive learning?\n\nThank you for the question. Unimodal contrastive learning typically only has one encoder as there is only one data modality, while Multimodal contrastive representation learning involves two separate encoders, which creates two different cones in the representation space. Our analyses assumed two separate encoders, and thus do not directly generalize to the one encoder case. \n\n**Simulating mismatched data**\n> Section 4.3 (simulating mismatched data) seems an interesting investigation. What’s the multi-modal data being used here? Also, I wonder if the authors have some insight on how the modality gap correlates with multimodal data misalignment (e.g., an image is paired with a wrong caption due to data collection error).\n\nThank you for the comments. The data used in Section 4.3 are synthetic 3-dim features for visualization purposes. This indicates that the presence of mismatched data might be an important forming factor of the modality gap under low temperatures. Regarding how multimodal data misalignment might contribute to the modality gap, this is a great question that we are interested in as well. We have added a sentence “Investigating how and to what extent the multimodal data misalignment could affect the contrastive loss landscape and thereby the modality gap is an interesting direction for future research.” in our revised version, and we think that answering this question precisely will require a separate publication with additional experimental and theoretical results. \n\n\n**Improvements shown in Table 1**\n> The improvement in Table 1 seems quite small. Is the reported number given by exhaustively iterating over all possible shifting? Are the results averaged over multiple runs / random seeds?\n\nThank you for the question. We have shown that the improvements are statistically significant with p-value<0.01, and we refer the reviewer to the general response above. We have added the statistical testing to page 19 of the revised paper (See Supplementary Material).\n\n\nWe also clarify that we shifted embeddings along the line that passes through the two modality means. In other words, we did not exhaustively search for all possible directions. Because this embedding shifting procedure is deterministic (i.e., no training is involved), our results are evaluated based on a single run. \n\n\nWe again thank Reviewer nReN for their review of our manuscript, and we hope that the above responses adequately address all concerns.\n\n", " We thank Reviewer nReN for their positive comments and for providing thoughtful feedback on our work. We address many of Reviewer nReN’s comments in our general response above, and we provide additional details on specific comments below. \n\n**Why Euclidean distance to quantify the gap**\n> Figure 3, the objective of contrastive learning is cosine similarity while Figure 3 plots Euclidean distance. 
Could the authors provide … comment on reasons for using Euclidean distance? \n> Is it possible that the embedding shift is related to difference between cosine distance and Euclidean distance?\n\nThank you for the question. In CLIP, the image embeddings and text embeddings are L2-normalized (See Supplementary Figure 12: CLIP’s contrastive loss in Numpy-like pseudo-code). In other words, the image and text embeddings of CLIP are always on the $n$-dimensional unit sphere (n=512). Specifically, for any $n$-dimensional vectors $x$ and $y$, the cosine similarity is given as $\\cos(x,y)=x^T y$, and the Euclidean distance is given as $(x-y)^T (x-y) = 2(1-x^T y)$. Therefore, they have a functional relationship as $\\mathrm{Euclidean Distance}(x,y)=2(1-\\cos(x,y))$. When the angle between $x$ and $y$ is less than $\\pi/2$, which is the case as embeddings are in a narrow cone, the small Euclidean distance directly means a high cosine similarity. \n \n\nWe have added these clarifications in the caption of Figure 3 and Section 4.2. \n\n\n**Why is the gap termed as ‘modality gap’**\n> The gap exists even if the two encoders operate on the exact same data (i.e., no multi modality). If so, why is the gap termed as ‘modality gap’? What differentiates the gap in a multi-modal learning setting from a uni-modal setting?\n\nThank you for the comment. We term the gap as “modality gap” because the main focus of our paper is in multi-model contrastive representation learning, which is an important research area that has garnered tremendous interest and excitement. We observe two modality clusters in various multi-modality learning settings in Figure 1 that seemingly capture differences in modalities. That is the main reason why we call this gap the modality gap, but our in-depth analyses show that there are many factors leading to such gaps. This counterintuitive conclusion is exactly why we believe our finding is surprising and of great interest to the multimodal learning community.\n\n\n\n**Modality gap with only one shared encoder**\n> Conversely, if we use a single model for multiple modalities (e.g., Omnivore [2]), where different modalities are tokenized and fed into the same transformer. Will the modality gap still exist here?\n\nThank you for the question. Multimodal contrastive representation learning involves two separate encoders, which create two different cones in the representation space. In contrast, there is only one encoder in Omnivore [5]. Our analyses assumed two separate encoders, and thus do not directly generalize to the one encoder case. \n\nBecause Omnivore [5] has only one shared encoder, different modalities would be mapped into the same cone of the shared encoder. Exploring the geometry within that cone, especially whether different modalities are located in completely separate regions within that cone, is an interesting future work that we will like to follow up. \n\nWe have added a citation to Girdhar et al. (2022) [5], and the discussion of this promising direction for future work in our revised version. \n\n**Modality gap in uni-modal setting**\n> If we consider a uni-modal setting, take Mean Teacher [1] in semi-supervised learning as an example: given an image, the consistency loss aims to minimize the predictions of a teacher and student model. The teacher model is the exponential moving average of the student model weights (i.e., the two models have different weights and operate on same data modality). 
Does the gap still exist here if we use a contrastive learning objective?\n\nThank you for the question. Although there are two encoders in Mean Teacher [6], the weights of the two encoders are correlated in a very special way: namely, the teacher model is the exponential moving average of the student model weights [6]. This means that during training, the two cones produced by the two encoders are correlated in a special way beyond the contrastive learning objective. Our analyses assume that the two encoders are only connected via contrastive learning during optimization, and hence our analyses do not directly generalize to the Mean Teacher case [6]. \n\nWe have added a citation to Tarvainen et al. (2017) [6] and a discussion of this promising direction for future work in our revised version. \n\n\n\n", " **Adding Modality Gap Experiments in “non-Gaussian” setups**\n> The experiments suggest that the gap is present either when network parameters follow a Gaussian distribution (related to initialization) or the data (when using random noise). I assume the input data is also normalized before given to the network, and that the network uses batch or layer normalizations. \n> Have you perhaps investigated whether the gap arises in more “non-Gaussian” setups? \n\nMost of our results are in “non-Gaussian” setups, where i) the network parameters are pre-trained, and ii) the data are real images or texts. To make our settings even more “non-Gaussian”, we have added an experiment where we have iii) disabled both input data normalization (e.g., by ImageNet mean and std) and iv) all normalization layers. \n\n\nNew results are shown on page 22 of the revised paper (See Supplementary Material). The modality gap still clearly exists under such a “non-Gaussian” setup. \n\n\n\n\n**Modality Gap is not specific to CLIP**\n> The experiments in the paper are specific to the CLIP model, which limits the scope of the work and makes it more difficult to judge the significance of the contribution. If no other models are added into the evaluation, I believe the introduction and abstract should reflect that, i.e., I do not think it is fair to claim that the work investigates the gap for general multi modal contrastive representation learning models if this is in fact not the case.\n\nThank you for the comments. The modality gap phenomenon we found is not limited to CLIP. As shown in Figure 1, we showed the modality gap phenomenon in not only CLIP, but also in various multi-modal contrastive representation learning models including VideoCLIP (videos + texts), ConVIRT (medical images + texts), and CLASP (amino-acid sequences + texts). \n\nWe have also shown that our analyses are generalizable: \n1. We have shown that the cone effect is a general inductive bias of deep neural networks that hold on ResNet, vision transformer, and text transformer. We mathematically characterize the contraction mapping induced by linear layers with ReLU non-linearities to explain the cone effect, thereby confirming that the cone effect is a very general phenomenon. \n2. The contrastive learning objective we analyzed is also not limited to CLIP. This contrastive learning objective is one of the most widely adopted learning objectives in multi-modal contrastive representation learning models, which is used by VideoCLIP (videos + texts), ConVIRT (medical images + texts), CLASP (amino-acid sequences + texts), and many others. 
\n\n**Heterogeneity gap**\n> the observation about the geometric misalignment of multi-modal data representations is not novel but is typically referred to as heterogeneity gap\n\nThank you for the comments. We believe that the modality gap is a novel finding and is fundamentally different from the heterogeneity gap [4]. The heterogeneity gap states that the inherent differences in data modalities make exactly aligning the data representations from different data modalities (e.g., image, text) conceptually challenging for multimodal learning in general. However, this vague statement does not necessarily mean that the data representations from different modalities would be located in two **completely separate** regions of the embedding space, which is a much stronger statement. Therefore, the modality gap phenomenon is a novel finding, and also constitutes a much stronger statement than the heterogeneity gap. \n\nWe have added a citation to Guo et al. (2019) [4] and a discussion to our revised version. \n\n\n**Improvements shown in Table 1**\n> I am doubtful about the significance of the results… the results are not very significant (especially for Table 1).\n\nThank you for the comment. We have shown that the improvements are statistically significant with p-value<0.01, and we refer the reviewer to the general response above. We have added the statistical testing to page 19 of the revised paper (See Supplementary Material). We also reiterate that the main objective of our paper is to understand the modality gap phenomenon, a general inductive bias that holds across various data modalities and NN architectures. The goal of our paper is not to propose a method to close the gap or to improve downstream performance, which is an important direction of follow-up work. \n\n\n\n\n**Thank you again for your feedback, which was very helpful in improving the paper.** We hope you would consider increasing your score in light of our detailed response. Please let us know if you have any more questions and we are happy to follow up!\n\n", " We thank Reviewer 1atQ for their positive comments and for providing thoughtful feedback on our work. We address many of Reviewer 1atQ’s comments in our general response above, and we provide additional details on specific comments below. \n\n\n\n**Adding Cone Effect Experiments on ImageNet**\n> the cone effect … could be specific to the MSCOCO dataset and/or the selection of the 5000 embeddings?\n\nWe have added experiments on ImageNet to show that the cone effect is *not* specific to the MSCOCO dataset. We use the whole validation set of ImageNet, which contains 50,000 images, thereby scaling up the number of embeddings tested by an order of magnitude. We have added the ImageNet experiment results to page 21 of the revised paper (See Supplementary Material).\n\n\n| Dataset | ImageNet | COCO | ImageNet | COCO |\n|---------------------------|------------|------------|-----------------------|-----------------------|\n| Average cos similarity | ResNet | ResNet | Vision Transformer | Vision Transformer |\n| mean | **0.5160** | **0.5556** | **0.4835** | **0.4679** |\n| std | 0.0618 | 0.0695 | 0.0883 | 0.0900 |\n| 25% | 0.4752 | 0.5081 | 0.4243 | 0.4095 |\n| 50% | 0.5142 | 0.5523 | 0.4778 | 0.4660 |\n| 75% | 0.5547 | 0.5993 | 0.5364 | 0.5222 |\n\nAs shown in the table above, the average cos similarity and other statistics are similar on ImageNet and COCO, with only minor variations. 
In particular, the average cosine similarity on ImageNet is also substantially larger than 0, indicating that the embedding space is a narrow cone. This shows that the cone effect *not* specific to MSCOCO dataset and/or the selection of the 5000 embeddings. \n\n\n\n\n**Dimension of the representations**\n> You never specify the dimension of the representations. How does this affect the gap? It is a known fact that higher dimensions lead to the diminishing effect of the distances.\n\nThe dimensions of the representations that we tested are: CLIP 512-dim, VideoCLIP 768-dim, ConVIRT 512-dim, and CLASP 768-dim. We have added this information in the revision (Page 16).\n\nWe added an experiment to investigate how changing the embedding dimension of CLIP would affect the gap. We train 4 different multi-modal models from scratch using CLIP’s objective, with an embedding dimension of 64, 128, 256, and 512 respectively. We trained the models on Conceptual Captions 3M with 15 epochs. Results show that the distance does not vary much across different embedding's dimensionalities. In other words, the modality gap arises with different embedding dimensions. We have added the experiment results to page 22 of the revised paper (See Supplementary Material).\n\n \n| Dim | Gap L2 Distance |\n|-----|-----------------|\n| 64 | 0.3545 |\n| 128 | 0.3440 |\n| 256 | 0.3377 |\n| 512 | 0.3512 |\n\n\n**The extent of misalignment (i.e.,modality gap)**\n> In general, I think it would be helpful to quantify the extent of misalignment. This could be done with geometric methods for evaluation of representations such as Delaunay Component Analysis (Poklukar et al, ICLR 2022), Improved Precision and Recall (Kynkäänniemi et al, NeurIPS 2019) or similar.\n\nThank you for this suggestion. The extent of misalignment between CLIP's image embeddings and text embeddings (i.e., modality gap) is so large that both the precision and recall are zero for Kynkäänniemi et al [8]. More specifically, because CLIP's image embeddings and text embeddings are located in two *completely separate* regions of the embedding space, and they are perfectly linearly separable with a large margin (Supp. Figure 4), there would be *zero* shared support between the image embeddings and text embeddings, and thus both the precision and the recall are *zero*. \n\nWe have added citations to Kynkäänniemi et. al. (2019) [8] and Poklukar et. al. (2022) [9], and a sentence in the discussion section “Development of geometric methods for evaluation of representations (Kynkäänniemi et. al. 2019, Poklukar et. al. 2022) to further capture the geometric landscape of the modality gap is also an interesting direction of future work.”\n\n\n\n", " **Improvements shown in Table 1 and 2**\n> improvements in Tables 1 and 2 seem to be marginal \n\nThank you for the comment. We have shown that the improvements are statistically significant with P-value<0.01, and we refer the reviewer to the general response above. We have added the statistical testing to page 19 of the revised paper (See Supplementary Material). We also reiterate that the main objective of our paper is to understand the modality gap phenomenon, a general inductive bias that holds across various data modalities and NN architectures. The goal of our paper is not to propose a method to close the gap or to improve downstream performance, which is an important direction of follow-up work. 
\n\n\n**How should we modify the modality gap**\n> The authors do not provide a proper guide for the modality gap; which one brings the benefits for multi-modal learning - closing or increasing the gap?\n\nThank you for the comment. We agree this is an important question, but the goal of our paper is mainly to analyze how the modality gap is formed. This is much more complex than simple intuition, where we find that it is related to many fundamental questions in machine learning, such as initialization and optimization. Systematic analysis of the impact of the gap on applications is an important direction for future work. \n\nIn terms of the future impact of this line of research, it has been shown that intervening the modality gap can improve the performance of both image-to-text retrieval on MS COCO and Flickr30k [1], and personalized image retrieval [2]. Closing the modality gap can also simplify DALLE-2 by removing the prior network which converts text embedding to image embedding [3]. We believe this would be a promising direction for future work. \n\n\n**We again thank Reviewer 2Ge4 for their review of our manuscript. Your questions have improved the paper.** We hope you would consider improving your score in light of our detailed response. Please let us know if you have any more questions and we are very happy to follow up.\n", " We thank Reviewer 2Ge4 for reviewing our paper and providing helpful feedback on our work. We address many of Reviewer 2Ge4’s concerns in the general response above, and we provide additional details on specific comments below. \n\n\n**Cone effect: How narrow is the cone**\n> the authors claim the avg 0.56 cosine similarity score (in Figure (a)) is high enough to show the embedding space is a narrow cone. … Is there any other baseline (such as standard contrastive learning on ImageNet) to show the values are meaningful?\n\nThank you for the comment. We clarify that the average 0.56 cosine similarity score already indicates that the embedding space is actually an *extremely narrow* cone in the 512-dimensional feature space. Cosine similarity ranges in [-1,1]. The following mathematical evidence can intuitively explain how narrow it is. We have added these discussions to page 3 of the revised paper (See supplementary material PDF): \n- **Fraction of surface area in a unit hypersphere:** \n - In 2D, arccos(0.56)=55.94°, indicating that a cosine similarity of 0.56 can “occupy” 55.94°/360°=15.53% of the 2D unit circle. \n - In 3D, cosine similarity of 0.56 can “occupy” $\\frac{2 \\pi r^2 (1- \\cos \\frac{55.94 \\degree}{2})}{4 \\pi r^2}$=3.34% of the 3D unit sphere. \n - In 512D, cosine similarity of 0.56 can “occupy” less than $\\frac{1}{2^{512}}$ fraction of the surface area in a unit 512D hypersphere.\n\n- **Gaussian baseline:** For random 512-dimensional vectors drawn from the standard normal distribution, the cosine similarity scores are zero-centered, with a standard deviation of 0.046. And note the at 0.56 cosine similarity score is much larger than 0±0.046. \n\n - In fact, it is well known that any two high-dimensional vectors are likely to be orthogonal [7]. \n - To be more specific, for any fixed $y$ on a $n$-dimensional unit sphere and $\\epsilon >0$, the following holds (Example 3.10 of Wainwright (2019)): $P( y^T Z > \\epsilon/2 ) < \\exp ^{-n \\epsilon ^2 /8}$.\n - This inequality implies that the cosine similarity between two random vectors on a $n$-dimensional unit sphere goes to zero with a high probability. 
With this result, the theoretical baseline can be set to zero, but our results in Figure 2-(a) show unintuitive results—even the smallest one, which is 0.56, is clearly greater than zero. \n \nPlease let us know if you have any further questions regarding the narrowness of the cone effect and we are happy to follow up. \n\n\n **Contrastive loss landscape and temperature** \n> Inconsistent claims; the authors claim closing the gap increases the contrastive loss. But…. In my understanding… the temperature parameter (i.e., sharpening parameter) in contrastive loss can control the magnitude of the loss. \n\nThere is no inconsistency in our claims. In Line 187-195, we make two points about the contrastive loss landscape and the temperature: \n1. **Under CLIP’s default temperature:** The default gap distance of 0.82 actually achieves the global minimum, (Figure 3(a), Line 187-190). Under CLIP’s default temperature, shifting toward closing the gap increases the contrastive loss. \n2. **Higher than default temperature:** However, when the temperature increases, closing the gap becomes more optimal (Figure 3(c,d), Line 193-194). Note that CLIP’s default temperature is which is 1/100, and Figure 3(c,d) uses much higher temperatures (e.g., 1/50, 1). \n\nThere is no inconsistency between the two claims: claim (i) discussed the loss landscape under the fixed default temperature $\\tau=1/100$, while claim (ii) discussed a temperature parameter that is higher-than-the default ones (e.g., 1/50, 1). We have clarified in the abstract that “During optimization, contrastive learning keeps the different modalities separated by a certain distance, which is influenced by the temperature parameter in the loss function.” \n", " We thank Reviewer QxpT for their positive comments and for providing thoughtful feedback on our work. We address many of Reviewer QxpT’s comments in our general response above, and we provide additional details on specific comments below. \n\n**The pre-training data of CLIP**\n> For example, the pretraining is performed on MSCOCO\n> If the CLIP model is pre-trained on large-scale datasets, does the modality gap still have an impact on the downstream tasks?\n\nThanks for the question. We clarified that the CLIP model we study is pre-trained on the original OpenAI image-caption dataset, not the MSCOCO dataset. The OpenAI image-caption dataset contains 400 million image-caption pairs, which is a very large-scale dataset. \n\n\n**Improvements shown in Table 1**\n> as shown in Table 1, the impact of modality gap on downstream tasks is marginal.\n\nThank you for the comment. We have shown that the improvements are statistically significant with p-value<0.01, and we refer the reviewer to the general response above. We have added the statistical testing to page 19 of the revised paper (See Supplementary Material). We also reiterate that the main objective of our paper is to understand the modality gap phenomenon, a general inductive bias that holds across various data modalities and NN architectures. The goal of our paper is not to propose a method to close the gap or to improve downstream performance, which is an important direction of follow-up work. \n\n\n\n**Zero-shot classification experiment on ImageNet**\n> What’s the zero-shot performance on ImageNet?\n\nThank you for your comment. We have added new zero-shot experiments on ImageNet. New results are shown on page 19 of the revised paper (See Supplementary Material). 
\n\nWe found that making the gap smaller or larger by feature shifting decreases the model performance on ImageNet. Although changing the gap does not improve the model performance, and it is not clear why the model performs best with the default gap, we can still clearly see the effect of the modality gap on zero-shot classification.\n\nMoreover, we reiterate that the main objective of our paper is to i) empirically demonstrate the modality gap phenomenon across different data modalities and NN architectures; ii) explain how the gap arises, and iii) show that the size of the gap can affect downstream applications. It is not our goal to propose a method to close the gap, since it’s not clear that it’s desirable to have no modality gap.\n\n", " This manuscript demonstrates a modality gap phenomenon for multi-modal contrastive models like CLIP. Specifically, the authors analyze why the gap exists and its importance; (a) the modality gap is born at random initialization and the contrastive learning objective encourages the gap, and (b) changing the modality gap can affect zero-shot and fairness performances on downstream tasks. Furthermore, the authors provide a theoretical analysis of the modality gap phenomenon. Strengths\n- The writing is clear and easy to understand\n- This manuscripts study interesting intrinsic phenomenon of contrastive-based multi-modal models (e.g., CLIP)\n- Theoretical analysis supports the existence of the modality gap at randomly initialized weights\n\nWeakness\nOverall, the backups are not enough to support the claims and the improvements in the experiments are marginal\n- The cone effect phenomenon in pretrained models; for example, the authors claim the avg 0.56 cosine similarity score (in Figure (a)) is high enough to show the embedding space is a narrow cone. But, compared to the random initialization (in Figure (b)), the score is already significantly reduced from 0.99 to 0.56. Is there any other baseline (such as standard contrastive learning on ImageNet) to show the values are meaningful?\n- Inconsistent claims; the authors claim closing the gap increases the contrastive loss. But, Figure (d) contradicts the claim - closing the gap reduces the contrastive loss. In my understanding, the contrastive loss forces to close the gap between given pair of data (e.g., image and text pair), where the temperature parameter (i.e., sharpening parameter) in contrastive loss can control the magnitude of the loss. For example, Figure 8 in Supp. shows that high temperatures can remove the gap. So, after training, the gap could be negligible in cases.\n- Experimental results are not supportive; improvements in Tables 1 and 2 seem to be marginal and simply obtained from the best results from the search space of hyperparameter $\\lambda$ in Sec 4.2.\n- The authors do not provide a proper guide for the modality gap; which one brings the benefits for multi-modal learning - closing or increasing the gap? I hope the authors could resolve my concerns in the weakness part above. The authors do not provide any limitation or potential negative social impact of their work", " The paper investigates the geometric misalignment of multi-modal data representations in CLIP representation space. The authors investigate the existence of the gap under different random initialization, real or random input data and during the optimization of the model with contrastive learning objectives. 
They also provide theoretical analysis of the misalignment.\n Strengths:\nThe paper is well written and provides extensive analysis (both theoretical and empirical) of the geometric misalignment of representations under different modelling assumptions. Proofs and code are provided. The influence of the geometric misalignment is also investigated when using representations for real-world downstream tasks.\n\nWeaknesses:\nAs far as I am aware, the observation about the geometric misalignment of multi-modal data representations is not novel but is typically referred to as heterogeneity gap (discussed for example, in https://ieeexplore.ieee.org/document/8715409). \nWhile the experiments are interesting and insightful, I am doubtful about the significance of the results. In particular, I am not convinced by the results on the downstream tasks which are in my view the important ones. The authors investigate only one value of the modified gap and the results are not very significant (especially for Table 1). \n I kindly ask the authors to elaborate on the following questions:\n- In Sec 2.1, you investigate the cone effect on 3 pretrained models. I was wondering if this could be specific to the MSCOCO dataset and/or the selection of the 5000 embeddings? \n- You never specify the dimension of the representations. How does this affect the gap? It is a known fact that higher dimensions lead to the diminishing effect of the distances.\n- In general, I think it would be helpful to quantify the extent of misalignment. This could be done with geometric methods for evaluation of representations such as Delaunay Component Analysis (Poklukar et al, ICLR 20211),Improved Precision and Recall (Kynkäänniemi et al, NeurIPS 2019) or similar. \n- The experiments suggest that the gap is present either when network parameters follow a Gaussian distribution (related to initialization) or the data (when using random noise). I assume the input data is also normalized before given to the network, and that the network uses batch or layer normalizations. Have you perhaps investigated whether the gap arises in more “non-Gaussian” setups?\nIs the gap specific for the cosine similarity?\n The experiments in the paper are specific to the CLIP model, which limits the scope of the work and makes it more difficult to judge the significance of the contribution. If no other models are added into the evaluation, I believe the introduction and abstract should reflect that, i.e., I do not think it is fair to claim that the work investigates the gap for general multi modal contrastive representation learning models if this is in fact not the case. ", " The paper proposes ‘modality gap’ in multimodal representation learning. The authors observe that embeddings of different modalities under a CLIP-style contrastive training objective will fall in different regions of the embedding space. They further provide explanations for the modality gap and a theoretical analysis.\n \\+ The paper presents an interesting phenomenon and unique insight in multimodal representation learning\n\n\\+ The conclusion is evaluated with many experiments and backed with theoretical analysis.\n\n\\+ The paper is well written and clearly organized. \n\n&nbsp;\n\n\n\\- What’s unclear to me is that: modality gap seems to stem from the inductive bias of deep neural networks. The key is that using two encoders (with different initialization) to process input data will produce a gap in the representation space, and a contrastive learning objective preserves the gap. 
I don’t quite see the influence of data modality in the problem statement. As stated in L108-110, the gap exists even if the two encoders operate on the exact same data (i.e., no multi modality). If so, why is the gap termed as ‘modality gap’? What differentiates the gap in a multi-modal learning setting from a uni-modal setting? \n\nIf we consider a uni-modal setting, take Mean Teacher [1] in semi-supervised learning as an example: given an image, the consistency loss aims to minimize the predictions of a teacher and student model. The teacher model is the exponential moving average of the student model weights (i.e., the two models have different weights and operate on same data modality). Does the gap still exist here if we use a contrastive learning objective? \n\nConversely, if we use a single model for multiple modalities (e.g., Omnivore [2]), where different modalities are tokenized and fed into the same transformer. Will the modality gap still exist here?\n\n[1] Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results.\n[2] Omnivore: A Single Model for Many Visual Modalities + Figure 3, the objective of contrastive learning is cosine similarity while Figure 3 plots Euclidean distance. Could the authors provide plots for the cosine similarity or comment on reasons for using Euclidean distance? Is is possible that the embedding shift is related to difference between cosine distance and Euclidean distance?\n\n+ Is the proposed modality gap specific to multi-modal learning? Does the gap exist for unimodal contrastive learning?\n\n+ Section 4.3 (simulating mismatched data) seems an interesting investigation. What’s the multi-modal data being used here? Also, I wonder if the authors have some insight on how the modality gap correlates with multimodal data misalignment (e.g., an image is paired with a wrong caption due to data collection error). \n\n+ The improvement in Table 1 seems quite small. Is the reported number given by exhaustively iterating over all possible shifting? Are the results averaged over multiple runs / random seeds?\n\n+ Line309: shwon -> shown \nAs the authors mention in the paper, an important future direction is to investigate how the modality gap influence downstream task performance. ", " This paper investigates the modality gap issue in existing multi-modal models. Specifically, this work empirically shows the gap phenomenon across different neural networks and different modalities, and makes a conclusion that the gap issue is a combination of model initialization and contrastive learning optimization. This paper further theoretically proves how inductive bias in deep neural networks creates narrow representation cones in the embedding space. Another contribution of this work is to empirically investigate the impact of the modality gap distance in downstream tasks.\n ---\n\nOriginality: to the best of my knowledge, this is the first work that investigates the modality gap of multi-modal models. It is interesting to see that neural networks with random initialization create narrow cones in the embedding space. This paper also theoretically proves how it happens.\n\n---\n\nQuality: \n\nStrengths: the empirical study and theoretical proof of the cone effect in deep neural networks are solid and well supported. 
Specifically, this work first empirically demonstrates that the cone effect widely exists in deep neural networks, then theoretically prove that each network layer narrows the representation cone.\n\nWeakness: (1) as shown in Table 1, the impact of modality gap on downstream tasks is marginal. (2) All experiments are performed on small-scale datasets. For example, the pretraining is performed on MSCOCO, and downstream tasks are performed on CIFAR10, CIFAR100, and SVHN. What’s the zero-shot performance on ImageNet? If the CLIP model pretrained on large-scale datasets, does the modality gap still have an impact on the downstream tasks? \n\n---\n\nClarity: this paper is well written and well organized. It is easy to follow. Since this is an empirical work and built upon existing models, there is no reproduction issue.\n\n---\n\nSignificance: the investigation of the modality gap in multi-modal models is interesting and would benefit the community.\n\n---\n 1. What’s the zero-shot performance on ImageNet?\n2. If the CLIP model is pre-trained on large-scale datasets, does the modality gap still have an impact on the downstream tasks? \n Not applicable" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 5 ]
[ "5NQDLW6Q9r3", "i-Ozc_cWG0R", "8uVCttIGer", "EQSHdKkACf9", "NmW0pPUeEUj9", "y4rxL0il36G", "KhmazHX8O-N", "xeNHouujcM9", "nips_2022_S7Evzt9uit3", "vkZfDby3uwi", "i-Ozc_cWG0R", "nrMw71JSD1", "8uVCttIGer", "1pDmowld46R", "KhmazHX8O-N", "5NQDLW6Q9r3", "nips_2022_S7Evzt9uit3", "nips_2022_S7Evzt9uit3", "nips_2022_S7Evzt9uit3", "nips_2022_S7Evzt9uit3" ]
nips_2022_OjS3nkNATOw
Adapting Self-Supervised Vision Transformers by Probing Attention-Conditioned Masking Consistency
Visual domain adaptation (DA) seeks to transfer trained models to unseen, unlabeled domains across distribution shift, but approaches typically focus on adapting convolutional neural network architectures initialized with supervised ImageNet representations. In this work, we shift focus to adapting modern architectures for object recognition -- the increasingly popular Vision Transformer (ViT) -- initialized with modern pretraining based on self-supervised learning (SSL). Inspired by the design of recent SSL approaches based on learning from partial image inputs generated via masking or cropping -- either by learning to predict the missing pixels, or learning representational invariances to such augmentations -- we propose PACMAC, a two-stage adaptation algorithm for self-supervised ViTs. PACMAC first performs in-domain SSL on pooled source and target data to learn task-discriminative features, and then probes the model's predictive consistency across a set of partial target inputs generated via a novel attention-conditioned masking strategy, to identify reliable candidates for self-training. Our simple approach leads to consistent performance gains over competing methods that use ViTs and self-supervised initializations on standard object recognition benchmarks. Our code is available at https://github.com/virajprabhu/PACMAC.
Accept
This work looks at adapting ViT-like models for unsupervised domain adaptation by cleverly finding pseudo-labels with 'attention-guided masking'. There's a weak consensus among the reviewers that this work has good empirical results, but somewhat limited novelty. I think the rebuttal discussion has helped improve this work quite a bit, and given the good results, ablations, and the importance of unsupervised domain adaptation, I am recommending acceptance.
train
[ "pulj6WK56mN", "HoKoEfVLneA", "vT1x6wAnXC", "0oId3uOtpCJ", "3jfCTIibVfs", "xGaks6sDs7i", "k5byWRtIXBS", "sA178xhXoe8", "3zCqOr_fwhu", "N6x-1ZE7NfS", "IKyp9sc4zcL", "cPNDJioZWKx", "sCClK7Rz4CN" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would be happy to address any additional questions. Otherwise, we would appreciate if the reviewer would consider updating their score in light of the clarifications and new experiments.\n\n", " We would be happy to address any additional questions. Otherwise, we'd appreciate if the reviewer would consider updating their score in light of the clarifications and new experiments.", " We have carefully revised the draft to include rebuttal clarifications and experiments, and will be happy to incorporate other suggestions to further improve clarity!", " As recommended, we have added a brief description of this experiment in L272-277 of the main paper, and a detailed description in Sec 4 of the supplementary. ", " Thank you for the rebuttal. I updated my review and kept the same score, \"weak accept\". \nIndependent of accept/reject, I would recommend the authors to update the manuscript with the rebuttal clarifications/new results to make the paper easier to follow (and the contributions more clear to understand).", " Thank you for the detailed response and additional experiments. I am glad to see improved performance for DINO with attention-seeded local-global cropping. Consider adding a discussion (2-3 lines) about this experiment in the main paper. It would make the paper more interesting.", " > Most of the components of the proposed method have already been explored in previous literature (SSL for UDA, self-training for UDA). The main contribution is mostly adapting those approaches to a new architecture (ViT).\n\nPlease see our general response above that clarifies our main novel contributions over prior work. We also re-emphasize that our focus is not just adapting a new architecture (ViTs) but _also models initialized with self-supervised learning (SSL)_, which has received scant attention in prior work, despite such SSL rapidly becoming the de-facto pretraining strategy due to improved scalability and generality.\n \nAdapting SSL initializations becomes especially important as we consider more diverse domain adaptation settings. As a motivating example, we experiment with the iWildCam dataset from the newly proposed WILDS v2 benchmark [A], which measures adaptation across camera trap deployments. We find applying PACMAC starting from an SSL DINO initialization to strongly outperform starting from a supervised ImageNet initialization (**80.7**% v/s **77.2**%), even without additional in-domain SSL pretraining on the source and target domains. Clearly, adapting SSL initializations is an understudied problem of practical importance.\n\n[A] Sagawa, Shiori, et al. \"Extending the WILDS Benchmark for Unsupervised Adaptation.\" International Conference on Learning Representations. 2022.\n\n> The main contribution then becomes the application of ViT-specific tricks on either the SSL or self-training components (eg, the attention-conditioning mask). This trick, however, could as well be used in any SSL/self-training method that relies on ViT, not only on UDA problems.\n\nWe argue that the generality of our novel attention-conditioned masking strategy is a feature, not a bug! To verify this, we modify our selection strategy to match DINO’s multi-crop augmentation strategy, and instead measure predictive consistency across random local-global image crops. 
We then further improve this strategy via attention-conditioning, by constraining each crop to center on the most highly attended image patch, and observe a performance improvement (**72.9**% v/s **74.3**%, **+1.4**%) on OfficeHome Cl->Pr. Clearly, our attention conditioning trick is indeed generally beneficial. We will be happy to include additional experiments to demonstrate this generality.\n\n> How does capacity of the model and inference time change between PACMAC and other SOTA methods (eg, SENTRY, Shen et al.)?\n\nWe use the same ViT-Base architecture across all methods and so model capacity is identical. Similarly, inference time is also identical as all methods perform a single forward pass at test data.\n\nFor completeness, we also compare training time across methods and include results on the Product to Real shift from OfficeHome with a DINO initialization. Note that both datasets contain a large number of high resolution images which consequently makes adaptation particularly slow. We benchmark performance using a single NVIDIA A40 GPU across experiments.\n\n| | time (hours) |\n| ----------- | ----------- |\n| Shen et al. | 18h 39m | \n| SENTRY | 28h 15m | \n| PACMAC | 20h 23m | \n\nAs seen, Shen et. al and PACMAC take a similar amount of training time on this shift whereas SENTRY converges the slowest. We have included this comparison in Sec 2.9 of the supplementary.\n\n> I found Table 6 very difficult to understand/parse.\n\nSorry about the unclear description! We describe the results in Table 6 in L292-302: the goal is to compare _representations_ learned by different pretraining strategies (self-supervised learning with MAE and DINO, and supervised ImageNet pretraining), by measuring the error of a linear classifier trained to distinguish different sets of features, eg. source v/s target features: in this case, we observe higher error for supervised representations compared to self-supervised initializations, indicating that after supervised pretraining on ImageNet, source and target features tend to be hard to distinguish and are therefore better aligned. We have revised the description in the text, and hope it is now clear.\n", " > the only difference between the SENTRY and PACMAC is the data-augmentation scheme and the loss function. Further, Sec 1 of supplementary material establishes the superiority of the loss function used in SENTRY over the one presented in the paper.\n\nPlease see the general response above for a detailed conceptual and empirical comparison with SENTRY. \n\nTo further clarify, the crucial distinction between SENTRY and PACMAC is _not_ data augmentation but rather a novel selection strategy used for self-training based on predictive consistency across partial images generated via an attention-conditioned masking strategy. Our general response above empirically establishes its superiority to SENTRY’s selection strategy. \n\nWe agree however that our attention-conditioned masking may indeed be considered a form of data augmentation. However, its main contribution to performance is **via better selection** rather than via improved regularization. To verify this, we run PACMAC by using masking for augmentation alone, and observe moderate performance (**64.3**% on OfficeHome Cl$\\to$Pr), whereas using it for selection alone provides a more significant boost (**70.8**%). 
Using it for both does best (**74**%).\n\nFinally, we disagree with the reviewer’s takeaway from Sec 1 of the supplementary: what it shows in fact is that by matching SENTRY’s loss objective by adding entropy maximization and diversity regularizers (incidentally, both originally proposed in prior work preceding SENTRY [A, B]), our gains over SENTRY increase. This further establishes the superiority of our main contribution (selection strategy) over SENTRY’s, while controlling for other confounding factors.\n\n[A] Pereyra, Gabriel, et al. \"Regularizing Neural Networks by Penalizing Confident Output Distributions.\" (2017), ICLR Workshops, 2017.\n\n[B] Li, Bo, et al. \"Rethinking distributional matching based domain adaptation.\" arXiv preprint arXiv:2006.13352, 2020.\n\n> Please compare the attention-guided masking augmentation and other types of augmentations such as attention-guided cropping (e.g. one that matches the multi-crop augmentation of DINO).\n\nThanks for the great suggestion! We match DINO’s local-global multi-crop augmentation strategy and measure predictive consistency across a random local image crop (of size 112x112) and global image crop (of size 192x192). We also implement a version with attention-conditioning that centers the global crop on the most highly attended image patch, and the local crop over the most highly attended image patch that is at least 48 pixels away from the centre of the global crop. We visualize this strategy in Fig 9 of the revised supplementary.\n\nShown below are results on OfficeHome Cl->Pr, with a DINO initialization.\n\n| augmentation | acc. |\n| ----------- | ----------- |\n| RandAugment (SENTRY) | 71.1 |\n| random masking (MAE) | 72.6 |\n| attention-seeded masking (ours) | **74.0** |\n| random local-global cropping (DINO) | 72.9 |\n| attention-seeded local-global cropping (ours) | **74.3** |\n\nAs seen, both random local-global cropping and attention-seeded local-global cropping outperform their masking counterparts as well as the next best baseline (SENTRY [9]), verifying that i) matching the augmentation scheme to the pretraining scheme is beneficial, and ii) attention-conditioning is helpful. We have included these experiments in Sec 4 of the supplementary and will also experiment with additional augmentation schemes, thanks!\n\n> It is mentioned in L240 that PACMAC intends to design a data augmentation scheme that matches the design of the SSL pretraining. However, the presented method only matches the design of MAE, not DINO.\n\nWe match the SSL pretraining's general design of pulling together representations extracted from partial images, and do not imply that we exactly match the specifics (we will clarify). However as shown by the previous experiment, we find that exactly matching the pretraining's proxy task results indeed leads to better performance.", " > Clarify the novel contributions and how the method compares to a combination of [28]+[9], which appears conceptually very similar.\n\nPlease see the general response above for a detailed conceptual and empirical comparison.\n\n> Clarify whether the benefit of the masking is more due to example selection or additional data augmentation.\n\nGreat point! Both are contributing factors, but the benefit of masking is more due to better selection than additional data augmentation. 
In the table below, we ablate PACMAC by varying whether masking is used for augmentation, target selection, or both, and show adaptation results on OfficeHome Cl$\\to$Pr:\n\n| augmentation | selection | acc (%) |\n| ----------- | ----------- |----------- |\n| N | N | 59.8 |\n| Y | N | 64.3 |\n| N | Y | 70.8 |\n| Y | Y | **74.0** |\n\nAs seen, using masking for augmentation alone provides a moderate boost (**+4.5**%), whereas using it for selection alone provides a significant one (**+11.0**%). Using it for both performs best (**+14.2**%). We have included this new ablation in L271.\n\n> Unclear how self-training (which typically leverages class predictions) relates to a self-supervised model (the focus of this paper).\n\nSorry about the unclear description! The reviewer is correct: after SSL pretraining on source+target data, we first learn a classifier on only labeled source data (L193), and then initialize our proposed masking-consistency based selective self-training strategy. We have revised Sec 3.3 to reflect this (see L149), and hope the description is now clear. We also note that source model training before self-training is common practice in domain adaptation [5,9].\n", " We thank reviewers for their effort and thoughtful feedback, and are delighted they found our problem setting important and practical (Reviewers LLh8, MM52), our proposed approach novel (Reviewer LLh8) and effective (Reviewers LLh8, MM52), our experiments extensive (Reviewers LLh8, pAyV, MM52), and our writing clear (Reviewers pAyV, MM52). \n\nThe primary concern shared by reviewers appears to be contributions over prior work, specifically SENTRY [9] and Shen et al. [28]. We agree (and clearly acknowledge – L240-247) that PACMAC and SENTRY [9] both use selective self-training on reliable instances identified via predictive consistency, and that PACMAC makes use of in-domain self-supervised pretraining proposed in Shen et al. [28]. \n\nHowever, PACMAC differs from a combination of [28]+[9] in 2 important ways, which leads to improved performance:\n\ni) **PACMAC proposes a novel proxy task for identifying reliable target instances**: predictive consistency across partial image inputs generated via masking. By doing so, PACMAC approximately matches the design of its selection strategy to its SSL pretraining (MAE [26] and DINO [25], which learn to reconstruct / learn invariance to partial inputs respectively), in contrast to SENTRY, which measures consistency across random image augmentations.\n\nii) **PACMAC incorporates model knowledge in its selection strategy** by using attention-conditioning to focus on salient image regions, rather than random augmentations sampled from a manually pre-defined set.\n\nUnlike a naive combination of [28]+[9], PACMAC thus explicitly couples its SSL pretraining with its selection strategy, and further improves this selection by leveraging the Vision Transformer (ViT) attention mechanism. \n\n**We now demonstrate that such coupling improves performance**. First, we ablate PACMAC by replacing its selection strategy with SENTRY’s: we exactly match hyperparameters, and select target instances based on predictive consistency across 3 image augmentations, generated via RandAugment [47] with N=3 and M=2.0, and use majority voting. 
Shown below are target accuracies averaged over all 12 shifts in OfficeHome:\n\n| | MAE | DINO |\n| ----------- | ----------- |----------- |\n| SENTRY selection | 66.1 | 67.4 |\n| PACMAC selection | **66.8** | **69.6** |\n\nAs seen, PACMAC selection outperforms SENTRY selection in both cases: +0.7 (MAE init.) and +2.2 (DINO init.). We have included this new experiment in L269-275 of the main paper.\n\nNext, we compare directly against a combination of Shen et al.[28]+ and SENTRY [9]: We note that the full SENTRY method uses additional diversity regularizers and entropy maximization losses. For a fair comparison, we add these losses to our method and call it PACMAC*. Shown below are target accuracies comparing [28]+[9] with PACMAC*, averaged across 12 OfficeHome shifts with a DINO initialization:\n\n| | acc.(%) |\n| ----------- | ----------- |\n| Shen et al. [28] + SENTRY [9] | 69.6 |\n| PACMAC* | **70.6** |\n\nIn this case as well, PACMAC* outperforms [28]+[9]. We have included this comparison in Sec 1 of the revised supplementary material.\n\nFinally, we compare the effectiveness of SENTRY’s selection strategy against ours on the Cl->Pr shift from OfficeHome. To do so, we measure reliability precision (how often is a target instance marked as reliable, actually correctly classified?), and reliability recall (what fraction of correctly classified target instances are selected via each method?), and compute the F1 score. Averaged across epochs, we observe the following (full plot in Sec 2.4 of supplementary):\n\n| | avg. F1 score |\n| ----------- | ----------- |\n| SENTRY selection | 84.0 |\n| PACMAC selection | **85.0** |\n\nIn response to reviewer comments, we have also made the following revisions to the draft (in red for convenience):\n- Approach (Sec 3.2): Added description of source training stage that precedes self-training\n- Results (Sec 4.4): Expanded conceptual comparison between PACMAC and SENTRY\n- Ablating PACMAC (Sec 4.5): Included experiment to disentangle the contribution of masking towards selection and regularization. Also included detailed empirical comparison between PACMAC and SENTRY’s selection strategies.\n- Analyzing PACMAC (Sec 4.6): Simplified description of Table 6.\n- Improving PACMAC performance (Supp. Sec 1): Expanded empirical comparison to SENTRY + Shen et al.\n- Reliability checker (Supp. Sec 2.4) and Fig 4: Added analysis of SENTRY selection v/s PACMAC\n- Comparison of training time (Supp. Sec 2.9): Included training time details for PACMAC and competing methods\n- Selection strategy matching DINO augmentation (Supp. Sec 4): Included experiment of PACMAC with selection strategy matching DINO", " The paper proposes PACMAC, a method for unsupervised domain adaptation tailored to vision transformer architectures (ViT) pre-trained using self-supervised approaches (SSL). \nThe method works by 1) continued SSL training on the union of the labeled source domain and unlabelled target domain, and 2) joint fine-tuning on the source domain and self-training on the target domain.\nFor the self-training on the target domain, the paper proposes a mechanism based on the consistency of the model's prediction on differently masked inputs to select examples for self-training. 
\nThese input masks are generated based on the ViT self-attention scores and a greedy selection mechanism.\nExperiments demonstrate improvements over prior work in various benchmarks and include ablation experiments to demonstrate the importance of continued SSL pre-training and the example selection strategy for self-training. Strengths:\n- Domain adaptation is of practical importance, and investigating approaches for modern architectures and pre-training strategies (ViT + SSL) is useful\n- Using consistency of predictions between differently masked inputs is an interesting novel idea\n- Extensive experimental evaluation\n- Good performance in various domain adaptation benchmarks\n- A good set of ablation experiments is provided\n\n\nWeaknesses:\n- I found the presentation quite confusing. The main reason for this is that it is unclear how self-training (which typically leverages class predictions) relates to a self-supervised model (the focus of this paper). Only at L193 is it mentioned that a classifier is learned on the source data after SSL pre-training, and I assume that those predictions are then used for the self-training (+mask consistency, etc.). This makes the paper up to that point rather confusing. I also assume this fine-tuning is performed after the continued SSL on source+target data, but it is unclear based on the paper. \n- It appears that the novelty of the method is limited mainly to the mask-consistency strategy to select examples for self-training. For example, the SSL pre-training on pooled source+target data was proposed in [28]. Furthermore, [9] offers a selection mechanism based on consistency across augmentations, which is conceptually quite similar to the mask consistency proposed here. Therefore, it would be important to see how a combination of [28] and [9] would do, especially since [9] by itself appears to perform well already. \n- It is unclear whether the selection mechanism through mask consistency benefits the method or the additional augmentations through the use of masking during self-training. Indeed, only using the selection appears to lead to subpar performance (see L268). It might be good to investigate this observation further and disentangle these two factors. \n\nPOST REBUTTAL UPDATE:\nThe authors addressed some of my concerns in terms of presentation and the benefits of the selection mechanism (compared to additional augmentations). I still think the novelty over [28]+[9] is somewhat limited, but improvements appear to be solid. I'm thus raising my rating. I would appreciate it if the authors could address the weakness listed above, i.e., \n- How they aim to improve the presentation and where to introduce the classifier training.\n- Clarify the novel contributions and how the method compares to a combination of [28]+[9], which appears conceptually very similar.\n- Clarify whether the benefit of the masking is more due to example selection or additional data augmentation. Limitations were addressed well in the paper. ", " The paper focuses on adapting self-supervised ViTs (MAE[26] and DINO[25]) for unsupervised domain adaptation (UDA). The goal is the find reliable pseudo labels for unlabeled target domain images and then train the model using pooled source and target data. 
The paper uses a setup prevalent in the existing literature to find the reliable image-pseudo-label pairs – filter the target domain images with consistent predictions across multiple augmented versions and consider the predicted labels for such images as reliable pseudo labels. The main contribution of this paper lies in designing ViT-specific data augmentation, namely attention-guided masking. The presented method is dubbed ‘PACMAC’, short for Probing Attention-conditioned Masking Consistency for UDA. Strengths:\n\n* The paper attempts to find ViT-specific data augmentation for finding reliable image-pseudo-label pairs using FixMatch[R1]-style setup.\n* The paper is well-presented with a good set of experiments.\n\nWeakness:\n\n* The learning setup is very similar to the previous method called SENTRY [45]. As described in L239-244, the only difference between the two methods is the data-augmentation scheme and the loss function. Further, Sec 1 of supplementary material establishes the superiority of the loss function used in SENTRY over the one presented in the paper.\n* Augmentation being the main contribution, the paper lacks a comparison between attention-guided masking augmentation and other types of augmentations (such as random or attention-guided cropping).\n* It is mentioned in L240 that PACMAC intends to design a data augmentation scheme that matches the design of the SSL pretraining. However, the presented method only matches the design of MAE, not DINO. One can observe that the improvement offered by PACMAC over the previous state-of-the-art is lower for DINO than MAE.\n\nReferences:\n\n[R1] Sohn, Kihyuk, et al. \"Fixmatch: Simplifying semi-supervised learning with consistency and confidence.\" NeurIPS, 2020.\n * Looking at the experiment presented in Sec 1 of supplementary material, the improvements offered by PACMAC over SENTRY can be attributed to data augmentation and not the loss function. Please explain which other features of PACMAC are superior to SENTRY.\n* Please compare the attention-guided masking augmentation and other types of augmentations such as attention-guided cropping (e.g. one that matches the multi-crop augmentation of DINO).\n Limitations discussed. There are no ethical concerns.", " In this paper, the authors propose a new method for adapting Visual Transformers (ViT) with modern self-supervised learning (SSL) pretraining adapted to unsupervised domain adaptation (UDA). The proposed approach, named PACMAC, starts with a weights pretrained on Imagenet and follows of two steps: (i) in-domain SSL on both source and target datasets and (ii) self-training model on target based on a \"reliance measure\" (based on predictive consistency across many masked inputs). The work shows good results on multiple standard UDA evaluation datasets.\n\n\n=======================\n### Post-rebuttal updates\nI thank the authors for their feedback. After looking at the rebuttal and other reviewers' comment, I decided to keep my “Weak Accept” (6) rating. \nI continue thinking that the paper achieves good empirical results (and the authors provide a good range of experiments), but the novelty is fairly limited—as agreed by all the reviewers. 
### Pros\n* The paper is clear, easy to follow in most parts and provides a good literature review.\n* The authors show good results compared to current models and provide good ablation studies.\n* The idea of using \"attention-conditioned\" masking makes sense and \"consistency\" on self-training are interesting (however, they are not related to UDA and would be better studied in a general context rather than in the particular case of unsupervised domain adaptation).\n* The problem of domain shift, OOD and domain adaptation in neural networks is important to the community.\n\n### Cons\n* The main weakness of this paper IMO is the lack of novelty. As pointed out in the manuscript, most of the components of the proposed method have already been explored in previous literature (SSL for UDA, self-training for UDA). The main contribution is mostly adapting those approaches to a new architecture (ViT).\n* The main contribution then becomes the application of ViT-specific tricks on either the SSL or self-training components (eg, the attention-conditioning mask). This trick, however, could as well be used in any SSL/self-training method that relies on ViT, not only on UDA problems. * I found Table 6 very difficult to understand/parse. Could the authors elaborate a bit more what is it about? And more importantly, I would recommend the authors to update the manuscript and make it more clear as well.\n* How does capacity of the model and inference time change between PACMAC and other SOTA methods (eg, SENTRY, Shen et al.)?\n The authors were clear with the limitation of the method (slower and not same results as supervised initialization)." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "3zCqOr_fwhu", "0oId3uOtpCJ", "3jfCTIibVfs", "xGaks6sDs7i", "k5byWRtIXBS", "sA178xhXoe8", "sCClK7Rz4CN", "cPNDJioZWKx", "IKyp9sc4zcL", "nips_2022_OjS3nkNATOw", "nips_2022_OjS3nkNATOw", "nips_2022_OjS3nkNATOw", "nips_2022_OjS3nkNATOw" ]
nips_2022_jowVZoitZYu
On Trace of PGD-Like Adversarial Attacks
Adversarial attacks pose safety and security concerns for deep learning applications. Yet, while largely imperceptible, a strong PGD-like attack may leave a strong trace in the adversarial example. Since such an attack triggers the local linearity of a network, we speculate that the network behaves with different extents of linearity for benign examples and adversarial examples. Thus, we construct Adversarial Response Characteristics (ARC) features to reflect the model's gradient consistency around the input, indicating the extent of linearity. Under certain conditions, ARC shows a gradually varying pattern from benign example to adversarial example, as the latter leads to the Sequel Attack Effect (SAE). The ARC feature can be used for informed attack detection (perturbation magnitude is known) with a binary classifier, or uninformed attack detection (perturbation magnitude is unknown) with ordinal regression. Due to the uniqueness of SAE to PGD-like attacks, ARC is also capable of inferring other attack details such as the loss function, or the ground-truth label as a post-processing defense. Qualitative and quantitative evaluations manifest the effectiveness of the ARC feature on CIFAR-10 w/ ResNet-18 and ImageNet w/ ResNet-152 and SwinT-B-IN1K, with considerable generalization among PGD-like attacks despite domain shift. Our method is intuitive, light-weighted, non-intrusive, and data-undemanding.
Reject
This paper observes that "PGD-like" attack algorithms have characteristics that allow one to detect that an input has been attacked. While I agree that it is interesting that PGD-like attacks have detectable properties, I agree with the reviewers that designing attack-specific defenses has limited utility. The authors already show that FGSM and similar attacks are not detected with this approach, and this makes me worry that adaptive attacks will also not be easy to detect. And so while making an observation about how PGD works is interesting, it is not yet sufficient and will likely not form the basis of a strong defense.
train
[ "vdM6PKmF7a3", "0JXUU_EXXWRM", "j-z8-gwy5df", "bzdAzYjHiXg", "Zpo4AlOtcuF", "cAY1s_jUHgM", "vdL0qcqgABX", "hq4-fNXeXfZ", "4nM9clN-Cw4", "KO81F-sNIpD", "J5pSDsIDhTC", "3IMQZhseHd7", "dkCYvOsFR8J", "4OKs4YihynZ", "ocrOggOlRQX", "ks3TllR9f2U", "zLbl2g-jOPx", "N6w3yvLDbBY", "J_Khri3h609" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for the detailed response and comprehensive discussions. The proposed method is relatively new and interesting, which could potentially arouse future research along this line. Meanwhile, the application of the current method is somewhat limited as pointed out by the other reviewers. Therefore, I decided to keep my original score.", " Thanks for the additional comments.\n\nAs discussed with reviewer vEc7, the core part of the manuscript, as reflected by the title, is Section 2 which characterizes PGD-like attacks in non-adversarial setting. To quantitatively evaluate such characterization method, we use its direct application (attack detection) and presented Section 3.\n\nWe will clearly justify the limitations of using such characterization for real detection as discussed with reviewers. Meanwhile we will make minor edits to make sure the emphasis put on attack detection does not outweigh the emphasis put on the non-adversarial characterization method itself.", " As discussed in the previous post (\"2. Reducing the number of steps for PGD-like attacks.\"), we conduct experiments with different numbers of steps of BIM attack on CIFAR-10/ResNet-18, and report the corresponding results as follows:\n\n| Steps | DR | FPR | Acc | Acc\\* |\n|-------|------|-----|-----|-------|\n| 100 | 79.2 | 1.1 | 0.0 | 62.4 |\n| 50 | 75.0 | 1.1 | 0.0 | 58.1 |\n| 25 | 64.1 | 1.1 | 0.0 | 47.3 |\n| 15 | 49.3 | 1.1 | 0.0 | 33.5 |\n| 10 | 33.1 | 1.1 | 0.2 | 20.1 |\n| 08 | 22.4 | 1.1 | 0.7 | 12.2 |\n| 05 | 7.1 | 1.1 | 3.7 | 5.5 |\n\nWhich is consistent with our expectation. We will add this to the appendix and justify its effect in Section 4.2.", " Thanks for answering my questions. \n\nI think the paper is very interesting and makes good contributions in general, especially for analyzing the PGD-like attacks. \n\nI agree that the method can infer more information and are data undemanding. However, failing to detect FGSM (the simplest white-box adversarial attack) and black-box attacks (the realistic setting for attackers) prevents it to deploy as a valid defense. \n\nIn summary, I am willing to improve the score to borderline accept, since the authors clearly addressed all limitations in the paper. In addition, tracing the adversarial attacks is a promising research direction, and this paper can inspire more follow-up works.", " PGD-like attacks are rather common in the literature on attack/defense. Meanwhile, PGD is also a part of standard emprirical evaluation tool for adversarial robustness. Sharing the characterization method with the community may inspire future works involving:\n\n**(1)** similar attack detection problems. For example, how to detect black-box attacks (e.g., score-based like NES/SPSA, or transfer-based like TI-FGSM/DI-FGSM) under the same extremely challenging problem setting? Characterizing black-box attacks is more practical. But since these attacks involve more uncertain factors (these attacks behave much more randomly than PGD-like attacks), characterizing these attacks could be even more challenging under our problem setting, compared to the well-conditioned PGD-like attacks.\n\n**(2)** stronger attacks. The characterization of the PGD-like attacks show their difference compared to other non-PGD-like attacks such as C&W. Future works may be inspired based on further analysis on the differences in characteristics.\n\n**(3)** stronger adversarial training. PGD-like attacks is commonly used for robustness evaluation. 
Our PGD characterization and the new interpretation on defense (Section 5 “Combination with Adversarial Training”) may inspire future defense works.", " We fully agree with the reviewer's summary. Specifically, the two mentioned perspectives (i.e., “non-defensive characterization” and “defense”) are exactly Section 2: “Adversarial Response Characteristics & Sequel Attack Effect” and Section 3: “Attack Detection and Inferring Attack Details”. They are organized logically, and we also realize that some minor modifications are required to better balance the emphasis on the two Sections based on the discussions with reviewers.\n\nAs reflected by the title “On Trace of PGD-Like Adversarial Attacks”, the core part of this manuscript is Section 2, namely the non-defensive characterization of PGD-like attacks. The qualitative demonstrations in Figure 2 is intriguing and demonstrate the effectiveness of the proposed characterization method. But there is not yet way to quantitatively support it. Based on this, a direct application of such characterization (such as attack detection) can serve as a quantitative support for the proposed characterization. And hence attack detection is organized into an isolated section.\n\nIn order to better present the proposed method, we will make the following minor revisions to better highlight the characterization and make the logic of the manuscript less confusing:\n\n**(1)** From Abstract to Section 1: we will stick to the title and clarify the non-defensive characterization is the core of the manuscript. And we will explicitly justify that attack detection is a direct application of the proposed characterization method, which serves as a quantitative evaluation of the characterization method.\n\n**(2)** Section 3: we will explicitly move the “Limitations” paragraph from Section 5 to the end of Section 3, and meanwhile include the limitations discussed with reviewers in the revised paragraph. In this way there will be a smaller risk of confusing future readers – the proposed method is a non-defensive characterization method, but will face the discussed limitations when really used for defense.\n\n**(3)** We will reduce the claims on defense throughout the manuscript, and will explicitly clarify the motivation of providing Section 3 (quantitative support for the characterization method) at the beginning of Section 3.\n\nIn the end, thanks again for the very constructive comments.", " Thank you for your response. It is good to see that\n1. The adaptive attack’s cost on CIFAR10 is still high\n2. The defense has reasonable robustness to previous adaptive attacks\n3. Discussion of similar defenses\n\nGiven this, I am raising my score from 5 to 6.\n\n---\n\nI could not give 7 due my only remaining concern, that **the applicable scenario of this method is somewhat limited.** This concern arose as I went through other reviewers' comments on the strong threat model.\n\nBelow is my best disentanglement of this issue.\n\n**1. This work as a non-defensive method to characterize PGD-like attacks.**\n\nAs indicated in my original review, I like the proposed method of characterizing adversarial examples created by PGD-like attacks. I believe this method should inspire some work in this direction, i.e., characterizing adversarial examples and inferring the underlying attack settings.\n\n**Pros:** From this perspective, it is OK to have strong assumptions (i.e., the SAE conditions), as they benefit inferring the attack details. 
That means, being sensitive to these conditions is a strength.\n\n**Cons:** The problem is, I am not sure if there are many practical scenarios where someone wants to characterize white-box attacks (i.e., the PGD-like attacks considered in this paper). It might be good to provide some justifications about why this method is needed in practice. (sorry for brining up this new information not indicated in the original review)\n\n**2. This work as a defense (with robustness claims) to detect PGD-like attacks.**\n\n**Pros:** The defense is robust to a reasonable set of adaptive attacks, assuming that the strong threat model is reasonable.\n\n**Cons:** From this perspective, however, I am afraid it is NOT OK to have strong assumptions, as attacks can easily break some of them to evade the defense. That means, being sensitive to these conditions is a weakness.\n\n**3. Summary**\n\nI believe the above disentanglement illustrates my root concern for this paper — it is good as a non-defensive characterization but is bad as a defense.\n\nFrom responses to other reviewers, I can see that the authors have been trying to justify the validity of their threat model from the first perspective. However, they should notice that the claims as a defense belong to the second perspective, where the threat model is surely too strong. I would suggest the authors carefully think about what is the best way to present this method, as the current claims are slightly convolved.", " We appreciate further comments from the reviewer.\n\nWe acknowledge that there are many choices of adaptive attacks against the proposed method, and it suffers from slow computation largely due the the computation of Jacobian matrices. Meanwhile, the proposed method is specific to PGD-like attacks (as reflected by title). However, it shows a consistent pattern that the unique trace (SAE as reflected by ARC features) becomes stronger when the attack is stronger across different architectures, although the concrete performance differs across architecture.\n\nWe would like to further emphasize two differences between the proposed method and related works:\n\n**(1)** Our problem setting requires the least amount of information from the potential user (only a pre-trained network as well as a few training samples), which makes the method applicable in a wider range of scenarios, including scenarios where most existing methods will be infeasible (e.g., Federated Learning as discussed). This problem setting is extremely challenging, and is less explored in the literature.\n\n**(2)** While existing detectors can detect multiple types of attacks, they can only infer whether the input is adversarial. While our proposed method is specific to PGD-like attacks, it can infer more information about the attack including loss, label, and Lp bound, etc., based on the five conditions.", " We appreciate the reviewer's additional comments. Here are our further discussions:\n\n**1. ARC is effective and valid given the five assumptions are satisfied.**\n\n**(1)** We acknowledge the mentioned strong assumptions imposed by our model. 
However, the assumptions that are uniquely specific to PGD-like attacks is meanwhile the reason why we can infer more information about the attack upon detection, e.g., Lp bound, loss function, and the label.\n\n**(2)** The related works focus on a wide range of attacks but can only infer whether the example is adversarial or not, while our method is specifically focusing on PGD-like attacks (namely the five assumptions), and can infer further information apart from whether the example is adversarial or not. As reflected by our title, we aim to reveal the unique trace (or “signature”) of the PGD-like attacks that are very popular among the attack/defense literature. We think identifying the unique trace of PGD-like also helps us to better understand the characteristics of PGD-like attacks as well as its difference from other attacks.\n\n**(3)** Our method is built upon a problem setting that requires less information from the potential users, which makes it usable in a wider range of potential scenarios. The methods that requires a large amount of training data will become infeasible when accessing a large amount of raw data is infeasible (e.g. Federated Learning as mentioned in the response to other reviewer comments. Accessing raw images from participating devices will violate user privacy).\n\n**2. Reducing the number of steps for PGD-like attacks.**\n\nWe agree that clarifying the issue of number of steps makes the first condition more accurate. In the first revision of the manuscript we added Appendix A.2 to discuss the effect on number of steps and the expected results. We will conduct experiments with different numbers of steps on CIFAR-10 and add the experimental results later. We will eventually also add this discussion in Section 4.2 to make the condition less confusing and better justified.\n\nThe preliminary revised condition (I) is as follows: “whether the input is adversarially perturbed by an iterative projected gradient update method with a not small number of steps.” And we will clarify in Section 4.2 that “the larger the number of steps is, the stronger the unique trace will be.”\n\n**3. Adaptive Attacks.**\n\nApart from the adaptive attack that aims to make ARC ineffective (further discussed in Appendix A.4), other forms of adaptive attacks are also possible (further discussed in Appendix A.4 as well as the reply to reviewer vEc7 Based on “On Adaptive Attacks to Adversarial Example Defenses. Tramèr et al. NeurIPS 2020”).\n\nIt is pointed out by reviewer vEc7 that our proposed method is spiritually similar to “The Odds are Odd” and “Turning a Weakness into a Strength” – both of them are statistical tests. Their corresponding adaptive attacks are “Logit Matching” and “Interpolation”, which is discussed in the response to reviewer vEc7 as well as Appendix A.4.\n\nAs for the four kinds of adaptive attacks mentioned by the reviewer:\n\n**(1)** Changing the loss function to attack the detector. This is exactly breaking our condition (IV). With a different loss function, the SAE reflected by ARC feature will be weaker (discussed in Section 4.2).\n\n**(2)** Using BPDA to approximate the gradient. Our proposed method does not change the weights or architecture of a pre-trained neural network, and we can directly compute the gradient without gradient-masking effect. Hence it is unnecessary to approximate gradient and lower the success rate of attack.\n\n**(3)** We experiment with transfer-based attacks in Table 2 (t8: DI-FGSM and t9: TI-FGSM). 
Meanwhile, they break the condition (II) – if our proposed method does not raise alert, we will also know that the pre-trained model weights are probably not yet stolen by the attacker.\n\n**(4)** Generating random noise untill a successful attack is also a valid adaptive attack.\n\nIn practice, breaking one of five conditions is simpler than directly attacking ARC. But we speculate that not discussing adaptive attack at all in the original submission will lead to a struggling rebuttal process.", " I appreciate the author's response and am eager to discuss the paper. \n\n**ARC is effective and valid given the five assumptions are satisfied.**\n\nI agree that defending against or detecting adversarial attacks is exceptionally challenging, and some assumptions of the attackers are reasonable. However, I think the assumptions in this paper are too strong. The pipeline of the attack includes 1) attackers getting and using the victim's model parameters, 2) attackers choosing the exact same $L_p$ (not small $\\epsilon$) bound with the defender, 3) attackers choosing the $L_{CE}$, 4) attackers using enough attacking iterations with PGD. I think it is unlikely that the attackers can strictly follow them when constructing the adversarial examples. \n\n**Reducing the number of steps for PGD-like attacks.**\n\nThank you for clarifying it. If SAE is not effective when facing weak adversarial examples (small $\\epsilon$ and fewer attacking steps), then I think it is necessary to modify the first condition in Section 4.2 and discuss it. \n\n**Adaptive Attacks**\n\nGiven those attacking assumptions, I think it might be unnecessary to even talk about adaptive attacks. Common adaptive attacks include:\n\n* Changing the loss function to attack the detector.\n* Using BPDA to approximate the gradient.\n* Evaluating with transfer-based attacks.\n* Generating random noise with a large number of times.\n\nAll of them are excluded from attacking assumptions. I am happy to talk more with the authors. \n\n", " Thanks for the answers. There are some comments from me after reading the rebuttals.\n\nFirstly, for the adaptive attacks and the authors' reply to Question 4. ''The DR in Table 2 is very low, even for the baseline (t1)'', the adversary can break the conditions to decrease the detection success rate. Clearly, these attacks can be seen as adaptive attacks.\n\nSecondly, for the detection speed, the authors claim that their method can be used in forensics. However, in the experiments, there is no evidence that their method is better than existed forensics methods.\n\nFinally, for the generalizability, based on the authors' reply, the method is not general. The model structures, data distributions, and attack algorithms will influence the detection performance significantly.", " **Quality-1. What if the least-likely label is not the ground\ntruth label? Is it possible that the linearity goes down ...?**\n\nThis exactly corresponds to the experiments for our condition (V)\nintroduced in Section 2. This condition is justified in Section 4.2.(V):\nPGD-like attacks triggers local linearity for a considerable portion\nof network output dimensions, because there will be SAE even with\na randomly guessed label (f10 in Figure 4). The best guess leads to\nthe strongest SAE (f1 in Fig 4), while the worst guess leads to the\nleast significant SAE (f9 in Fig 4).\n\nBesides, the linearity will not \"go down\", as shown in (f10) of\nFigure 4 using a randomly guessed label -- the values in matrix are\nincreasing. 
Namely, the linearity \"goes up'' with any guess (most-likely,\nleast-likely, random) according to visualizations mentioned in Section\n4.2.(V).\n\n**Quality-2. Discuss if the defense is indeed lightweight compared\nwith previous defenses requiring auxiliary models.**\n\nSorry for the ambiguity. The attribute \"light-weighted'' is justified\nin Section 1 \"Contributions'': \"light-weighted (requires no auxiliary\ndeep model)''. Namely, it is light-weighted in terms of algorithm\ncomponents. The proposed method is slow due to Jacobian\ncalculation, which is justified in Section 5 \"Limitations'' in\ninitial submission. This is also included in the \"Pros\\&Cons''\nlist in the Appendix.\n\n**Quality-3.1. Discuss the computational cost of the proposed\nadaptive attack on CIFAR-10.**\n\nThe additional loss term `||S_*(x+r)||_F` can be expanded with\nEq.(3) as shown in Appendix A.4. To solve this adaptive attack problem,\na straightforward solution is to conduct $Z$-step PGD updates with\nthe additional loss term. Thus, each step includes but is not limited\nto these computations: (1) $T+1$ Jacobian matrices to calculate $n^{\\*}$\nand $\\nabla f_{n^{\\*}}(\\cdot)$; (2) $T+1$ Hessian matrices to calculate\n$\\nabla^{2}f_{n^{\\*}}(\\cdot)$. Let $\\psi_{J}$ and $\\psi_{H}$ be\nthe time consumption for Jacobian and Hessian matrices respectively.\nThen the time consumption of the $Z$ steps of optimization in total\nis greater than $Z(T+1)(\\psi_{J}+\\psi_{H})$.\n\nFor reference, for Nvidia Titan Xp GPU and CIFAR-10/ResNet-18, the\n$\\psi_{J}=0.187\\pm0.012$ seconds, and $\\psi_{H}=20.959\\pm0.679$\nseconds (Python code for this benchmark can be found in Appendix).\nIf we use $Z=100$ steps of PGD attack, and $T=6$ for calculating\nARC, each adaptive adversarial example of a CIFAR-10 image takes more\nthan $Z(T+1)(\\psi_{J}+\\psi_{H})\\approx14802$ seconds (i.e., $4.1$\nhours).\n\n**Quality-3.2. Adaptive attacks: Logit matching and Interpolation\nwith binary search.**\n\nFor\nCIFAR-10, we use all testing data. For ImageNet, we only use 128 samples\ndue to limited time frame. The detailed results and discussions are\nadded to Appendix A.4.\n\nFor __Logit Matching__, the (DR, FPR) at epsilon=\\{2,4,8,16,?\\}/255 are\nrespectively:\n\n(0.0, 0.0), (0.0, 0.0), (23.8, 1.5), (48.0, 1.1), (22.8, 1.5) for\nResNet-18;\n\n(0.0, 0.0), (7.0, 1.4), (17.2, 1.4), (91.4, 1.4), (30.3, 1.6) for\nResNet-152;\n\n(0.8, 1.6), (7.0, 2.0), (55.5, 2.0), (90.6, 0.2), (41.2, 2.0) for\nSwinT.\n\nFor __Interpolation__ method, the corresponding (DR, FPR) are:\n\n(0.0, 0.0), (0.0, 0.0), (28.0, 1.5), (74.4, 1.1), (28.0, 1.5) for\nResNet-18;\n\n(0.0, 0.0), (4.7, 1.4), (25.0, 1.4), (90.6, 1.4), (31.4, 1.6) for\nResNet-152;\n\n(1.6, 1.6), (3.9, 2.0), (66.4, 2.0), (97.7, 0.2), (42.8, 2.0) for\nSwinT.\n\nThe SAE is expectedly weaker with the two attacks compared to the\nbaselines in Table 1, but our method still remains effective. \n\n**Clarity-1. Motivation of the setting discussed at L24-26 comes\nout without any context, making it hard to understand why it is important\nand hard to achieve.**\n\nAn extremely limited problem setting makes the proposed method valid\nand feasible in a wider range of defense and forensics scenarios.\nThis is discussed in detail in Appendix A.3. The most straightforward\nexample is face recognition model with Federated Learning. 
It is impossible\nto access the raw training data from user device (violating privacy\nof a large number of users), but it is still possible to collect 50\nsamples from several volunteers. In this case, data-demanding methods\nwill be infeasible, while our method is still valid.\n\n**Clarity-2. Similarly, at L59 I can see that the setting is\nextremely limited, but what are the strong \"cues\"\nand why are they hard to solve?**\n\nThe \"strong cues'' is a transitional sentence in order to smoothly\nintroduce our method. We will change the sentence into ``requiring\nus to identify strong traces left by the adversarial attacks'' to\navoid confusion. This problem is hard because it requires the least\namount of information from the user to detect adversarial examples\ncompared to related works.\n\n**Originality-\\{1,2\\}. Discuss similar defenses. Discuss local\nlinearity.**\n\nWe added the corresponding discussions in the Appendix. The discussion\nis omitted here due to character limit.", " **1. Design of adaptive attack not convincing**\n\nWe acknowledge that other alternative adaptive attack designs are possible, but as long as a loss term involves gradients, second-order gradients (Hessian) will be inevitable for optimization, which makes it computationally prohibitive again. We also discuss other adaptive attacks mentioned by other reviewers in Appendix A.\n\n**2. Threat model too strict**\n\nOur method inherits limitations from the strong assumptions, which are justified in Section 4.2. Besides, for forensics, our method can identify attack type (whether or not PGD-like) and infer attack details, while existing detectors cannot distinguish results created by different attacks. This is less explored in the literature.\n\n**3. This detection is too slow**\n\nOur method is very slow due to Jacobian matrix calculation (Section 5 “Limitations” in initial submission). The proposed method is slow for real-time defense, but is still suitable for forensics. We summarize the pros and cons in Appendix.\n\n**4. The DR in Table 2 is very low, even for the baseline (t1)**\n\nThe effectiveness of our method relies on the five conditions disucssed in Section 2. In Section 4.2, we justify these conditions by breaking them respectively as shown in Table 2. Hence, Table 2 includes the baseline (t1) and the cases (all except for t1) under which our method will not be very effective due to clearly broken assumption. Namely, high DR in Table 2 is not expected and will be paradox.\n\nThe expeirments are carried out with the most difficult “epsilon=?” setting (more difficult than the common settings in related works), which involves many small perturbations that are hard to detect. Hence the baseline (t1) performance is not high.\n\n**5. The results in Table 3 indicate that this detection method cannot efficiently detect AEs for ImageNet**\n\nSmaller perturbations are harder to detect. The unique trace of PGD-like attacks is stronger when the attack is stronger, and hence the detector performance will be better with a stronger trace (98% DR for ImageNet/SwinT with epsilon=16/255). This characteristics is consistent across different settings, and none of the existing methods shows the same characteristics. For instance, the DR of NSS even drops by a large margin when epsilon is increased from 8/255 to 16/255, which is an undesired characteristics showing inconsistency of NSS.\n\n**6. 
The victim should be able to freely gather as much as clean data to train a detector**\n\nData-demanding methods are only applicable for models using publically available datasets, or is only applicable by the ones who trained the neural network. This seriously limits the use cases of these methods, while a data-undemanding method can be used in a wider range of defense or forensics scenarios. Collecting a large amount of data may be difficult for some potential adopters. For instance, the raw training data of a federated learning face recognition model is inaccessible (sending raw image data is forbidden in Federated Learning due to privacy violation), but collecting a few reference samples from volunteers is still possible. None of data-demanding methods is applicable in this case.\n\nOur proposed method focuses on the network response characteristics. It does not benefit much from a large amount of training data. As discussed on L297-301, the performance gain of our method starts to plateau from roughly 200 training samples.\n\n**7. This method is only sensitive to PGD-like attacks. (followed by two questions)**\n\nOur ARC feature is calculated using BIM, in order to “continue” a previous PGD-like attack (Section 2). Hence, SAE relies on the five conditions in Section 2, and non-PGD-like attack will easily break the conditions. Our empirical observations (all Figures and Tables) show that only PGD-like attacks (will easily satisfy the conditions) will effectively trigger SAE (local linearity).\n\nStatistically, PGD-like attacks will trigger local linearity based on our demonstrations (Figure 2), but there will be hard samples that fall into the cluster of benign examples. Our method is not sensitive against these samples as demonstrated, which leads to imperfect Detection Rate.\n\n**8. In Figure 5, what is the reason that NSS is a better detection method on ResNet-152?**\n\nARC is based on model gradient, and different architecture have different characteristics. Compared to SwinT, ResNet-152 is harder to be turned “linear” (Figure2; Figure3; L235). We speculate this is the reason why the trace for ResNet-152 is weaker than that of SwinT.\n\nFigure 5 is based on the difficult “epsilon=?” setting involving both small and large perturbations. According to Table 3, a larger perturbation results in stronger trace, and hence more distinct SAE and higher DR for our method. NSS extracts low level hand-crafted features, and is better than our method for small perturbations. But NSS performance will plateau or even decrease with larger perturbations, which is inconsistent.", " **1. The proposed method is limited to PGD-like attacks as discussed in the paper.**\n\nThe proposed method relies on the five assumptions as discussed in “Uniqueness of SAE to PGD-like Attack” of Section 2, which are further justified in Section 4.2. These assumptions collectively make the proposed method specific to PGD-like attacks. Being insensitive to non-PGD attacks makes the proposed method unsuitable for general attack detection scenario, but it is still useful in defense scenario with knowledge about the attacker, or forensics scenario to example whether an adversairla example is created by PGD-like attack by identifying its unique trace.\n\nWe summarized a list of pros and cons of the proposed method in the Appendix of the revised manuscript.\n\n**2. 
Motivation for extremely limited training data.**\n\nData-demanding methods is only applicable for models using publically available datasets, or is only applicable by the first-party who trained the neural network. This limits the use cases of these methods. In contrast, we do not assume collecting a large amount of data is easy for potential adopters of the proposed method. Due to the low demand on data, the proposed method enables a wider range of defense or forensics scenarios, especially when there is no access to the whole training dataset. For instance, the __\"Third-party Attack Detection or Forensics\"__ and __\"Attack Detection for Federated Learning\"__ scenarios.\n\n__Third-party Attack Detection (identify whether the model is attacked) or Forensics (identify attack type and infer the attack detail).__ Being data-undemanding means the proposed method can be applied to any pre-trained neural network randomly downloaded from the internet, or purchased from an commercial entity. For pre-trained neural networks using proprietary training datasets with commercial secret or ethic/privacy concerns (such as commercial face datasets and CT scans from patients), the proposed method is still valid as long as there are are a few training samples for reference, or it is possible to request a few reference training samples.\n\n__Attack Detection for Federated Learning.__ In federated learning, raw training data (such as face images) is forbidden to be transmitted to the central server. And hence even the neural network trainer cannot access the full training dataset (will violate user privacy), and it is impossible to use any data-demanding methods to detect attack against a trained model (e.g., face recognition model). In contrast, the proposed method is still valid in this scenario as long as a few training samples can be collected from several volunteers for reference.\n\nWe added these in the Appendix of the revised manuscript.\n\n**3. Performance when there is sufficient training data.**\n\nPlease refer Section 5 “Training Set Size.” Our proposed method relies more on the stable pattern about model gradient consistency instead of the amount of data. It does not benefit from a large amount of data. According to our empirical observations, the performance gain from number of training data will become marginal starting from roughly 200 samples (L297-301), because the distribution of ARCv feature is already well-represented by the small batch of data. Meanwhile, there is no extra representational capacity of the easy-to-interpret 2-dimensional ARCv feature. Thus, our proposed method is more suitable for scenarios with extremely limited data.\n\n**4. Why use T=48 in L100?**\n\nIn our experiments, we use T=6 (L181-184), and the corresponding features are visualized in Figure 2, Figure 3, and Figure 4. We speculate some readers might want to know what the feature wille be like with a larger step size. Thus, we show this to through T=48 examples in Figure 1, which meanwhile demonstrates that the network behaves more and more linear from step to step. With an empirically chosen T=48, the trend of the matrix is clear, and each cell in the matrix will not be too small to visualize. With a larger step size like T=100 or even larger, the matrix will show the same pattern, but the cells will be too small to visualize.", " **1. Is ARC a valid method?**\n\nARC is effective and valid given the five assumptions are satisfied (See “Uniqueness of SAE to PGD-like Attack.” in Section 2). 
These assumptions collectively make the proposed method specific to PGD-like attacks, and hence is expectedly ineffective against non-PGD-like attacks. These assumptions are justified in Section 4.2 through both qualitative and quantitative demonstrations. Conclusively, SAE is a unique trace of PGD-like attacks (L269-L271), and is effective against PGD-like attacks (Section 4).\n\nWe summarized a list of pros and cons of the proposed method in the Appendix of the revised manuscript.\n\n**2. Reducing number of steps for PGD-like attacks.**\n\nWe agree with this. It is known that the number of iterations (fixed at 100 in our experiments) also impacts the attack strength besides perturbation magnitude ε. As increasing number of iterations will also lead to a more linear response from the model given an fixed and appropriate ε and achieve SAE similarly, we stick to one controlled variable ε for simplicity.\n\nOn the contrary, reducing the number of iterations of a PGD-like attack will also lead to small perturbations that are hard to detect (as demonstrated in Section 4), and hence increase the possibility that the attack will not trigger clear SAE and hence bypass the proposed detection method. As an extreme case, FGSM, namely the single-step version of PGD does not effectively trigger SAE (as discussed in Section 4.2). \n\nThe related works usually fix at a single set of attack parameters, and hence miss the observation that smaller perturbations are harder to detect.\n\nThis is added the corresponding discussion in the Appendix (due to page limit) of the revised manuscript.\n\n**3. It cannot work with adversarial training.**\n\nThe proposed method is incompatible with Adversarial Training. But meanwhile it provides a new perspective to understand why adversarial training works. See \"Combination with Adversarial Training\" in Section 5).", " This paper proposed a method to trace the strong adversarial attacks (specifically PGD). The insight is PGD attack is likely to leave a trace and trigger the local linearity of the network. They then introduced the ARC to capture the SAE, resulting in a detector that can identify the informed attacks and uninformed attacks. Experiments including ImageNet showed the effectiveness. \nStrength:\n1. It studies very extremely limited settings for the defender, which is novel and interesting. \n2. The paper presents a host of visualization to support the intuitions and conclusions. \n\nComment:\n1. My major concern is that ARC is not effective under a lot of scenarios (including FGSM, small $\\epsilon$, and transfer-based attacks ). It is also possible for PGD attacks to bypass the detector by reducing the number of steps. It cannot work with a robust model (adv training). I am not sure if ARC is a valid trace method. \n\nAfter rebuttal, I am willing to improve the score to 5, but my concern about the method still exists. NA NA", " This paper proposes a new and interesting method to detect PGD-like adversarial perturbations. The insight is to leverage the fact that PGD-like adversarial attacks will trigger local linearity of the input sample. To quantitatively measure the local linearity, the authors propose to use the Sequel Attack Effect (SAE) to continue the attack with BIM and then measure linearity with Adversarial Response Characteristics (ARC). SVM-based models are then trained and used for the detection of informed/uninformed attack detections. 
Strength\n- The proposed method is very interesting and novel, providing great contributions to understanding and detecting PGD-like adversarial attacks and defense.\n- The investigated topic is important and useful.\n- Extensive empirical results on CIFAR-10 and ImageNet well illustrate the effectiveness of the proposed method and support the claims.\n- The paper is well-written and easy to follow. Provided figures clearly show the insights of the method.\n\nWeakness\n- The method is only limited to PGD-like adversarial perturbations, unable to other popular attacks like C&W as discussed in the paper.\n- The importance of limited training data for adversarial detection should be better motivated in introduction. - I understand the main contribution here is for detection with limited amount of training data. I wonder how well the performance of the proposed method would be given sufficient training data compared to other SOTA detection methods?\n- A minor question: why use T=48 in line 100? It would be better to give a brief explanation for special numbers. Limitations and societal impact are comprehensively discussed in the paper.", " This paper proposes a detection defense against adversarial examples generated by untargeted PGD-like attacks. This defense relies on the assumption that such adversarial examples trigger distinguishable patterns of linearity along a local trace defined by the BIM attack, which maximizes the cross-entropy loss that regards the least-likely label as the ground truth. The linearity pattern is characterized by the ARC feature, a vector reduced from the jacobian matrices along the BIM trace. Extensive qualitative and quantitive results on CIFAR10 and ImageNet (ResNet and SwinT) demonstrate the distinguishability of the ARC feature over benign inputs and adversarial examples generated by PGD-like attacks. This characterization is then used to detect PGD-like attacks and infer their parameters. The overall defense is effective against non-adaptive (unaware of the defense) PGD-like attacks, and outperforms another detection defense that does not require too much data to collect statistics. Finally, some interesting observations are discussed, such as linearity enforced by adversarial training. Before heading to the detailed comments, I would like to note *neutrally* that the proposed defense is similar in spirit to some previous detection defenses that rely on the adversarial example's higher robustness against benign noise [1] and adversarial perturbation [2], which were later broken by [3]. *This similarity leads to both pros and cons, detailed below.*\n\n[1] The Odds are Odd: A Statistical Test for Detecting Adversarial Examples. Roth et al. ICML 2019.\n\n[2] A New Defense Against Adversarial Images: Turning a Weakness into a Strength. Hu et al. NeurIPS 2019.\n\n[3] On Adaptive Attacks to Adversarial Example Defenses. Tramèr et al. NeurIPS 2020.\n\n### Originality\n\n**Strengths (major)**\n* The proposed characterization of adversarial examples is novel and only requires a tiny set of training data, which seems to be less discussed in the literature.\n* The proposed ARC feature is novel and explicitly characterizes a well-defined property (local linearity) of adversarial examples. I believe this is more sophisticated than previously explored properties such as robustness to benign noise [1] and adversarial perturbation [2].\n* The idea of using BIM attacks to characterize the trace of untargeted PGD-like attacks is insightful. 
The proposed defense can detect perturbed-but-unsuccessful inputs, which seemed not to be covered by previous defenses [1, 2].\n\n**Weaknesses (minor)**\n* **Discuss similar defenses.** It is suggested to discuss [1, 2] and potentially other similar defenses with more details on the underlying properties. For example, how the property explored in this work is different and superior to those explored by previous defenses.\n* **Discuss local linearity.** Since this work relies on the local linearity of adversarial examples, it is suggested to include more discussion about newer work in this direction to solidify the underlying assumption.\n\n### Quality\n\n**Strengths (major)**\n* The proposed characterization of adversarial examples is technically sound and well supported by experimental results on CIFAR-10 (ResNet) and ImageNet (ResNet and SwinT). It can effectively distinguish between benign inputs and adversarial examples generated by PGD-like attacks with different parameters.\n* The ablation study demonstrates the defense's non-sensitivity to non-PGD-like attacks, but the pros and cons are adequately justified.\n\n**Weaknesses (major)**\n\n* **Some assumptions are too strong.** This paper made some strong assumptions that may not always hold. For example, at L81 the authors assume that the input's ground-truth label is simply the least-likely one. While this is likely to hold for vanilla attacks, a slightly smarter attacker could simply make the ground-truth label anywhere between the most-likely and least-likely label to break this assumption. It is unclear how the original characterization performs in this case. For example, is it possible that the linearity goes down at the first few steps (given an incorrect guess of ground truth) before heading up in those cases?\n\n* **The proposed defense may not be lightweight**. Since the proposed defense requires several steps of BIM attacks at the inference stage to obtain the input's ARC feature, the defended model's inference of each input would now include overheads (both efficiency and memory) of several backward passes. While other defenses require auxiliary models (thus argued as not lightweight in this paper), those defenses do not involve backward passes and thus may be more efficient than the proposed defense. This weakness is partially supported by the authors' inability to evaluate all ImageNet data. It is suggested to include some empirical results to resolve or clarify this weakness.\n\n* **Insufficient discussion of adversarial robustness.** Since this paper claims to propose a defense against adversarial examples, I find it hard to be convinced that the proposed defense would come with much robustness against adaptive attacks. I understand that the authors have discussed the hardness of adaptive attacks to some extent and left them for future study (L123-134), but the current discussion is rather limited, even from the perspective of *published* adaptive attacks on similar defenses. *I am outlining my comments below but will be open on this point if the meta-reviewer deemed the following discussion as beyond the scope or outweighed by the strengths of the proposed characterization approach (in a non-adversarial setting).*\n * At L123-134, the authors discussed an adaptive attack that is computationally prohibitive on ImageNet. However, it is unclear if the same claim holds for the smaller CIFAR-10, which is also a dataset evaluated in this paper. 
**Please discuss the computational cost of this adaptive attack on CIFAR-10 and if it is still prohibitive.**\n * If I understand correctly, the adaptive attack must reach a point whose BIM-trace is equally non-linear (or has the same linearity pattern) as that of a benign input. **However, the current defense cannot prevent the existence of such points, so it is still possible to find them.** This is similar to previously broken defenses that expect adversarial inputs to have unique patterns in terms of robustness to benign noise [1] and adversarial perturbation [2]. Therefore, I believe discussing their *existing* adaptive attacks from [3] would significantly strengthen this paper. In particular, it is suggested to discuss the following two *published* adaptive attacks.\n * Logit matching (Section 5.2 of [3]). If the PGD attacker now aims to reach a point whose logit is sufficiently close to an unperturbed image from a different class, can the proposed characterization still distinguish between benign inputs and such adversarial examples?\n * Interpolation between the adversarial and benign examples (Section 5.13 of [3]). If the attacker moves the adversarial example generated by PGD towards the original benign input by interpolation, can the proposed characterization still distinguish between the original benign input and the adversarial example moved towards it?\n\n\n### Clarity\n\n**Strengths (major)**\n* The overall presentation is good. I appreciate the clear demonstration in Figure 1.\n* The summarized conditions in Section 4.2 is good.\n\n**Weaknesses (minor)**\n* The setting discussed at L24-26 comes out without any context, making it hard to understand why it is important and hard to achieve. It is suggested to motivate this setting with some related work and emphasize it more throughout the paper.\n* Similarly, at L59 I can see that the setting is extremely limited, but what are the strong \"cues\" and why are they hard to solve?\n\n### Significance\n\nI appreciate the great effort in characterizing the trace of PGD-like attacks and the extensive experiments; this paper might inspire some work in that direction. However, my biggest concern about this work is its robustness as a defense against adversarial examples, given my experience of the commonly acknowledged importance of evaluating adaptive attacks in the adversarial example defense literature. That being said, I am open to discussion on this point. My current score is based on the strengths of the proposed characterization method and the expected fix of minor weaknesses noted above. I am willing to raise my score if the following major concerns are adequately clarified or justified.\n* [Quality-Weakness-1] What if the least-likely label is not the ground truth label?\n* [Quality-Weakness-2] Discuss if the defense is indeed lightweight compared with previous defenses requiring auxiliary models.\n* [Quality-Weakness-3.1] Discuss the computational cost of the proposed adaptive attack on CIFAR-10.\n* [Quality-Weakness-3.2] Discuss the two adaptive attacks mentioned above.\n\nI am open to decrease the significance of the last question if the meta-reviewer deemed it as beyond scope or outweighed by the strengths of the proposed characterization method in a non-adversarial setting. Since this paper claims to propose a defense, its main limitation is the lack of a sufficient discussion of adaptive attacks. 
Although the authors have provided some discussion at L123-134, I find it hard to be convinced that a more thorough evaluation is not necessary. It is strongly recommended to at least evaluate published adaptive attacks (that break previous similar defenses) on the proposed defense.", " The authors propose a new detection method to detect PGD-like attacks. The detection method is based on the analysis of local linearity between benign samples and adversarial samples. The authors also introduce ARC and SAE to measure the local linearity. The experimental results indicate that this method can successfully detect PGD-like attacks on various datasets and model architectures. I think using gradients to detect AEs is interesting, and there are few works focusing on this point. However, there are some weaknesses.\n\n1. The design of adaptive attack is not convincing. To minimize the Fro-norm of S_*(x+r), the attacker only needs to adopt an additional loss term to minimize the difference between the gradients of AEs and clean data.\n2. There are five requirements for SAE to be consistently triggered. It seems that breaking requirements (III), (IV) and (V) are very easy. The threat model is too strict for me. The victim needs to have perfect knowledge of the attacker. \n3. As a detection method, its premier requirement is fast and cannot influence the performance (e.g., the predicting speed and accuracy) of the classifier. This detection is too slow to use in practice. \n4. The DR in Table 2 is very low, even for the baseline (t1).\n5. For ImageNet, the perturbation size is usually very small, i.e., 4./255 or less. So, the results in Table 3 indicate that this detection method cannot efficiently detect AEs for ImageNet.\n6. The baseline method is only NSS, which is not enough. Although the proposed method is data-undemanding, I think a victim can freely gather as much as clean data to train a detector. It should not be a problem to compare your method with other previous works. 1. The authors claim that this method is only sensitive to PGD-like attacks. Is it because that only PGD-like attacks can trigger the local linearity? If a PGD-like attack does not trigger it, can the detection still sensitive to this attack?\n2. In Figure 5, what is the reason that NSS is a better detection method on ResNet-152? The authors have addressed the limitations. I appreciate it." ]
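[Editor's illustration] The reviews above repeatedly stress that the detector is trained from only a handful of reference samples. As a hedged sketch of that "data-undemanding" step, the snippet below fits an SVM on per-sample features such as the gradient-consistency trace sketched earlier; the RBF kernel and the two-class benign/attacked setup are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: fitting a lightweight SVM detector on a small set of reference features.
import numpy as np
from sklearn.svm import SVC

def fit_detector(feats_benign, feats_attacked):
    """feats_*: (n_samples, n_features) arrays, e.g. outputs of linearity_trace()."""
    X = np.concatenate([feats_benign, feats_attacked], axis=0)
    y = np.concatenate([np.zeros(len(feats_benign)), np.ones(len(feats_attacked))])
    clf = SVC(kernel="rbf", probability=True)
    return clf.fit(X, y)

# Usage (hypothetical arrays):
# detector = fit_detector(benign_feats, attacked_feats)
# flags = detector.predict(test_feats)   # 1 -> flagged as PGD-like attacked input
```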
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 5, 4 ]
[ "4OKs4YihynZ", "bzdAzYjHiXg", "KO81F-sNIpD", "4nM9clN-Cw4", "vdL0qcqgABX", "vdL0qcqgABX", "3IMQZhseHd7", "J5pSDsIDhTC", "KO81F-sNIpD", "ocrOggOlRQX", "dkCYvOsFR8J", "N6w3yvLDbBY", "J_Khri3h609", "zLbl2g-jOPx", "ks3TllR9f2U", "nips_2022_jowVZoitZYu", "nips_2022_jowVZoitZYu", "nips_2022_jowVZoitZYu", "nips_2022_jowVZoitZYu" ]
nips_2022_m67FNFdgLO9
Dense Interspecies Face Embedding
Dense Interspecies Face Embedding (DIFE) is a new direction for understanding the faces of various animals by extracting common features among animal faces, including the human face. There are three main obstacles to interspecies face understanding: (1) lack of animal data compared to humans, (2) ambiguous connections between the faces of various animals, and (3) extreme shape and style variance. To cope with the lack of data, we utilize multi-teacher knowledge distillation of CSE and StyleGAN2, requiring no additional data or labels. Then we synthesize pseudo-paired images through the latent space exploration of StyleGAN2 to find implicit associations between different animal faces. Finally, we introduce the semantic matching loss to overcome the problem of extreme shape differences between species. To quantitatively evaluate our method against possible previous methodologies such as unsupervised keypoint detection, we perform interspecies facial keypoint transfer on MAFL and AP-10K. Furthermore, results for other applications, such as interspecies face image manipulation and dense keypoint transfer, are provided. The code is available at https://github.com/kingsj0405/dife.
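[Editor's illustration] The abstract combines two teachers: a CSE-style interspecies embedding and per-species StyleGAN2 features reached through a domain converter. The sketch below shows one plausible way to write such a multi-teacher distillation objective; the module names, the simple L2 losses, and the assumption that channel dimensions already match each teacher are illustrative choices, not the authors' implementation.

```python
# Sketch: multi-teacher distillation of a shared interspecies embedder.
import torch
import torch.nn.functional as F

def multi_teacher_loss(student, converter, img, cse_feat, stylegan_feat):
    """img: (B,3,H,W); cse_feat / stylegan_feat: frozen teacher feature maps (B,C,h,w).
    Assumes student/converter output channels match the respective teachers."""
    z = student(img)                                        # shared interspecies embedding
    z_cse = F.interpolate(z, size=cse_feat.shape[-2:],
                          mode="bilinear", align_corners=False)
    loss_cse = F.mse_loss(z_cse, cse_feat)                  # distill the interspecies teacher
    z_dom = converter(z)                                    # species-specific projection
    z_dom = F.interpolate(z_dom, size=stylegan_feat.shape[-2:],
                          mode="bilinear", align_corners=False)
    loss_gan = F.mse_loss(z_dom, stylegan_feat)             # distill the generative teacher
    return loss_cse + loss_gan
```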
Accept
This paper uses knowledge distillation to transfer facial embedding learning across humans and animals, which helps when sufficient data for learning embeddings from animal faces is not available. It is an interesting application of standard concepts from domain adaptation, knowledge distillation, etc. While preparing the final paper, the authors may highlight the novelty of this work. The paper is acceptable.
train
[ "LW04V-f_nvi", "VGQlcNLR1hM", "aYXt8CfmH1J", "iFaMW8IArv6", "hrj6bfislK", "yCh0N5kBLv", "1-BayH1bvoU", "ZtGEzoo0Nm9", "PalDNHeERqa", "nyMveVtmfjJ", "n3mlSG42241", "VQstlncmXJv", "0K8kgcfEz9i" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your answers. They addressed most of my concerns pre-rebuttal. I will update my score.\n\n", " ### Table 3: The human landmark detection\n- We bring citations starting with ‘O’ and the NME value of previous works from On Equivariant and Invariant [R1]. For example, ‘O67’ means the 67th reference of the paper. The bold fonts mean lower performance than DIFE.\n| Category | Method | Unsup. | Cross-Domain | MAFL | AFLW_M | AFLW_R | 300W |\n|:--------------------------:|:--------------------------------:|:------:|:------------:|:--------:|:---------:|:---------:|:--------:|\n| Supervised learning | TCDCN | X | X | **7.95** | 7.65 | - | 5.54 |\n| | MTCNN [O66] | X | X | 5.39 | 6.90 | - | - |\n| | Wing Loss [O14] | X | X | - | - | - | 4.04 |\n| Generative modeling based | Deforming AE [O47] | O | X | **5.45** | - | - | - |\n| | ImGen. [O28] | O | X | 2.54 | - | 6.31 | - |\n| | ImGEN.++ [O29] | O | X | - | - | - | 5.12 |\n| Equivariance based | Dense 3D [O50] | O | X | **4.02** | **10.99** | **10.14** | **8.23** |\n| | DVE [O49] | O | X | 2.86 | 7.53 | 6.54 | 4.65 |\n| | On Equvariant And Invariant [R1] | O | X | 2.44 | 6.99 | 6.27 | 5.22 |\n| | DIFE | O | O | 3.40 | 10.11 | 8.68 | 7.57 |\n### Weakness 1 + Question 1 + Question 2 + Question 3\n- We greatly appreciate your instructive replies toward solid work.\n- (1)+(2)+Q2: When we try to choose the mean face and transfer landmarks, the performance was highly dependent on the selected mean face. Instead, we bring the keypoint regression experiment setup following DVE [36] and On Equivariant And Invariant [R1] to evaluate the robustness and the accuracy of DIFE on intra-species landmark detection. In the experiment, we train a single FC layer with a frozen pre-trained embedder to predict keypoint. The NME values of previous methods are also brought from On Equivariant And Invariant [R1]. Even though our embedder is trained on synthesized datasets, not the target dataset, DIFE shows compatible performance with the early study results of each category meaning DIFE is the apposite baseline for cross-domain face understanding. The qualitative results are also provided in Figure 6 of supplementary materials.\n- (3)+Q1: In appendix E and Table 2 we conduct the human keypoint transfer experiment with DVE and DIFE trained on only human data. The test dataset is fixed as MAFL [46] and Ap-10k [41] on every row of Table 2. The upper two rows are pre-trained weights from the original paper and the lower two rows are trained by synthesized images from StyleGAN2. The NME values of landmarks are evaluated for same-identity and different-identity following the experiment setup of DVE [36].\n### Weakness 2 + Question 4\n- The problem you mentioned is more related to StyleGAN2 latent exploration rather than the spatial consistency of the domain converter. Since we search for a pseudo-paired image (e.g. dog in Fig. 2 (b)) in the manifold of StyleGAN2, it is sometimes difficult to find an image whose face geometry exactly matches that of the input. In this case, we have observed that our latent exploration finds a realistic but unaligned image that some parts are not perfectly aligned with the input. To mitigate the misalignment, we proposed a soft distance measure in a semantic matching loss in Eq (5).\n### References\n- [R1] Cheng, Z., Su, J. C., & Maji, S. (2021). On equivariant and invariant learning of object landmark representations. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 
9897-9906).", " This paper is about understanding the faces of animals (e.g., identifying key points such as location of the eyes, nose, or mouth) by extracting common features among the faces of other animals and of humans. This use case raises several ethical concerns:\n\n1) Data protection / privacy: The data rights of humans whose facial features are being mapped to the facial features of animals. It may well be the case that a human would object to such use of their data.\n\n2) Racial / ethnic / gender bias: Performance of the proposed method is not evaluated specifically on human faces of different racial / ethnic groups and of different genders. It may well be the case that the method works better or less well for mapping the faces of members of specific demographic groups to specific animal species. The outrageous example that comes to mind is when an AI built by Facebook labelled videos of Black men with \"Primates\" (e.g., https://www.nytimes.com/2021/09/03/technology/facebook-ai-race-primates.html). While the method described here is not used for image or video labeling, the issue of demographic bias deserves to be further explored experimentally, reflected upon, and acknowledged in the writing.\n\n The data protection issue is acknowledged as: \"We note that the studies using DIFE could potentially violate the personal privacy or animal rights. The studies extending DIFE should carefully consider the above potential misuses, and bring positive impacts to the society.\"\n\nThe demographic bias issue is not acknowledged.\n\n Yes, these concerns can be addressed in the current version of the paper, but (1) adding a reflection in the Broader Impacts section and (2) explaining the demographic representativeness of the training corpora (for humans) and quantifying performance for different racial / ethnic / gender groups.", " ## Weakness 1 + Question 1 + Question 2 + Question 3\n - It is good to see the experiment results on WFLW and Animalweb, which I think is also helpful to dispel some of other reviewers' concerns.\n - **(1)+(2)+Q2.** Since it is hard to evaluate the contour landmarks, I think it would be great to see an intraspecific landmark transfer experiment, which reflects the robustness and accuracy of the method to some extent. One of the methods to conduct this experiment is to make a \"mean face\", with annotated landmarks, then you map this \"mean face\", as well as the landmarks, to all images in the testing set, then you are able to figure out the \"Normalized Mean Error (NME)\" in this dataset. In this way, you can find where you are in the landmark detection task. I did not expect your results to outperform intra-domain results. I just hope to put them in the same metrics. As in many unsupervised methods, they would measure with the same metrics and provide the result of supervised methods for reference.\n - **(3)+Q1.** I do not quite understand the setting of the experiment in Appendix E, could you elaborate more about it?\n\n## Weakness 2 + Question 4\n - Channel-wise operation does not guarantee the spatial semantic consistency. As shown in the Figure 2 (b), the jaw of the human face DIFE feature is mapped to the dog's mouth in the Domain-Specific Embedding. 
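[Editor's illustration] The landmark discussion above (Table 3, the proposed "mean face" experiment) is phrased entirely in terms of the Normalized Mean Error. For readers unfamiliar with the metric, here is a minimal sketch; normalizing by the inter-ocular distance follows the MAFL/300W-style convention, and whether the rebuttal uses exactly this normalizer for every dataset is an assumption.

```python
# Sketch: inter-ocular-normalized mean error (NME) for landmark evaluation.
import numpy as np

def nme(pred, gt, left_eye_idx=0, right_eye_idx=1):
    """pred, gt: (N, K, 2) landmark arrays in pixel coordinates; returns NME in percent."""
    err = np.linalg.norm(pred - gt, axis=-1).mean(axis=1)                  # (N,) mean point error
    iod = np.linalg.norm(gt[:, left_eye_idx] - gt[:, right_eye_idx], axis=-1)
    return float((err / iod).mean() * 100.0)
```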
", " ### Table 1: The interspecies keypoint transfer on CelebA+AP-10K\n| | Human+Dog | Human+Cat | Dog+Cat | Human+Wild |\n|------|:---------:|:---------:|:-------:|:----------:|\n| CSE | 19.00 | 18.15 | 8.00 | 17.69 |\n| DVE | 17.78 | *17.40* | *6.75* | 15.58 |\n| CATs | *14.78* | 17.63 | 8.47 | *14.05* |\n| Ours | **11.73** | **11.00** | **6.51** | **10.37** |\n### Table 2: The interspecies keypoint transfer on WFLW+AnimalWeb\n| | Human+Dog | Human+Cat | Dog+Cat | Human+Wild |\n|------|:---------:|:---------:|:-------:|:----------:|\n| CSE | 14.43 | 14.95 | 16.85 | 16.08 |\n| DVE | *14.16* | *13.28* | *13.40* | *15.74* |\n| CATs | 20.84 | 18.84 | 19.35 | 22.20 |\n| Ours | **12.01** | **12.84** | **11.70** | **14.03** |\n### Weakness 1 + Weakness 2\n- We understand your concern about the weak connection between the task and the main experiment. As mentioned in Section 4.1 there is no proper dataset or previous works for interspecies face understanding. The interspecies keypoint transfer experiment is an indirect way to evaluate our method quantitatively. We also prepare the best possible previous works as baselines; CSE [23] and DVE [36]. Besides, we carry out the interspecies keypoint transfer on WFLW [R1] and AnimalWeb [R2] datasets in Table 2. The new baseline from the semantic visual correspondence is also added in Table 1 and Table 2.\n- In Table 2, we report the quantitative result of the interspecies keypoint transfer on WFLW and AnimalWeb with 9 landmarks including the corners of the eye and mouth. The qualitative results of the experiment are updated in Figure 4 of supplementary material. Our method shows the best performance compared to previous methods on every domain pair. We will update the new experiment in the revision.\n### Weakness 3\n- We understand your concern about the performance of our method as shown in Figure 3. Nevertheless, our method achieves the best performance even in extreme poses (column 4) and out-of-domain (column 7). Because this is the first work for interspecies face understanding, we have more chance to boost the performance in future work. We will update this in the revision.\n### Weakness 4 + Question\n- We add the interspecies face parsing experiment in Figure 5 of the supplementary material. Following the segmentation in style, we use k-means clustering by DIFE for unsupervised face parsing. The eye, nose, mouth, and hairy parts are discovered with a simple method which means DIFE has proper semantic information for the interspecies face. We will update the result in the revision.\n- The comparison to existing supervised/unsupervised methods is unfair because they focus on intra-domain.\n- Nevertheless, we provide the human keypoint transfer experiment with our method trained on the human-only dataset in Appendix E. Our method show compatible performance to pre-trained DVE which is an unsupervised keypoint detection method for identity-invariant learning inside the human domain.\n- We also add a new baseline CATs [R3] which is the state-of-the-art for finding visual semantic correspondence. We use the pre-trained model from the original paper trained on SPair-71k [R4] that contains annotations including the landmarks of dogs, cats, and other animals. Our method outperforms the CATs both quantitatively and qualitatively, which indicates the applicability of our method to discovering landmarks in unlabeled animal domains. The quantitative results are provided in Table 1 and Table 2. 
And the qualitative results are provided in Figure 3 and Figure 4 of supplementary materials.\n### Typo\n- We update the rebuttal revision. Thank you for reading in detail.\n### Limitations\n- Our method is dependent on the performance of pre-trained CSE and StyleGAN2. There are failure cases on the occlusion, the dark illumination, or the rare species. We will update this in the revision.\n### References\n- [R1] Wu, Wayne, et al. \"Look at boundary: A boundary-aware face alignment algorithm.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.\n- [R2] Khan, Muhammad Haris, et al. \"Animalweb: A large-scale hierarchical dataset of annotated animal faces.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.\n- [R3] Song, L., Wu, W., Fu, C., Qian, C., Loy, C. C., & He, R. (2021). Pareidolia Face Reenactment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2236-2245).", " ### Table 1: The interspecies keypoint transfer on CelebA+AP-10K\n| | Human+Dog | Human+Cat | Dog+Cat | Human+Wild |\n|------|:---------:|:---------:|:-------:|:----------:|\n| CSE | 19.00 | 18.15 | 8.00 | 17.69 |\n| DVE | 17.78 | *17.40* | *6.75* | 15.58 |\n| CATs | *14.78* | 17.63 | 8.47 | *14.05* |\n| Ours | **11.73** | **11.00** | **6.51** | **10.37** |\n### Table 2: The interspecies keypoint transfer on WFLW+AnimalWeb\n| | Human+Dog | Human+Cat | Dog+Cat | Human+Wild |\n|------|:---------:|:---------:|:-------:|:----------:|\n| CSE | 14.43 | 14.95 | 16.85 | 16.08 |\n| DVE | *14.16* | *13.28* | *13.40* | *15.74* |\n| CATs | 20.84 | 18.84 | 19.35 | 22.20 |\n| Ours | **12.01** | **12.84** | **11.70** | **14.03** |\n### Weakness 1 + Question 1 + Question 2 + Question 3\n- We appreciate the experiment suggestion to evaluate our method from various angles.\n- (1)+(2)+Q2 In Table 2, we report the quantitative result of the interspecies keypoint transfer on WFLW [R1] and AnimalWeb [R2] with 9 landmarks including the corners of the eye and mouth. The qualitative results of the experiment are updated in Figure 4 of supplementary material. Our method shows the best performance compared to previous methods on every domain pair. We will update the new experiment in the revision. In the case of face contour, we cannot get the quantitative results because of the absence of an animal dataset. However, the qualitative results are provided as dense keypoint transfer results in Figure 8.\n- (3)+Q1 We understand your concern about the isolated results from previous works. However, the direct comparison to existing supervised/unsupervised methods is unfair because they focus on intra-domain. Nevertheless, we provide the human keypoint transfer experiment with our method trained on the human-only dataset in Appendix E. Our method shows compatible performance to pre-trained DVE provided by the original paper.\n- (4)+Q3 Sorry for the typo in Table 2. As you understand, CSE [23] results in Table 1 are the output of pre-trained CSE which is described in Section 4.1(L230-L231). The correct pixel error of DIFE trained with $L_{K1}$ on Table 2 is $19.33$ not $19.00$. 
We update the rebuttal revision for this.\n### Weakness 2 + Question 4\n- Because our domain converter just maps the input to the output in the same location with the channel-wise operation, the spatial consistency is preserved.\n### Weakness 3 + Question 5\n- Although the real image distribution is more helpful to train the model, we cannot use real images as DIFE needs images and corresponding features of StyelGAN2[18] to learn face geometry. Therefore, we use extreme data augmentations like color jittering and thin-plate-spline warping to mimic real image data distribution. We also point out that the model trained with a synthesized dataset achieves the best performance on the real image dataset in Table 1 and Table 2.\n- Q5 Because there is no proper label for synthesized data, we cannot show the quantitative result. However, we conduct dense keypoint transfer on the synthesized images in Figure 8.\n### References\n- [R1] Wu, Wayne, et al. \"Look at boundary: A boundary-aware face alignment algorithm.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.\n- [R2] Khan, Muhammad Haris, et al. \"Animalweb: A large-scale hierarchical dataset of annotated animal faces.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.", " ### Weakness 1\n- The performance of DIFE is dependent on the performance of pre-trained CSE [23] and StyleGAN2 [18].\n- It's also true that our method requires more prior compared to DVE [36], but DVE failed to deal with extreme shape and texture variance as shown in the interspecies keypoint transfer. Our approach is still a more simple way than collecting the dataset for interspecies facial understanding because there are already plenty of unlabeled face images and body-annotated data.\n### Weakness 2\n- As described in Section 2.3(L126-L128), no studies try to align StyleGAN2 learned in different data domains at the same time. We suggest a new way to utilize the teacher model prior by using the interspecies body model to find the common space for different face generative models. Although the pseudo-paired data synthesis is similar to the latent space exploration method of Image2StyleGAN++ [1], our approach suggests a new way to synthesize paired data on multiple different data domains. In general, our main contribution is a new paradigm to deal with the common space of cross-domain data.\n### Question 1\n- As shown in the experiment on the human and wild domain, DIFE can handle out-of-domain animals to some extent, meaning that it already found a common space for the cross-domain face.\nIf you want to train DIFE from scratch on a new animal with extremely different shapes and textures like fish, you need a 3D reference model for the animal, and three keypoint annotations for the body images to train CSE model. Also, unlabelled face images are required to train the StyleGAN2 model.\n### Question 2\n- For the performance influence from CSE, our method shows better performance when we change the backbone of CSE to ResNet-101 from ResNet-50 [R1]. The cycle loss for different categories suggested by UniversalMap [24] is also helpful.\n- For the amount of data, our method suffers from overfitting when the number of synthesized images is less than 5k. However, we observe the amount of training data is not relevant when the number is bigger than 5k. 
Instead, extreme data augmentations like thin-plate-spline warping and color jittering are more helpful to boost the final performance.\n### Question 3\n- We only mention the pose of synthesized pseudo-pair data in Section 4.3(L291-L294) because there is no clear conclusion about such a phenomenon. However, we hypothesize a large eye is a more safe way to handle extreme shape variance of eyes. There are more examples of pseudo-paired data in Appendix G.\n### References\n- [R1] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).", " ### Table 1: The interspecies keypoint transfer on CelebA+AP-10K\n| | Human+Dog | Human+Cat | Dog+Cat | Human+Wild |\n|------|:---------:|:---------:|:-------:|:----------:|\n| CSE | 19.00 | 18.15 | 8.00 | 17.69 |\n| DVE | 17.78 | *17.40* | *6.75* | 15.58 |\n| CATs | *14.78* | 17.63 | 8.47 | *14.05* |\n| Ours | **11.73** | **11.00** | **6.51** | **10.37** |\n### Table 2: The interspecies keypoint transfer on WFLW+AnimalWeb\n| | Human+Dog | Human+Cat | Dog+Cat | Human+Wild |\n|------|:---------:|:---------:|:-------:|:----------:|\n| CSE | 14.43 | 14.95 | 16.85 | 16.08 |\n| DVE | *14.16* | *13.28* | *13.40* | *15.74* |\n| CATs | 20.84 | 18.84 | 19.35 | 22.20 |\n| Ours | **12.01** | **12.84** | **11.70** | **14.03** |\n### Weakness 1\n- (i) Our key idea to finding an image with the same face geometry of input is to generate StyleGAN2 [18] images conditioned on the facial geometry of the input. This is achieved by minimizing the distance between the embedding of the input and the latent vector of StyleGAN2 as described in Eq. (3). Since the embedding/latent vectors contain the information of face geometry, the StyleGAN2-generated face is optimized to resemble the input. The species-specific latent vector is only used to bridge the domain gap between the input and the StyleGAN2.\n- (ii) The pre-trained StyleGAN2 is trained on each data domain: FFHQ [18], AFHQ-Dog, and AFHQ-Cat [8]. The shared encoder and species-specific domain-converters are trained in an end-to-end manner with the pre-trained teacher models.\n- (iii) We have tried training StyleGAN2 on the merged dataset but found that the training does not converge. The extreme shape and style variance make StyleGAN2 training unstable. The same reason is behind the absence of pre-trained StyleGAN2 on AFHQ-Wild. In addition, we could not find any publicly available model of StyleGAN2 pre-trained on the merged dataset.\n### Weakness 2\n- We appreciate the experiment suggestion to evaluate our method from various angles.\n- (i) In Table 2, we report the quantitative result of the interspecies keypoint transfer on WFLW [R1] and AnimalWeb [R2] with 9 landmarks including the corners of the eye and mouth. The qualitative results of the experiment are updated in Figure 4 of supplementary material. Our method shows the best performance compared to previous methods on every domain pair. The corners of the mouth are hard to match exactly compared to the eye. However, the transferred landmarks by our method lie in the same region, meaning DIFE has an understanding of the semantic face region.\n- (ii) In Table 1 and Table 2, we report the quantitative result of interspecies keypoint transfer by CATs[R3] which is the state-of-the-art method of semantic visual correspondence. The qualitative results of the experiment are updated in Figure 3 and Figure 4 of supplementary material. 
Because CATs is trained on SPair17k [R4] whose annotations include the annotations on animal bodies, the performance of CATs is limited for the face.\n- (iii) In Figure 5 of the supplementary material we report the results of interspecies face parsing. Following the segmentation in style [26], we apply k-means clustering for DIFE. The eye, nose, mouth, and hairy parts are discovered with a simple method which means DIFE has semantic information for the interspecies face.\n- (iv) We kindly remind you that we have visualized additional qualitative results of representation, keypoint transfer, and pseudo-paired data in the supplementary material. We will also add more results in the revision.\n### Weakness 3\n- Even though the CSE embedding is mainly a representation of the body, it contains coarse information about faces. It is also observed in experiments shown in Table 1 and Figure 3. Despite the limited quality of face embedding, the CSE [23] provides a good initialization of the common space.\n### Weakness 4\n- As described above the pre-trained domain converters are not required. However, our method is dependent on the performance of pre-trained CSE and StyleGAN2. There are failure cases on the occlusion, the dark illumination, or the rare species. We will update this in the revision.\n### References\n- [R1] Wu, Wayne, et al. \"Look at boundary: A boundary-aware face alignment algorithm.\" CVPR. 2018.\n- [R2] Khan, M. H., McDonagh, J., Khan, S., Shahabuddin, M., Arora, A., Khan, F. S., ... & Tzimiropoulos, G. Animalweb: A large-scale hierarchical dataset of annotated animal faces. CVPR. 2020.\n- [R3] Cho, S., Hong, S., Jeon, S., Lee, Y., Sohn, K., & Kim, S.. Cats: Cost aggregation transformers for visual correspondence. NeuIPS. 2021.\n- [R4] Min, J., Lee, J., Ponce, J., & Cho, M. Spair-71k: A large-scale benchmark for semantic correspondence. arXiv preprint arXiv:1908.10543. 2019.", " - Representational Biases in human dataset\n- Preventing misuse is acknowledged but could go deeper.\n- Psychologically expressions can differ by species as to what that emotion means.\n- Animal rights \nWhile there is a helpful \"Broader Impact\" section that discusses potential misuse (great!), all examples of human faces shown are of pale and often white faces. There is no discussion of potential to underperform for faces underrepresented in the dataset or for darker skin tones. It would be helpful to explore biases (e.g. potential associations between lighter skin faces and animals versus darker skinned and Black people's faces).\nThe potential positive social impacts are helpful, as is acknowledging animal rights. However, facial expressions in one species do not automatically correlate with an emotion or similar facial expression in another animal species. A good example is smiling for humans versus some animals showing teeth is a sign of aggression.\nThe datasets listed in Appendix C seem like fair use and some are relatively new, so there were not explicit ethical concerns outside of representational biases. The authors would make this a stronger paper by testing for bias or at minimum sharing the limitation of showing primarily white faces.\nAdditionally, they should acknowledge that facial expressions don't explicitly transfer or share emotions between species. 
A helpful paper could be \"Facial Expression in Nonhuman Animals\", but generally more psychology understanding would be more helpful here.", " The paper presents a new research topic “interspecies face embedding”, which aims to extract common features from faces of several species, including humans, as a dense embedding. The process enables the discovery of other species' facial semantics without enough annotations by transferring knowledge from well-annotated human data. Algorithmically, a multi-teacher knowledge distillation paradigm is introduced to guide unsupervised embedding learning. Experimentally, based on the learned interspecies face embedding, the authors perform interspecies facial keypoint transfer on MAFL and AP-10K datasets. Strengths: \n+ The problem addressed in the paper is interesting.\n+ The proposed method of using multi-teacher knowledge distillation for interspecies face embedding learning is technically sound.\n+ The paper is well-organized. \n\n\nWeaknesses:\nHere are some concerns:\n1. It’s not clear why the species-specific StyleGAN2 models share geometric consistencies in the learned W latent space.\n\ni) In the pseudo-paired data synthesis stage, it is unclear to me why the pseudo-paired data x^d’ generated from the found latent code should have the same face geometry with the original image x^d (L178-179). In my understanding, there are not necessarily correlations in the learned species-specific W latent spaces. \n\nii) Is the domain converter species-specific? Did you train separate StyleGAN2 models for different species? Or one universal model for all the animals in AFHQ.\n\niii) Have you considered training a single StyleGAN2 model by combining FFHQ and AFHQ? Would this make it easier to learn interspecies face embedding? Then the domain converter is not necessary.\n\n2.\tThe experiment section is quite weak.\n\ni) Qualitative and quantitative results on more keypoints are desirable. The authors only perform keypoint transfer on 3 landmarks. I suggest considering more semantic landmarks, i.e., corners of the mouse etc.\n\nii) The authors should include more baselines. The current baselines CSE and DVE are not designed for the face. The comparison is thus not convincing to me. I understand there is no literature addressing the same problem. But I think the authors could compare with visual semantic correspondence methods. Although these methods deal with intra-class semantic correspondence, the intra-class deformation of some categories (i.e., car) is more challenging than this interspecies face deformation. \n\niii) I suggest the authors could add one experiment on interspecies face parsing, which can clearly demonstrate the effectiveness of the proposed method.\n\niv) More visual results are needed.\n\n3. It’s unclear to me why the pre-trained CSE model is relevant to the face embedding learning and can improve the keypoint transfer performance (based on the results in Tab.2). In my understanding, the CSE model provides Interspecies body priors.\n\n4. Discussions on limitations are needed. Such as, the proposed method may need to train species-specific StyleGAN2 models and domain converters.\n\n Please see the weaknesses. Please see the weaknesses.", " This paper presents a face embedding method to obtain shared semantics between different animal species. The basic idea is to use a knowledge distillation paradigm to extract information from face synthesis (StyleGAN2) and interspecies surface embedding (CSE) models. 
Specifically, the main encoder is trained so that 1) the embedding is close to the one from CSE and 2) the domain-specific embeddings through converters are close to the features of StyleGAN2. The model also synthesizes pseudo-paired face images using the SyleGAN2 generator to enforce facial semantics correspondence further. ## Strengths\n- The target task is interesting, and the proposed approach is reasonable to address this goal.\n- The performance of the proposed method has been qualitatively and quantitatively verified from various angles, and the improvement from the baseline method is acknowledged.\n\n## Weaknesses\n- The proposed method takes the knowledge distillation approach and requires the pre-trained StyleGAN2 and CSE models for training. Compared to previous studies such as DVE [36], it can be seen that the data and annotations required for learning have increased. The performance of the final embedding is expected to be highly dependent on the performance of these two pre-trained models.\n- Related to the above, the main contribution of the proposed method is the combination of these two models, and the pseudo-paired data synthesis is not that significant in terms of technical novelty. - As commented above, I would like a more detailed discussion of the training data and pre-training model that this method requires. For example, what images and annotations would be needed to target animals not included in AP-10K or AFHQ?\n- Likewise, it would be good to see how much the performance of the pre-training model and the amount of data used for training affects the final performance.\n- This is not necessarily a weakness, but it is interesting to note that the eyes are emphasized in the generated images seen in Figure 7 and elsewhere. It would be good to have a discussion on what reason this artifact occurs. - The above issue of generalization to other animal categories is also mentioned in the text as a limitation.\n- As the method deals with faces, sufficient consideration must be given to social impact, which is also mentioned in the text.", " This paper presents an \"Interspecies face understanding\" task to predict unified spacial features of both human and animal faces.\nTo solve this problem, it presents a multi-teacher knowledge distillation framework that combines the advantages of different models, i.e. CSE for cross-domain features and StyleGAN2 for facial embedding.\nIt also explores the latent space of StyleGAN2 to synthesise paired data for semantic matching. Strengths:\n1. This paper explores the interspecies facial landmark detection model, which has many potential applications.\n2. It is interesting to me to combine two orthogonal models to obtain the cross-domain facial embedding.\n\nWeaknesses:\n1. I think the \"Interspecies Keypoint Transfer\" experiment is not sufficient for the following points:\\\n a) Dataset. I think 300W [1] or WFLW [2] for human facial landmarks, and [3] for animal facial landmarks, would be helpful to show the performance of this paper;\\\n b) Point Number. Only 3 points are presented as the qualitative results. Can this method handle landmarks in mouth and face contour?\\\n c) Comparison. If using the datasets mentioned in 1. a), there can be a comparison with previous supervised/unsupervised methods, which help readers better understand the performance of this paper. The current comparison looks isolated.\\\n d) Ablation Study. The ablation study results are partly the same as the \"Human+Dog\" result in Table 1. 
So the CSE results in Table 1 are actually DIFE trained with L_{K1}? If so, I think it is improper to call it \"CSE\" here.\n2. There is no constraint to enforce the spatial consistency of the input and output of the Domain Converter.\n3. No real images are used when training the Encoder.\n\nReference\n\n[1] Sagonas, Christos, et al. \"300 faces in-the-wild challenge: The first facial landmark localization challenge.\" Proceedings of the IEEE international conference on computer vision workshops. 2013.\n\n[2] Wu, Wayne, et al. \"Look at boundary: A boundary-aware face alignment algorithm.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.\n\n[3] Khan, Muhammad Haris, et al. \"Animalweb: A large-scale hierarchical dataset of annotated animal faces.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. 1. Why not compare on the common face alignment datasets with commonly used metrics?\n2. Can this method handle landmarks in mouth and face contour?\n3. Are the CSE results in Table 1 actually DIFE trained with L_{K1}? If not, is there any illustration for the similarity between Table 2 and \"Human+Dog\" column in Table 1?\n4. How to guarantee the spatial consistency between the input and output of the Domain Converter? How is this Domain Converter trained?\n5. Only synthesised images are used to train the Encoder. If testing on a synthesised dataset, how is the performance? Do you have any experience with this? The authors have discussed the broad impact and potential negative social impact of their work.", " This paper addresses the problem of dense interspecies face embedding. To cope with the lack of data, first multi-teacher knowledge distillation of Continuous Surface Embedding (CSE) and StyleGAN is used. Then, pseudo pair images are synthesized through the latent space exploration of StyleGAN2 to find implicit associations between different animal faces. Finally, a semantic matching loss is introduces to overcome the problem of extreme shape differences between species. Interspecies facial keypoint transfer is performed on MAFL and AP-10K. Results for applications in interspecies face image manipulation and dense keypoint transfer are also provided.\n Strengths\nThe main contributions of the paper are:\n- A cross-domain face understanding study is presented that considers human faces\nas well as faces of animals. The approach domain adaptation to avoid tedious and expensive animal data collection.\n- A multi-teacher knowledge distillation paradigm is proposed that extracts and combines the information from models with different architectures and data domains. The proposed framework learns continuous face embedding across interspecies data from CSE and StyleGAN2. With this solution dense interspecies face embedding (DIFE) are learnt.\n- A method for synthesizing paired data is proposed for learning the semantic matching using the latent space exploration of StyleGAN2.\n\nWeaknesses\nThe contribution of the paper is limited. 
It is more on the application side but it fails to convince about the effective transfer between human an animal faces and application domains.\nIn particular:\n- The baselines used in the comparison of Table 1 are not designed for the same task proposed in this work and have been adapted to it so the comparison is somewhat inconsistent.\n- the keypoint transfer is shown only for three keypoints (nose tip and eyes center, but the transfer on the animal face is not particularly accurate)\n- the results in figure 6 also are not particularly accurate\n- the applications are not particularly convincing\n\nIn general, the paper is well written, but there are some errors to check. See for example:\n- page 3, line 124: \"In StyleGAN2 distillation [? ],\" --> missing reference\n- page 5, line 177: \"is an latent space\" --> a latent\n\n============= POST REBUTTAL ================\nAuthors have answered to most of my questions, so I changed my score to borderline accept. However, I still think the contribution is not very strong. - Authors should convince more about the effective applicability of the method. In particular, I think it would be interesting to see a comparison with a solution that learns face keypoints from animal images rather than transferring them from face images. This would support more the method and is something missing in the paper. \n Authors reported about the impact of the work but in a too general way about limitations. I think they should have indicated more specifically the limitations possibly also providing failure cases.\n " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "ZtGEzoo0Nm9", "iFaMW8IArv6", "nips_2022_m67FNFdgLO9", "yCh0N5kBLv", "0K8kgcfEz9i", "VQstlncmXJv", "n3mlSG42241", "nyMveVtmfjJ", "nips_2022_m67FNFdgLO9", "nips_2022_m67FNFdgLO9", "nips_2022_m67FNFdgLO9", "nips_2022_m67FNFdgLO9", "nips_2022_m67FNFdgLO9" ]
nips_2022_H3JObxjd8S
Self-Supervised Visual Representation Learning with Semantic Grouping
In this paper, we tackle the problem of learning visual representations from unlabeled scene-centric data. Existing works have demonstrated the potential of utilizing the underlying complex structure within scene-centric data; still, they commonly rely on hand-crafted objectness priors or specialized pretext tasks to build a learning framework, which may harm generalizability. Instead, we propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning. The semantic grouping is performed by assigning pixels to a set of learnable prototypes, which can adapt to each sample by attentive pooling over the feature and form new slots. Based on the learned data-dependent slots, a contrastive objective is employed for representation learning, which enhances the discriminability of features, and conversely facilitates grouping semantically coherent pixels together. Compared with previous efforts, by simultaneously optimizing the two coupled objectives of semantic grouping and contrastive learning, our approach bypasses the disadvantages of hand-crafted priors and is able to learn object/group-level representations from scene-centric images. Experiments show our approach effectively decomposes complex scenes into semantic groups for feature learning and significantly benefits downstream tasks, including object detection, instance segmentation, and semantic segmentation. Code is available at: https://github.com/CVMI-Lab/SlotCon.
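[Editor's illustration] The abstract describes semantic grouping as assigning pixels to learnable prototypes and forming per-image slots by attentive pooling. The sketch below is one reading of that description, not the official SlotCon code; the temperature, prototype count, and normalization choices are assumptions.

```python
# Sketch: soft pixel-to-prototype assignment with attentive pooling into slots.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticGrouping(nn.Module):
    def __init__(self, dim=256, num_prototypes=256, tau=0.07):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        self.tau = tau

    def forward(self, feats):                       # feats: (B, C, H, W) dense features
        f = F.normalize(feats.flatten(2), dim=1)    # (B, C, HW)
        p = F.normalize(self.prototypes, dim=1)     # (K, C)
        logits = torch.einsum("kc,bcn->bkn", p, f) / self.tau
        attn = logits.softmax(dim=1)                # each pixel softly assigned over K prototypes
        slots = torch.einsum("bkn,bcn->bkc", attn, feats.flatten(2))
        slots = slots / attn.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        return slots, attn                          # slots: (B, K, C) data-dependent slots
```

A contrastive (InfoNCE) objective between corresponding slots from two augmented views is then what couples the grouping with representation learning in the abstract's description.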
Accept
This paper proposes object-centric representation learning based on data-driven semantic slots extracted from scene-centric data. Specifically, the proposed SlotCon simultaneously performs semantic grouping and contrastive representation learning over groups (slots), which naturally leads to obtaining object-level representations without any prior knowledge. The proposed algorithm is technically sound and novel. It is clearly distinct from previous dense contrastive learning in that it jointly learns the target grouping of pixels. Most of all, it is the first to show encouraging performance of object-centric representation learning on natural image datasets. Even though the performance improvements seem somewhat marginal in comparison to previous SOTA algorithms, the proposed method fairly demonstrates the effectiveness and feasibility of object-centric representation learning for scene-centric data and the corresponding downstream tasks. In addition, the authors properly addressed almost all concerns and questions raised by the reviewers. In conclusion, I would like to recommend accepting this paper.
train
[ "xe3S7S4E8fe", "xBl5UdKFLdw", "qiFH23XpXxW", "hnQHxUnLLE_", "Z7ZOMLoPjPl", "WEYyPBhKQkq", "wxjpN5wHYgZ", "87dwEGTP75o", "KTb9huf6V0", "RcFqzT3FFUG", "sDnGhRDEnS", "5IAG0HcYYMjD", "Bg0xtGqcMFfH", "q8cxHaptNfd", "Y6-KBBHctDu", "aaPQLtPU-Du", "L6PBSjATGb1", "6o6MoJFFgRmz", "ONPwqLsMIZM", "KJn_YdmDjoO", "CltN9Irkkz", "_mLQ_283L-w", "ZrwrI6s0sdP" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer NYjs,\n\nThank you for the time you devoted to reviewing this paper and your further constructive comments in the prior issue.\n\nAfter carefully reading your comments and reflecting on the experiment results, we agree that our previous statements regarding the priors are too strong and the scope is not well-specified. Here we would like to make the following clarifications:\n\n* The motivation of SlotCon is more to explore \"can we learn good object-centric representations without hand-crafted objectness priors?\". This exploration led us to a much simpler architecture that performs yet even better. However, we agree that our critique of prior should be limited to our problem setting (self-supervised pretraining on scene-centric data), and the scope of priors should be limited to objectness priors. Things can vary much if the setting and scope are not specified.\n* The use of prior can be viewed as a way of injecting human knowledge into parametric models. Sometimes this kind of knowledge avoids data-guided bias (as in your example), while sometimes, it can limit the model's upper bound or generality (as in our examples). It is impractical to comment generally on whether priors are good or bad. In fact, the geometric-covariance, photometric-invariance, and small cluster number are also helpful priors that we used to train SlotCon and guide it to learn to discover objects/semantics. We need to limit the scope when talking about the influence of priors.\n* The data itself can be not reliable. Yes, as pointed out by the reviewer, totally relying on the data may lead to \"bad\" biases. Although our experiments are limited to scene-centric data, we also notified similar phenomena (detailed in sec. 8 of our response to Reviewer oHQN). The model allocates more prototypes to human-related concepts (as humans occur most frequently in COCO), while many other kinds of animals only have one prototype. When pretraining on a more long-tailed and less discriminative scenario (e.g., autonomous driving data, detailed in our response to Reviewer LB2h), the data may lead the model to learn highly biased prototypes and representations, and harm downstream performance. In this case, human priors adopted for long-tail settings may help.\n\nOverall, we believe the discussion with the reviewer helps improve our understanding of the problem, clarify the biased statements, and discover possible limitations of the model. We will incorporate the discussions into the next version. Thank you again!\n\nBest regards,\n\nPaper 271 Authors", " Dear reviewer MdX5,\n\nThank you for your time. We appreciate the efforts and comments, which help improve the paper.\n\nRegarding the novelty issue, in a recently posted general response, we summarized and emphasized the major contributions of this paper in a more general view, and we strongly suggest taking your time for a look and see if it is helpful. Besides, we would also appreciate it if you could point out which specific part in our previous response regarding the novelty issue confuses you, thus we can better resolve your concern accordingly.\n\nRegarding the closing performance with PixPro in a longer schedule, our response is four-fold:\n\n* Indeed, the performance in instance segmentation is close (0.1 in AP); however, the performance of SlotCon is still much better in object detection (0.4 in AP). 
It should be noted that an improvement of 0.4 AP in COCO object detection is significant in this community.\n* Besides object detection, SlotCon is also evaluated extensively in semantic segmentation, where SlotCon surpasses PixPro by 0.7 points in the most challenging setting ADE20K. In our response to Reviewer LB2h, SlotCon is also evaluated with challenging autonomous driving data in both pretraining-finetuning and unsupervised semantic segmentation and shows satisfactory results. Sticking to one result in one specific setting may overlook the whole picture.\n* It is reasonable that the performance gap between pretraining methods gradually squeezes with a longer downstream training schedule. In fact, in the pretrain-finetune setup, if the downstream data is adequate and the training length is long enough, even random initialization can catch up with imagenet pretraining [1]. \n* Currently, COCO detection with the 1x schedule is one common config [2-5] to evaluate the quality of pretrained representations. We agree that this setting may be limited, and a more comprehensive benchmark is required, and the extensive evaluation [MdX5, LB2h, NYjs] of this paper in fact, a response to this limitation and a confirmation of SlotCon's superiority.\n\n[1] He et al., Rethinking ImageNet Pre-training, ICCV 2019.\n\n[2] Wang et al., Dense Contrastive Learning for Self-Supervised Visual Pre-Training, CVPR 2021.\n\n[3] Xie et al., Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning, CVPR 2021.\n\n[4] Xie et al., Unsupervised Object-Level Representation Learning from Scene Images, NeurIPS 2021.\n\n[5] Bai et al., Point-Level Region Contrast for Object Detection Pre-Training, CVPR 2022.", " Dear Reviewers,\n\nThe discussion with each reviewer is greatly constructive and helps understand SlotCon from a specified perspective. To better show the full picture, we would like to re-emphasize the significance of this paper in a more general view.\n\nSlotCon mainly sheds light on two important subfields of representation learning: self-supervised representation learning (pre-training) and object-centric representation learning (object discovery). In one attempt, it jointly solves two challenging problems in them, as detailed following:\n\nFor the pre-training community, how to move beyond the object-centric dataset ImageNet, and *learn from scene-centric datasets* that are *imbalanced*, more *cluttered* and usually *with fewer images to learn from* has been a challenging problem [LB2h]. It is commonly believed that adopting the complex structure (e.g., multiple objects) in scene images can benefit feature learning [1], and how to expose the structure of the data (produce objectness) has been the key to solving this problem. Unlike current approaches that are limited by the hand-crafted priors (e.g., objects as superpixels) or too specialized pretext tasks, *SlotCon shows that directly optimizing a clustering objective is enough to produce satisfactory and even better objectness*, which paves the ground for a contrastive learning objective (InfoNCE loss) at the slot level and help learn significantly stronger representations (evaluated extensively on multiple downstream tasks). During the rebuttal period, we find it also successes in the challenging autonomous driving data, highlighting the generality of SlotCon.\n\nFor the object discovery community, it has long been the common pursuit to *learn object-centric representations from unlabelled images*. 
Directly extracting objects from images is challenging due to the lack of supervision, and thus prior works have long been restricted to synthetic data. Some recent works take the first step to real-world video data, yet they adopt motion flow as a shortcut for objectness prior to solving this problem and still follow the philosophy of directly extracting objects from an image. On the contrary, SlotCon shows that we can first explicitly optimize for desirable properties of clusters (that can describe an object) over the dataset, then retrieve the objects from an image with the learned cluster centers [oHQN]. More importantly, *SlotCon first shows the possibility of learning object-centric representations from large-scale unlabelled natural scene-centric images in the wild*.\n\nSlotCon also shows good ability in unsupervised semantic segmentation. Still, we want to clarify that this is just to help understand the learned prototypes, but not to propose a new SOTA.\n\nIn terms of originality, SlotCon does share the spirit with some prior works (e.g., DINO and Slot Attention). In the paper and rebuttal, we have well discussed its relationship with them, and we appreciate that SlotCon is recognized as a fairly original work [LB2h], the design of the learning procedure has its merit [NYjs], and the authors state similarity, inspirations from and differences with other methods clearly [oHQN, NYjs].\n\n[1] Hénaff et al. Object discovery and representation networks, arXiv preprint.\n\nBest regards,\n\nPaper 271 Authors", " I thank the authors for answering all my questions and doubts. They discussed appropriately almost all my comments.\n\nI would, however, further comment on the strong critique of priors, which is supported by the authors with examples in which certain priors are not contributing to an increase of performance or train better models, or provide any particular benefits. Relying completely on data without any prior knowledge can, on the other side, be very dangerous. For instance, in [1] it was shown that relying only on ImageNet data to train a classifier, easily results in models that are bias towards sexist and racist predictions. My point was indeed contrasting the absolute statement that prior knowledge is necessarily \"bad\" to train models.\n\nI am, nevertheless, happy with the response of the authors, and keep my already positive score.\n\n[1] J. Zou and L. Schiebinger, “Ai can be sexist and racist — it’s time to make it fair,” Nature, vol. 559, pp. 324–326, 07 2018.\n\n", " I'd like to thank the authors for answering all my questions.\nThe rebuttal resolves some of my concerns. \n\nMy main concern for the submission is 1) limited novelty and 2) with longer schedule, the performance is closer to PixPro.\nI'll keep my previous rating.", " Thank you for pointing out the details of PiCIE's Cityscapes setting. We will try it out in the future.", " The discussion is definitely constructive and produces new insights. We would like to thank the reviewer for helping improve our paper.", " The discussion is thorough, constructive, and helpful for improving the quality of our paper. Thank you again for your time.", " Thank you for these additional results. I find them quite encouraging, given that there was surely no time for tuning.\n\nI have just a remark regarding the results from Cityscapes pre-training. 
PiCIE and IIC (from the PiCIE implementation) use a particular config for pre-training on this dataset: they use do not use only the original train dataset, but also the test set and the remaining coarsely annotated images, summing about to more than 20k images. \nI suspect the authors pretrained only on the original 3k train images from Cityscapes, which may reduce the performance of SlotCon.\nEven so, the results are not bad at all!", " I would like to thank the authors for the (very) detailed rebuttal. \nThe rebuttal addresses all my questions and offers convincing responses and additional insisghts (across the responses to all reviewers).\n\nI don't have any other questions for the authors and I confirm my positive recommendation for this submission.\nNice work!\n\nJust a comment regarding a potential misunderstanding of some of my comments:\n- response 2.1: On the use of SlotAttention. As mentioned before, I find that [a,b] are different approaches from SlotCon. My initial comment was regarding the phrasing of the use of the SlotAttention principle (I'm aware about the architecture and implementation differences). There are no concerns from side on the quality of this work; the suggestion was to better emphasize the contribution of this work. I think that a compressed variant of the author's answer would be useful in the paper.\n- response 2.2: Thanks for the detailed answer about Odin. I did not ask for a comparison with Odin as they are concurrent works. I've just mentioned that they are aiming for similar things, though by different means. Again, there is no concern from my side regarding Odin and SlotCon\n", " The rebuttal addresses all of my points and provides additional insights that I hope will be integrated in the final version. My initial review was already quite positive and I confirm my rating.", " We thank all the reviewers for their unanimously positive reviews and insightful comments.\n\nWe are glad to find that our paper is consistently considered to be well written, clear, and concise [LB2h, MdX5, oHQN, NYjs].\n\nWe also appreciate that the reviewers think our method is nice and effective [LB2h, MdX5], and is a convincing approach [LB2h], and the uses of our method are very broad [NYjs].\n\nIndividual concerns have been addressed carefully in the response to each reviewer.\n\nWe will revise the paper following the suggestions.", " Thanks for your constructive comments. Our responses to them are given below.\n\n## 1. Code availability\n\nOur code will be released as promised in the checklist.\n\n## 2. Discussion with prior works\n\n### 2.1 Relationship with prior object-centric methods\n\nLearning object-centric representations (object discovery) from unlabelled images has long been the pursuit of the object-centric representation learning community. Directly extracting objects from images is hard due to the lack of supervision and thus prior works have long been restricted to synthetic data. \n\nThe recent works [a, b] take the first step to real-world video data, yet they adopt motion flow as a shortcut for objectness prior to solve this problem, and still follow the philosophy of directly extracting objects from an image. In contrary, SlotCon shows that we can first explicitly optimize for desirable properties of clusters (that can describe an object) over the dataset, then retrieve the objects from an image with the learned cluster centers. (This superiority in philosophy is also recognized by Reviewer oHQN.) 
\n\nMore importantly, we first show the possibility of learning object-centric representations from large-scale unlabelled natural scene-centric images in the wild.\n\nMinor correction: SlotCon do share similar spirit with SlotAttention in the competition mechanism between slots, but they are two distinct models in architecture. SlotCon only consists of a set of prototypes to learn, while SlotAttention is a multiple-layered transformer-like model.\n\n### 2.2 Lacking detailed discussion with concurrent work Odin\n\nWe agree that Odin is a highly related concurrent work. Both Odin and SlotCon perform clustering on the feature map to segment objects. However, while Odin just applies kmeans on feature maps to generate masks which are further used to construct the constrastive objective, SlotCon starts from a set of prototypes shared by all samples, which attaches consistent semantic meanings to each cluster at the initial stage and can be adapted to different images to extract image-specific slots.\n\nIt is hard to compare with Odin in performance as their reported setting is too computational heavy (trained on ImageNet for 1,000 epochs with batch size 4096 over 128 Cloud TPU v3 workers), but the superiority of deep clustering over kmeans clustering has been shown in the literature [1]. We will make the discussion clearer in the next version of this paper.\n\n## 3. Lacking discussion with some literature in unsupervised semantic segmentation\n\nWe will add the discussion with these papers in the next version.\n\n## 4. Straightforward computational cost comparison\n\nAs requested by the reviewer, here we give a direct computational cost comparison between SlotCon and two previous works. The experiments are conducted on the same machine with 8 NVIDIA GeForce RTX 3090 GPUs. Both PixPro and SlotCon adopt a batch size of 1024 and have amp turned on, and DenseCL adopts a batch size of 256 by default. The training time of DenseCL might be higher as we failed to install apex. \n\n| Method | Time/epoch | Memory/GPU |\n| ------- | ---------- | ---------- |\n| DenseCL | 2′46′′ | 7.9GB |\n| PixPro | 2′19′′ | 15.1GB |\n| SlotCon | 2′23′′ | 16.0GB |\n\n## 5. Details about baseline results and implementations\n\n### 5.1 Source of DetCon results in Table 3\n\nYes, as anticipated by the reviewer, these results are eye-balled from Figure 4 in the DetCon paper. The COCO detection result is taken from their 1st version on arXiv (https://arxiv.org/pdf/2103.10957v1.pdf), which was deleted in their 2nd version (https://arxiv.org/pdf/2103.10957v2.pdf). And other results are taken from Figure 4 in the `v2` paper. We apologize that some results should be corrected in Table 3 concerning DetCon: the COCO detection result should be 40.6 rather than 40.5, and the Cityscapes result should be 75.5 rather than 76.5. We'll make the data source clearer and correct the results in the next version.\n\n### 5.2 Lower PixPro results in Table 3\n\nThe highest IN-100ep result reported in Table 1 of PixPro (AP 41.3) is produced by pretraining with a FPN (see their Section 3.3 and Table 2(e) for details). This setting is, however, not adopted in their released code & models (https://github.com/zdaxie/PixPro), on which our re-implementation is based. We'll clarify this in the next version.\n\n### 5.3 COCO downstream protocol\n\nOur COCO downstream implementation is directly copied from PixPro, so yes we also adopt the 4 convolution layers + 1 FC layer configuration. 
This setting can trace back to InfoMin (https://github.com/HobbitLong/PyContrast/tree/master/pycontrast/detection), which is also widely adopted (e.g., InsLoc, DenseCL, PixPro, ORL, SoCo according to their official codebase). As all results reproduced by us adopt the same COCO downstream config with SlotCon, this won't harm fair comparison. We'll clarify this in the next version.", " ## 6. Results on autonomous driving data\n\n### 6.1 Pretraining on BDD100K\n\nAs requested by the reviewer, we show the results with BDD100K pretraining and evaluated on Cityscapes semantic segmentation. The model is trained on BDD100K for 800 epochs with 64 prototypes. The result is notably weaker than its COCO counterpart, yet still surpasses MoCo v2 pretrained on COCO. Tough the hyper-parameters might not be well tuned, the BDD100K dataset is indeed challenging for pretraining as its images are less discriminative, and the pretraining on autonomous driving data is a valuable direction to explore. We thank the reviewer for pointing it out.\n\n| Dataset | Method | mIoU |\n| ------- | ------------ | ---- |\n| - | Random init. | 65.3 |\n| COCO | MoCo v2 | 73.8 |\n| COCO | SlotCon | 76.2 |\n| BDD100K | SlotCon | 73.9 |\n\n### 6.2 Pretraining on Cityscapes\n\nAs requested by the reviewer, we show the results pretrained on Cityscapes for 800 epochs with 27 prototypes, and evaluated on unsupervised semantic segmentation. The results are also notably weaker than PiCIE, but surpass MaskContrast and IIC in mIoU. It should be noted that we do not aim to propose a new SOTA for unsupervised semantic segmentation, SlotCon is trained with a much lower resolution from scratch, while the compared works adopts a pretrained model and output high-resolution results. We show these results just to analyze how well the prototypes bind semantics qualitatively and quantitively.\n\n| Method | mIoU | pAcc |\n| ------------ | ----- | ----- |\n| MaskContrast | 3.14 | 40.22 |\n| IIC | 6.35 | 47.88 |\n| PiCIE | 10.29 | 72.13 |\n| SlotCon | 8.92 | 27.95 |\n\n[1] Caron et al., Deep Clustering for Unsupervised Learning of Visual Features, ECCV 2018.", " Thanks for your constructive comments. Our responses to them are given below.\n\n## 1. Lack of related works and novelty concerns\n\nWe thank the reviewer for pointing out these related works in pixel-level clustering by contrastive learning. After carefully reading through these papers, we find that there are several key differences between SlotCon and them:\n\n* Setting: SlotCon targets *unsupervised* representation learning (pretraining), while the mentioned works target (weakly-)*supervised* semantic segmentation. While semantic segmentation only cares about the segmentation performance on the source dataset, the performance on various downstream tasks and datasets counts most in pre-training.\n* Motivation: The start point of SlotCon is to learn object-centric representations from unlabeled scene-centric images. Towards this target, it adopts pixel-level clustering for object discovery and builds a contrastive learning objective upon the discovered object-centric representations (slots) to optimize the discriminability of features. Clustering/segmentation is a proxy to learn good features, but not the target.\n* Method: In SlotCon, the clustering process does not require any supervision, and thus the formulation of the learning target is distinct from the mentioned methods. 
Besides, the clustering is performed with low resolution, and only a coarse objectness estimation is required.\n\nWe will include this discussion in the revised version.\n\n## 2. Symbol clarity\n\nYes, $\\mathcal{A}\\_{\\theta}^{l}$ in Eq. 4 is identical to $\\mathcal{P}\\_{\\theta}^{l}$ in Eq.1. We adopt $\\mathcal{A}\\_{\\theta}^{l}$ in Eq.1 to cater to the common practice for the cross-entropy loss, and adopt $\\mathcal{P}\\_{\\theta}^{l}$ in Eq.1 to represent assignment/attention. \n\nThe $q_{\\theta}$ in Eq.6 stands for the predictor, as stated at L188. And the contrastive learning is performed among a batch, as explained at L187-188.\n\nWe are sorry for the confusion in symbols and equations caused by inconsistency and unclarity and will revise this part in the next version.\n\n## 3. COCO detection results with larger backbone\n\nWe thank the reviewer for pointing out this issue. In the following table we show the results of SlotCon with ResNet-101 backbone pretrained on COCO for 800 epochs and finetuned with the $1\\times$ schedule.\n\n| Method | Backbone | AP$^{\\text{b}}$ | AP$^{\\text{b}}_{50}$ | AP$^{\\text{b}}_{75}$ | AP$^{\\text{m}}$ | AP$^{\\text{m}}_{50}$ | AP$^{\\text{m}}_{75}$ |\n| ------- | -------- | --------------- | -------------------- | -------------------- | --------------- | -------------------- | -------------------- |\n| SlotCon | R-50 | 41.0 | 61.1 | 45.0 | 37.0 | 58.3 | 39.8 |\n| SlotCon | R-101 | 42.6 | 62.7 | 46.7 | 38.3 | 59.8 | 41.0 |\n\n## 4. COCO detection results with longer schedule\n\nWe thank the reviewer for pointing out this issue. In fact the results with $2\\times$ schedule is available at Table 1 of the supplementary. Here we extend it for better evaluation. It shows that the performance gain of SlotCon is still significant with a longer finetune schedule.\n\n| Method | Sche. | AP$^{\\text{b}}$ | AP$^{\\text{m}}$ | Sche. | AP$^{\\text{b}}$ | AP$^{\\text{m}}$ |\n| ----------- | --------- | --------------- | --------------- | --------- | --------------- | --------------- |\n| **IN-sup.** | $1\\times$ | 39.7 | 35.9 | $2\\times$ | 41.6 | 37.6 |\n| **PixPro** | $1\\times$ | 40.5 | 36.6 | $2\\times$ | 42.2 | 38.1 |\n| **SlotCon** | $1\\times$ | 41.0 | 37.0 | $2\\times$ | 42.6 | 38.2 |\n\n## 5. Image classification results\n\nWe thank the reviewer for pointing out this issue. In fact SlotCon mainly targets dense prediction tasks like object detection or semantic segmentation. Adding an instance-level loss like DenseCL and PixPro can further contribute to image classification results. Due to the limit in time and computational resources, currently we haven't finished the experiments on ImageNet. Following the setting of [56], with SlotCon pretrained on COCO for 800 epochs with batch size 512, we show the following results:\n\n| Method | VOC | CIFAR10 | Cars | Food | Pets | SUN |\n| ------------- | ---- | ------- | ---- | ---- | ---- | ---- |\n| SlotCon | 84.2 | 76.6 | 18.9 | 63.0 | 51.2 | 82.3 |\n| SlotCon + ins | 85.9 | 75.9 | 24.6 | 70.0 | 60.4 | 86.4 |\n", " Thanks for your constructive comments. Our responses to them are given below.\n\n## 1. Comparison with DINO\n\nWe are grateful for the reviewer's explanation that scene-level semantics are more complex and thus require more ptototypes, while object-level semantics only need a small number of prototypes due to compositionality. We find it inspiring and will incropate it into the next version.\n\n## 2. 
Notation clarity\n\nWe will clarify the terms \"prototypes\", \"assignments\" and \"slots\" better in the next version.\n\n## 3. Error bars\n\nWe understand that reporting an error bar for all experiments could make the reported metrics more reliable. However, pretraining is too computation-consuming and thus an error bar is rare to be found from pervious works. Here, we managed to train our model for four independent runs following the settings in Table 1, where our most important results locate. It shows that our method is quite robust across different runs.\n\n| Exp. ID | AP$^{\\text{b}}$ | AP$^{\\text{b}}_{50}$ | AP$^{\\text{b}}_{75}$ | AP$^{\\text{m}}$ | AP$^{\\text{m}}_{50}$ | AP$^{\\text{m}}_{75}$ | City | VOC | ADE |\n| ------- | --------------- | -------------------- | -------------------- | --------------- | -------------------- | -------------------- | ----- | ----- | ----- |\n| No. 1 | 41.03 | 61.13 | 44.97 | 37.03 | 58.32 | 39.80 | 76.24 | 71.62 | 39.00 |\n| No. 2 | 40.92 | 60.99 | 44.62 | 36.79 | 58.13 | 39.42 | 75.84 | 71.72 | 38.94 |\n| No. 3 | 41.09 | 61.20 | 45.04 | 36.89 | 58.26 | 39.56 | 76.17 | 71.49 | 38.34 |\n| No. 4 | 40.97 | 61.03 | 45.08 | 36.98 | 58.02 | 39.84 | 75.86 | 71.72 | 38.66 |\n\n## 4. Ablation studies\n\nWe agree that besides analyzing the hyper-parameters's affect on downstream task performances, it is a good suggestion do dig into the factors that contribute to object-centric representations. We thank the reviewer for pointing this out.\n\n### 4.1 Are geometric augmentations necessary to learn object-centric representations?\n\nYes, geometric augmentations are necessary to learn object-centric representations. We made an ablation to train SlotCon on COCO for 800 epochs with two identical crops applied for each image, thus only photometric-invariance is adopted as the supervision. We then visualized the slots the same way as Figure 2, and found that almost none of the slots can bind to a meaningful semantic. Most of them attend to a similar-shaped region that locates at the same position across different images, yet these regions have diverging semantics. And some of them learns textures like animal fur, cloudy sky, snowland, or leaves. A fast evaluation on PASCAL VOC semantic segmentation shows a significant performance drop, yet the representation is still better than random initialization.\n\n| Method | mIoU |\n| -------------------------- | ----------- |\n| Random init. | 39.5 |\n| SlotCon | 71.6 |\n| SlotCon w/o geometric aug. | 62.6 (-9.0) |\n\n### 4.2 What happens when the binary indicator is not used and all slots contribute to the loss?\n\nThe binary indicator is necessary to perform object-level contrastive learning. Omitting it could drastically increase the computational cost and make it infeasible to train the model.\n\nIt should be noted that the contrastive learning objective ($\\mathcal{L}^\\text{Slot}$) is mainly for object-level representation learning based on the slots. Omitting it (and of course also the binary indicator) does not harm the clustering objective ($\\mathcal{L}^\\text{Group}$), and the model can still learn meaningful slots.", " ### 4.3 Would an image-level contrastive objective improve or hinder learning object-level prototypes?\n\nWe tried to add a MoCo-v3 style instance-level contrastive learning loss to SlotCon (COCO 800 epoch pretraining setting), and the experiment results on COCO show that it depends on the batch size. 
With a smaller batch size 512, the detection AP drops by 0.2 points, while with a higher batch size 1024, the detection AP raises by 0.4 points. Our explanation is that the instance-level objective is more sensitive to the batch size, and it requires a larger batch size to learn holistic representations that are complementary to the object-level objective. Besides, according to the ablation studies in DenseCL and PixPro, the loss weight of the instance-level loss should also be studied, which is out of our current experiment quota.\n\n| Method | Batch size | AP$^{\\text{b}}$ | AP$^{\\text{b}}_{50}$ | AP$^{\\text{b}}_{75}$ | AP$^{\\text{m}}$ | AP$^{\\text{m}}_{50}$ | AP$^{\\text{m}}_{75}$ |\n| -------------- | ---------- | --------------- | -------------------- | -------------------- | --------------- | -------------------- | -------------------- |\n| SlotCon | 512 | 41.0 | 61.1 | 45.0 | 37.0 | 58.3 | 39.8 |\n| SlotCon w/ ins | 512 | 40.8 | 61.1 | 44.5 | 36.8 | 58.1 | 39.5 |\n| SlotCon | 1024 | 40.7 | 61.0 | 44.4 | 36.7 | 58.0 | 39.4 |\n| SlotCon w/ ins | 1024 | 41.1 | 61.5 | 45.0 | 37.0 | 58.7 | 39.8 |\n\n## 5. Statics about the binary indicator\n\n### 5.1 How many slots are active on average for each image?\n\nIt depends on the number of categories/semantics per image. As in Figure 1 of the supplementary file, seven slots are active on average for one image after convergence.\n\n### 5.2 How often is one slot active over the whole dataset?\n\nIt depends on the category/semantic distribution of the dataset, as the slots are roughly bound to real-world semantic categories. We studied the activeness of the slots over the COCO val2017 set that contains 5,000 images, and found that 40 out of the 256 slots are dead, and not active to any image. The activeness of the remaining slots follows a long-tailed distribution. The top-5 active slots correspond to tree (376), sky (337), streetside car (327), modern building exterior wall (313), and indoor wall (307); and the bottom-5 active slots correspond to skateboarder (44), grassland (45), train (56), luggage (57), and airplane (57).\n\n### 5.3 How many terms are excluded in eq. 6 because either $1_\\text{teacher}$ or $1_\\text{student}$ are 0?\n\nGiven the slot number set as 256, if all slots are active for one image and the two crops overlap well, there should be 256 positive pairs. We studied a well-converged model and found that on average only around 3.6 pairs are active, so around 252.4 terms are excluded. It is reasonable considering that around 7 slots are active on one image, and the overlapping area between the two crops can be small. Besides, considering a total batch size of 512, the number of excluded negative samples should be around 512 x (256 - 7) = 127488.\n\n### 5.4 Is there any regularization to ensure that all slots are used evenly?\n\nAs stated at L152-156, the grouping loss by design avoids two types of collapsing: one slot dominates all pixels, or all slots contribute evenly to every pixel.\n\n## 6. Why the number of prototypes is 256 for COCO and 2048 for ImageNet?\n\nIt is basically an empirical conclusion that setting the number of prototypes close to the number of human-annotated categories can help downstream performance, with detailed discussion at L289-292. It should be noted that though one COCO image is more complex than that from ImageNet, the number of semantic categories that the whole COCO dataset covers is much smaller than ImageNet.\n\n## 7. 
Semantic mismatch about unsupervised semantic segmentation\n\nWe visualized the prototype that finds the laptop, and found that the nearest neighbours are also laptops, which means that the cluster is semantically consistent. The problem lies in the Hungarian matching, where each prototype is assigned to a semantic category. This process adopts a criterion that maximizes the overall pixel accuracy considering the overall performance of all categories. As the semantics of the prototypes are not perfectly aligned with the category labels, it can assign wrong semantic categories to some prototypes.", " ## 8. What makes for object-centric representations rather than parts?\n\nOur intuition is that the prototype number and the dataset distribution generate a bottleneck for the granularity of groups. We simply define geometric-covariance and photometric-invariance as the guiding signal, and the model is required to decompose a *large* complex dataset into a *small* number of clusters by optimizing the feature space and the cluster centers. The only solution to this problem is to find the objects/parts that are compositional and thus occupy a reasonable number of prototypes. Concerning granularity, it depends on whether it is helpful to solve the problem given the aforementioned constraints. For example, in our COCO setting with 256 prototypes, the model finds that splitting the animals into cats, dogs, elephants, etc., is enough and won't further split them. In contrast, for humans (the most frequent category in COCO), as shown in Figures 7 and 8 in the supplementary, the model discovers not only human parts but also human-related activities, indicating that parts are more helpful in this scenario and deserve more prototypes.\n\n## 9. Mention of limitations in the main text\n\nWe thank the reviewer for pointing this out, and will revise it in the next version.", " Thanks for your constructive comments. Our responses to them are given below.\n\n## 1. Clarification on motivation\n\nWe would like to clarify that the task of SlotCon is to perform unsupervised object-centric representation learning, with scene-centric datasets as the main objective. Existing methods in this task that rely on hand-crafted priors are also limited by those priors. For example, ORL, which adopts selective search to find objects, shows weaker performance in Table 1; and SoCo, which is specialized for detection pretraining, is limited to detection. As the Occam's razor principle goes, entities should not be multiplied beyond necessity. In contrast, SlotCon defines the desirable properties that objects should have, and lets the model find the proper image decomposition by fitting the data. The strength in localizing objects and discovering semantics is supported by experiments (Table 5), and the strength in pretraining with scene-centric data is also clear (Table 1). \n\n## 2. \"Marginal improvement\"\n\nWe would like to clarify that SlotCon mainly targets scene-centric data, and thus the results on COCO(+) better show SlotCon's improvement. The result on ImageNet is to show that SlotCon is also stronger than or comparable with approaches optimized for object-centric data, but not to propose a new SOTA on it. And the results on unsupervised semantic segmentation are to qualitatively and quantitatively show how well the prototypes bind semantics, yet we also do not aim to propose a new SOTA on it.\n\n## 3. 
Error analysis with other methods\n\nTo better understand why SlotCon improves over previous methods, we extend the COCO results on Table 1 with AP on different objects scales. It shows that SlotCon surpasses the previous SOTA PixPro mainly due to its ability to locate small objects (25.6 vs 24.4 for AP$_\\text{s}^\\text{b}$).\n\n| Method | AP$^{\\text{b}}_{\\text{s}}$ | AP$^{\\text{b}}_{\\text{m}}$ | AP$^{\\text{b}}_{\\text{l}}$ | AP$^{\\text{m}}_{\\text{s}}$ | AP$^{\\text{m}}_{\\text{m}}$ | AP$^{\\text{m}}_{\\text{l}}$ |\n| ------- | -------------------------- | -------------------------- | -------------------------- | -------------------------- | -------------------------- | -------------------------- |\n| PixPro | 24.4 | 43.5 | 52.0 | 18.2 | 38.9 | 51.5 |\n| DetCon | 23.8 | 43.1 | 51.1 | 17.6 | 38.3 | 50.8 |\n| SlotCon | 25.6 | 43.8 | 52.1 | 18.7 | 39.2 | 51.7 |\n\n## 4. Limitation discussion\n\nThe limitation analysis locates at L154-162 of the supplementary. And the most suitable scenario for SlotCon is self-supervised visual representation learning on scene-centric data. We will try to make it clearer in the revised version.", " This paper advances a self-supervised representation learning strategy, dubbed SlotCon, that can learn from scene-centric data by reasoning and processing visual information at the pixel level. This is in contrast with the large majority of the self-supervised methods designed for object-centric data (e.g., ImageNet) where image-level reasoning is sufficient. \nRecent methods have addressed scene-centric data, however the authors argue that such approaches are limited by the hand-crafted priors (e.g., objects as superpixels) or too specialized pretext tasks as they can limit their generalization.\nSlotCon builds upon the SlotAttention method and uses slots as learnable prototypes for pixels that are thus grouped via the assignments to their corresponding slots. SlotCon is composed of a teacher and student network. The pixel grouping supervision resembles both SWaV (aligning the pixel assignments between teacher and student spatial-aligned pixels from different views) and DINO (student + EMA teacher, centering of teacher logits, different temperatures on student and teacher). This loss encourages grouping of pixels into object-like structures via the learned slots.\nIn order to discriminate slots that carry the same visual or semantic information across views from other non-informative or redundant slots, SlotCon has also a contrastive objective (InfoNCE loss) at the slot level that encourages similarity between different views of the same slot and discourages similarity of slots from different views and slots from other images.\n\nSlotCon is evaluated on a number of pre-training settings (ImageNet, COCO, COCO+) and downstream tasks (object detection, instance segmentation, semantic segmentation, unsupervised segmentation) and datasets (COCO, Cityscapes, Pascal VOC, ADE20k), with nice performance and results. ### Post-rebuttal update\n\nThe detailed rebuttal addresses all my questions and offers convincing responses and additional insisghts (across the responses to all reviewers).\nI don't have any other questions for the authors and I confirm my positive recommendation for this submission.\n\n==========================================\n\n\n### Recommendation\nThis paper advance a nice and effective idea for self-supervised learning from scene-centric images. The approach looks sound and the results are encouraging. 
I'm overall positive about this work and leaning towards recommending it for acceptance.\n\n### Paper strengths\n\n- _Clarity_: I find this paper is mostly well written and argued. The intuition, reasoning and formalism of the method are rather well explained and overview figure 1 is very nice. Then the authors conduct a number of ablation and sensitivity studies (number of prototypes, loss balancing weights, temperature of the teacher) for better understanding. There are also some interesting qualitative results (Figure 2) visualizing pixels grouped by slots and their nearest neighbors. The approach is convincing.\n\n- _Significance_: The paper addresses a challenging problem related to self-supervised representation learning: how to learn from scene-centric datasets that are not as well balanced as ImageNet, more cluttered and usually with fewer images to learn from. The results reached by SlotCon are good.\n\n\n- _Quality_: The approach seems technically sound. There are several experiments on different settings and different relevant baselines considered. The authors do not provide the code in the supplementary, but provide rich information about the implementation -- it would be better to release the code though. The supplementary is abundant in additional experiments, ablations and implementaiton details. \n\n- _Originality_: This work combines various good practices from the recent literature (SWAV, DINO, Slot Attention, see my comments from the summary), however the output method does still have originality and is effective. The masking of uninformatives slots (eq. 5) is an interesting idea.\n\n\n- _Misc_: I appreciate that the authors stick to the lower epoch regime (100-200 epochs) which is more reasonable and allows comparison with a larger spectrum of methods\n\n\n### Paper weaknesses\n\nMostly minor concerns:\n\n#### Originality:\n- as mentioned above, to me SlotCon is a fairly original work aggregating in an effective manner various good practices and approaches from the literature.\n- from the text it can be understood that using SlotAttention for unsupervised pixel-level representation learning is a finding of the authors. However other prior methods have used it for weakly-supervised or self-supervised learning [a], [b]. Those are different approaches and I don't think a discussion is necessary, however the text should be adjust to better delimitate the contributions and prior works\n- the approach is similar in spirit with Odin [31] as both aim to learn representations without human object priors. The qualitative examples in both papers are similar. I think it would be good to acknowledge the relatedness of these concurrent works\n\n\n#### Scope of experiments:\n- COCO is indeed a challenging scene-centric dataset and it's great to see pre-training and evaluation done on it, moving beyond ImageNet\n- I think this work would benefit from showing results on more complex settings, like autonomous driving data where the amount of objects per scene can be larger (BDD100K, Cityscapes). Some options could be:\n + self-supervised pretraining on an autonomous driving dataset, e.g., BDD100K[56]\n + self-supervised pretraining on smaller autonomous driving datasets, e.g., Cityscapes, and evaluation on unsupervised semantic segmentation on these datasets, see PiCIE, STEGO[c]\n\n#### Computational cost:\n- the authors discuss the computational cost in appendix D and give exact number of FLOPS for forward operation compared to a standard ResNet-50 backbone. 
It appears SlotCon is 12.3% more expensive in terms of FLOPS.\n- However this does not tell us much on how fast are forwards and backward passes and how does SlotCon fare against other methods. It's nice that SlotCon can be trained in fewer epochs, but it would be great to be able to compare also the training times\n- To get an idea of more straightforward and easy to visualize and understand format I recommend the time and memory cost studies from MoCov2, OBoW[d] and iBOT[e]\n\n\n#### Baseline results and implementations:\n- In Table 3 (transfer results with ImageNet-1k pre-training), it's not clear where to the scores from DetCon (200 epochs) come from. In the original paper there are some result on a few plots and maybe the authors eye-balled them (e.g., COCO instance segmentation), but not all results can be found in the original paper. The authors don't mention another reference for them. Are these scores reproduced by the authors, taken from a different paper (if so, please cite the corresponding source).\n- PixPro results in Table 3 seem lower than originally reported, about 0.8-1.0 for COCO. Do the authors know why?\n\n- The common protocol for COCO downstream as proposed in MoCO uses the default `ROI_BOX_HEAD` configuration with 2 FC layers. PixPro seems to have modified the default configuration of 2FC to 4 convolution layers + 1 FC layer that seems to improve scores. Was this setting used for PixPro or SlotCon?\n\n\n#### Related work:\n- this work mentions a whole lot of relevant works from the literature\n- however for the unsupervised semantic segmentation section there are several key methods in addition to the now usual PiCIE and IIC and the very recent ones mentioned by the authors (SegDiscover).\n- here are a few suggestions for this area of the literature:[f],[g],[h],[c]\n\n\n**References:**\n\n\n[a] T. Kipf et al., Conditional Object-Centric Learning from Video, ICLR 2022\n\n[b] Z. Bao et al., Discovering Objects That Can Move, CVPR 2022\n\n[c] M. Hamilton et al., Unsupervised Semantic Segmentation by Distilling Feature Correspondences, ICLR 2022\n\n[d] S. Gidaris et al., Online Bag-of-Visual-Words Generation for Unsupervised Representation Learning, CVPR 2021\n\n[e] J. Zhou et al., iBOT: Image BERT Pre-Training with Online Tokenizer, ICLR 2022\n\n[f] J. Hwang et al., Segsort:Segmentation by discriminative sorting of segments, ICCV 2019\n\n[g] M. Chen et al., Unsupervised object segmentation by redrawing, NeurIPS 2019\n\n[h] X. Wang et al., FreeSOLO: Learning to Segment Objects without Annotations, CVPR 2022\n \nHere are a few questions and suggestions to help improve the paper, that could be potentially addressed in the rebuttal\n\n\n1) Conduct a comparison of the computational cost for training SlotCon compared to related methods, MOCOv2, PixPro, etc.\n\n2) Clarify results and implementations for baseline methods\n\n3) (Optional) How is SlotCon performing on more complex scene-centric datasets, e.g., autonomous driving datasets?\n Yes", " The paper proposes a self-supervised learning framework, SlotCon, from unlabeled scene-centric data. The method adopts joint semantic grouping -- softly assigning pixels to learnable prototypes shared by the datasets, and contrastive learning -- first perform attentive pooling on the prototypes to from slots, and then conduct contrastive learning on slots from two different views. In experiments, the authors evaluated on COCO, ImageNet-1K and COCO+ dataset. 
On transfer learning tasks (COCO detection, COCO segmentation, cityscape & Pascal VOC & ADE20k semantic segmentation), the proposed method achieves performance on par or surpassing previous methods. It also achieves better mIoU on unsupervised semantic segmentation results compared with previous methods. Strengths:\n- The method is purely data-driven without the need of hand-crafted priors or specialized pretext tasks.\n- The proposed method perform self-supervised learning on pixel level, which is shown to perform better on dense prediction downstream tasks compared with image-level self-supervised learning methods.\n- Extensive experiments are conducted to show the effectiveness of the proposed methods.\n- The paper is relatively easy to follow.\n\nWeaknesses:\n- Limited novelty and lack of citation on related works. Pixel-level clustering with contrastive learning are already studied in previous works, e.g., [1]. Prototype-based semantic segmentation are also already studied, e.g., in [2]. But many related works are not discussed. \n\n- The representation in the method section is not very clear. e.g., is $A^l_{\\theta}$ (Eq. 4) identical to $P^l_{\\theta}$ (Eq. 1)? Eq.6 is unclear, since $q_\\theta$ is not introduced. Whether it's contrastive learning among two views of a single image, or among a batch is not clear from the equation.\nSome symbols in the Method Section are not explained.\n\n- The experiments only show performance on ResNet50, whether the proposed method scales to larger backbones is not clear.\n\n- The COCO detection task is only trained and compared with on 1x schedule, while recent study show that 1x schedule is far from convergence. Hence the results are not very conclusive, it's likely that the proposed method only converges faster, but after training longer, the performance will become similar with comparison methods.\n\n- Type: L211 \"solarization\".\n\n[1] Ke, Tsung-Wei, Jyh-Jing Hwang, and Stella X. Yu. \"Universal weakly supervised segmentation by pixel-to-segment contrastive learning.\" ICLR 2021.\n[2] Zhou, Tianfei, et al. \"Rethinking Semantic Segmentation: A Prototype View.\" CVPR 2022. - How does the proposed method perform on image classification tasks? e.g., ImageNet.\n The authors addressed the limitations and negative societal impact in the supp.\n", " The paper proposes a method for learning objectness from visual data by combining deep clustering in feature space and a contrastive objective that is made invariant to geometric augmentations.\n\nThe method involves two losses:\n- $L_\\text{group}$ which operates per-pixel and that encourages each pixel to be clustered according to the clusters produced by an EMA-updated teacher,\n- $L_\\text{slot}$ which operates at the slot level and ensures consistency between slot representations in different augmentations of the same image through a contrastive objective.\n\nAfter self-supervised pre-training, the encoder is used as the backbone for several vision tasks, namely object detection, semantic segmentation and instance segmentation. In all cases, fine-tuning the pre-trained model yields better results than starting from a random initialization, provided that the number of fine-tuning epochs is equal for the pre-trained and from-scratch models. 
The paper is well written and easy to follow, and I am overall positive about it.\nThe structure of the text introduces the method with gradual complexity, which helps navigating the multitude of symbols and variables.\nAlthough figure 1 appears confusing at first because it has \"too much going on\", it becomes progressively clearer with the contents of section 3.1 and 3.2.\nMost details are not given in the main text, but the supplementary material goes into great detail about datasets and training schedules.\n\nIn terms of originality, the paper fits into the popular category of deep clustering methods.\nThis category has recently seen a shift from image-centric clustering to a finer level of detail.\nThe strength of this method is to explicitly optimize for desirable properties of clusters, rather than trying to extract clusters from models trained for another objective.\n\nIndeed, the formulation of $L_\\text{group}$ resembles a per-pixel extension of DINO because of the prototype-based unsupervised clustering.\nThus, I am happy that the authors anticipated this discussion at the end of Section 3.1.\nI agree with the distinction that DINO uses a large number of prototypes while this method uses a smaller number of per-object prototypes.\nMy intuition is that DINO prototypes capture scene semantics that can be more complex than patch semantics due to compositionality, hence the need for more prototypes.\nI am not sure, however, how the prototypes of this method can be \"adaptive to each image\" (L164).\nEq. 1 clearly shows that the assignments for $L_\\text{group}$ are obtained by computing a similarity with learned dataset-wide prototypes, same as DINO. It's only in eq. 4 that slots are extracted by pooling image features using prototype-based attention, these slots can be said to be \"adaptive\". It would be good to clarify the terms \"prototypes\", \"assignments\" and \"slots\".\n\nRegarding the experiments, I appreciate their exhaustiveness and the choice of datasets.\nMy main complaint is directed at the lack of confidence intervals for the reported metrics.\nConsidering how much results can vary due to randomness it is important to report (at least) the mean and std of 3 runs to be sure that improvement is due to the proposed method.\nIf the number of pages is a constraint at least provide extended versions of tab. 2-6 in the appendix.\n\nLast, a critique for the choice of ablation studies.\nOf course, it is interesting to study how the method behaves with a different number of prototypes or different weights for the losses.\nHowever, I do not think the chosen ablations target the most important points of the method.\nIt would be more interesting to ask: are geometric augmentations (and the subsequent inversion) necessary to learn object-centric representations? Would the model learn something different if only color augmentations were employed on two identical crops of each image? What happens when the binary indicator is not used and all slots contribute to the loss? Would an image-level contrastive objective improve or hinder learning object-level prototypes?\n\nMinor:\n- L132 revise \"two-layer multilayer perceptron\"\n- L162 \"much less prototypes\" -> \"fewer\" It would be interesting to discuss statistics about the binary indicator in eq. 5 and 6:\n- How many slots are active on average for each image? 
This is mentioned in the supplementary for estimating FLOPS but not discussed from the perspective of representation learning.\n- How often is one slot active over the whole dataset?\n- How many terms are excluded in eq. 6 because either $1_\\text{teacher}$ or $1_\\text{student}$ are 0?\n- Is there any regularization to ensure that all slots are used evenly? Or to ensure a min/max number of slots per image?\n\nL226: Why the number of prototypes is 256 for COCO and 2048 for ImageNet? Intuitively COCO contains more diverse objects than ImageNet and should use more prototypes.\n\nL281: the text mentions that the method \"successfully localizes small objects\" w.r.t. column 5 of the figure in table 5. However, that colum depicts a laptop on a bed that are recognized as a small piece of \"sky\" on the \"floor\". How come the method has learned to distinguish small portions of pixels but hasn't learned much about semantics? Shouldn't the two losses in eq. 8 optimize for semantic consistency?\n\nIs there anything in the method that explicitly biases the optimization towards learning object-centric representations as opposed to parts?\nIf not explicitly, why do you think the model learns to group together entire objects rather than parts? Alternatively, do you have evidence that no preference is made between objects and parts? Yes, but only in the supplementary material.\nIf possible, I would appreciate a mention of limitations in the main text with a link to the corresponding section in the appendix.", " The authors design a method to learn visual representations in a self-supervised fashion from scene-centric data. The work is built upon the choice of exploiting intrinsic scene-centric characteristics of the data, and contrasting with existing literature that exploits object-related priors to guide the learning process. \n\nThe method includes a pixel-level semantic grouping learning mechanism based on learning a set of prototypes to which the pixels are assigned. The set of prototypes can adapt to each sample by an adaptive pooling mechanism. *Strenghts*\n- novelty: the work approaches the self-supervised learning strategy in a novel way favoring pixel-level semantic grouping rather than image-level or object-level representation learning pre-text tasks. The design of the learning procedure has its merit, and the authors state clearly similarity, inspirations from and differences with other methods (e.g. DINO)\n- technical soundness: the description of the method is clear and well-argumented, From my perspective, it has a good balance of mathematical formulation, motivations and textual explanation of the several concepts proposed.\n- experiments on benchmark datasets: the experimental analysis is done on ImageNet1K and COCO(+), which guarantees that the results can be analysed and commented in full\n- the paper is very well-written, clear and concise\n\n*Weaknesses*\n- results: results are comparable with those of existing approaches, which have different working principles. Most of the time the slight improvements are marginal (or in some cases the results are also marginally lower than others). This does not allow to fully appreciate the quality and usefulness of the method. \n- motivation/background: the main difference with other works is stated to be the fact that no priors related to instance discrimination are used in the pre-text task for self-sup training as they will limit the learning potential. 
This moves the authors to focus on only exploiting the data intrinsic properties and characteristics. I find this a stretched motivation, as priors related to human knowledge about any problem have been demonstrated to steer the learning process in favor of good performance and efficient use of the data (avoiding to re-learn that prior knowledge somehow from the data). The fact that this argumentation is weak, is reflected also in the results, which do not show that using only knowledge from the data would contribute to overall better performance (see point above). I would revisit the statements about the use of prior knowledge as a guidance to the training process, and relate this work better with existing streams of work that show successful use of priors both for performance improvement and data-efficiency (e.g. in the VIPriors workshop series). \n\nThe results to not clearly show superiority of the proposed approach, lacking a full support of some statements in the introduction. Neither the results show complementarity, in the way they are presented. An error and overlap prediction analysis wrt other methods would contribute to a better understanding of the cases in which one method is better than another. Also, marginal improvements might be checked with statistical tests of significance. If not done, these marginal improvements are bound to be interpreted as a result of randomness. The authors do not state limitations explicitly. They could discuss more extensively in which cases this approach would be better than others and in which cases not. From the results it is not clear why and when this method would be better to be used.\nI do not find direct implications as potentially negative societal impact. It is a general approach to learn CV models, and its uses are very broad." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "hnQHxUnLLE_", "Z7ZOMLoPjPl", "nips_2022_H3JObxjd8S", "ONPwqLsMIZM", "Y6-KBBHctDu", "KTb9huf6V0", "RcFqzT3FFUG", "sDnGhRDEnS", "q8cxHaptNfd", "Bg0xtGqcMFfH", "6o6MoJFFgRmz", "nips_2022_H3JObxjd8S", "KJn_YdmDjoO", "KJn_YdmDjoO", "CltN9Irkkz", "_mLQ_283L-w", "_mLQ_283L-w", "_mLQ_283L-w", "ZrwrI6s0sdP", "nips_2022_H3JObxjd8S", "nips_2022_H3JObxjd8S", "nips_2022_H3JObxjd8S", "nips_2022_H3JObxjd8S" ]
nips_2022_uV_VYGB3FCi
Flexible Neural Image Compression via Code Editing
Neural image compression (NIC) has outperformed traditional image codecs in rate-distortion (R-D) performance. However, it usually requires a dedicated encoder-decoder pair for each point on the R-D curve, which greatly hinders its practical deployment. While some recent works have enabled bitrate control via conditional coding, they impose a strong prior during training and provide limited flexibility. In this paper we propose Code Editing, a highly flexible coding method for NIC based on semi-amortized inference and adaptive quantization. Our work is a new paradigm for variable-bitrate NIC, and experimental results show that our method surpasses existing variable-rate methods. Furthermore, our approach is so flexible that it can also achieve ROI coding and multi-distortion trade-off with a single decoder. Our approach is compatible with all NIC methods that have a differentiable decoder, and it can even be directly adopted on existing pre-trained models.
Accept
Thanks for your submission to NeurIPS. The reviewers are all in agreement that the paper is ready for publication. In particular, they appreciated your rebuttals and the changes to the paper, and increased their scores as a result. The proposed method is novel, interesting, and performs well.
train
[ "5xf2LelLZEC", "zvoHPN_KTN", "uPUOFp4mrZU", "yeklw7DVh_Y", "0TWWsPGBX3C", "puvrT_n-kle", "JkP74r1qiE", "mn-Fn6D-GzcB", "cNGrLqwt-fV", "l4m8-spFs1z", "l9VGqdAoNT7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your kind additional explanation. Your additional explanation and revision helped me understand your claim (Yang et al. 2020 requires multiple models, whereas the proposed method can achieve nearly the same RD-rate with a single model). I will raise my evaluation of the paper.", " Thanks for the response. The reply convinces me that the authors have sound evaluation and will be able to improve the formulation. I will change my rating to accept.", " Thanks for your detailed review. We have uploaded the revised main text and supplementary material, with all the revisions marked in blue. Due to the space limitation, we could not include all the amendment in the main text. Below is a summary of revisions:\n\n### Main Text\n* Sec 2.1: We rewrite a more extensive formulation of the relationship between lossy NIC and VAE. (as suggested by kNRz)\n* Sec 2.3: We modified the wrong expression of \"differentiating cdf\" into \"taking the difference of cdf\", and provide a stricter formulation on the discrete entropy model by formally distinguish pdf, cdf and pmf (as suggested by kNRz)\n* Sec 2.5, 4.4: We rephrase \"distortion-perception trade-off\" into \"multi-distortion trade-off\". (as suggested by kNRz)\n* Sec 2.6: We emphasis that the grid search is limited to quantization stepsize of $z$. (as suggested by JZkG)\n* Sec 4.1: We emphasis that actual range encoding/decoding is used in results reported. (as suggested by kNRz)\n\n### Appendix\n* A.1: We add additional discussion on results of [Yang et al. 2020] and discuss the difference of our work compared with it. (as suggested by Q7gH)\n* A.3: We add additional discussion on the difference between our quantization stepsize optimization and [Choi et al. 2019]. (as suggested by Q7gH)\n* A.4. Go without Grid Search: We add a new section to discuss the impact of grid search and possibility of fixing the $\\Delta_z=1$ & abandon grid search for speed. (as suggested by JZkG)\n* A.5. Go beyond $bpp=1.0$: We add a new section to discuss the experimental results on very high bitrate ($bpp>1.0$). (as suggested by JZkG)\n* B.2. We add additional results and discussion on segmentation based ROI results. (as suggested by JZkG)\n* C.1. We add additional discussion on the scenarios where our method works/fails. (as suggested by Q7gH)\n\n### Reference\n* Y. Yang, R. Bamler, and S. Mandt. Improving inference for neural image compression. Advances in Neural Information Processing Systems, 33:573–584, 2020.\n* M. Song, J. Choi, and B. Han. Variable-rate deep image compression through spatially-adaptive feature transform. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, pages 2360–2369. IEEE, 2021.\n", " Thanks for your detailed review. And we are glad to provide our answer to your questions: \n\n### Q1 & W1 Results in the very high quality/high rate range\n* With [Ballé et al. 2018] as the baseline, we tested Code Editing Enhanced at a much longer bitrate range (up to 2.3 bpp) and these results are available at [JZkG-1](https://anonymous.4open.science/api/repo/NeurIPS268_Materials/file/JZkG-1.pdf). It can be seen that Code Editing Enhanced does start to flatten for higher bit rates and cannot outperform the baseline at very high rate (>1.5 bpp). We hypothesis the bottleneck of the base model trained at a relatively low bitrate (around 0.5 bpp) limits R-D performance in the very high bitrate range. 
So we tested Code Editing Enhanced at another base model where the R-D trade-off parameter $\\lambda_0$ is set to 0.045 (around 1.0 bpp). We can see that after changing the base model's R-D trade-off to a higher bitrate, the flattened R-D curve has been lifted again.\n* __For revision__: We have included the discussions.\n\n### Q2 & W2 Segmentation ROI\n* Thanks for your advice. We selected the image 13e9b6 from the CLIC2022 [CLIC, 2022] test set to test the segmentation ROI. There are 4 people in this image. We use separate segmentation for each person. Unlike the high contrast ROI shown in the main paper, we give the background a weight of 0.04 instead of 0. These results can be found at [JZkG-2](https://anonymous.4open.science/api/repo/NeurIPS268_Materials/file/JZkG-2.pdf) ([Ballé et al., 2018] as the base model, $\\lambda_0=0.015$). We can see that code editing is effective for complex semantic ROI. The visual quality and fidelity measured by PSNR of each person are improved accordingly as the ROI masks shift. \n* __For revision__: We have included those results.\n\n### Q3 & W3 grid search for quantization step size\n* As we discussed in Appendix A.3, grid search is only used to optimize $\\Delta_z$, and $\\Delta_y$ is optimized by gradient descent. As the bitrate of $z$ is only a small part of the encoding (less than 10\\%), so the impact of $\\Delta_z$ is marginal. In fact, even the current grid search framework for $\\Delta_z$ is a bit of unnecessary. In the extreme case, we can choose to fix $\\Delta_z=1$ and only optimize $\\Delta_y$. This result is shown in [JZkG-3](https://anonymous.4open.science/api/repo/NeurIPS268_Materials/file/JZkG-3.pdf) ([Ballé et al., 2018] as the base model, $\\lambda_0=0.015$). We can see that without grid search at all, the performance of our approach is only marginally effected.\n* __For revision__: We have clarified that grid search is only used to optimize $\\Delta_z$ in Sec. 2.6.\n\n### Q4 Many of the baseline models have increased capacity for the larger rates in order to keep quality high. For example, in the Balle 2018 case, was the model using the 192 or 320 throughout? Are higher capacity models needed to allow for the flexibility needed for code editing?\n* For all the experiments in the paper, we follow the exact setting of the baselines. This means that for [Ballé et al., 2018] we use the model with $320$ channels for all range as the original paper uses $320$ channels model to train $\\lambda=0.015$.\n* The impact of model capacity is indeed an interesting topic, while we are not sure about how to include this in our experiment. As our major claim is to achieve variable bitrate with single decoder, changing channel capacity in the middle does not look like an option. However, using channel size $192$ for high bitrate range degrades the performance for sure as it also degrades the performance for the baseline [Ballé et al., 2018].\n* Without changing the model capacity, we can improve the R-D performance in the high quality range by changing the R-D trade-off of the base model. This is dicussed in Q1 & W1 and relative results are shown in [JZkG-1](https://anonymous.4open.science/api/repo/NeurIPS268_Materials/file/JZkG-1.pdf).\n\n### Reference\n* J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston. Variational image compression with a scale hyperprior. In International Conference on Learning Representations, 2018.\n* CLIC. Workshop and challenge on learned image compression (clic). 
http://clic.compression.cc/, 2022.", " ### Q1 When comparing the R-D curves of Code Editing Enhanced and Yang et al. [2020], what is the difference of the proposed method and is it reasonable?\n* In Appendix A.1 Fig. A.1, we can see that our proposed approach achieves continuous rate control with little loss compared with [Yang et al. 2020] based on [Ballé et al. 2018] and [Minnen et al. 2018], and marginal R-D loss at very low/high bpp based on [Cheng et al. 2020]. This result is reasonable. We use a single decoder to achieve this, while [Yang et al. 2020] require multiple decoders. It would be unreasonable if our method outperformed [Yang et al. 2020]. Again, we are neither superior nor inferior to [Yang et al. 2020], as the task is very different.\n* __For revision__: we have included the discussions.\n### Q2 Why does combining semi-amortized inference and adaptive quantization step size improve the low-rate results?\n* As stated in Sec 2.3, the trainable $\\Delta$ adds an extra parameter to the entropy model, which reduces the mismatched bitrate $E_{q(y|x)}[\\log p_{θ_{λ_1}}(⌊y⌉)−\\log p_{θ_{λ_0}}(⌊y⌉)]$. The analysis in Sec 4.2 and Fig. 2 verifies this.\n### Q3 Why are the low-rate results in Fig. 3 approaching the baseline?\n* This is probably because the latent becomes too sparse in the low-bitrate region and most of its entries are very close to $0$ at the start of SGA. Despite the annealing temperature, this sparsity makes it difficult for SGA to generate samples other than $0$.\n### Q4 What are the cases in which the authors' method does not work well?\n* In general, our method does not work for cases where encoding time matters, such as real-time communication. Our method is extremely useful for cases where we encode just once but decode/view the content many times, such as a content delivery network.\n* __For revision__: we have included this in the limitations.\n### Reference\n* Y. Kim, S. Wiseman, A. Miller, D. Sontag, and A. Rush. Semi-amortized variational autoencoders. In International Conference on Machine Learning, pages 2678–2687. PMLR, 2018.\n* Y. Yang, R. Bamler, and S. Mandt. Improving inference for neural image compression. Advances in Neural Information Processing Systems, 33:573–584, 2020.\n* L. Theis, W. Shi, A. Cunningham, and F. Huszár. Lossy image compression with compressive autoencoders. In 5th International Conference on Learning Representations, ICLR 2017, 2017.\n* M. Song, J. Choi, and B. Han. Variable-rate deep image compression through spatially-adaptive feature transform. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, pages 2360–2369. IEEE, 2021.\n* J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston. Variational image compression with a scale hyperprior. In International Conference on Learning Representations, 2018.\n* D. Minnen, J. Ballé, and G. D. Toderici. Joint autoregressive and hierarchical priors for learned image compression. Advances in Neural Information Processing Systems, 31, 2018.\n* Z. Cheng, H. Sun, M. Takeuchi, and J. Katto. Learned image compression with discretized Gaussian mixture likelihoods and attention modules. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7939–7948, 2020.", " Thanks for your detailed review. We are glad to provide our answers to your questions and to clarify a few misunderstandings: \n\n### W1 The new part of the method does not seem to be clearly explained\n* The major difference between our work and [Yang et al. 
2020] is that our approach requires only one decoder for continuous bitrate control, ROI, and the perception-distortion (multiple-distortion) trade-off, whereas [Yang et al. 2020] require multiple decoders for them. [Yang et al. 2020] adopt SAVI [Kim et al., 2018] to improve the R-D performance of an encoder-decoder pair. We find that SAVI can also be adopted to achieve bitrate control, ROI, and perception-distortion (multiple-distortion) control with only a single decoder. In fact, even without the SGA of [Yang et al. 2020], the semi-amortized inference of the simple AUN implementation can still achieve flexible bitrate control (Sec 4.2--SGA vs. AUN). Our main contribution is to explore flexible NIC with semi-amortized inference, instead of improving R-D performance. \n* We disagree that the claim of a new paradigm of controlling the R-D trade-off is exaggerated. As [Yang et al. 2020] only use SAVI to improve R-D performance, they do not consider controlling the R-D trade-off. We are indeed the first to adopt SAVI for controlling the R-D trade-off. Thus, it is a new paradigm of controlling the R-D trade-off.\n* We agree that changing the quantization step size has been proposed before, and we have properly cited [Choi et al. 2019]. However, our approach is very different from previous ones. Specifically, we find there is a train-test mismatch of the entropy model in Code Editing Naïve, which damages R-D performance. We then propose the adaptive quantization step to alleviate this problem (Sec 2.3 and Sec 4.2). On the other hand, [Choi et al. 2019] adjust the quantization step to fine-tune the bitrate. From the perspective of results, our proposed adaptive quantization works in a wide bitrate region, while the quantization step adjustment of [Choi et al. 2019] works in a narrow bitrate region. \n* Moreover, [Choi et al. 2019] sample $\\Delta$ during training, which requires a carefully designed prior on $\\Delta$. According to the original paper, training with $\\Delta \\in [0.5,2]$ brings the best performance, and making this range larger or narrower brings performance decay. In contrast, for us the $\\Delta$ is learned during the SAVI stage and no deliberate prior is required; during training we keep $\\Delta=1$ like a normal model. The advantage is that our method can be directly applied to any pre-trained neural compression model, while [Choi et al. 2019] cannot. Furthermore, we study the effect of optimizing $\\Delta$ jointly with SAVI, which has never been studied before. Moreover, we provide non-trivial extra insights into why this approach might work through theoretical analysis (Sec 2.3) and an empirical study (Sec 4.2 and Fig. 2).\n* __For revision__: we have included the discussions.\n### W2 The R-D curve of Code Editing Enhanced vs [Yang et al. 2020]\n* It has been compared in Appendix A.1 Fig. A.1. We will move it to the main text to make it more obvious. We are neither superior nor inferior to [Yang et al. 2020], since we are considering vastly different tasks. [Yang et al. 2020] aim at improving R-D performance, and we aim at continuously controlling the bitrate with one decoder. We cannot compare with [Yang et al. 2020] in terms of the overhead to support variable bitrate, as [Yang et al. 2020] does not support variable bitrate.\n* __For revision__: we will make some space and move this experiment to the main text.\n### W3 typo: p.2 L53 bitrtae -> bitrate\n* Thanks for pointing it out, we will fix it.\n* __For revision__: we have fixed it.\n", " ### Q1 $\\Delta$ during training\n* We fix $\\Delta=1$ during training, which is different from [Theis et al. 
2017] and [Choi et al. 2019]. Training with different $\\Delta$ requires careful design of prior on $\\Delta$, and ties our method to encoder-decoder trained with specific approach. According to [Choi et al. 2019], training $\\Delta \\in [0.5,2]$ brings best performance, and making it larger or narrower bring performance decay. The major advantages of our approach are: (1) it is prior free, (2). it can be directly applied to any pre-trained neural image compression model. Training with $\\Delta$ ruins those features.\n* As our distortion is measured in $0-255$ instead of $0-1$, the $\\lambda=0.015$ is around the middle bpp (0.5). It is not surprising that a decoder trained with middle bpp can decode the image with low and high bpp. Despite The empirical result is very promising, it is authentic as the reported R-D performance has been through actual entropy encoding and decoding.\n\n### Q2 No negative societal impact described in the main text, hidden in the supplementary material.\n* __For revision__: we will make some space for this section in the main text.\n\n### Reference\n* J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston. Variational image compression with a scale hyperprior. In International Conference on Learning Representations, 2018.\n* D. Minnen, J. Ballé, and G. D. Toderici. Joint autoregressive and hierarchical priors for learned image compression. Advances in neural information processing systems, 31, 2018\n* R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018\n* Y. Blau and T. Michaeli. The perception-distortion tradeoff. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6228–6237, 2018.\n* Y. Blau and T. Michaeli. Rethinking lossy compression: The rate-distortion-perception tradeoff. In International Conference on Machine Learning, pages 675–685. PMLR, 2019.\n* Y. Yang, R. Bamler, and S. Mandt. Improving inference for neural image compression. Advances in Neural Information Processing Systems, 33:573–584, 2020.", " Thanks for your detailed review. And we are glad to provide our answer to your questions, and a few clarification on some misunderstandings: \n\n### W1 Bad formulation in Sec 2.1: \n\n* The formulation presented in L72-L84 (before revision) comes from variational autoencoder (VAE) [Kingma and Welling 2013], which is widely adopted in NIC works such as [Ballé et al., 2018, Yang et al. 2020]. Specifically, the encoder in compression corresponds to the inference model in VAE, the decoder in compression corresponds to the generative model in VAE.\n* The word “connected” means that the iid additive uniform noise used to relax discrete latent is equivalent to the reparameterization of variational posterior $q(\\tilde{y}|x)$. The \"factorized uniform distribution is used to simulate the quantization noise\" means that we add iid uniform random noise just as you supposed. We emphasis “factorize” and “uniform” here as the variational posterior $q(\\tilde{y}|x)$ is factorized uniform distribution. \n* The data likelihood term $\\log p(x|\\tilde{y})$ is known to be related to the distortion metric $d(x,\\bar{x})$. For example, when the distortion metric is MSE and data likelihood is factorized Gaussian, we can set the decoder’s output $\\bar{x}$ be the $\\mu$ and $\\sigma^2=1/2\\lambda$, then the data likelihood term $\\log p(x|\\tilde{y})$ in ELBO is just the $\\lambda * MSE + constant$. This is where the equivalence comes from. 
In fact, [Minnen et al., 2018] even extent this connection beyond Gaussian and MSE. The general idea is that if we treat the distortion $d(.,.)$ as the energy function, then the likelihood term is equivalent to the likelihood of Gibbs distribution defined by such energy function. \n\n* __For revision__: we have rewritten this section to make it clearer. \n\n### W2 Incorrect/bad formulation in L112 (before revision) \n\n* We think the phrase \"differentiating cdf\" is abused in L112 (before revision). We want to express taking the difference of cdf by \"differentiating cdf\", instead of taking the gradient. For factorized Gaussian $p(\\tilde{y}|\\tilde{z})$ in hyperprior based approaches, integrating the pdf and taking the difference of the cdf produces the same result. As a matter of fact, denote the pdf of random variable as $p(x)$, and the cdf as $F(x)$, we have $P(x_1<x\\le x_2)=\\int_{x_1}^{x_2}p(x)dx=F(x)|_{x_1}^{x_2}=F(x_2)-F(x_1)$. In practice, we can also implement this using cdf as $dist.cdf(y+0.5)-dist.cdf(y-0.5)$. \n* __For revision__: we have revised the expression here.\n\n### W3 soundness of the evaluation\n* For all the results reported in the R-D curve, the bitrate is measured by actual bits of range encoder, and the reconstruction is computed by the latent coded from the actual the range decoder. For all visualization involves spatial bitrate distribution, the theoretical bitrate is used. We base our paper on a mature internal library for neural compression and we did not implement the range codec by ourselves. That is the reason why we did not mention it in paper as we took it for granted. To clarify, we will add a section to emphasis this.\n\n* __For revision__: we have added a section to emphasis this.\n\n### W4 (minor) distortion perception\n* We agree. In fact the original plan is to use GAN loss but we end up with LPIPS [Zhang et al., 2018]. We will rename the distortion-perception trade-off into multiple-distortion trade-off. We fully admire [Blau and Michaeli, 2018, 2019] and we do not want to use the term \"distortion-perception trade-off\" with LPIPS.\n* __For revision__: we have renamed distortion-perception trade-off into multiple-distortion trade-off.", " - The authors propose Code Editing that control bitrate of neural image compression with semi-amortized inference.\n- The authors solved the performance decay in low rate of Code Editing by making the quantization step size adaptive. Strengths\n- The authors shows an interesting experimental result (Fig. 1 Left) it seems to solve the performance decay in low rate of Code Editing.\n\nWeakness\n- The new part of method has not seem to be clearly explained. The basic parts of amortized inference strategy in neural image compression has already been proposed (Yang et al. [2020]). Therefore, I thought it was a bit exaggerated when section 2.2 states that they propose a new paradigm of controlling R-D trade-off by semi-amortized inference. Variable bitrate compression by changing the quantization step size has also been proposed for neural image compression (Choi et al. [2019]). It would be great if you could clarify what is new when compared to these previous studies.\n- The R-D curve of Code Editing Enhanced versus conventional semi-amortized inference based neural image compression (Yang et al. [2020]) has not been compared. If quantitative comparisons with previous studies were made (e.g. 
overhead to support variable bitrate), it would be easier to argue the superiority of this study.\n- typo: p.2 L53 bitrtae -> bitrate - When comparing the R-D curves of Code Editing Enhanced and Yang et al. [2020], what is the difference of the proposed method and is it reasonable?\n- Why does combining semi-amortized inference and adaptive quantization step size improve when low-rate results?\n- Why are the low-rate results in Fig. 3 approaching baseline? - In Chapter 5, the authors told that efficiency of Code Editing could be improved with regard to limitations, but it would be good to be more specific about what cases the efficiency should be improved. For example, what are the cases in which the authors' method does not work well?", " The paper investigates obtaining flexible-rate neural image compression (NIC) methods by adapting representations during encoding. They obtain a new representation y' by adapting the quantization width as well as the latent values themselves. Parameters are frozen so no overhead except for longer encoding time. Overall solid idea. I'm not super familiar with the adaptive rate literature but it seems novel to me. I like the idea of adapting $\\Delta$. I also appreciate that there are various interesting ablation studies in Fig 1, and that authors investigate using the formulation for perceptual quality optimization and adaptive ROI-based rates (neat!). The results in Fig. 3 are promising.\n\n## Weaknesses\n\n- W1) Bad formulation in Sec 2.1: To me, who is very familiar with this field, Sec 2.1, in particular L72-L84 sounded weird. What is the meaning of \"connected\". What is it supposed to mean that the \"factorized uniform distribution is used to simulate the quantization noise\" (I suppose that we add IID uniform random noise?). What about \"the decoding process is connected to compute the likelihood ..., which is equivalent to distortion evaluation\" (some link to VAE?). This might be a language barrier issue but it sounds to me like some badly pieced together bits from Balle et al's original paper. I would strongly suggest rewriting Sec 2.1. \n- W2) Incorrect/bad formulation in L112: \"In NIC, the probability mass function (pmf) [...] over quantized y is computed via differentiating cumulative distribution function\" [sic]. That is only true for the factorized prior from Balle et al, which is formulated via a CDF. For e.g. hyperprior based approaches (used in almost every compression paper since), we parameterize the density directly and calculate the PMF by integrating over boxes.\n- W3) The above makes me question the soundness of the evaluation. Did the authors use range coding/arithmetic coding in the end to calculate real bitrates? Was care taken to make sure the model does not accidentally cheat?\n\n- W4) (minor) similar to the above, the recap of distortion perception left a bad taste in my mouth. 
The authors cite Blau and Michaeli's work, where the perceptual loss is formulated via a _divergence_, yet authors use LPIPS, a normal pair-wise distortion, to do perceptual optimization.\n\n## Conclusion\n\nOverall, I will have to give a Reject rating, but if authors can convince me that they i) did a solid evaluation including range coding that implies we can trust the results and that they ii) will make sure the formulation is sound and understandable, I am willing to change my mind, since the approach itself looks interesting.\n\nI'm giving a 4/5 confidence rating since there might be related work to adaptive-rate NIC that I'm missing\n Q1) Did you experiment with adapting $\\Delta$ during training also? I found it surprising that you can use a model trained for lambda = 0.015 No negative societal impact described in the main text, hidden in the supplementary material.", " This paper proposes a method called \"Code Editing\" for rate control, which is an optimization of the latents at encode time. This has commonly been used in the past to boost rate distortion performance, but not as a continuous rate control mechanism. Second, this paper proposes a method of combating the rate-distortion decay of latent optimization with the use of an adaptive quantizations step size. Last, the paper allows for a per pixel spatial rate-allocation mechanism that grants even more fine-grained control of the rate.\n\nGiven the authors responses, I have raised my overall review. Strengths:\nThrough \"code editing\", the latents can be additionally optimized for the given rate-distortion trade-off over a large range of bitrates [0.1 to 1.0] bpp. This allows one model, with additional encoding time to target a wide swatch of the rate-distortion curve. \n\nPer pixel rate control allows a very explicit, fine grained rate control on a per image basis. Additionally, this information is only used at encode time and isn't required to be transmitted to the decoder.\n\nThe empirical results of naive code editing only being successful in a narrow rate range (around 0.1 bpp).\n\nWeaknesses:\nNo results were shown in the very high quality/high rate range. From the rate-distortion curves, it appears as if the Code Editing method starts to flatten at the high rate (see fig 3), versus the ideal range of around 0.5 bpp.\n\nThe ROI based coding examples seem very much designed to show high contrast (checkerboard, gradient, NIC). Show the result of a semantic segmentation model's ROI and showing if bit allocation on small text or faces can enhance the overall image at low bitrates would make an obvious visual difference. (Perhaps the ROI shifts between one of several faces in the same image for example).\n\nThe quantization step size is currently determined via grid search (brute force). 1. What is the rate distortion performance at very high rates (>1.0 bpp, say 2.0)? Is code editing still effective at high rates?\n2. Is code editing effective with ROI across complex ROI masks? e.g. imagine you have an image with 3 faces roughly the same size, can shifting the ROI to each of the three faces increase fidelity equally? \n3. How important is the granularity of the quantization step size being searched? If the grid were two or four times as dense, would you have increased performance or just more points on the RD curve?\n4. Many of the baseline models have increased capacity for the larger rates in order to keep quality high. For example, in the Balle 2018 case, was the model using the 192 or 320 throughout? 
Are higher capacity models needed to allow for the flexibility needed for code editing? The authors sufficiently discussed the limitations of their work and any potential negative societal impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "0TWWsPGBX3C", "mn-Fn6D-GzcB", "nips_2022_uV_VYGB3FCi", "l9VGqdAoNT7", "cNGrLqwt-fV", "cNGrLqwt-fV", "l4m8-spFs1z", "l4m8-spFs1z", "nips_2022_uV_VYGB3FCi", "nips_2022_uV_VYGB3FCi", "nips_2022_uV_VYGB3FCi" ]
nips_2022_8rZYMpFUgK
DAGMA: Learning DAGs via M-matrices and a Log-Determinant Acyclicity Characterization
The combinatorial problem of learning directed acyclic graphs (DAGs) from data was recently framed as a purely continuous optimization problem by leveraging a differentiable acyclicity characterization of DAGs based on the trace of a matrix exponential function. Existing acyclicity characterizations are based on the idea that powers of an adjacency matrix contain information about walks and cycles. In this work, we propose a new acyclicity characterization based on the log-determinant (log-det) function, which leverages the nilpotency property of DAGs. To deal with the inherent asymmetries of a DAG, we relate the domain of our log-det characterization to the set of $\textit{M-matrices}$, which is a key difference to the classical log-det function defined over the cone of positive definite matrices. Similar to acyclicity functions previously proposed, our characterization is also exact and differentiable. However, when compared to existing characterizations, our log-det function: (1) Is better at detecting large cycles; (2) Has better-behaved gradients; and (3) Its runtime is in practice about an order of magnitude faster. From the optimization side, we drop the typically used augmented Lagrangian scheme and propose DAGMA ($\textit{Directed Acyclic Graphs via M-matrices for Acyclicity}$), a method that resembles the central path for barrier methods. Each point in the central path of DAGMA is a solution to an unconstrained problem regularized by our log-det function, then we show that at the limit of the central path the solution is guaranteed to be a DAG. Finally, we provide extensive experiments for $\textit{linear}$ and $\textit{nonlinear}$ SEMs and show that our approach can reach large speed-ups and smaller structural Hamming distances against state-of-the-art methods.
Accept
Overall, reviews for this paper are quite positive. The paper presents an interesting and effective new approach to incorporating a DAG constraint into an optimization problem by using a characterization of DAGs in terms of the logdet function. During discussion, the reviewers raised several important questions/points for clarification, which the authors largely addressed in their responses. I encourage the authors to use these responses to guide editing of the paper for the final version. There was some disagreement during author-reviewer discussion in regards to a comparison to GOLEM made by one of the reviewers. After the discussion period, the reviewer has provided some useful information on possible reasons for inconsistencies. I hope that the authors will investigate these points carefully and update empirical results/discussions as needed in the final version. From the reviewer: I carefully compared my version of GOLEM with the version of GOLEM used by the author (released with the GOLEM paper), and there are several differences. 1 My version is implemented in PyTorch, theirs is implemented in TensorFlow, 2 In my version, a learning rate scheduler is used to apply smaller and smaller learning rates to solve the problem as it approaches a local optimum, and in theirs, a fixed learning rate is used. Considering this, I think it is reasonable that the authors did not observe the same performance as I did. Here the learning rate scheduler might play an important role since I also observed that with some large learning rate GOLEM may not converge so that a fixed learning rate may finally converge to a bad solution, or fail to converge. … I think the paper can be further enhanced if the authors can replace the DAG constraint in GOLEM with theirs to obtain a new algorithm. From my experience, it is highly possible that with a proper optimization algorithm it can achieve far better performance than the current version.
test
[ "5DHRulmpAz2", "v59xslewGw", "S1afBmjmO2v", "ZldpXZxiQQ", "gkZ6n-zJTf4", "Tu8BNFJpoC", "x0r1ivAsbhg", "FVlGg_piuvI", "1dO_JJ2MK9m", "sGdluJI7hmF", "HQ21FsNceOq", "oFKEgDPCfyXX", "CweTKY83pBs", "DVPkMCU0AXX", "5nd8bVYTszJ", "-1sAFKRrEvX", "uhkRSY8eq1", "9z7JbEr4EDA", "gJL0fGUUkJG", "T57WmLBTF6B", "lggm4rIfvr", "PbIfP2QdwUS", "2eEjhG7YFQ" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have updated my overall assessment to \"7: Accept\".", " We are glad that our response addressed your concerns! And we are highly grateful for your support towards the acceptance of our work!", " We are happy that our response clarified your doubts! And thank you again for your positive assessment of our work!", " We are glad to see that our response has clarified your concerns! We hope you feel even more positive about our work. We will make the revision for the camera ready", " We thank the reviewer for clarifying their questions and for taking the time to run additional experiments. \n\n**Regarding $\\mu=0$:** \n\nIn fact, as you suggest, **we can prove that the distance between the final solution at $\\mu=0$ and the candidate graph is bounded** (see below for details).\n\n**Regarding GOLEM:** \n\nThank you for flagging single vs double precision in GOLEM. Following your suggestion, we re-implemented GOLEM under double precision, but unfortunately **have not been able to replicate your claims**. In order to properly evaluate your claims, we feel it is necessary to see your results, including means and standard errors for each run and setting tested. (See below for a summary of our experiments.)\n\nNevertheless, we will update our figures to include GOLEM with double precision, and add a note regarding this detail.\n\n## More Details:\n### **Proof sketch for $\\mu=0$:** \nBelow we prove the bound requested by the reviewer.\n\n1. Pick $\\mu’>0$ and let $W’$ be a solution to $\\min_W \\mu’ Q(W) + h(W)$. Stationarity implies $\\mu’ \\nabla Q(W’) + \\nabla h(W’) = 0$. Let $L = || \\nabla Q(W’)||$, then $|| \\nabla h(W’) || = \\mu’ L$\n\n2. Let $B$ be the local Lipschitz constant of $\\nabla h$ at $W’$. Suppose now that next we solve Line 3 of Algorithm 1 with $\\mu = 0$, and we solve it by gradient descent **starting at $W’$**. Then, a short calculation shows that the $t$-th GD iteration is given by: $W^{(t)} = W^{(t-1)} - \\eta \\nabla h(W^{(t-1)})$. Thus $|| W^{(t)} - W^{(t-1)} || \\leq \\eta \\mu’ L (\\eta B +1)^{t-1}$. \n\n3. Summing and telescoping we obtain $|| W^{(t)} - W’ || \\leq \\eta \\mu’ L \\sum_{k=0}^{t-1} (\\eta B +1)^k$. \n\n**It follows that for small $\\mu’>0$, the distance between the final solution at $\\mu=0$ and the candidate graph for $\\mu’$ is bounded, as desired.** As a final remark, note that since $h$ is smooth, by the GD lemma, GD will find a stationary point which is a global min of $h$ by invexity in $O(1/\\epsilon)$ iterations, for an $\\epsilon$ suboptimality error. \n\n\n### **Additional experiments:** \n\nWe compared GOLEM to NOTEARS and DAGMA under the reviewer's set of hyperparameters with double precision enabled. See this figure for the results: https://postimg.cc/RJv8YWvh. For reproducibility, we also report the seeds used to generate the graphs and run GOLEM, $[3223, 189, 860, 1256, 3239, 2727, 4178, 1108, 4361, 4486]$. While there is an improvement in GOLEM’s performance, GOLEM with double precision (GOLEM_DOUBLE in the figure) still performs worse in SHD than NOTEARS and DAGMA for ER4/SF4 and $d \\in \\\\{ 200, 300, 500 \\\\}$. The runtime is much slower as well, being now in the same order as NOTEARS. Finally, although we used the set of hyperparameters you suggested, unfortunately, we did not observe that GOLEM performs similarly to DAGMA.", " The authors addressed my questions in the rebuttal, I have no further concerns. I support the acceptance of this paper.", " My sincerest apologies for entering this discussion so late. 
I truly appreciate the authors taking the time and effort to address each of my questions and concerns. Thank you. Reading your response helped clarify the doubts I had. ", " I would like to thank the authors for addressing my comments in detail. \n\nI think the reply has addressed my concerns! I would suggest revising the relevant parts of the paper accordingly so that the quantitative statements can be made where possible. ", " Sorry for the late reply. I would like to thank the authors for the informative response.\nI was trying to run some experiments from the paper. \n\nI understand that the paper is using the central path algorithm. However, the limit point of the central path algorithm will be the solution, and it may require an infinite number of steps. \n\nIf you would like to use the central path algorithm for finitely many steps, you will have to prove some properties such as the following:\n\n+ From a sufficiently small $\\mu$, you get a candidate graph.\n+ Use the candidate graph as the initial solution, and solve the optimization problem in line 3 of Algorithm 1 with $\\mu=0$ to obtain the final solution.\n+ Show that the distance between the final solution and the candidate graph is bounded.\n\nWithout doing this you cannot say you will obtain a meaningful DAG in finitely many steps. This is because an arbitrary DAG is a solution of the optimization problem in line 3 of Algorithm 1 with $\\mu=0$. \n\n\nAlso, I have recently tried to use GOLEM on some of the experiments in the paper. My findings are as follows:\n\n+ Using the hyperparameters provided in the paper, GOLEM achieves performance similar to that reported in the paper.\n+ By setting the learning rate to $2\\times 10^{-4}$ and the number of iterations to more than $2\\times 10^5$, GOLEM achieves performance similar to DAGMA on 500-node ER4 graphs, but the running time for GOLEM is then more than 2 hours on a 3090 graphics card. Here it is slow because I use double precision, and double precision on the 3090 is very slow. I do not know if the authors are using single precision or double precision. If it is single precision, I believe the performance can be improved by converting to double precision.\n\nThe reason I tried this experiment is that the GOLEM algorithm is specifically designed for sparse graphs. For very large graphs with 500 or 1000 nodes, since the degree of each node is fixed, a larger number of nodes results in sparser graphs. In this case it would be hard to believe that GOLEM would perform worse than NOTEARS on large-scale problems. \n\nThe experiments suggest that if we only consider the accuracy of DAG recovery, GOLEM may have performance similar to the proposed method, but it requires more running time, and the larger the graph is, the slower GOLEM is.", " Thank you for taking the time to respond to our comments. \n\nDo you have any comments regarding items (2) and (3)? We have put a lot of effort into running the additional experiments, and *we would appreciate it if you could let us know if they have clarified your concerns*. You had also mentioned that you found an issue with Corollary 2: *Can you please provide more details on this?*\n\nRegarding the last iteration when $\\mu=0$: it will **not be an arbitrary DAG**. This is due to our use of the central path in Algorithm 1, which ensures the result is not arbitrary. Indeed, if setting $\\mu=0$ returned an arbitrary DAG, our results would be random instead of close to the ground truth. 
**We have emphasized our central path approach several times in the paper**, see L17 in the Abstract, L71-72 and L103 in the Introduction, L293, L296, and $\\mu$ being the central path coefficient in Algorithm 1 in Section 4.\n\nWe have provided a more detailed explanation of this in parts (1) and (2) of our response above; we would appreciate it if the reviewer could provide a response to our specific points there. (Note that the point about the FJ condition is meant only to provide some intuition behind our approach compared to existing approaches, and is not a rigorous argument.) _To help illustrate the central path more clearly, please see the following example (**https://postimg.cc/75cnskpb**) of a two-node graph **following Figure 1 in the paper**. In this case, the ground truth is the DAG with $w_1=1.2$ and $w_2=0$ with standard Gaussian noises. The title of the plot shows the value of $\\mu$, the initial point (red point in the plot), and the final point after performing gradient descent (the cyan point in the plot). The example clearly illustrates the central path behavior of our algorithm, and certainly does not return a \"random\" DAG._\n\nFinally, although property (ii) of Theorem 1 suggests LICQ does not hold, LICQ is a necessary qualification for KKT to be a necessary condition, so *KKT is not a necessary condition in this case*. Regardless, **our logdet formulation is still a valid characterization of acyclicity, and it is clear that a similar reformulation as given in NOFEARS will work for our formulation too.** Since the KKT re-formulation is clearly documented in NOFEARS, we opted not to repeat these details in our submission. Nonetheless, we are happy to add a remark and pointer to the NOFEARS paper on this.", " It is true that you can set the series $\\mu$ to {1, 0.1, 0.01, 0.001, 0}. However, it will cause a problem. In the last iteration of the algorithm, the objective only includes the term that encourages the graph to be a DAG, and thus an arbitrary DAG is a solution. In this case, a meaningless DAG is obtained in finitely many steps. Once you want the DAG to be meaningful, it has to take infinitely many steps. \n\nFor the FJ condition, it is equivalent to the KKT condition if $v_0\\neq 0$. However, if $v_0 = 0$, it is just the Mangasarian–Fromovitz constraint qualification, which is much weaker than the KKT condition. In this case the proposed DAG constraint must suffer from property (ii) of Theorem 1.", " We thank all reviewers for taking the time to read our work and for providing insightful comments and suggestions to improve it. We are thrilled to see that in general there is a positive assessment of our contributions and that there is a consensus on the good quality of the presentation of our results. We have addressed your questions and hope our responses will help clarify them. We encourage the reviewers to ask any further clarification questions during the discussion week.", " *We thank the reviewer for appreciating the novelty of our contributions and for considering our work to be well-written*. 
We next address your comments/concerns in order.\n\n### Strengths And Weaknesses:\n\nRegarding your comment ``several aspects in the comparison are based on heuristic arguments or empirical observations, which weakens the claim that the new penalty should be preferred over the earlier ones’’, the only purely empirical observation is given in Argument (iii) on the observed runtimes, for Arguments (i) and (ii) we provide an explicit justification (Lemmas 4 and 5) for preferring the logdet function over existing ones. As this concern is related to your questions, we elaborate further below.\n\n### Questions: \n\nBefore addressing the questions, we would like to point out that **we provided more details in Section B.2 of the supplement regarding the arguments given in Section 3.2.**\n\n> On discounting long cycles.\n\n* Great question! We shall note that the statement in Lemma 4 follows by a **precise comparison** between $h_{\\mathrm{expm}}$ and $h_{\\mathrm{logdet}}$ via the spectrum of $W\\circ W$ (see its proof in Section A.5). However, following your suggestion, we can provide the stronger asymptotic statement for cycle graphs (as in Fig 2), and we will update Lemma 4 as follows:\n\n > **Lemma 4.** For all $W \\in \\mathbb{W}^{s=1}$, we have $h_{\\mathrm{poly}}(W) \\leq h_{\\mathrm{expm}}(W) \\leq h_{\\mathrm{ldet}}^{s=1}(W)$. Moreover, when $W$ is a cycle graph (as in Figure 2), we have that $h_{\\mathrm{expm}}(W) = o(h_{\\mathrm{ldet}}(W))$.\n\n We note two things. First the inequality in Lemma 4 is tight in the sense that, when $W$ is a DAG, all three formulations are exactly equal to zero. Second, *for cycle graphs*, $h_{\\mathrm{expm}}(W) = o(h_{\\mathrm{ldet}}(W))$ states that while asymptotically both $h_{\\mathrm{expm}}(W)$ and $h_{\\mathrm{ldet}}(W)$ get to zero, the logdet functions does so **at a much slower rate** when compared to the expm (and hence also the poly) characterization. In other words, DAGMA penalizes cycles more heavily in a precise way. We corroborate the latter, in the following table where we show the values of the three acyclicity functions for much larger values of $d$. We will elaborate this discussion in the revised version.\n\n |Function|10|15|50|100|500|1000|2500|5000|7000|\n |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|---|\n | $h_{\\mathrm{logdet}}$ | 4.6 | 4.2 | 3.0 | 2.3 | 9.3e-1 | 4.5e-1 | 8.5e-2 | 6.7e-3 | 9.1e-4 |\n | $h_{\\mathrm{expm}}$ | 2.7e-6 | 1.1e-11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n | $h_{\\mathrm{poly}}$ | 1.0e-9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |\n\n We also note that since our method is gradient-based, it is perhaps more important to understand the behavior of the gradient of $h_{\\mathrm{ldet}}$, and how it compares to the gradients of existing acyclicity functions, which we discuss next.\n\n> On vanishing gradients.\n\n* Thank you for raising this question! We realize that Lemma 5, as written, is not too informative. We offer to make the following update to Lemma 5:\n\n > **Lemma 5.** For any walk of length $k$, its contribution to the gradients $\\nabla h_{\\mathrm{expm}}(W)$ and $\\nabla h_{\\mathrm{poly}}(W)$ are diminished by $1/k!$ and ${ d-1 \\choose k}/(d-1)^k$, respectively. In contrast, $\\nabla h_{\\mathrm{ldet}}^{s=1}(W)$ does not diminish any walk of any length. 
This implies that $\\left|\\nabla h_{\\text {poly }}(W)\\right| \\leq\\left|\\nabla h_{\\operatorname{expm}}(W)\\right| \\leq\\left|\\nabla h_{\\text {ldet }}^{s=1}(W)\\right|$.\n\n Note that the update above does not require any new proof technique, **the argument is already contained in Section A.6**. Specifically, the above follows by looking at the expansions of the respective gradients of the different acyclicity functions, see the proof of Lemma 5 in Section A.6. These expressions, given in Line 562, clearly show how the gradient of the logdet function weighs all walks/cycles equally irrespective of their lengths, whereas the polynomial and exponential functions diminish walks/cycles of length $k$ by ${ d-1 \\choose k} / (d-1)^k$ and $1/k!$, respectively, hence both being prone to vanishing gradients. \n\n> On computational speedups.\n\n* We meant by ``an order of magnitude faster’’ the fact that one can compute the logdet function and its gradient about 10x faster, as stated in Line 280. If the wording creates confusion, we can plainly say it is about 10x faster. Regarding the explanation for this, our best explanation is already given in Section B.2, that is, computing the logdet and its gradient enjoy the large body of work on optimized libraries for matrix factorizations and solving linear systems, while computing the matrix exponential is notoriously tricky, see Lines 618-624 and the references therein.\n\n> Minor issues\n\n(1) Yes, it should be $\\mathrm{Tr}(I + \\frac{1}{d} W \\circ W)^d - d$. Thank you for catching the typo!\n\n(2) Thank you for the suggestion, we will revise this accordingly.", " We thank the reviewer for appreciating the *novelty and relevance of our contributions* and for their positive evaluation of our work. We next address your comments in order.\n\n### Strengths And Weaknesses:\n\n> ``In line 177, (4)⇒(3) required a bit of thought to understand so some brief comments would help instantaneous understanding.’’\n\nIn Line 177, we simply present the domain $\\mathbb{W}^s$ where (4) implies (3), but the formal statement is given in Theorem 1 item (i), which is proved in Section A.1. To make sure this is crystal clear, we will add a sentence stating that the implication is proved in Theorem 1.\n\n> In Lemma 3, the statement that ⊗ is the Kronecker product may be added so that reader can look up its definition.\n\nAgreed! We, unfortunately, missed specifying that ⊗ denotes the Kronecker product.\n\n\n### Questions\n\n> In Corollary 1, what does it mean by 'point towards the interior'. Is this term 'interior' is the topological term or a general term for somewhere inside? It seems that W^s is an open subset of Rd×d.. Does this mean that its direction is always directly toward a stationary point?\n\nYes, the term ‘interior’ is in the topological sense. It is true that since $\\mathbb{W}^s$ is an open set, any element of $\\mathbb{W}^s$ is an interior point of $\\mathbb{W}^s$. While this is somewhat redundant, our aim was to emphasize that the direction of the negative gradient of $h^s_{\\mathrm{ldet}}$ will provide a direction to remain inside $\\mathbb{W}^s$. Note that we missed the word *\"negative\"* in Corollary 1, we will add this in the revision. 
Finally, the direction does not directly point towards a stationary point; however, a simple gradient descent scheme will lead to a stationary point of $h^s_{\\mathrm{ldet}}$ which corresponds to a DAG, as prescribed by Theorem 1 and Corollary 2.\n\n> It seems that in eq(5) the condition that W≥0, that is, all entries are nonnegative is missing. It seems that some Theorem may hold without this nonnegative entry condition but others do not. Theorems/Lemma critically relying on Proposition 1 should require the nonnegative entry condition. However, for example, Lemma 2 seems to consider the cases without nonnegative entry conditions, otherwise, sign(Wi,j) is always nonnegative. It would be better if the authors clarify this.\n\nIt is correct that many of our derivations rely on the properties of M-matrices given in Proposition 1. This is the reason our logdet function in Theorem 1 leverages the nonnegativity of the Hadamard product $W \\circ W$. Then, we just need to make sure that the spectral radius of $W\\circ W$ is less than $s$, which is precisely what eq.(5) is about! This should hopefully clarify your confusion about Lemma 2, if not, we are happy to address any other follow-up questions.\n\n\n### Limitations\nThank you for this suggestion! In fact, since Bhattacharya et al. also leverage differentiable acyclicity functions, it would be interesting to see the performance of our logdet formulation under the ADMG setting. We will add the reference for completeness.\n", " > It seems like hyperparameter tuning could play a large role in the performance of these approaches. Do I understand correctly that previous work has tuned hyperparameters for these same exact datasets in the cases of re-use? Is there clarity on how all hyperparemters were selected? I wonder what you think about evaluating methods using bayesian hyperparameter optimization to find, for each dataset (e.g., setting of generative model hyperparameters) best setting of hyperparameters.\n\nConsistent with previous work in this area (e.g. NOTEARS and its follow-ups), we have not performed any hyperparameter optimization: This is to avoid presenting unintentionally biased results. As a concrete example, in our experiment section, we simply chose a reasonable value for the $\\ell_1$ penalty coefficient and used that same value for all ER and SF graphs across many different numbers of nodes. That is to say that one could find even better SHD with proper hyperparameter tuning for each dataset. Therefore, currently, there is no clarity on what the optimal set of hyperparameters for a given dataset is. Finally, we fully agree that Bayesian hyperparameter optimization would be a fascinating approach for this problem to explore in the near future!\n\n> There is a note about the proposed approach being better at detecting large cycles. Forgive me if I have missed something, but was there analysis in the experiments section, which noted how much of the improved performance (e.g. SHD) was due to better recovery of such large cycles?\n\nGreat point! For a direct comparison between acyclicity regularizers, we ran experiments using the *exact same Algorithm 1* and *just replacing the acyclicity regularizer*. We ran simulations on Linear Gaussian SEMs for ER2/ER4/SF2/SF4 graphs. The table below shows the average SHD with 95% confidence intervals on 10 repetitions. DAGMA corresponds to Algorithm 1 using the logdet function, while DAGMA_EXPM corresponds to Algorithm 1 using the trace exponential function from NOTEARS. 
We observe clearly that our logdet function plays an important role in obtaining DAGs with significantly lower SHD.\n\n|method |graph_type |20 |40 |60 |80 |100 |\n|:----------|:----------|:----------|:------------|:------------|:------------|:-------------|\n|DAGMA |ER2 |0.5 ± 0.61 |1.7 ± 2.06 |1.8 ± 1.15 |0.6 ± 0.9 |4 ± 3.78 |\n|DAGMA_EXPM |ER2 |2.1 ± 1.61 |10.4 ± 6.81 |16.9 ± 9.3 |20.9 ± 7.67 |33.4 ± 7.44 |\n|DAGMA |ER4 |3.8 ± 2.78 |6.5 ± 4 |9.8 ± 5.72 |14.3 ± 6.52 |12.3 ± 5.66 |\n|DAGMA_EXPM |ER4 |10.7 ± 4.5 |41.4 ± 16.47 |65.5 ± 15.5 |91 ± 27.6 |113.2 ± 27.37 |\n|DAGMA |SF2 |0.1 ± 0.23 |0.6 ± 0.77 |0.3 ± 0.48 |1.4 ± 2.47 |1.1 ± 1.09 |\n|DAGMA_EXPM |SF2 |2 ± 2.51 |2.2 ± 1.99 |4.8 ± 7.28 |1.8 ± 1.58 |7.4 ± 6.94 |\n|DAGMA |SF4 |5.9 ± 4.12 |11.2 ± 6.99 |18.6 ± 17.69 |3.5 ± 1.92 |9.9 ± 13.21 |\n|DAGMA_EXPM |SF4 |8.1 ± 4.84 |17.3 ± 9.73 |28 ± 16.24 |16.4 ± 12.46 |29.9 ± 26.69 |\n\n> If these methods were used as part of some downstream approach, do you think the empirical analysis here is predictive of performance? Why/why not / what other measurements would be interesting to consider?\n\nGreat question! Unfortunately, we cannot make a formal claim. This is a fundamental question with roots in transportability/generalization that makes for exciting future work.", " We thank the reviewer for their positive assessment of our work and for their insightful comments and suggestions. We next address your questions in order.\n\n> Rather than sampling a DAG structure by ordering the nodes in an undirected graph, why not sample DAGs from something like Price's model? I realize that your experiment setup is following earlier work, however. I am curious what if any properties we should be aware of for your generative model of dags. How do in-degree / out-degree distributions look like? How do they differ from degree distribution from undirected graph?\n\nThanks for bringing up this question! In fact, we were imprecise in L640 by saying *\"The random graph models above are undirected graphs\"*, when in fact **only the ER graphs are undirected graphs and then oriented via a random ordering of the nodes**. For the case of scale-free (SF) graphs, they are directly sampled as directed graphs, i.e., they are DAGs from the start. Interestingly, the \"original\" Barabasi-Albert (BA) model is precisely the undirected version of the Price’s model, and the Python library we used (igraph) supports generating a directed BA model, which is equivalent to the Price’s model when the exponent in the preferential attachment process is 1. **This means that the scale-free networks we generated follow Price’s model!** \n\nThanks to your comment, we have flagged this and will make the DAG generation schemes more precise in the camera ready. We will correct L640 and add a note regarding Price’s model.\nRegarding the (undirected) degree distributions, these would follow from known results in ER and BA models. That is, for ER graphs the degree distribution is $P(k)= {n-1 \\choose k} p^{k}(1-p)^{n-1-k}$, while for BA graphs the degree distribution is $P(k) \\sim k^{-3}$. Regarding the in- and out- distributions, for ER models it is very challenging to characterize this since it is sensitive to the random ordering of the nodes. For BA models, due to the generative process, the in-degree distribution is basically just a shift of the (undirected) degree distribution. For illustration, we computed the empirical in-, out-, and undirected degree distributions for graphs ER1/SF1/ER2/SF2/ER4/SF4 of 50 nodes over 500 repetitions. 
The figures are contained in this link: https://postimg.cc/gallery/BJ09dHH. It is worth noting that SF graphs have a large number of nodes with in-degree equal to zero (i.e., nodes with no parents) and a few nodes with a large number of parents (i.e., hubs). In contrast, in ER graphs the in- and out- distributions follow a similar pattern.\n\n\n> When / why would quality Q(⋅) make sense / not-make sense for an evaluation measure of the DAGs? Could it be evaluated on held-out data of the same underlying distribution?\n\nThis is a good question, and has a surprisingly deep answer. In short, it is well-known that predictive measures such as Q(.) are **not** good measures in the context of structure learning (although we hasten to point out, it could be useful for downstream tasks such as prediction). In fact, it can be proven that predictive metrics are *provably* suboptimal in the sense that they lead to false discoveries; see Meinshausen and Buhlmann (2006) for more details. Since our focus is on structure learning, it does not make sense to use Q for evaluating a DAG (i.e. learning a DAG with the best accuracy possible or equivalently the lowest SHD).\n\n> When / would it make sense to evaluate not just wall clock time, but number of samples n?\n\nWhile of course the dependence on the number of samples is relevant, recall that our contribution is a new acyclicity regularizer, *which does not depend on the number of samples*. This acyclicity function defines the feasible set (namely DAGs) for the score-based optimization problem, i.e. the feasible set is the same for all methods. Indeed, the only term that depends on $n$ is the score/loss function, for which we are not proposing anything new.\n\n\n", " ### Questions\n> ``Is the performance from different implementation? In NOTEARS, the authors does not decrease mu but increase the coefficients of h(W dot W). In GOLEM, the authors use fixed coefficient for L1.’’\n\nWe used the authors’ implementations: As explained above and in Section C.1., we use the original GOLEM implementation provided at https://github.com/ignavierng/golem. For NOTEARS, we used the code available at https://github.com/xunzheng/notears.\n\n\n>``If put the DAG constraint in the same optimisation framework as NOTEARS, or if we put the NOTEARS in the same framework as the work, what is the performance?’’\n\nThank you for the insightful question!\n* We can indeed use the Augmented Lagrangian scheme using our logdet characterization. However, part of our contributions (namely Algorithm 1) is to argue that such a scheme or the quadratic penalty method are not necessary for obtaining more accurate structures *using the logdet formulation*. Implementation-wise, simply replacing the trace exponential by the logdet function in the NOTEARS code will not necessarily work since NOTEARS uses the scipy minimize function, which does not take into account the fact that we need to stay in the interior of $\\mathbb{W}^s$. \n\n* We can, however, replace our logdet function by the trace exponential in Algorithm 1, the performance is reported in the table below. We ran experiments on Linear Gaussian SEMs for ER2/ER4/SF2/SF4 graphs. DAGMA_EXPM refers to Algorithm 1 using the trace exponential function from NOTEARS instead of logdet. We thank the reviewer for bringing up this question as this a clear example of the benefits of the logdet formulation when directly compared to the trace exponential function under the same scheme. We will add these experiments in the supplement. 
\n|Method |graph |20 |40 |60 |80 |100 |\n|:----------:|:----------:|:----------:|:------------:|:------------:|:------------:|:-------------:|\n|DAGMA |ER2 |0.5 ± 0.61 |1.7 ± 2.06 |1.8 ± 1.15 |0.6 ± 0.9 |4 ± 3.78 |\n|DAGMA_EXPM |ER2 |2.1 ± 1.61 |10.4 ± 6.81 |16.9 ± 9.3 |20.9 ± 7.67 |33.4 ± 7.44 |\n|DAGMA |ER4 |3.8 ± 2.78 |6.5 ± 4 |9.8 ± 5.72 |14.3 ± 6.52 |12.3 ± 5.66 |\n|DAGMA_EXPM |ER4 |10.7 ± 4.5 |41.4 ± 16.47 |65.5 ± 15.5 |91 ± 27.6 |113.2 ± 27.37 |\n|DAGMA |SF2 |0.1 ± 0.23 |0.6 ± 0.77 |0.3 ± 0.48 |1.4 ± 2.47 |1.1 ± 1.09 |\n|DAGMA_EXPM |SF2 |2 ± 2.51 |2.2 ± 1.99 |4.8 ± 7.28 |1.8 ± 1.58 |7.4 ± 6.94 |\n|DAGMA |SF4 |5.9 ± 4.12 |11.2 ± 6.99 |18.6 ± 17.69 |3.5 ± 1.92 |9.9 ± 13.21 |\n|DAGMA_EXPM |SF4 |8.1 ± 4.84 |17.3 ± 9.73 |28 ± 16.24 |16.4 ± 12.46 |29.9 ± 26.69 |\n", " 3. Also, in the GOLEM paper, the authors only experimented on ER2 graphs for the high dimensional setting (Figure 8 in the GOLEM paper). We replicated their experiments in Section C.1.3. We experimented with ER2 graphs for $d \\in [200, 2000]$, and as seen in Figure 8, GOLEM improves its performance in this sparser setting w.r.t. the ER4/SF4 graphs shown in Figure 7. Nonetheless, **DAGMA still attains better SHD in both ER2 and ER4 graphs**. \n\n At the request of the reviewer, for completeness we next show experiments on sparser linear Gaussian SEMs such as ER{1,2,3} and SF{1,2,3} for a number of nodes in $\\\\{20, 40, 60, 80, 100, 200, 400, 600\\\\}$. The table below shows the average SHD with 95% confidence intervals on 10 repetitions. We note that, not surprisingly, DAGMA and GOLEM perform similarly in ER1/SF1, with DAGMA already taking the lead in ER2 graphs. We also point out that NOTEARS already performs better than GOLEM in ER3/SF3 graphs for $d>200$. \n\n If the reviewer still has concerns regarding the comparison of GOLEM against NOTEARS, it would be helpful if the reviewer could report their results explicitly and specify the setting used for NOTEARS. 
It is worth noting that in the GOLEM paper, the authors used a $\\ell_1$ penalty coefficient of 0.1 for NOTEARS which we argue that it was not a fair comparison to use very different levels of sparsity, in the table below all three methods used the same $\\ell_1$ coefficient of 0.03.\n\n|Method|Graph|20|40|60|80|100|200|400 |600|\n|:-------:|:----------:|:-------:|:---------:|:---------:|:---------:|:---------:|:----------:|:-----------:|:-----------:|\n|DAGMA|ER1|0.1±0.2|0.3±0.7|0.9±0.9|1±1.6|1.5±1.2|0.2±0.4|3.7±2.1|3.6±1.8|\n|GOLEM|ER1|0±0|0.3±0.5|0.3±0.7|0.2±0.4|0.6±0.8|1.9±2.6|1.3±1|1.9±1.4|\n|NOTEARS|ER1|0.5±0.9|0.5±0.7|1.7±1.1|1.7±1.8|2.4±1.8|8.3±6.2|12.6±2.6|14.9±7.1|\n|DAGMA|ER2|0.5±0.6|1.7±2.1|1.8±1.1|0.6±0.9|4±3.8|7.3±4|14.9±9|20.9±8.5|\n|GOLEM|ER2|1.4±1.4|1.4±1.2|2.3±1.8|5.1±3.2|7.5±5|13.2±6.1|44.5±16.4|68.6±16.2|\n|NOTEARS|ER2|2.3±1.6|7.9±5.9|11.1±8.9|17.7±10.9|27.4±12.2|49.4±18.8|108.4±29.2|163.8±33.2|\n|DAGMA|ER3|2±2.3|3.3±3.1|3.3±2.6|6.3±3.4|7±4.6|19.9±14.4|58.8±29.6|109.9±26.1|\n|GOLEM|ER3|6±4.9|6.4±8.6|8.4±3.2|18.4±7.9|33±18.1|77.4±25.9|334.4±223.1|859.3±312.2|\n|NOTEARS|ER3|4.1±2.5|14±9.3|37.2±15.9|37.2±13.9|55.5±18.4|114.8±36.9|285.6±52.6|455.1±72.8|\n|DAGMA|SF1|0±0|0±0|0±0|0±0|0.4±0.6|0.6±1.4|1.5±2|3.1±5.6|\n|GOLEM|SF1|0.1±0.2|0.1±0.2|0±0|0±0|0.2±0.4|0.5±0.5|0.3±0.3|1±1.1|\n|NOTEARS|SF1|0±0|0±0|0.1±0.2|0.1±0.2|0.1±0.2|0±0|1.5±2|4.2±8.5|\n|DAGMA|SF2|0.1±0.2|0.6±0.8|0.3±0.5|1.4±2.5|1.1±1.1|0.3±0.5|1.3±1.4|2.2±1.2|\n|GOLEM|SF2|0.4±0.9|0.7±1.1|3.2±5.4|0.3±0.5|1.3±1.4|2.4±3.6|5.4±5.6|44.3±59.3|\n|NOTEARS|SF2|0.5±0.6|1.3±1.8|2.6±3.8|1±0.7|2.3±2.9|4.3±3.7|5.9±4.6|15.5±13|\n|DAGMA|SF3|2.7±3.3|2.5±4.2|8.1±11.4|2.8±3.4|15.9±21.4|7.6±8.1|5.8±3.5|13.6±10.2|\n|GOLEM|SF3|1.3±1.2|1.7±1.7|1.5±2.5|5.4±6|7.3±6.5|9±7.1|68.6±55.4|510.4±414.2|\n|NOTEARS|SF3|2.7±3.2|8.4±7.3|7.1±8.9|5.1±2.3|12.9±15.9|10.9±6.9|19.3±10.3|68.4±61.8|\n\n", " We thank the reviewer for their comments, which we address in order.\n\n### Summary\n\nIn your summary, it is stated that our method \"achieves better performance on ER4 and SF4 graphs with nodes from 200 to 1000\". While this is correct, **we also provided experiments for small to moderate numbers of nodes** ($d \\in [20,100]$) in the supplement, for **both linear and nonlinear models**, see for instance Sections C.1.1, C.1.4, C.2.1, C.2.2. Moreover, as mentioned in Remark 4 in the supplement, we focused mainly on **ER4 and SF4 graphs as they are harder to learn than ER1, ER2, SF1, SF2**. We regret that we did not mention in the main text that we included experiments on these regimes in the supplement. We will add a sentence explaining this in the revision.\n\n### Strengths And Weaknesses:\n\nAbout the pros, please note that **an important strength of our method is that it also works for nonlinear models** (see Section C.2), in contrast to **GOLEM which is limited to linear models**.\n\nAbout the cons:\n\n1. First, note that in Algorithm 1 we use a constant decay factor for $\\mu$. **This is just for practical reasons**, it is of course possible to fix the values of $\\mu$ for each iteration, e.g., if we run Algorithm 1 for 5 iterations, we can set $\\mu$ to take values in$\\\\{1, 10^{-1}, 10^{-2},10^{-3}, 0 \\\\}$. Then, at the last iteration, by the invexity of $h$, the solution will be a DAG.\n\n Second, **we do not see property (ii) of Theorem 1 as a negative property.** The invexity of $h$ is due to property (ii), and we argue that this is a nice property to have. 
When $\\mu = 0$, we are solving an unconstrained problem where all stationary points are equally good (global min), hence, as pointed out in Remark 3, all we need to have is a good initial point that sits in a basin of attraction where the attractors are close to the ground-truth. As also stated in Remark 3, the score function is used to guide the initial points as $\\mu$ gets smaller.\n\n Third, we also note that in [1] the authors show that a feasible solution (a DAG) does not satisfy the *constraint qualifications* needed for KKT optimality, i.e., a DAG cannot be a stationary point of the Lagrangian nor the Augmented Lagrangian; thus, the authors proceed to reformulate the problem. A similar argument is provided in [2]. Along these lines, let us take a look at the Fritz-John (FJ) condition, which is a **necessary condition for optimality without the need for constraint qualifications**. The FJ condition states that there exists a non-zero vector $\\nu = [\\nu_0, \\nu_1]$ such that:\n $$\\nu_0 \\nabla Q(W) + \\nu_1 \\nabla h(W) = 0,$$\n where $Q(W)$ denotes the score on W (with or without $\\ell_1$ regularization, as this does not affect the argument). Now, when $W$ is a DAG, we have $\\nabla h(W) =0$, thus, it must hold that $\\nu_0 = 0$ and we can simply set $\\nu_1 = 1$. Our Algorithm 1 also resembles this fact, where $\\mu$ corresponds to $\\nu_0$. \n\n2. We would appreciate it if the reviewer could elaborate on the correctness of Corollary 2. Corollary 2 indeed only considers $h$ and not the score as we discuss the properties of $h$ in Section 3.1. About requiring \"infinite steps to obtain a DAG\", see point 1 above. **Corollary 1 is not meaningless whatsoever!** This is yet another nice and useful property from $h_{\\mathrm{ldet}}$ that also motivates the design of Algorithm 1. In a few words, whenever $W \\in \\mathbb{W}^s$, Corollary 1 ensures that the next iterate remains inside $\\mathbb{W}^s$ with a suitable step size. This is crucial since as discussed at L185-191, the logdet characterization is only valid in this set. Finally, there is a typo in Corollary 1, it should read ``...negative gradient…’’, we will correct this in the revision.\n\n3. **Our results do not contradict the GOLEM results.** Our experiments cover settings not originally studied by GOLEM, so naturally our results are more comprehensive. In the same settings as GOLEM (ER4/SF4 with $d \\in [20, 100]$ see Section C.1.1 and Figure 6, as well as, ER2 with $d \\in [200, 2000]$ see Section C.1.3 and Figure 8), our results are consistent with their original findings (see Figures 6.(a) and 8 in the GOLEM paper).\n\n As stated in L687 of the supplement, we use the GOLEM code available at https://github.com/ignavierng/golem, which is the original implementation from the GOLEM authors. In the same paragraph, we also mention that we use the default values of hyperparameters, namely $\\lambda_1 = 0.02, \\lambda_2=5$, which seems to be the same set of values in your implementation. \n\n As noted above in our response to your summary comments, we also provided experiments for $d \\in [20,100]$. **As shown in Figure 6, GOLEM performs better than NOTEARS for ER4/SF4 graphs for $d \\in [20,100]$**, *which does not contradict your observation*. However, **in Figure 9, observe that GOLEM performs worse than NOTEARS for ER6 graphs in the same regime of $d \\in [20,100]$**.\n\n *Our response continues below.*", " The authors proposed a new DAG constraints based on the log-determinant of the adjacency matrices. 
Compare to the original polynomial based DAG constraints, the new DAG constraints do have some good properties and it achieves better performance on ER4 and SF4 graphs with node from 200 to 1000. Pros:\n\n1. The proof and derivation of the new DAG constraint is very clear.\n2. The illustration of the behaviour of h_det is very good.\n\n\nCons:\n\n1. The properties (ii) in Theorem 1 is actually as bad as the Exponential or Polynomial based DAG constraint. See [1,2] for more details. With this property you will not be able to obtain an DAG unless you let mu in Algorithm 1 converge to zero. \n\n2. Corollary 2 may not be correct and Corollary 1 maybe meaningless. For Corollary 2, it only considers h_det, without consider the score function. By [1,2] without attain the limit point it would not be possible to obtain a DAG, and this will require infinite many steps of optimisation.\n\n3. In the experiment part, the hyper parameter for GOLEM may not be quite right. I have tried GOLEM on SF{1,2,3,4}, and ER{1,2,3}, for numbers of nodes in {10, 20, 50, 100}. The performance of GOLEM performance is consistently better than NOTEARS in a large margin, which is quite different from the result reported in the paper. Also for all cases it would be good if the authors can provide results on graphs with nodes in {10, 20, 50, 100}. This is because with the same number of expected edges, the more the nodes, the sparser the graph is. Also it would be good if the authors can provide results on ER2, ER3 and SF2, SF3, SF4 with all number of nodes from 10 to 1000. This can provide a fair comparison for different methods in different situations. I have also attached the GOLEM code I have used, you only need to replace the HTorch part with the exponential DAG constraints from NOTEARS.\n\n\n\n[1] Wei, Dennis, Tian Gao, and Yue Yu. \"DAGs with No Fears: A closer look at continuous optimization for learning Bayesian networks.\" Advances in Neural Information Processing Systems 33 (2020): 3895-3906.\n[2] Ng, Ignavier, et al. \"On the convergence of continuous constrained optimization for structure learning.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2022.\n\n\nThe GOLEM code I have used.\n\n\n\"\"\"\n\n import numpy as np\n import torch\n import torch.nn as nn\n from .dag import HTorch\n h_torch = HTorch.apply\n class GolemEVModel(nn.Module):\n \"\"\"\n Set up the objective function of GOLEM.\n Hyperparameters\n (1) GOLEM-NV: equal_variances=False, lambda_1=2e-3, lambda_2=5.0.\n (2) GOLEM-EV: equal_variances=True, lambda_1=2e-2, lambda_2=5.0.\n \"\"\"\n def __init__(self, d, lambda_1, lambda_2, eps=1e-6, h_type='exponential'):\n \"\"\"\n Initialize self.\n Parameters\n d: int\n Number of nodes.\n lambda_1: float\n Coefficient of L1 penalty.\n lambda_2: float\n Coefficient of DAG penalty.\n equal_variances: bool\n Whether to assume equal noise variances\n for likelibood objective. 
Default: True.\n \"\"\"\n super().__init__()\n\n self.d = d\n self.lambda_1 = lambda_1\n self.lambda_2 = lambda_2\n self.eps = eps\n self.h_type = h_type\n self._B = nn.Parameter(torch.zeros(self.d, self.d))\n\n def forward(self, cov_emp):\n # Placeholders and variables\n self.cov_emp = cov_emp\n self.B = self._preprocess(self._B)\n\n # Likelihood, penalty terms and score\n self.likelihood = self._compute_likelihood()\n self.L1_penalty = self._compute_L1_penalty()\n self.h = self._compute_h()\n self.score = (self.likelihood + self.lambda_1 * self.L1_penalty +\n self.lambda_2 * self.h)\n\n def _preprocess(self, B):\n \"\"\"\n Set the diagonals of B to zero.\n Parameters\n B: tf.Tensor\n [d, d] weighted matrix.\n\n Return\n torch.Tensor: [d, d] weighted matrix.\n \"\"\"\n return (1. - torch.eye(self.d)) * B\n \n def _compute_likelihood(self):\n \"\"\"\n Compute (negative log) likelihood in the linear Gaussian case.\n Return\n torch.Tensor: Likelihood term (scalar-valued).\n \"\"\"\n I = torch.eye(self.d)\n return 0.5 * self.d * torch.log(\n torch.trace((I - self.B).T @ self.cov_emp\n @ (I - self.B))) - torch.linalg.slogdet(I - self.B)[1]\n\n def _compute_L1_penalty(self):\n \"\"\"\n Compute L1 penalty.\n Return\n tf.Tensor: L1 penalty term (scalar-valued).\n \"\"\"\n return torch.norm(self.B, p=1)\n\n def _compute_h(self):\n \"\"\"\n Compute DAG penalty.\n\n Return\n torch.Tensor: DAG penalty term (scalar-valued).\n \"\"\"\n return h_torch(self.B * self.B, self.h_type, self.eps)\n def golem_ev(X,\n lambda_1=2e-2,\n lambda_2=5.0,\n learning_rate=1e-3,\n num_iter=2e+4,\n graph_thres=0.3,\n eps=1e-6,\n h_type='exponential'):\n n, d = X.shape\n cov_emp = np.cov(X.T, bias=True)\n cov_emp = torch.Tensor(cov_emp)\n model = GolemEVModel(d, lambda_1, lambda_2, eps, h_type)\n train_op = torch.optim.Adam(model.parameters(), lr=learning_rate)\n\n for i in range(int(num_iter)):\n model(cov_emp)\n score, likelihood, h, B_est = model.score, model.likelihood, model.h, model.B\n loss = score\n train_op.zero_grad()\n loss.backward()\n train_op.step()\n \n return B_est.detach().numpy()\n\"\"\" Is the performance from different implementation? In NOTEARS, the authors does not decrease mu but increase the coefficients of h(W dot W). In GOLEM, the authors use fixed coefficient for L1. If put the DAG constraint in the same optimisation framework as NOTEARS, or if we put the NOTEARS in the same framework as the work, what is the performance? It is discussed.", " The paper introduces a new penalty for enforcing acyclicity of a weighted adjacency matrix for learning DAGs with differentiable optimization. The penalty is zero if and only if the adjacency matrix is acyclic. In contrast to previous proposals based on matrix polynomials, the introduced penalty is based on the log-determinant and is connected to the theory of M-matrices. The paper studies the properties of the new penalty and compares the penalty with previous proposals. From theoretical analysis and empirical observations, it is argued that the new penalty is more suited for learning DAGs / SEMs. Through numerical experiments, it is shown that the new penalty leads to speedups and higher precisions compared to previous proposals. ### Strengths\n1. The paper makes progress on recent developments for differentiable learning of DAGs. \n2. The proposed penalty seems novel and has interesting connections to M-matrices. \n3. The paper compares the new penalty with previous ones in the literature from various aspects. \n4. The paper is very well-written. 
\n\n### Weaknesses\n1. Several aspects in the comparison are based on heuristic arguments or empirical observations, which weakens the claim that the new penalty should be preferred over the earlier ones. A theoretical analysis is desirable to show that the advantage is real or \"significant\" in typical scenarios or when the problem size grows large. \n\n 1. On discounting long cycles.\n\nArgument (i) claims that the new penalty does not discount a long cycle in the same way some previous proposals do. From Figure 2, it looks persuasive that this could be a big advantage. I would like to see some asymptotic analysis that makes this comparison precise. \n\n2. On vanishing gradients. \n\nLemma 5 seems insufficient to show that the new penalty does not suffer from vanishing gradients (when the other penalties do). Is there a more compelling argument?\n\n3. On computational speedups. \n\nDespite the same computational complexity, argument (iii) curiously claims that the new penalty \"can be computed in about an order of magnitude faster\" than the other two in practice. Is there an explanation?\n\n4. Minor issues\n\n(1) line 42: missing power in the expression?\n\n(2) Lemma 1: I would rather state (i) as \"DAGs are in the interior of $\\mathbb{W}^s$\". I do not foresee any potential negative societal impact of their work", " * This paper considers the problem of learning a structure (DAG) over a set of observed variables with gradient-based methods. The approach designs a new continuous optimization problem capturing the acyclic property of DAGs, which defines the domain of the log-det function to be M-matrices. \n* The authors provide a detailed technical description of the approach as well as extensive experiments. \n* The authors describe technically that the proposed approach has more well behaved gradients (perhaps indicative of the improved performance empirically) as well as several other theoretical properties of the approach. \n* Empirically, the authors provide extensive experiments which demonstrate the efficiency and effectiveness of the proposed approach. Overall I believe that this is a strong paper with an interesting approach for what is becoming a more well studied area (continuous optimization approaches for structure learning). \n\nIn particular strengths include: \n\n* **Technical Contribution** - I believe the authors have clearly explained the problem they are solving, why their methodical approach solves the problem, and what are the challenges and insights that are required in the contribution. I find the contribution to be interesting and meaningful. \n* **Empirical benefits** - The authors have demonstrated the performance characteristics of their approach with extensive experiments, which would allow a practitioner to consider such results and then determine which methods are most suitable for the application (based on metrics such as time, SHD, TPR, etc). And most often, DAGMA would be a top selection, especially in terms of efficiency compared to methods like NOTEARS. \n\nI have a few questions which indicate places for improvement regarding setup of experiments and measurements, please refer to questions section for detailed description of these weaknesses. \n\n * Rather than sampling a DAG structure by ordering the nodes in an undirected graph, why not sample DAGs from something like Price's model? I realize that your experiment setup is following earlier work however. I am curious what if any properties we should be aware of for your generative model of dags. 
How do in-degree / out-degree distributions look like? How do they differ from degree distribution from undirected graph?\n* When / why would quality $Q(\\cdot)$ make sense / not-make sense for an evaluation measure of the DAGs? Could it be evaluated on held-out data of the same underlying distribution? \n* When / would it make sense to evaluate not just wall clock time, but number of samples $n$? \n* It seems like hyperparameter tuning could play a large role in the performance of these approaches. Do I understand correctly that previous work has tuned hyperparameters for these same exact datasets in the cases of re-use? Is there clarity on how all hyperparemters were selected? I wonder what you think about evaluating methods using bayesian hyperparameter optimization to find, for each dataset (e.g., setting of generative model hyperparameters) best setting of hyperparameters. \n* There is a note about the proposed approach being better at detecting large cycles. For give me if I have missed something, but was there analysis in the experiments section, which noted how much of the improved performance (e.g. SHD) was due to better recovery of such large cycles? \n* If these methods were used as part of some downstream approach, do you think the empirical analysis here is predictive of performance? Why/why not / what other measurements would be interesting to consider? Yes, the authors have done a very nice job with this.", " This paper proposes a new formulation promoting acyclicity. Going beyond the widely used approaches based on the trace of polynomials of the hadamard square of the adjacency matrix, this paper rather takes a different angle by considering a reduced set of matrices, M-matrices. This M-matrix based acyclicity regularizer is thoroughly investigated so that its empirically superior behavior is also understood theoretically. Strengths\n* A novel approach based on M-matrix is proposed for an acyclicity regularizer free from optimization hassle in exiting approaches.\n* The behavior of the log-det acyclicity regularizer -- the gradient and other regularization properties -- is analyzed to a quite detailed extent with supporting simulation results.\n* DAG discovery with the proposed method outperforms others even with the advantage in the run-time.\n\nWeaknesses\n* For some theoretical results, it would be better if some technical detail is provided at a level to help to understand. \n * In line 177, $(4) \\Rightarrow (3)$ required a bit of thought to understand so some brief comments would help instantaneous understanding.\n * In Lemma 3, the statement that $\\otimes$ is the Kronecker product may be added so that reader can look up its definition.\n* It seems that sometimes theorems assumes slightly different definition for $\\mathbb{W}^2$ with/without nonnegative entry condition (details below) * In Corollary 1, what does it mean by 'point towards the interior'. Is this term 'interior' is the topological term or a general term for somewhere inside? It seems that $\\mathbb{W}^s$ is an open subset of $R^{d \\times d}$. Does this mean that its direction is always directly toward a stationary point?\n\n* It seems that in eq(5) the condition that $W \\ge 0$, that is, all entries are nonnegative is missing. It seems that some Theorem may hold without this nonnegative entry condition but others do not. Theorems/Lemma critically relying on Proposition 1 should require the nonnegative entry condition. 
However, for example, Lemma 2 seems to consider the case without the nonnegative entry condition; otherwise, $sign(W_{i,j})$ would always be nonnegative. It would be better if the authors clarified this.\n\n * Apart from the general limitations of differentiable approaches to DAG discovery, there is no other specific limitation. To discuss such general limitations, some further references could be added; Differentiable Causal Discovery Under Unmeasured Confounding (Rohit Bhattacharya, Tushar Nagarajan, Daniel Malinsky, Ilya Shpitser; AISTATS 2021) contains many interesting discussions and partial solutions for such limitations.\n * A differentiable approach for ADMGs in the semi-Markovian case.\n * How to find the MEC rather than a single DAG." ]
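For readers following the acyclicity discussion above (the trace-exponential penalty used by NOTEARS/GOLEM versus the log-det penalty of DAGMA, and the DAGMA vs. DAGMA_EXPM comparison), the sketch below implements both functions from their published definitions. It is an illustrative reimplementation, not code from either paper; the function names and the default choice `s = 1.0` are ours.

```python
import math
import torch

def h_expm(W: torch.Tensor) -> torch.Tensor:
    """Trace-exponential acyclicity penalty (NOTEARS): h(W) = tr(exp(W * W)) - d."""
    d = W.shape[0]
    return torch.trace(torch.matrix_exp(W * W)) - d

def h_logdet(W: torch.Tensor, s: float = 1.0) -> torch.Tensor:
    """Log-det acyclicity penalty (DAGMA): h(W) = -log det(s*I - W * W) + d * log(s).

    Only meaningful while s*I - W * W remains an M-matrix, i.e. W stays inside the set W^s.
    """
    d = W.shape[0]
    eye = torch.eye(d, dtype=W.dtype)
    return -torch.linalg.slogdet(s * eye - W * W)[1] + d * math.log(s)

# Both penalties vanish on DAGs: a strictly upper-triangular W encodes an acyclic graph.
W_dag = torch.triu(torch.randn(5, 5), diagonal=1)
print(h_expm(W_dag).item(), h_logdet(W_dag).item())  # both approximately 0
```

A gradient-based learner would add either penalty (weighted by mu, or through an augmented Lagrangian) to the score term; the exchange above concerns how the two behave under such schemes.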
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 3 ]
[ "ZldpXZxiQQ", "Tu8BNFJpoC", "x0r1ivAsbhg", "FVlGg_piuvI", "1dO_JJ2MK9m", "DVPkMCU0AXX", "5nd8bVYTszJ", "CweTKY83pBs", "sGdluJI7hmF", "HQ21FsNceOq", "gJL0fGUUkJG", "nips_2022_8rZYMpFUgK", "lggm4rIfvr", "2eEjhG7YFQ", "-1sAFKRrEvX", "PbIfP2QdwUS", "9z7JbEr4EDA", "gJL0fGUUkJG", "T57WmLBTF6B", "nips_2022_8rZYMpFUgK", "nips_2022_8rZYMpFUgK", "nips_2022_8rZYMpFUgK", "nips_2022_8rZYMpFUgK" ]
nips_2022_xubxAVbOsw
The Minority Matters: A Diversity-Promoting Collaborative Metric Learning Algorithm
Collaborative Metric Learning (CML) has recently emerged as a popular method in recommendation systems (RS), closing the gap between metric learning and Collaborative Filtering. Following the convention of RS, existing methods exploit a unique representation for each user in their model design. This paper focuses on a challenging scenario where a user has multiple categories of interests. Under this setting, we argue that a single user representation might induce preference bias, especially when the item category distribution is imbalanced. To address this issue, we propose a novel method called Diversity-Promoting Collaborative Metric Learning (DPCML), which aims to capture the commonly ignored minority interests of the user. The key idea behind DPCML is to maintain a set of multiple representations for each user in the system. Based on this embedding paradigm, user preference toward an item is aggregated from the different embeddings by taking the minimum item-user distance over the user embedding set. Furthermore, we observe that the diversity of the embeddings for the same user also plays an essential role in the model. To this end, we propose a diversity control regularization term to better accommodate the multi-vector representation strategy. Theoretically, we show that DPCML can generalize well to unseen test data by tackling the challenge introduced by the minimum operation. Experiments over a range of benchmark datasets speak to the efficacy of DPCML.
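To make the scoring rule described in this abstract concrete, here is a minimal sketch of multi-embedding preference scoring: each user holds C embedding vectors, and an item's score is the minimum user-item Euclidean distance over that set. The shapes, names, and random data are illustrative only and are not taken from the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, C, d = 4, 10, 3, 8          # C representations per user, dimension d

user_emb = rng.normal(size=(n_users, C, d))    # multiple embeddings per user
item_emb = rng.normal(size=(n_items, d))

def preference_distance(u: int, v: int) -> float:
    """DPCML-style score: smallest distance between any of user u's embeddings and item v."""
    dists = np.linalg.norm(user_emb[u] - item_emb[v], axis=-1)   # shape (C,)
    return float(dists.min())

# Rank items for user 0 by ascending min-distance (smaller distance = stronger preference).
ranking = sorted(range(n_items), key=lambda v: preference_distance(0, v))
print(ranking[:5])
```

The min-aggregation is what lets a minority interest win for items close to any one of the user's embeddings, which is why the paper then adds a diversity control regularizer to keep the embeddings from collapsing onto each other.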
Accept
After the author response, all three reviewers are in favor of accepting the paper. The proposed collaborative metric learning approach was appreciated in terms of novelty by two of the reviewers, whereas one considered the contribution limited. The theoretical analysis was appreciated and the experiments were considered convincing. Overall, this was considered a strong paper. Reviewers seem to have some remaining concerns: - improving clarity by reorganization and fixing typos and similar issues - some additional explanation, including clarifying the role of the diversity-promoting regularizer and discussion of variant regularizers - fairness of hyperparameter tuning versus baselines - additional discussion of results in terms of promoting diversity - discussion of limitations. In their responses, the authors have clarified some of the issues, provided several additional experimental results (including results on parameter sensitivity and a potential application to the M2F method), and agreed to carry out changes improving clarity. The thorough responses were appreciated. The additional information in the author responses should be taken into account in the final paper version to the extent possible.
train
[ "-uwg0GYytfW", "3jFpg4zdTJy", "AVfgQi3jAVU", "rzER2BXMUZ", "MVp5z6KOSjJ", "Mj13W3DY_a", "_xEICdQCyae", "ZNYsJMhu_-O", "Lwj9B9_UL1J", "8PuRHwDulpH", "lOd8gVbasjP" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their thoughtful reply and willingness to meaningfully engage with my questions and concerns. You have gone above and beyond in your response. I believe the additional results provided above significantly strengthen the empirical experiments and further validate the theoretical results presented in the paper. I will be raising my score accordingly and look forward to reading the final version of the paper.", " We sincerely express our gratitude for your constructive comments and make the following responses to your primary concerns: \n\n> **(Q1)**: Clarification of contribution/novelty.\n\n**(A1)**: Thank you so much for your nice question! We will make the following clarification of contributions to our work: \n1. Although the concept of recommendation diversity has been studied in the RS community, we argue that there are few attentions to the diverse interests of users in the CML framework. Motivated by this, we start the first exploration and propose a simple but effective method to handle the diverse interests of users. The empirical results support the advantage of the CML paradigm.\n2. In addition, we observe that simply leveraging multiple vectors for users is not necessarily effective. Specifically, there is a chance to find trivial solutions where the multiple vectors are almost identical. To avoid these trivial solutions, we propose a diversity-controlling regularization scheme to accommodate users' preferences better.\n3. Most importantly, to our knowledge, we are the first to prove the effectiveness of multiple user representations **in theory**. Specifically, we show that DPCML with multiple embeddings could enjoy a smaller generalization error than single-vector CML. Specifically, we have the following results:\n\n> [**Theorem 1**]. [Generalization Upper Bound of DPCML] Let $\\mathbb{E}[\\hat{\\mathcal{L}}\\_{\\mathcal{D}}(\\boldsymbol{g})]$ be the population risk of $\\hat{\\mathcal{L}}_{\\mathcal{D}}(\\boldsymbol{g})$. Then, $\\forall \\boldsymbol{g} \\in \\mathcal{H}_R$, with high probability, the following inequation holds: \n$$\n\\begin{aligned}\n\\left| \\hat{\\mathcal{L}}\\_{\\mathcal{D}}(\\boldsymbol{g}) - \\mathbb{E} [\\hat{\\mathcal{L}}\\_{\\mathcal{D}}(\\boldsymbol{g})]\\right| \\le \\sqrt{\\frac{2d\\log \\left(3r \\tilde{N}\\right)}{\\tilde{N}}},\n\\end{aligned}\n$$\nwhere we have \n\n$$\n\\tilde{N} = \\left(4r^2\\sqrt{\\left(\\frac{(4 + \\eta)^2}{|\\mathcal{U}|} + \\frac{2}{|\\mathcal{U}|^2} \\sum_{u\\_i \\\\ \\in \\\\ \\mathcal{U}} \\left(\\frac{1}{n\\_i^+} + \\frac{1}{n\\_i^-}\\right)\\right)}\\right)^{-2}.\n$$\n> Intriguingly, we see that our derived bound does not depend on $C$. This is consistent with the over-parameterization phenomenon. On top of Theorem 1, we have the following corollary.\n\n> [**Corollary 1.**] DPCML could enjoy a smaller generalization error bound than CML.\n\n> **(Q2)**: Intuition behind the proposed diversity-promoting regularization term $\\psi_{\\boldsymbol{g}}(u_i)$.\n\n**(A2)**: Thanks for your interesting question! We are sorry that our unclear expression for this makes you confused. 
\nAs you can see, $\\psi_{\\boldsymbol{g}}(u_i)$ includes two terms below, i.e., \n\n$$\n\\psi\\_{\\boldsymbol{g}}(u\\_i) = \\max(0, \\delta\\_1 - \\delta\\_{\\boldsymbol{g}, u_i}) + \\max(0, \\delta\\_{\\boldsymbol{g}, u\\_i} - \\delta\\_2),\n$$\nwhere \n\n$$\n\\delta\\_{\\boldsymbol{g}, u\\_i} = \\frac{1}{2C(C-1)} \\sum_{\\substack{c_1, c_2 \\\\ \\in \\\\ C \\\\ c_1 \\neq c_2}} \\\\|\\boldsymbol{g}_{u_i}^{c_1}-\\boldsymbol{g}\\_{u_i}^{c\\_2}\\\\|^2,\n$$\nand $C$ is the number of user embeddings.\n\nWe argue that one should minimize $\\psi_{\\boldsymbol{g}}(u_i)$ to get a good performance. Our point here is that extremely large/small values of $\\delta_{\\boldsymbol{g}, u_i}$ might be harmful for the generalization error. It is easy to see that if it is extremely small, then the embeddings for a given user are very close to each other, increasing the model complexity with few performance gains. This obviously will induce overfitting. On the other hand, a too large diversity might also induce overfitting. It might be a bit confusing at first glance. But, imagine that when some noise observations or extremely rare interests far away from the normal patterns exist in the data, having a large diversity will make it easier to overfit such data. Moreover, it is also a natural assumption that a user's interests should not be too different. Hence, we argue that the diversity measure should remain at a moderate magnitude. We have also validated this assumption in Fig.1 in our main paper. We hope the new explanation will help you understand our idea. Moreover, in reply to your question Q3-Q5, we present a more detailed performance comparison to validate this assumption.", " >**(Q3)**: Were other diversity-promoting regularizers explored?\n\n**(A3)**: Thank you for your nice question! We are sorry that we do not explore other alternative regularizers directly in the original paper. But here, we attempt three variants of our proposed Diversity Control Regularization Scheme (DCRS): \n1. $\\textbf{w/o DCRS}$: This is a variant of our method where no regularization is adopted at all. \n2. $\\textbf{DCRS}-\\delta_1$: This is a variant of our method where the punishment on a **large** diversity is **removed**. In other words, we will use the following regularization term:\n$$\n\\psi_{\\boldsymbol{g}}^1(u\\_i) = \\max(0, \\delta_1 - \\delta_{\\boldsymbol{g}, u\\_i}).\n$$ \n3. $\\textbf{DCRS}-\\delta_2$: This is a variant of our method where the punishment on a **small** diversity is **removed**. In other words, we will use the following regularization term:\n$$\n\\psi_{\\boldsymbol{g}}^2(u\\_i) = \\max(0, \\delta_{\\boldsymbol{g}, u\\_i} - \\delta_2).\n$$\n\nNote that w.r.t. cases 2 and 3, $\\psi_{\\boldsymbol{g}}(u_i)$ in Eq.(7) is replaced with $\\psi_{\\boldsymbol{g}}^1(u_i)$ and $\\psi_{\\boldsymbol{g}}^2(u_i)$, respectively. \nThe ablation studies are conducted on the Steam-200k dataset to show the effects of these variants. 
The empirical results are listed below:\n> Ablation results of DCRS for DPCML1 are listed as follows:\n| **Method** | **P@3** | **R@3** | **NDCG@3** | **P@5** | **R@5** | **NDCG@5** | **MAP** | **MRR** |\n|:------------------:|:-------:|:-------:|:----------:|:-------:|:-------:|:----------:|:-------:|:-------:|\n| **w/o DCRS** | 23.86 | 13.06 | 24.90 | 23.57 | 11.56 | 24.77 | 20.37 | 44.38 |\n| **DCRS-$\\delta\\_1$** | 24.28 | 14.38 | 25.61 | 22.48 | 11.35 | 24.13 | 21.02 | 45.76 |\n| **DCRS-$\\delta\\_2$** | 24.56 | 14.36 | 25.41 | 23.82 | 11.97 | 24.74 | 21.67 | 45.14 |\n| **DCRS** | 25.39 | 14.84 | 26.56 | 23.88 | 12.11 | 25.25 | 22.26 | 46.79 |\n\n> Ablation results of DCRS for DPCML2 are listed as follows:\n| **Method** | **P@3** | **R@3** | **NDCG@3** | **P@5** | **R@5** | **NDCG@5** | **MAP** | **MRR** |\n|:-----------------:|:-------:|:-------:|:----------:|:-------:|:-------:|:----------:|:-------:|:-------:|\n| **w/o DCRS** | 27.96 | 15.68 | 29.42 | 27.85 | 13.94 | 29.56 | 22.50 | 50.03 |\n| **DCRS-$\\delta\\_1$** | 28.28 | 16.05 | 29.60 | 27.25 | 13.75 | 29.17 | 22.63 | 50.00 |\n| **DCRS-$\\delta\\_2$** | 29.26 | 16.83 | 30.61 | 28.47 | 14.28 | 30.16 | 23.86 | 51.14 |\n| **DCRS** | 29.88 | 17.13 | 31.22 | 28.70 | 14.51 | 30.56 | 24.10 | 51.95 \n\nFrom the above results, we can see that: in most cases, only employing one of the two terms of DCRS could still improve the recommendation performance. However, none of them could outperform our proposed method. This strengthens the effectiveness of our proposed regularization scheme.", " > **(Q4)**: Sensitivity analysis of $\\eta$.\n\n**(A4)**: Thanks for your helpful question! We investigate the sensitivity of $\\eta \\in \\\\{0,1, 3, 5, 10, 20, 30\\\\}$ for recommendation results on the Steam-200k dataset. The experimental results are listed below for DPCML1 and DPCML2, respectively. 
We can conclude that a proper $\\eta$ (roughly $10$) could significantly improve the performance, suggesting the essential role of the proposed diversity control regularization scheme.\n\n> The Sensitivity results of $\\eta$ for DPCML1 are listed as follows:\n| **$\\eta$** | **P@3** | **R@3** | **NDCG@3** | **P@5** | **R@5** | **NDCG@5** | **MAP** | **MRR** |\n|:----------:|:-------:|:-------:|:----------:|:-------:|:-------:|:----------:|:-------:|:-------:|\n| **0** | 23.86 | 13.06 | 24.90 | 23.57 | 11.56 | 24.77 | 20.37 | 44.38 |\n| **1** | 25.04 | 14.65 | 26.01 | 24.60 | 12.55 | 25.81 | 21.65 | 45.55 |\n| **3** | 24.67 | 14.43 | 25.50 | 23.88 | 12.25 | 24.96 | 21.56 | 44.73 |\n| **5** | 25.24 | 14.91 | 26.65 | 23.80 | 12.17 | 25.34 | 22.17 | 47.23 |\n| **10** | 25.39 | 14.84 | 26.56 | 23.88 | 12.11 | 25.25 | 22.26 | 46.79 |\n| **20** | 24.60 | 14.34 | 25.79 | 24.03 | 12.05 | 25.17 | 21.87 | 46.20 |\n| **30** | 25.23 | 14.69 | 26.19 | 24.25 | 12.08 | 25.58 | 21.94 | 46.00 |\n\n> The Sensitivity results of $\\eta$ for DPCML2 are listed as follows:\n| **$\\eta$** | **P@3** | **R@3** | **NDCG@3** | **P@5** | **R@5** | **NDCG@5** | **MAP** | **MRR** |\n|:----------:|:-------:|:-------:|:----------:|:-------:|:-------:|:----------:|:-------:|:-------:|\n| **0** | 27.96 | 15.68 | 29.42 | 27.85 | 13.94 | 29.56 | 22.50 | 50.03 |\n| **1** | 28.55 | 16.35 | 29.92 | 27.82 | 13.94 | 29.65 | 22.90 | 50.57 |\n| **3** | 28.68 | 16.32 | 29.96 | 27.71 | 13.90 | 29.59 | 23.13 | 50.19 |\n| **5** | 29.34 | 16.82 | 30.45 | 27.98 | 13.95 | 29.75 | 23.42 | 50.62 |\n| **10** | 29.88 | 17.13 | 31.22 | 28.70 | 14.51 | 30.56 | 24.10 | 51.95 |\n| **20** | 29.81 | 17.12 | 31.08 | 29.11 | 14.65 | 30.77 | 24.35 | 51.90 |\n| **30** | 29.43 | 16.99 | 30.67 | 28.96 | 14.53 | 30.56 | 24.50 | 51.36 |\n\n> **(Q5)**: The effectiveness of regularizer for MF-based systems.\n\n**(A5)**: Thank you for your interesting question! To see this, we attempt to apply the proposed diversity control regularization scheme (DCRS) for M2F [2] method (Literature [3] provided by you will be cited/discussed in our paper in the final version). In addition, we further explore the effectiveness of DCRS for the general framework of joint accessibility (GFJA, Eq.(10) in the main paper). Here we also conduct a grid search to choose the best performance of M2F with DCRS on the Steam-200k and MovieLens-1m datasets, where the parameter space stays the same as DPCML. The experimental results are summarized in the following tables:\n|**Steam-200k** | | | | | | | | |\n|:--------------:|:-------:|:-------:|:--------:|:-------:|:-------:|:--------:|:-------:|:-------:|\n| **Method** | P@3 | R@3 | NDCG@3 | P@5 | R@5 | NDCG@5 | MAP | MRR |\n| **M2F** | 11.33 | 5.69 | 11.95 | 11.44 | 5.73 | 12.98 | 6.43 | 25.05 |\n| **M2F+DCRS** | 10.92 | 5.58 | 11.49 | 10.89 | 5.48 | 12.37 | 6.25 | 24.26 |\n| **GFJA** | 21.53 | 12.60 | 22.52 | 20.37 | 10.16 | 21.49 | 19.32 | 40.69 |\n| **GFJA+DCRS** | 21.63 | 12.40 | 22.72 | 20.38 | 9.98 | 21.74 | 19.53 | 40.92 |\n| **MovieLens-1m** | | | | | | | | |\n| **M2F** | 8.61 | 1.84 | 9.36 | 7.60 | 2.30 | 8.67 | 2.95 | 20.40 |\n| **M2F+DCRS** | 7.59 | 1.49 | 8.16 | 7.10 | 2.02 | 7.92 | 2.53 | 18.51 |\n| **GFJA** | 15.79 | 3.19 | 16.11 | 16.02 | 4.77 | 16.66 | 11.04 | 32.54 |\n| **GFJA+DCRS** | 16.71 | 3.54 | 16.94 | 17.24 | 5.27 | 17.71 | 11.75 | 33.87 |\n\nFrom the above results, we can draw the following observations: 1) The proposed DCRS does not work well for MF-based models. 
A possible reason here is that the metric space of MF-based and CML-based methods are intrinsically different. MF adopts the inner-product space while CML adopts the Euclidean space. In this paper, we merely consider the DCRS for Euclidean space. The corresponding strategy for the inner-product space is left as future work. 2) In most metrics, GFJA+DCRS could outperform GFJA significantly, which supports the advantages of our proposed DCRS. 3) Compared with M2F, the performance gain of GFJA is sharp on both datasets. This suggests the superiority of our proposed method against the current multi-vector-based competitors.", " > **(Q6)** Does the proposed approach result in more diverse recommendations?\n\n**(A6)**: Thanks for your constructive suggestion! Following your valuable comments, to show the improvement of DPCML in promoting diversity, we evaluate the performance of DPCML against CML-based competitors with a new metric called max-sum diversification (MaxDiv) from [4]:\n\n$$\n\\textit{MaxDiv}@N = \\frac{1}{|\\mathcal{U}|}\\sum\\_{u \\\\ \\\\in \\\\ \\mathcal{U}} \\\\ \\sum\\_{v\\_i, v\\_j \\\\ \\\\in \\\\ \\mathcal{I}^N\\_{u},\\\\ v\\_i \\neq v\\_j} s(v\\_i, v\\_j),\n$$\nwhere $s(v\\_i, v\\_j)=\\\\|\\boldsymbol{g}\\_{v\\_i} - \\boldsymbol{g}\\_{v\\_j}\\\\|^2$ is the square of euclidean distance between item $v\\_i$ and $v\\_j$, and $\\mathcal{I}^N\\_{u}$ is the top-$N$ recommendation items for user $u$.\n\nGenerally speaking, MaxDiv@$N$ measures the recommendation diversification by considering item-side similarity, where a high value implies that the recommendation results are relatively diverse. Then, we compare DPCML with the following competitors for a fair evaluation: a) UniS. b) HarS. c) DPCML1 without (w/o) DCRS and d) DPCML2 without (w/o) DCRS. The experiments are conducted on the Steam-200k and MovieLens-1m datasets with $N \\in \\\\{3, 5, 10, 20\\\\}$.\nThe diversity results are shown as follows:\n| **Steam-200k** | | | | |\n|:-------------------:|:--------:|:--------:|:---------:|:---------:|\n| **Method** | MaxDiv@3 | MaxDiv@5 | MaxDiv@10 | MaxDiv@20 |\n| **UniS** | 1.354 | 4.750 | 23.520 | 117.927 |\n| **HarS** | 1.752 | 6.809 | 40.378 | 236.794 |\n| **DPCML1 w/o DCRS** | 1.643 | 5.857 | 30.425 | 155.193 |\n| **DPCML1** | 1.822 | 6.713 | 34.727 | 179.065 |\n| **DPCML2 w/o DCRS** | 2.958 | 11.398 | 65.398 | 365.458 |\n| **DPCML2** | 2.977 | 11.472 | 65.952 | 369.876 |\n| **MovieLens-1m** | | | | |\n| **UniS** | 1.739 | 6.142 | 30.127 | 140.095 |\n| **HarS** | 2.443 | 8.826 | 46.390 | 244.078 |\n| **DPCML1 w/o DCRS** | 1.623 | 5.857 | 29.500 | 140.057 |\n| **DPCML1** | 1.744 | 6.195 | 30.755 | 145.615 |\n| **DPCML2 w/o DCRS** | 2.827 | 10.423 | 55.612 | 292.089 |\n| **DPCML2** | 3.144 | 11.498 | 60.696 | 313.086 |\n\nWe observe that a) for methods within the same negative sampling strategy (i.e., UniS and DPCML1, HarS and DPCML2), our proposed DPCML could achieve relatively higher max-sum values. This suggests the improvement of DPCML in terms of promoting recommendation diversity. b) In most cases (except for DPCML1 w/o DCRS on the MovieLens-1m dataset), DPCML outperforms other competitors even without regularization. c) Most importantly, equipped with the regularization term DCRS, DPCML could achieve better diversification results against w/o DCRS. This once again shows the rationality/importance of DCRS. \n\n> **(Q7)**: About additional hyperparameters. \n\n**(A7)**: Thank you very much for your valuable comment! 
Since DPCML adopts multiple user representations, it will inevitably introduce a few additional parameters. Moreover, we notice that, ever since the paradigm of CML appeared in [1], the subsequent algorithms that improve CML are also included some extra hyperparameters in their work, such as the number of memory blocks (LRML[5], HLR[6]) and regularization coefficients $\\lambda_{nbr}$, $\\lambda_{dist}$ (TransCF[7]). Besides, introducing hyperparameters does not always work. Only a proper design of the method can lead to such improvements. To check the effectiveness of such hyperparameters, we have also conducted a series of experiments to answer your questions (Q3)-(Q6). Therefore, we believe this may not be a big problem because the number of parameters in DPCML is not particularly large. In addition, according to our experiments, we find that fixing the hyperparameter with $C=5$ and $\\eta=10$ has obtained a good performance improvement in most datasets. This also shows that our method is not so sensitive to hyperparameter choices.", " > **(Q8)**: Computational resources of baselines. \n\n**(A8)**: Thanks for your nice question! Of course, we do. We are sorry for the unclear expressions in Sec.C.4 in the supplementary materials. To ensure a reasonable comparison for all baselines, we also conduct a grid search to determine their best performance. Specifically, in terms of the same parameters in CML-based competitors (such as batch size, learning rate, margin $\\lambda$, and dimension $d$), we search them within the same space as DPCML. Then, we follow the parameter space provided in their original papers for the remaining different parameters. In addition, for all MF-based approaches, we also follow the same parameter space in the paper therein. For example, we search the predictive factors in MF within $[8, 16, 32, 64]$, etc. We refer the reviewer to referred studies for more experimental details (All literature has already been listed in Sec.C.3). \n\n> **(Q9)**: Some minor concerns. \n\n**(A9)**: Thank you very much for your helpful suggestions, and we are very sorry that some minor issues make some parts of our paper difficult to read. In the final version, we will carefully fix all typos, grammatical errors and improper size of figures, etc. Furthermore, we will invite native speakers to help improve our writing. Finally, we will adjust the order of Figures 6-9, and clear versions of them will be further attached to the supplementary material. \n\n> **(Q10)**: Limitations of DPCML.\n\n**(A10)**: Thanks for your nice question! We are sorry that we do not discuss any limitations of DPCML. Following the paradigm of CML, a possible limitation of our proposed diversity-promoting algorithm is that it generally applies to implicit feedback but not explicit feedback since CML only cares about the relative preference ranking instead of concrete magnitude. In the future, we will explore how to improve the recommendation diversity based on explicit feedback (such as rating records).\n\n> **Reference**: \n- [1] Collaborative metric learning. In WWW, 2017.\n\n- [2] The Stereotyping Problem in Collaboratively Filtered Recommender Systems. In EAAMO, 2021. \n\n- [3] Nonlinear latent factorization by embedding multiple user interests. In RecSys, 2013.\n\n- [4] Max-sum diversication, monotone submodular functions and dynamic updates. In SIGMOD/PODS, 2012. \n- [5] Latent relational metric learning via memory-based attention for collaborative ranking. 
In WWW, 2018.\n- [6] Hierarchical latent relation modeling for collaborative metric learning. In RecSys, 2021.\n- [7] Collaborative translational metric learning. In ICDM, 2018.\n\n", " We sincerely express our gratitude for your acceptance and constructive comments. The responses to your suggestions are listed below:\n\n> **(Q1)** The writing of this paper is roughly good but could be further improved, including a few typos and mistakes in grammar.\n\n**(A1)** Thank you for your nice suggestion! In the final version, we will fix all typos and grammar mistakes. Furthermore, we will invite native speakers to help cross-check the paper to improve our writing further. \n\n> **(Q2)** Each theorem and corollary appearing in the main paper should be attached to its corresponding proof link to make it easy for the reader to follow.\n\n**(A2)** Thank you for your valuable comment! In the final version, we would carefully check all hyperlinks in the paper, such as the proof link and the figure/table link, to make the paper easy to follow. ", " We sincerely express our gratitude for your acceptance and valuable suggestions for our work. We would make the following responses to your concerns:\n> **(Q1)**: Some parts of the paper should be reorganized to help readers understand this work better. \n\n**(A1)**: Thank you very much for your constructive comment! In the final version, we will reorganize Corollary.1 to make theoretical/empirical results more precise following your comments.\n\n> **(Q2)**: The authors should complement a further explanation of the Joint Accessibility problem to make this concept concise and understandable.\n\n**(A2)**: Thank you for your nice question! We are so sorry for the unclear explanations. The notion of joint accessibility is first proposed by [1]. It measures whether an item candidate with size $K$ could be jointly accessed by a user in a Top-$K$ recommendation. In other words, joint accessibility also somewhat captures a fundamental requirement of content diversity. If there are sufficient preference records of users, they should be able to be recommended any combination of $K$ items they may be interested in. More discussions of related work in this direction could be found in the paper (Sec.A.2 in the supplementary materials).\n\n> **(Q3)**: There are some typos in this paper.\n\n**(A3)** Thank you for your valuable comment! We will fix all typos and polish our writing in the final version. \n\n> **Reference**:\n- [1] The Stereotyping Problem in Collaboratively Filtered Recommender Systems. In EAAMO, 2021. ", " This paper focuses on Collaborative Metric Learning (CML) based recommendation systems (RS). The authors point out that the current CML-based studies might result in a limited recommendation performance due to the insufficient consideration of the multiple categories of user preferences in practical RS. To alleviate this problem, they propose a novel algorithm named Diversity-Promoting Collaborative Metric Learning (DPCML). The secret of success behind the DPCML is to design multiple representations for each user to focus on different interest groups (i.e., both majority and minority). Going a step further, the authors theoretically prove that the proposed DPCML outperforms the existing CML-based methods and could generalize well to unseen test data. Finally, experiments over a wide range of benchmark datasets demonstrate the efficacy of DPCML. Strengths:\n1. The motivations and contributions of this paper are clear. \n2. The proposed method is novel. 
Concretely, this work presents a simple but effective method, DPCML, which leverages multiple user representations to capture user interests and adopt a diversity control regularization term to serve their purpose better.\n3. Significantly, this paper is technically valuable. The authors further demonstrate the effectiveness of the proposed algorithm from a challenging theoretical perspective. In my opinion, the generalization analysis of CML-based algorithms is not an easy task, which is barely explored and could motivate more effective work in this direction. \n4. The experiment is persuasive. They conduct comprehensive experiments on four benchmarks and compare DPCML with 14 state-of-the-art competitors. The empirical results show the superiority of DPCML.\n\nWeaknesses:\n1. Some parts of the paper should be reorganized to help readers understand this work better. For example, the proof of Corollary 1 should be mentioned in the main paper. Moreover, the empirical verification should be followed with Corollary 1 at the end of that part. \n2. There are some typos in this paper, such as:\na) Row 186: “when the training is completed…” Here “completed” should be “complete”.\nb) Row 274: “…the performance of the validation/test set…” Here “of” should be “on”.\nc) Row 716 (in Appendix): “NCF” should be NeuMF.\n 1. Some parts of the paper should be reorganized to help readers understand this work better.\n2. The authors should complement a further explanation of the Joint Accessibility problem to make this concept concise and understandable.\nMore questions should be found in the above weakness part.\n None.", " This paper presents a technique for item recommendation which augments a standard collaborative metric learning (CML) with multiple user representations. An additional loss term is introduced in order to encourage diversity among representations for the same user. The proposed approach is compared to alternative matrix factorization (MF) and CML based techniques on common benchmark datasets. Additionally, theoretical justification for the proposed approach in the form of a generalization bound is presented. ### Strengths\nItem recommendation is an important problem, and the experiments demonstrate that the proposed approach performs well in the settings considered.\n\nThe work here suggests that it may be worth revisiting ideas previously developed in the context of MF based techniques in the CMF framework.\n\n### Weaknesses\nThe idea of using multiple user representations is not new and goes back to at least [1], which is identical to the approach presented in [2] (only the later is cited in this work). Showing that something that works in the MF setting also works in the CML setting is a limited contribution.\n\nThe diversity-promoting regularizer is not sufficiently explored. The paper does not adequately motivate the desire to prevent representations for the same user from being too different, the second term of the unnumbered equation between (7) and (8). I find this counterintuitive and would have appreciated more commentary on this, and further exploration of the regularization strategy in general. Does including this regularizer in an MF setting like that explored in [1, 2] improve those results? Were alternative regularizers considered? How sensitive are the results to $\\eta$?\n\nThe extent to which the propose system actually promotes diversity (the motivation for the approach) is not explored outside the result in Figure 5. 
Providing more more evidence that the proposed approach actually results in more diverse recommendations (for example, using well know measure of diversity [3]) would significantly strengthen the paper.\n\nThe paper introduces several additional hyperparameters ($C$, $\\eta$, $\\delta_1$, $\\delta_2$), and the results here seem highly dependent on their exact choice (Figures 6, 8, 9). This is a significant drawback and I wonder if similar computational budgets were allocated to tuning baselines?\n\nI found the paper somehow difficult to read and poorly organized. There are numerous grammatical, incomplete sentences, and unreadable figures. A couple I explicitly noted are listed below.\n\n- Line 22: “giving them the relevant recommendations” => “giving them relevant recommendations”\n- Line 268-269: The sentence “Moreover, there would induce different performances with different diversity values.” is missing a noun.\n- Figure 6 and 7 are impossible to read without zooming to around 400%.\n- It would help if the numerical order of Figures 6-9 should match the order they are discussed in the subsequent paragraphs.\n\n### References\n1. Jason Weston, Ron J Weiss, and Hector Yee. Nonlinear latent factorization by embedding multiple user interests. In ACM Conference on Recommender systems, 2013.\n2. Guo, Wenshuo, et al. The Stereotyping Problem in Collaboratively Filtered Recommender Systems. Equity and Access in Algorithms, Mechanisms, and Optimization. 2021.\n3. Borodin, Allan, Hyun Chul Lee, and Yuli Ye. Max-sum diversification, monotone submodular functions and dynamic updates. ACM SIGMOD-SIGACT-SIGAI symposium on Principles of Database Systems. 2012.\n\n### Update Post Rebuttal\nIn their reply, the authors have provided thorough answers to my questions and further experimental results which address my concerns. I have raised my score accordingly. - What is the intuition behind wanting to ensure representations for the same user aren't too dissimilar in $\\psi_g(u_i)$?\n- Were other diversity-promoting regularizers explored?\n- Does the diversity-promoting regularizer improve results for MF based systems like those in [1,2]?\n- Does the proposed approach actually result in more diverse recommendations?\n- Were similar computational resources allocated to tuning baseline hyperparameters?\n\n The paper does not discuss any limitations of the proposed approach. Providing some discussion of failure modes would strengthen the paper.", " This work attacks the collaborative metric learning (CML) based recommendation algorithms. The critical point is that the current studies might overlook the minority taste of users in practice due to the limited representation capability, leading to inevitable preference degradation. Based on this recognition, this paper presents a simple and effective approach (i.e., Diversity-Promoting CML, DPCML) to alleviate this preference bias. Then, a decent theoretical guarantee is derived, suggesting the superiority of DPCML against CML in theory. Finally, empirical studies ascertain the effectiveness of the proposed method. Strength:\n1.\tClear motivation and good novelty. \n2.\tSound/important theoretical analysis. The provided theoretical results present novel insights into the strengths of DPCML against the current CML-based recommendation approaches. This is challenging to derive the generalization bound and may arise much more research attention in the community. \n3.\tConvincing validations. 
The authors conduct comprehensive experiments to show the effectiveness of DPCML against the current CML-based algorithms, and the experiment results seem promising.\n4.\tThe presentation of this paper is good and easy to understand.\n\nWeakness:\nThe writing of this paper is roughly good but could be further improved. For example, there are a few typos and mistakes in grammar:\n1.\tRow 236 in Page 4, “…show its superiority.”: I think this sentence should be polished. \n2.\tRow 495 in Supp. Page 15: “Hard” should be “hard”.\n3.\tRow 757 in Supp. Page 29: “…training/validation/test” should be “…training/validation/test sets”. \n4.\tRow 821 in Supp. Page 31: “Fig.7” should be “Fig.12”.\nLast but not least, each theorem and corollary appearing in the main paper should be attached to its corresponding proof link to make it easy for the reader to follow.\n\nThe primary concerns are motivation, methodology soundness, and experiment persuasion. I believe this is a qualified paper with good novelty, clear theoretical guarantees, and convincing empirical results. \n\n The authors should correct my minor concerns listed in the weakness to make their work more complete. There are no potential societal impacts of this paper." ]
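To make the diversity control regularization scheme (DCRS) debated in the reviews and author responses above concrete, the sketch below implements the two-sided hinge max(0, delta1 - delta) + max(0, delta - delta2), with delta the 1/(2C(C-1))-normalized sum of pairwise squared distances among one user's C embeddings, following the formula quoted in the authors' replies. The tensor shapes and the default margins delta1, delta2 are illustrative choices, not values from the paper.

```python
import torch

def dcrs_penalty(user_embs: torch.Tensor, delta1: float = 0.1, delta2: float = 1.0) -> torch.Tensor:
    """Two-sided hinge on the average pairwise squared distance of one user's embeddings.

    user_embs: tensor of shape (C, d) holding the C representations of a single user.
    Penalizes diversity that is either too small (< delta1) or too large (> delta2).
    """
    C = user_embs.shape[0]
    diffs = user_embs.unsqueeze(0) - user_embs.unsqueeze(1)      # (C, C, d)
    sq_dists = (diffs ** 2).sum(dim=-1)                          # (C, C), zero diagonal
    delta = sq_dists.sum() / (2 * C * (C - 1))                   # 1/(2C(C-1)) * sum over c1 != c2
    return torch.clamp(delta1 - delta, min=0) + torch.clamp(delta - delta2, min=0)

embs = torch.randn(5, 16, requires_grad=True)   # e.g. C = 5 embeddings of dimension 16
loss = dcrs_penalty(embs)
loss.backward()                                  # gradients push delta toward [delta1, delta2]
print(float(loss))
```

In training, this term would be added to the main CML ranking loss for every user, scaled by the coefficient eta whose sensitivity is studied in the responses above.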
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "Mj13W3DY_a", "8PuRHwDulpH", "8PuRHwDulpH", "8PuRHwDulpH", "8PuRHwDulpH", "8PuRHwDulpH", "lOd8gVbasjP", "Lwj9B9_UL1J", "nips_2022_xubxAVbOsw", "nips_2022_xubxAVbOsw", "nips_2022_xubxAVbOsw" ]
nips_2022_CJGUABT_COm
DOMINO: Decomposed Mutual Information Optimization for Generalized Context in Meta-Reinforcement Learning
Adapting to changes in transition dynamics is essential in robotic applications. By learning a conditional policy with a compact context, context-aware meta-reinforcement learning provides a flexible way to adjust behavior according to dynamics changes. However, in real-world applications, the agent may encounter complex dynamics changes: multiple confounders can influence the transition dynamics, making it challenging to infer an accurate context for decision-making. This paper addresses this challenge with decomposed mutual information optimization (DOMINO) for context learning, which explicitly learns a disentangled context to maximize the mutual information between the context and historical trajectories while minimizing the state transition prediction error. Our theoretical analysis shows that DOMINO can overcome the underestimation of the mutual information caused by the multi-confounded challenge via learning a disentangled context, and reduce the number of samples that need to be collected across environments. Extensive experiments show that the context learned by DOMINO benefits both model-based and model-free reinforcement learning algorithms for dynamics generalization in terms of sample efficiency and performance in unseen environments.
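As a reading aid for the abstract above, the sketch below shows the basic shape of a context-conditioned dynamics model with multiple (here N) context vectors inferred from a short history and concatenated before prediction. It is a generic illustration written for this summary, not the authors' DOMINO implementation; in particular, the decomposed mutual-information objective the paper optimizes is only indicated by a placeholder comment, and the prediction loss shown is a plain MSE.

```python
import torch
import torch.nn as nn

state_dim, action_dim, ctx_dim, N = 8, 2, 4, 3          # N disentangled context vectors

class ContextEncoder(nn.Module):
    """Maps a flattened history of (s, a, s') transitions to N separate context vectors."""
    def __init__(self, hist_dim: int):
        super().__init__()
        self.heads = nn.ModuleList([nn.Linear(hist_dim, ctx_dim) for _ in range(N)])

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        return torch.cat([head(history) for head in self.heads], dim=-1)   # (batch, N*ctx_dim)

dynamics = nn.Sequential(                                 # predicts s' from (s, a, context)
    nn.Linear(state_dim + action_dim + N * ctx_dim, 64), nn.ReLU(), nn.Linear(64, state_dim))

hist_len = 5
encoder = ContextEncoder(hist_dim=hist_len * (2 * state_dim + action_dim))

# One gradient step on the prediction part of the objective (toy random batch).
history = torch.randn(16, hist_len * (2 * state_dim + action_dim))
s, a, s_next = torch.randn(16, state_dim), torch.randn(16, action_dim), torch.randn(16, state_dim)
ctx = encoder(history)
pred = dynamics(torch.cat([s, a, ctx], dim=-1))
loss = ((pred - s_next) ** 2).mean()
# DOMINO additionally maximizes a decomposed mutual-information term between each context
# vector and the trajectory; that estimator is beyond this sketch.
loss.backward()
```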
Accept
This paper proposes DOMINO, an optimization framework for contextual meta-reinforcement learning. The reviewers generally agree that the paper is well written, the idea is novel and interesting, the evaluation is comprehensive, and the results are impressive. Reviewers also raised a few concerns in the initial reviews, such as the proofs of Lemma 1 and Theorem 1 and the clarity of the mathematical definitions. Throughout the discussion phase, most of these concerns were sufficiently addressed, and the review scores were increased accordingly. Overall, the quality of the revised paper improved significantly during the rebuttal. Thus, I recommend accepting this paper. Please incorporate the remaining reviewers' suggestions in the future version of this paper.
train
[ "znoJkB9IEXT", "2WhTuSzXj3y", "A59VmCvb7uD", "NVvW8P0b2kb", "5VQTja-eCnV", "SsKD54e7cMG", "CxsxixNFzcE", "ecPVLuuFS", "sNddMmbI6uo", "63-Vv8Jz94", "oWbRzj8sFKW", "0kZ_C8gf7CB", "p87kx5ezqV", "w87doH_JV3t", "0flbimho4X5", "Ol0zoOhaM4o", "J8poTxluEXM", "llNVuAaJdxf", "8ltysdzxonvJ", "0nq3jEUoxaj", "rhQabDwjkbv", "rKGV_1FwdaQ", "CZEWqv0v0ZP", "udbxDEYNU7O", "W_dRmkrsbeby", "nWC2wnc-7Mx", "-F4Bf71HwE1", "PsNKYf9Eb5C", "E7jl9x88NZ", "vt1wl94QMjW", "wTJssqx67Bz", "7K-SufHhBb", "KNbR5GdtF6j", "Zprlr4cyA0l", "rG068puM45P", "_bOgSB4vaC", "cn5mV-9jun", "3aa4rQl67Fj", "cmSlk6WSCB8" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate all reviewers' and ACs' time and efforts in reviewing our paper. We truly thank you all for the insightful and constructive suggestions, which helped improve the quality of our paper. We appreciate that all reviewers actively discussed with us in depth and read our responses carefully, which gave us a great submission experience.\nWe genuinely appreciate the positive 6-7-6-6 evaluation from reviewers oNNM, rYrP, 1VJF, and RkbU. \n\nHere is a summary of our updates:\n\n[Additional Experiments] As suggested by reviewers RkbU and rYrP, we conducted a comparison experiment between DOMINO and RIA and a sensitivity analysis of the hyper-parameter N. As suggested by reviewer 1VJF, we provide a visualization analysis of each latent context to show that the learned context vectors are disentangled well. All additional results consistently validate the effectiveness of our proposal.\n\n[Detailed Derivation] As suggested by reviewer oNNM, we provide a more detailed proof of Lemma 1 and a fuller explanation of Theorem 1.\n\n[Writing] We owe many thanks to all the reviewers' helpful writing suggestions. We provide clearer definitions, more accurate expressions, and more discussion of the related works in our revision.\n\nWe really thank all reviewers' and ACs' time and efforts again.\n\nBest wishes,\n\nAuthors", " We sincerely thank you for your constructive suggestions and effort. We appreciate your patience and in-depth discussions with us; the quality of our paper has improved a lot with your help. Many thanks for your kind suggestions and warm help.\n\n", " Thanks for the authors' detailed comment. I think that the last comment resolved my concerns about the clarity problem of Theorem 1, so I increase my score from 3 to 6 accordingly.", " We believe that your concerns about the theorem can be addressed by the above clarifications.\n\nWe hope that we have resolved all of your concerns. \n\nWe sincerely thank you for your effort in reviewing our paper, and we would sincerely appreciate it if you could reconsider your score accordingly.", " I thank the authors for further clarifying my questions and concerns, and for the effort to improve the paper according to the suggestions made by me and the other reviewers.\nHence, I increased my score, as the paper was significantly improved during the rebuttal period.", " Thanks for your reply. We believe that all of your concerns about Theorem 1 stem from the data used to learn the disentangled context vectors $c_{1},c_{2},\ldots,c_{N}$. \nWe use the **same data** to infer the disentangled context vectors $c_{1},c_{2},\ldots,c_{N}$, since the state transitions in the trajectory are influenced by all the confounders.\n\nWe carefully clarify as follows:\n\n- DOMINO infers multiple disentangled context vectors $c_{1},c_{2},\ldots,c_{N}$ from a single trajectory whose state transition is influenced by multiple confounders, and the concatenation of the disentangled context vectors $c=[c_{1},c_{2},\ldots,c_{N}]$ represents the dynamics characteristic of the input trajectory. 
Therefore, we calculate each $I_{NCE}(c_{i};\\mathcal{T})$ by using **same negative and positive trajectory pairs**.\nThus, we don’t need N times $K \\geq e^{I(u_{i };\\mathcal{T})}$ amount of data.\n\n\n- DOMINO divides the mutual information problem into the summation of several smaller ones, which optimizes the mutual information between every disentangled context vectors and historical trajectory separately and regularize the mutual information between the disentangled context vectors. We decompose the calculation of $I_{NCE}(c;\\mathcal{T})$ into the summation of $I_{NCE}(c_{i};\\mathcal{T})$ with the **same data** which contains positive and negative pairs. Thus, we use same $\\log K$ for both case.\n\nThanks for your effort to discuss with us. Looking forward to your reply!", " Thank you for your detailed response to my questions.\nHowever, I still have remaining concerns on Theorem 1.\n\nTo make each $I_{NCE}(c_i;\\mathcal T)$ be a tight bound, we need $K \\ge e^{I(u_i; \\mathcal T)}$ amount of data.\nWe are going to optimize $I_{NCE}(c_i;\\mathcal T)$ by using separate negative and positive pairs for each context $i$.\nThen, don't we need $N$ times of $K \\ge e^{I(u_i; \\mathcal T)}$ amount of data?\n\nThe authors clarified that $I_{NCE}(c;\\mathcal T)$ and $I_{NCE}(c_i;\\mathcal T)$ have different upper bound. And the authors are using the same upper bound $log K$ for both cases. I understood this $K$ as an arbitrary variable that represents the amount of data we need. So, the same $K$ does not mean the same amount of data, right? Then summing up $K$ in the proof of Theorem 1 does not make sense.", " Thank you for your detailed response to my questions.\nHowever, I still have remaining concerns on Theorem 1.\n\nTo make each $I_{NCE}(c_i;\\mathcal T)$ be a tight bound, we need $K \\ge e^{I(u_i; \\mathcal T)}$ amount of data.\nWe are going to optimize $I_{NCE}(c_i;\\mathcal T)$ by using separate negative and positive pairs for each context $i$.\nThen, don't we need $N$ times of $K \\ge e^{I(u_i; \\mathcal T)}$ amount of data?\n\nThe authors clarified that $I_{NCE}(c;\\mathcal T)$ and $I_{NCE}(c_i;\\mathcal T)$ have different upper bound. And the authors are using the same upper bound $log K$ for both cases. I understood this $K$ as an arbitrary variable that represents the amount of data we need. So, the same $K$ does not mean the same amount of data, right? Then summing up $K$ in the proof of Theorem 1 does not make sense.", " We sincerely thank you for your detailed and constructive suggestions, and we have revised the relevant parts of the introduction following your kind suggestion.\n\nWith your help we have revised the paper and improved the quality of our paper. We really appreciate your effort to review our paper and recognition of our work. Thanks a lot.", " Thanks for the authors' further response and effort. I think that the additional experimental results and discussion in the revision resolve my concerns about the clarity problem of the submission, so I increase my score from 4 to 6 accordingly.\n\nMinors: I believe that RIA considers context information and constructs confounder sets with multiple confounders, so I believe that RIA should be discussed in the introduction's confounder discussion (Line 42).", " We sincerely thank you for your recognition and support of our paper. We have added content to the revised paper based on your suggestions. 
The constructive suggestions you gave during the rebuttal session are greatly helpful in improving the quality of our paper, thanks to your hard work and in-depth discussions with us. ", " Thanks for your kind suggestion. \n\nWe added the discussion of the relationship between DOMINO and RIA in the introduction, and related works, and list RIA as a key baseline in experiments in the third revised paper (mark as purple). \n\n\n**Q1:The relationship between DOMINO and RIA**\n\nANS:\n\nWe carefully explain the relationship between DOMINO and RIA as follows:\n\nFirstly, the key differences between RIA and DOMINO are:\n\n1) DOMINO infers several disentangled context vectors concurrently from the current sequence of state action pairs, while RIA infers a centralized context vector to represent the environmental characteristic from the current sequence of state action pairs. \n\n2) DOMINO divides the mutual information problem into the summation of several smaller ones to reduce the demand for data, which optimizes the mutual information between every disentangled context vector and historical trajectory separately and regularizes the mutual information between the disentangled context vectors, while RIA directly optimizes the whole mutual information between the inferred context vector and historical trajectory, which may also suffer from the problem that the optimize objective may be a loose bound of the true mutual information with not enough data when multiple factors that affect the state transition concurrently.\n\n3) DOMINO considers the trajectories collected in the same episode as positive examples and considers the trajectories generated in other episodes as negative examples, since the environment is randomly initialized at the beginning of each episode, and confounders remain unchanged until the end of the episode. Note that, DOMINO treats all the negative examples equally and doesn't need the specific value of the confounders. RIA doesn't need to record if the two trajectories are collected in the same episode, since the relational intervention approach could optimize the mutual information without environment labels and even without the environment ID, which provides a promising direction of unsupervised dynamics generalization.\n\n\nThe advantage of DOMINO is that it can alleviate the problem of underestimation of mutual information caused by insufficient interaction data, which seriously affects MI optimization. DOMINO aims to alleviate this problem and optimize the mutual information effectively using as less data as possible. Here, we explain in detail: \n\n1) When the number of confounders is increasing, the demand for data to let the $I_{NCE}(c;\\mathcal{T})$ be a tight lower bound increase exponentially. By decomposing the whole $I_{NCE}(c;\\mathcal{T})$ into $\\sum_{i=1}^{N} I_{NCE}(c_{i};T)$, and optimize each $I_{NCE}(c_{i};T)$ separately, while regularizing the mutual information $I(c_{i};c_{j})$ between each disentangled context vectors .The demand of the data amount $K$ can be reduced from $e^{I(c;\\mathcal{T})}$ to $e^{\\frac{1}{N}I(c;\\mathcal{T})}$.\n\n2) The proposed decomposed MI optimization can benefit many contrastive learning-based context learning methods based on the InfoNCE. 
We believe that DOMINO can also benefit RIA to further improve the sample efficiency with multiple confounders.\n\n\nWe believe that DOMINO and RIA are not in competition, on the contrary, their effective combination will become a stronger baseline, for example, the decomposed MI optimization can be expanded into the relational intervention approach proposed in RIA.\n\n**Q2: RIA only uses one prediction head as their codes show, so it does not add the intervention module to TMCL**\n\nANS:\n\nThanks for your suggestion. We provide the performance comparison between RIA and DOMINO without adaptive planning accordingly(see Figure 4, in section 5.2, third revised paper).\n\n**Q3: Can the authors run additional tests to verify that the suggested approach is reliable when is greater than the actual number of confounders?**\n\nANS:\n\nThanks for your constructive suggestion. We run the additional test to verify that the suggested approach is reliable when $N$ is greater than the actual number of confounders (see Figure 10, Appendix E.3, third revised paper). The results show that when $N$ is greater than the actual number of confounders, the performance is still better than T-MCL and is comparable with the other ablation version of DOMINO. Although the extra context may encode the information irrelevant to the confounder, the policy only needs to learn to pull out the useful ones from multiple context vectors and discard the useless ones, which can be learned by minimizing the prediction loss.\n\n", " **Q4: whether the disentangled context information can be extracted by only optimizing the MI objective function?**\n\nANS:\n\nDOMINO learns the disentangled context not only by optimizing the MI objective function. \n\nTo learn the disentangled context vectors, DOMINO optimizes the mutual information between each disentangled context vector and historical trajectory separately while regularizing the mutual information between the disentangled context vectors each other(see Equation 5). Furthermore, the state transition prediction loss will also help to learn the context encoder(see Equation 8 and Equation 9).\n\n**Q5: About whether the detailed setting of the confounder is used in DOMINO.**\n\nANS:\n\nGood question. DOMINO doesn't use the specific confounder value.\nSince the environment is randomly initialized by specifying the combination of confounders at the beginning of each episode, and confounders remain unchanged until the end of the episode(as same as T-MCL, CADM, and RIA), DOMINO considers the trajectories collected in the same episode as positive examples and consider the trajectories generated in other episodes as a negative example. DOMINO treats negative cases equally and does not needs the specific label of the value of each confounder. \nWe appreciate the relational intervention approach proposed in RIA, we believe that the combination of DOMINO and RIA can be a more sample-efficient baseline without the dependency on any environment label.\n\n\nThanks again for reading our article carefully and giving very constructive suggestions. We hope we resolve all of your concerns and we wish you could reconsider your score.\n", " Dear Reviewer 1VJF:\n\nWe sincerely appreciate your constructive suggestions to improve the quality of our articles. 
We hope we resolve all of your concerns and we wish you could reconsider your score.\n\nBest, \n\nThe authors.", " We sincerely thank you for your efforts in reviewing our paper and your constructive suggestions again.\n\nWe hope we have resolved all the concerns and showed the improved quality of the paper. \n\nAnd we deeply appreciate that if you could reconsider the score accordingly. \n\nWe are always willing to address any of your further concerns. \n", " I appreciate the author's response and point-by-point response. I believe that some of my concerns have been addressed by the author's response, but I do still have the following concerns:\n\nResponse 1: I believe that the revised version's expanded explanation of the adaptation process clarifies the submission's setting. \n\nResponse 2: My concern about the paper's clarity, in my opinion, has not been fully addressed by the response, and my main concern is how the submission and RIA are related. The following are the explanations:\n\n a) The experimental setting between RIA and the submission is very relevant: CCM[1] and Focal[2] concentrate on the setting of a single confounder, but RIA[3] also performs experiments in a situation with numerous confounders as the submission does.\n\n b) The method of RIA is relevant to the submission: RIA also views context information as a confounder between $s t$ and $s_t+1$. To extract disentangled context information in the multiple confounders setting. RIA uses MI and additional interventional to extract disentangled context information.\n\nI, therefore, think that RIA is quite relevant to the submission regardless of the experimental setting and method, and so the differences/advantages/experimental/visualization comparisons between the submission and RIA should be made clearer in the Introduction/Related Work and Experiment of the main paper so that the contribution and significance of the submission can be better clarified. \nFurthermore, RIA only uses one prediction head as their codes show, so it does not add the intervention module to TMCL as this submission does (as the description in the Relevant work in the revised submission). \n\nResponse 3: I appreciate the author's further experimental sensitive analysis. The analysis demonstrates that when $N$ is less than the actual number of confounders, the suggested method is not sensitive to $N$. In some circumstances, however, $N$ might be higher than the actual number of confounders. Can the authors run additional tests to verify that the suggested approach is reliable when $N$ is greater than the actual number of confounders?\n\nOpen Discussion:\nThe submission makes the straightforward and logical assumption that the confounders are independent and seeks to extract disentangled context information from multiple confounder settings. To extract disentangled context information from historical transitions, however, should be an ill-posed problem because even if the confounders are independent, the transition functions that map from $s_t,a_t$ to $s_{t+1}$ are not identifiable in the presence of multiple confounders. I suspect that disentangled context information cannot be extracted by only optimizing the MI objective function. \n\nSome further questions: I notice that the positive trajectories in this paper are collected in the same setting as the confounders (Line 184), emphasizing the fact that the environment ID is provided in the submission. 
However, because the confounder value can sometimes be difficult to label, TMCL/RIA assume that the detailed setting of the confounder is not available. As a result, these two methods learn the context encoder unsupervised. Therefore, I am not sure whether the experimental comparisons are fair or not.\n\n\n\n\nMinors: \nRIA is published in ICLR 2022, not 2021.\n\n", " Many thanks for the detailed clarifications on each of my questions. I especially appreciate the usage of colored diffs in the updated manuscript. I am thus increasing my rating to fully support \"Accept.\"\n\nGiven the extra page for the CRC, think it would be valuable for you to include these points from your response in the final version of the paper:\n- How you envision VariBAD can be combined with DOMINO to further improve context adaptation.\n- How you envision DOMINO can be extended to reward generalization.\n- How DOMINO can be extended to support the case of dependent factors of variation in the environment.\n- Include a reference to your RIA comparisons in the Appendix, or even incorporate them into the main results.", " We again appreciate the reviewers for their discussion and valuable comments. \n\nWe submitted the second revision of our paper.\n- The first revision: **red** \n- The second revision: **blue**\n- The third revision: **purple**\n\nWe are looking forward to discussing any concerns of our paper. \n\nWe are ready to address your further concerns and revise the paper if the reviewers suggest anything that can improve the quality of our paper.", " We sincerely thank you for your kind reply. We carefully address your concerns as follow:\n\n**Q1: In the response Part 1/3, which theorem or lemma or equation did you bring from [2] and [3]? It seems inappropriate and unclear to cite the paper itself as above in developing the equations.**\n\n\nANS:\n\nThanks for your kind suggestion. We take [2] and [3] as reference to derive the proof of Lemma 1, we cite the equation we refer from [2] and [3] clearly as follow:\n\nAccording to the section 2.3 in [2], by setting the proposal distribution as the marginal distribution $\\pi(y) \\equiv p(y)$, the unnormalized density of $y$ given a specific set of samples $y_{2: K}=[y_{2}, \\ldots, y_{K}]$ and $x$ is:\n\n\\begin{equation}\nq\\left(y \\mid x, y_{2: K}\\right)=p(y) \\cdot \\frac{K \\cdot e^{\\psi(x, y)}}{e^{\\psi(x, y)}+\\sum_{k=2}^{K} e^{\\psi\\left(x, y_{k}\\right)}}=Kp(y)w_{y}\n\\end{equation}\nwhere $K$ denotes the numbers of samples.\n\nAccording to the equation 3 of section 2 in [3], the expectation of $q(y \\mid x, y_{2: K})$ with respect to resampling of the alternatives $y_{2: K}$ from $p(y)$ produces a normalized density:\n\\begin{equation}\n\\bar{q}(y \\mid x)=E_{p\\left(y_{2: K}\\right)}\\left[q\\left(y \\mid x, y_{2: K}\\right)\\right]\n\\end{equation}\n\nWe also revised the paper accordingly with your kind suggestion(**see Appendix A.1, page 14, second revised version paper**).\n\n\n$$$$\n\n**Q2: don't we need N times of $K \\geq e^{\\frac{1}{N} \\sum I(u_{i};\\mathcal{T})}$ data to make every $I_{N C E}\\left(c_{i} ; \\mathcal{T}\\right)$ to be a tight bound?**\n\nANS:\n\nTo let every $I_{NCE}(c_{i},\\mathcal{T})$ to be a tight bound, we just need $ K \\geq e^{ \\frac{1}{N} \\sum_{i=1}^{N} I(u_{i}; \\mathcal{T})}$ in total rather than $N e^{\\frac{1}{N} \\sum_{i=1}^{N} I(u_{i}; \\mathcal{T})}$. 
\n\nHere, we explain it in detail:\n\nTo make the $I_{NCE}(c_{i};\\mathcal{T})$ be a tight bound of $I(c_{i};\\mathcal{T})$, we need \n\n$I_{N C E}\\left(c_{i} ; \\mathcal{T}\\right) \\leq I(c_{i} ; \\mathcal{T}) \\leq \\log K$\n\nTherefore, to make $I_{NCE}(c_{i};\\mathcal{T})$ be a tight bound, we need at least \n\n$\\log K \\geq I(c_{i};\\mathcal{T})$\n\nSince $I(c_{i};\\mathcal{T}) \\geq I(u_{i};\\mathcal{T})$ (see detailed derivation in Equation 3 in our paper), under the independent assumption, $c_{i}$ is independent to the other confounders $u_{j}(j!=i)$, we need \n\n$\\log K \\geq I(c_{i};\\mathcal{T}) \\geq I(u_{i};\\mathcal{T})$\n\n\n\nThus, to let every $I(c_{i};\\mathcal{T})$ be a tight bound, we need\n$\\sum_{i=1}^{N} I(u_{i} ; \\mathcal{T}) \\leq \\sum_{i=1}^{N} I(c_{i} ; \\mathcal{T}) \\leq N \\log K$\n\nTherefore, the amount of data K just need to satisfy\n\n$\\log K \\geq \\frac{1}{N} \\sum_{i=1}^{N} I(u_{i}; \\mathcal{T})$ \n\n$ K \\geq e^{\\frac{1}{N} \\sum_{i=1}^{N} I(u_{i}; \\mathcal{T})}$ \n\n\n\n[1] David Barber and Felix Agakov. The IM algorithm: A variational approach to information maximization, 2003.\n\n[2] Oord A, Li Y, Vinyals O. Representation learning with contrastive predictive coding, 2018.\n\n[3] Chris Cremer, Quaid Morris, and David Duvenaud. Reinterpreting importance-weighted autoencoders, 2017.\n\n", " **Q3: shouldn't the visualization be able to identify such disentanglement. In fact, the visualization results of T-CML, in which the expressions of Setup0 and Setup4 are intertwined, seem to be more reasonable.**\n\nANS:\n\n- The visualization in Figure6 is produced by concatenating the disentangled context vectors together with t-SNE method. This visualization aims to show whether the learned whole context, which contains all disentangled context vectors can be separated well under different settings. The ability to separate the concatenated context vectors under different settings is an important prerequisite for the conditional policy to learn the optimal policy under each setting.\n\n- The phenomenon that the learned contexts in TMCL are intertwined in setup0 and setup4 is detrimental to policy learning, since the MDPs corresponding to setup0 and setup4 are different, if the contexts are intertwined, the policy that is conditioned on the context will learn a suboptimal solution to compromise between setup 0 and setup4. On the contrary, the whole context learned by DOMINO is clearly distinguished between setup 0 and setup 4, then the context-based policy can approximate the optimal solution under each of the two setups. Furthermore, note that setup 0 and setup4 are just similar in Hopper-m-d domain, the visualization in Cripple-Ant-m (Figure 6b), Setup0 (m = 1.15, leg = 3) and Setup4 (m = 1.0, leg = 2), which are quite different, while TMCL also inferred intertwined contexts.\n\n\n- Following your suggestion, we also think it is important to add a visual analysis for verifying whether the contexts is disentangled. Thanks for your constructive suggestion. Accordingly, we add an additional experiment which contains multiple confounders, and we only vary one of the confounders every time and observe the changes of disentangled vectors accordingly. \n - In this experiment, we set up two different confounders: mass $m$ and damping $d$. Under the DOMINO framework, the context encoder inferred two disentangled context vectors: context 0 and context 1. 
\n - As shown in Figure 10 and Figure 11 (**see Appendix F.1, page 22, the revised version paper**), context 1 is more related to damping: when the confounders are set to the same mass but different damping, the visualization results of context 1 under different settings are clearly separated from each other, while under the same damping but different mass settings, the visualization results of context 1 are much more blurred. Similarly, context 0 is more related to mass: when the confounders are set to the same damping but different mass, the visualization results of context 0 under different settings are clearly separated from each other, while under the same mass but different damping settings, the visualization results of context 0 differ much less from each other.\n\n\nWe again appreciate your discussion and valuable comments. \n\nWe submitted the second revision of our paper.\nThe first revision: blue; the second revision: red.\n\nWe are always ready to address your further concerns and revise the paper if you suggest anything that can improve the quality of our paper.", " Thanks for your kind reply. We carefully address your concerns as follows:\n\n**Q1: Eq. 1 is only defining the return, but not the optimization problem. For instance, it would be clearer for readers if there was a $\max_{\pi}$**\n\nANS:\n\nThanks for your kind suggestion. We define the optimization problem as follows. We aim to learn the optimal policy conditioned on the context $c$ encoded from the current sequence of state-action pairs $[s_{\tau}, a_{\tau}, s_{\tau+1}]_{\tau=t-H}^{t}$:\n\n\begin{equation}\n\max_{\pi} \; E_{\tilde{u} \sim p(\tilde{u}_{\#})}\left[\sum_{t=0}^{\infty} \gamma^{t} r\left(s_{t}, a_{t}\right)\right]\n\end{equation}\n\nwhere $a_{t} \sim \pi(s_{t}, c)$, $\# \in \{\text{train}, \text{test}\}$, and $\tilde{u}=[u_{0},u_{1},\ldots,u_{N}]$ represents the multiple confounders.\n\nWe also revised the paper accordingly with your kind suggestion (**see Equation 1, page 3, second revised version paper**).\n\n**Q2: Would you please clarify whether the amount of data necessary to make the bound tight is not impractical?**\n\nANS:\n\nFor a disentangled context, the required amount of data $K$ needs to satisfy $K \geq e^{\frac{1}{N} \sum_{i=1}^{N} I(u_{i};\mathcal{T})}$.\n\nFor an entangled context, the required amount of data $K$ needs to satisfy $K \geq e^{\sum_{i=1}^{N} I(u_{i};\mathcal{T})}$.\n\nThis means that the data demand of methods that use an entangled context is far greater than that of DOMINO, and as the number of confounders increases, this gap widens exponentially.\n\nIn addition, since RL has high requirements for sample efficiency, we want to use as little data as possible to learn the policy. Therefore, DOMINO has an obvious advantage over methods that use entangled contexts.\n\nWe again appreciate your discussion and valuable comments. \n\nWe submitted the second revision of our paper.\nThe first revision: blue; the second revision: red.\n\nWe are always ready to address your further concerns and revise the paper if you suggest anything that can improve the quality of our paper.", " Dear reviewer RkbU:\n\nWe thank you for your precious review time and valuable comments. We have provided the corresponding responses and results, which we believe cover your concerns. We sincerely hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work. 
We are always ready to address your further concerns. \n\nBest,\n\nThe authors.", " I thank the authors for carefully addressing my questions and concerns.\n\nQ1) Minor: Although I understand the textual problem formulation, I believe the mathematical definition can be better formalized. Eq.1 is only defining the return, but not the optimization problem. For instance, it would be clearer for readers if there was a $\\max_\\pi$ in Eq.1. \n\nQ9) I appreciate the author's effort in providing this visualization. It better elucidates the method's capabilities of learning disentangled contexts\n\nI also share reviewer oNNM concern (in their latest reply) regarding the tight bound for $I_{NCE}$. Would you please clarify whether the number of data necessary to make $I_{NCE}$ a tight bound is not impractical?", " Thank you for the author's detailed explanation.\nHowever, there are still some points to be clarified.\n\n1. In the response Part 1/3, which theorem or lemma or equation did you bring from [2] and [3]? It seems inappropriate and unclear to cite the paper itself as above in developing the equations.\n2. I got the point of your claim in Part 2/3, however, don't we need N times of $K \\ge e^{{1 \\over N} \\sum I(u_i; \\mathcal T)}$ data to make every $I_{NCE}(c_i;\\mathcal T)$ to be a tight bound?\n3. In the response Part 3/3, CCM paper refers to \"Towards Effective Context for Meta-Reinforcement Learning: an Approach based on Contrastive Learning\", right? Unlike CCM, DOMINO proposes a new framework for disentangling context representation of each confounding. Then, shouldn't the visualization be able to identify such disentanglement? In fact, the visualization results of T-CML, in which the expressions of Setup0 and Setup4 are intertwined, seem to be more reasonable.\n", " We sincerely appreciate all reviewers' time and efforts in reviewing our paper. We are glad to find that reviewers generally recognized our key contributions and clear presentation of our paper:\n\n**Contributions**.\n- **Method**: This paper provides a decomposed mutual information optimization framework for improving context-aware meta RL in the environment with multiple confounders that impact the transition dynamics[RkbU, rYrP]. It is the first method to learn disentangled context by directly exploiting the confounders independent assumption [rYrP]. The idea is novel, intuitive and interesting[oNNM,1VJF]. \n- **Experiment**: Extensive experiments show the effectiveness of the proposed method [RkbU]. The proposed method achieve better performance than the baselines[rYrP,1VJF] and performs well against an ablation that does not learn disentangled context vectors [rYrP]. Experiments are comprehensive, and the results are impressive[oNNM].\n- **Presentation**. The paper is well written and clear to understand [oNNM,RkbU]. The method itself is clearly described and is easy to follow [rYrP]. The figures in this paper are very clear and very well [RkbU]. \n\nAlso, we thank all reviewers for their valuable and constructive suggestions, which help us a lot in improving our paper. In addition to the pointwise responses below, we have updated our paper in the revised version to incorporate the insightful suggestions of the reviewers:\n\n**Experiments.**\n- Following Reviewer RkbU's suggestions,\nwe add a **comparison experiment to compare DOMINO and RIA**[1]. 
(see Figure 8, Appendix E.2, page21).\n\n- Following Reviewer RkbU's and reviewer rYrP's suggestions,\nwe construct the **sensitivity analysis experiment of hyper-parameter N** to how dependent the performance is on setting N (see Figure 9, Appendix E.3, page 21).\n\n- Following Reviewer 1VJF 's suggestions,\nwe provide **visualization analysis of each latent context** to show that the learned context vectors are disentangled well and each latent context captures one of the confounders of the environment(see Appendix E.3, page21, revised version paper).\n\n- Following Reviewer rYrP's suggestions,\nwe add a **Welch t-test** between the proposed method and the baselines.\n\n**Derivation.**\n\nwe carefully provide more detailed proof of Lemma 1 and the explanation of Theorem 1 (see Appendix A.1 and A.2, page 14-15, revised version paper).\n\n[1] Guo J, Gong M, Tao D. A Relational Intervention Approach for Unsupervised Dynamics Generalization in Model-Based Reinforcement Learning[C]//International Conference on Learning Representations. 2022.\n \nWe hope our pointwise responses below could clarify all reviewers confusion and alleviate all concerns. We thank all reviewers’ time again and we always ready to solve your concerns.\n", " Thanks for your suggestions. We carefully list the derive process in detail and provide more explanations of Theorem 1 to address your concerns as follows:\n\n**Q1:In Appendix A,how does the first equility $q(y \\mid x, y_{2: K})=p(y) K w_{y}$ hold? What is $w_{y} $?**\n\nANS:\n\nSorry for the confusion. Here, we explain the derivation process thoroughly:\n\nAccording to the Barber and Agakov's variational lower bound[1], the mutual information $I(x ; y)$ between $x$ and $y$ can be bounded as follows:\n\n\\begin{equation}\nI(x ; y)=E_{p(x, y)} \\log \\frac{p(y \\mid x)}{p(y)} \\geq E_{p(x, y)} \\log \\frac{q(y \\mid x)}{p(y)}\n\\end{equation}\nwhere $q$ is an arbitrary distribution.\n\nSpecifically, $q(y \\mid x)$ is defined by independently sampling a set of examples $[y_{1}, \\ldots, y_{K}]$ from a proposal distribution $\\pi(y)$ and then choosing $y$ from $[y_{1}, \\ldots, y_{K}]$ in proportion to the importance weights\n\n\n$$w_{y}=\\frac{e^{\\psi(x, y)}}{\\sum_{k} e^{\\psi\\left(x, y_{k}\\right)}}$$\n\n,where $\\psi$ is a function that takes $x$ and $y$ and outputs a scalar. According to [2], by setting the proposal distribution as the marginal distribution $\\pi(y) \\equiv p(y)$, the unnormalized density of $y$ given a specific set of samples $y_\\{2: K}=[y_{2}, \\ldots, y_{K}]$ and $x$ is:\n\n\\begin{equation}\nq\\left(y \\mid x, y_{2: K}\\right)=p(y) \\cdot \\frac{K \\cdot e^{\\psi(x, y)}}{e^{\\psi(x, y)}+\\sum_{k=2}^{K} e^{\\psi\\left(x, y_{k}\\right)}}=Kp(y)w_{y}\n\\end{equation}\nwhere $K$ denotes the numbers of samples.\n\nAccording to [3], the expectation of $q\\left(y \\mid x, y_{2: K}\\right)$ with respect to resampling of the alternatives $y_{2: K}$ from $p(y)$ produces a normalized density:\n\\begin{equation}\n\\bar{q}(y \\mid x)=E_{p\\left(y_{2: K}\\right)}\\left[q\\left(y \\mid x, y_{2: K}\\right)\\right]\n\\end{equation}\n\nThe $I_{N C E}(x ; y \\mid E, K)$ is a typo, it should be $I_{\\mathrm{NCE}}(x ; y \\mid \\psi, K)$.\n\nWe have refined the proof process based on your kind suggestions and added relevant details, see Appendix A in the revised version.\n\n[1] David Barber and Felix Agakov. The IM algorithm: A variational approach to information maximization, 2003.\n\n[2] Oord A, Li Y, Vinyals O. 
Representation learning with contrastive predictive coding, 2018.\n\n[3] Chris Cremer, Quaid Morris, and David Duvenaud. Reinterpreting importance-weighted autoencoders, 2017.", " **Q2: 1) Even if the number of confounders increases, the true mutual information $I(c ; \mathcal{T})$ does not. 2) It shows inconsistency to regard $I_{NCE}\left(c ; \mathcal{T}\right)$ and $I_{NCE}\left(c_{i} ; \mathcal{T}\right)$ as having the same upper bound.\nThe following inequality also holds in the same setting: $\sum I_{NCE}\left(c_{i} ; \mathcal{T} \mid K\right)=I_{NCE}(c ; \mathcal{T} \mid K) \leq \log K$.**\n\nANS:\n\nWe explain Theorem 1 in detail below:\n\n1) As the number of confounders increases, although the true mutual information $I(c ; \mathcal{T})$ does not increase, the necessary condition for $I_{NCE}(c ; \mathcal{T})$ to be a tight lower bound of $I(c ; \mathcal{T})$ becomes more difficult to satisfy, and the demand for data increases significantly.\n\nFor an entangled context, the necessary condition for the InfoNCE lower bound $I_{NCE}(c ; \mathcal{T})$ to be tight is\n\n$$I_{NCE}\left(c ; \mathcal{T}\right) \leq I\left(c ; \mathcal{T}\right) \leq \log K.$$\n\nSince $I(c ; \mathcal{T}) \geq \sum_{i=0}^{N} I\left(u_{i} ; \mathcal{T}\right)$, to satisfy the above condition the amount of data $K$ must satisfy\n\n$$\log K \geq \sum_{i=0}^{N} I\left(u_{i} ; \mathcal{T}\right), \qquad K \geq e^{\sum_{i=0}^{N} I(u_{i} ; \mathcal{T})}.$$\n\nTherefore, if the number of confounders increases, the demand for data grows exponentially.\n\nWhen the data are not rich enough, this necessary condition may not be satisfied. The InfoNCE lower bound $I_{NCE}(c ; \mathcal{T})$ may then be loose, that is, $I_{NCE}(c ; \mathcal{T})$ may be much smaller than the true mutual information $I(c ; \mathcal{T})$, and thus the MI optimization based on $I_{NCE}(c ; \mathcal{T})$ will be severely affected.\n\n2) **Clarification:**\n$I_{NCE}\left(c ; \mathcal{T}\right)$ and $I_{NCE}\left(c_{i} ; \mathcal{T}\right)$ have different upper bounds. \n\n$I_{NCE}\left(c ; \mathcal{T}\right)$ is a lower bound of $I\left(c ; \mathcal{T}\right)$, and the necessary condition for $I_{NCE}\left(c ; \mathcal{T}\right)$ to be a tight bound of $I\left(c ; \mathcal{T}\right)$ is\n\n$$I_{NCE}\left(c ; \mathcal{T}\right) \leq I\left(c ; \mathcal{T}\right) \leq \log K.$$\n\n$I_{NCE}\left(c_{i} ; \mathcal{T}\right)$ is a lower bound of $I\left(c_{i} ; \mathcal{T}\right)$, and the necessary condition for $I_{NCE}\left(c_{i} ; \mathcal{T}\right)$ to be a tight bound of $I\left(c_{i} ; \mathcal{T}\right)$ is\n\n$$I_{NCE}\left(c_{i} ; \mathcal{T}\right) \leq I\left(c_{i} ; \mathcal{T}\right) \leq \log K.$$\n\nFor a disentangled context $c=\{c_{1},c_{2},\cdots,c_{N}\}$, we then derive the necessary condition for $I_{NCE}(c ; \mathcal{T})$ to be a tight lower bound of $I(c ; \mathcal{T})$.\n\nWith the assumption that the contexts $\{c_{1},c_{2},\cdots,c_{N}\}$ are independent of each other, $I(c ; \mathcal{T})$ can be decomposed as $\sum I\left(c_{i} ; \mathcal{T}\right)$. Therefore, under the confounder-independence assumption, for $I_{NCE}(c ; \mathcal{T})$ to be a tight bound it is only necessary that every $I_{NCE}(c_{i} ; \mathcal{T})$ be a tight bound. \n\nIf every $I_{NCE}(c_{i} ; \mathcal{T})$ $(i=1,2,\ldots,N)$ is a tight bound, then we have\n\n$$I_{NCE}\left(c_{i} ; \mathcal{T}\right) \leq I\left(c_{i} ; \mathcal{T}\right) \leq \log K.$$\n\nUnder the confounder-independence assumption, we therefore have\n\n$$\sum I_{NCE}\left(c_{i} ; \mathcal{T}\right) \leq \sum I\left(c_{i} ; \mathcal{T}\right) \leq N \log K,$$\n\n$$I_{NCE}\left(c ; \mathcal{T}\right) = \sum I_{NCE}\left(c_{i} ; \mathcal{T}\right) \leq I\left(c ; \mathcal{T}\right) = \sum I\left(c_{i} ; \mathcal{T}\right) \leq N \log K.$$\n\nThus, the necessary condition for $I_{NCE}\left(c ; \mathcal{T}\right)$ to be a tight bound of $I\left(c ; \mathcal{T}\right)$ can be relaxed to\n\n$$I_{NCE}\left(c ; \mathcal{T}\right) \leq I\left(c ; \mathcal{T}\right) \leq N \log K.$$\n\nTherefore, by decomposing the MI estimation under the confounder-independence assumption, the required amount of data $K$ can be reduced from $K \geq e^{I(c ; \mathcal{T})}$ to $K \geq e^{\frac{1}{N} I(c ; \mathcal{T})}$. \n\nCombined with $I(c ; \mathcal{T}) \geq \sum_{i=0}^{N} I\left(u_{i} ; \mathcal{T}\right)$, this means the amount of data $K$ can specifically be reduced from $K \geq e^{\sum_{i=0}^{N} I\left(u_{i} ; \mathcal{T}\right)}$ to $K \geq e^{\frac{1}{N} \sum_{i=0}^{N} I\left(u_{i} ; \mathcal{T}\right)}$. For example, if there are three confounders and each contributes about $2$ nats of information, the entangled bound requires $K \geq e^{6} \approx 403$ samples per InfoNCE batch, whereas the decomposed bound only requires $K \geq e^{2} \approx 7.4$.\n\n", " \n**Q3: Figure 6 does not show how effectively each confounder was encoded; shouldn't Setup0 and Setup4 be located closer than Setup0 and Setup3?**\n\nANS:\n\nFor context-based methods, the core factor that influences the generalization performance is whether the context can separate the different settings as much as possible. Figure 6 shows that the context learned by DOMINO can be more easily distinguished than that learned by TMCL. Previous works with context visualization also used similar evaluation criteria, i.e., the latent contexts from the same tasks are close in the embedding space while maintaining clear boundaries between different tasks (see Section 5.3 in the CCM paper). \n\n\nSince we consider all trajectories different from the current setting as negative cases when computing InfoNCE, and treat them equally when computing $L_{NCE}$, our current algorithm can only guarantee that the contexts under different settings are clearly separated; it does not guarantee that similar settings will be encoded more similarly.\n\nThe requirement you kindly suggested, that similar settings should be encoded into similar contexts while still being well distinguished from each other, is indeed a promising research direction, which we leave as future work. \n", " We appreciate your careful reading of our paper and your very constructive suggestions. Here, we address your concerns as follows.\n\n**Q1: Why choose T-MCL as the sole baseline in the MBRL experiments?**\n\nANS:\n\nWe chose TMCL as the main baseline because: 1) TMCL is the state-of-the-art method in model-based meta-RL. 2) TMCL and its base version CADM are the first to provide a rich meta-RL benchmark affected by confounders (containing both discrete and continuous confounders); in contrast, all other methods give results in only a small number of environments. 3) RIA is a recent advanced algorithm that is concurrent work with DOMINO; it is also implemented on top of TMCL and had not yet been open-sourced when we ran our comparison experiments. 
The p-value results of the Welch t-test between the proposed method and the baselines are shown as follows:\n\n| | cartpole | pendulum | ant | halfcheetah | slimhumanoid | hopper |\n|-------|----------|------------|----------|-------------|--------------|----------|\n| TMCL | 1.27e-11 | 7.1776E-10 | 0.000756 | 0.010593 | 0.134583 | 0.019499 |\n| MINO | 3.87E-07 | 0.000619 | 0.006396 | 0.006166 | 0.108993 | 0.000698 |\n| PEARL | 0.000824 | 0.003779 | 0.053568 | 0.041328 | 5.4165E-05 | 0.011843 |\n\n\nThus, with significance level $\\alpha=0.15$, the improvement of DOMINO compared to the baselines are significant ($p<0.15$). \n\n**Q2: the performance comparison plot in Figure 1b should have error bars. It should also state what method of averaging was used for the plotted values**\n\nANS\n\nThanks for your kind suggestion. We have revised Figure 1b with error bars(see page2, revised version paper). The method of averaging is that we calculated the average return of 20 random tests.\n\n**Q3:what measure of uncertainty is represented by the error bars for each plot and table?**\n\nANS:\n\nThe error bars for each plot and table show the confidence interval of the average return with 20 random tests over different random seeds. Since reinforcement algorithms are usually sensitive to the seed, most of the papers show the average return with confidence intervals over more than 5 random seeds. In the shown figures, the line represents the average value of different seeds, and the shade represents the confidence interval, which is calculated by a typical data visualization package Seaborn.lineplot.\n\n**Q4:whether this bound is an upper or lower bound and what is the definition of “MI underestimation”**\n\nANS:\n\nThe InfoNCE bound is a lower bound of the mutual information, and the MI underestimation in our original paper is defined as follows:\nWhen the InfoNCE bound is much smaller than the mutual information, which is also called underestimation, then the InfoNCE is a loose lower bound, and in this situation, the MI is hard to be optimized by maximizing the InfoNCE bound according to underestimation.\n\n\n**Q5:Given that the independence assumption is core to this work, it is unclear how significant this setting will be in practice and for future work.**\n\nANS:\n\nIn practice, the dynamics generalization problem faced by the robot is mainly caused by the mismatch between various dynamics parameters in the simulation environment and the real world, such as mass, damping coefficient, length, size, and stiffness of the mechanical structure. These parameters themselves are usually independent, and when many of them are different in both real-world and simulation environments, the robot dynamics will exhibit more complex variations, posing a great challenge to the generalization of control strategies.\n\t\nTherefore, based on this assumption, inferring information about each individual confounder from state transition sequences is very important for improving the performance of the robot. This process is similar to the process in which a human calibrates each kinetic parameter respectively, except that DOMINO lets the relevant information be encoded from the trajectory automatically using a context-based approach.\n\nFor future works, we can explore how to extract the information that is most useful for state transfer from each of the confounders separately when they do have some correlation with each other. 
One possible option is to adjust the penalty factor for mutual information between the context vectors in DOMINO, which can be set to be dynamically adjustable.", " **Q6: what is the price in performance one might expect to pay for making this assumption?**\n\nANS:\n\nIf the confounder independence assumption is no longer satisfied, splitting the mutual information into sums of several subitems to be estimated separately will lead to an overestimation of the mutual information. And fortunately, the penalty term in $L_{NCE}$ loss function for $I(c_{i},c_{j})$ can suppress this phenomenon to some extent. Therefore, the performance does not deteriorate in more complex environments, where the assumption is not satisfied, compared to baselines such as TMCL.\n\n**Q7:Adding a sensitivity analysis to how dependent the performance is on setting N.**\n\nANS:\n\nThanks for your constructive suggestion. We added the sensitivity analysis on the hyper-parameter $N$, and have added it to the revised version.\n\nWe compare the performance of DOMINO with different hyper-parameter $N$, which is equal or not equal to the number of confounders in the environment. In this experiment, the confounder is the damping, mass, and a crippled leg (number of confounders is 3), and we compare the performance of DOMINO with different hyper-parameter $N={1,2,3}$. \nAs shown in Figure 9 (**see Appendix E.3, page21, revised version paper**), even though the hyper-parameter $N$ is not equal to the ground truth value of the confounder number, DOMINO also benefits the context learning. In practice, under a conservative setup of hyper-parameter $N$, DOMINO can also benefit the context learning compared to the baselines like TMCL.\n", " Thank you for your recognition of the novelty of our method and the experimental results. Since your concern is mainly related to the clarity of the mathematical definitions, we provide a more detailed explanation of them and polished our paper with your kind suggestion. Here, we address your concerns as follows:\n\n**Q1. Explanation of the objectives and how confounder affects the $R_{train}$ and $R_{test}$.**\n\nANS:\n\nWe want to learn a policy condition on the context, which is encoded from the sequences of current state action pairs $[s_{\\tau},a_{\\tau},s_{\\tau+1}]_{\\tau=t-H}^{t}$ in several training scenarios and enable it to perform well in test scenarios never seen before.\n\nSpecifically, the agent infers the context $c$ that characterizes the current environment through a context encoder $g_{\\phi}$ from the historical trajectories obtained from interactions in the current environment and then generates actions conditioned on the context $c$.\n\nThe characteristics of the training scenario are determined by confounder $u$. The confounder $u$ affects the dynamics of the robot and thus the distribution of the state transfer $(s,a,s')$, and the reward as a function of $(s,a,s')$, thus will affect the $R_{train}$ and $R_{test}$. \nIn the test scenario, the settings of the confounder $u$ are different from those in the training scenario, and we hope that the learned context encoder has good generalization performance so that the policy conditioned on the learned context can also achieve high returns in the test environment.\n\n**Q2. 
the difference between the two context encoders.**\n\nANS:\n\nThe context encoder $g_{\\phi}$ aims to encode the sequence of state action pairs in the current episode into the $N$ context vectors $c_{1},c_{2},...,c_{N}$, while the function $h_{w}$ aims to map the historical trajectory into the same dimension of $c_{i}$. The key difference is the output dimension of $g_{\\phi}$ is $N$ times than $h_{w}$.\n\n**Q3. the parameters being optimized through the loss functions in Eq. 5, 8, and 9.**\n\nANS:\n\nSorry for the confusion. The parameters being optimized in Equation5, is the $\\varphi$ of context encoder $g_{\\varphi}$ and $w$ in function $h_{w}$ which maps the trajectory into the same dimension of $c_{i}$.\n\nThe parameters being optimized in Equation8, is the $\\varphi$ of context encoder $g_{\\varphi}$ and the $\\phi$ in the prediction model $f_{\\phi}$.\n\nThe Equation 9 is a combination of Equation 5 ($L_{NCE}$) and Equation 8 ($L_{Pre}$), and the parameters being optimized are $\\varphi$, $w$ and $\\phi$. \n\n\n**Q4.Why was the dot product chosen as the similarity measure ?**\n\nANS:\n\nThe critic function is calculated by the dot product of the input vectors with normalization. Thus the output is the cosine similarity of two input vectors.", " **Q5. About the stability of DOMINO learning process.**\n\nANS:\n\nSorry for the confusion. We aim to claim that DOMINO is less likely to deteriorate after learning a good policy. There is less oscillation in the DOMINO training process compared with TMCL, and the variance shown in the figure is caused by the difference in performance under different random seeds, and the learning process under each seed shows less oscillation compared to TMCL. For example, as shown in Figure 3 and Figure 4, in the slim-humanoid-m-d, even with 1.6*1e5 steps, TMCL is also likely to perform poorly, while DOMINO always performs well. Although it has more variance between different seeds in Halfcheetah-m-d, it performs better than the baseline, and the return curve is always above the baseline. \n\n\n**Q6. Why does MINO perform better than T-MCL?**\n\nANS:\n\nT-MCL learns the context encoder only under the supervision of transition prediction, while MINO learns the context encoder via both the transition prediction and mutual information optimization and could help the context encoder to extract better environmental information. Previous works like CCM and TCL also prove that adding mutual information optimization could help context learning under a single confounder setting. This paper aims to reveal the advantages of decomposed mutual information optimization in the multi-confounder setting compared to the approach that only optimizes entangled context with historical trajectory mutual information. Therefore, it is an important ablation version. \nIn terms of the experimental results, MINO may perform worse than TMCL in the environments like Cartpole-f-l, while DOMINO performs well. Furthermore, DOMINO shows more advantages than MINO in the experiments combined with model-free methods. \n\n**Q7. In Figure 6, did you concatenate all vectors before applying t-SNE?**\n\nANS:\n\nYes, we concatenate all context vectors together and visualize them in the 2D space by t-SNE.\n\n\n**Q8. The effect of t-SNE**\n\nANS:\n\nThe visualization aims to show whether the contexts can be separated from each other under multi-confounder setting, which is one of the key factors of the performance of the conditional policy.\n\n**Q9. 
Please show that each latent context captures one of the confounders of the environment**\n\nANS:\n\nThanks for your good suggestion. We add an additional experiment to vary only one of the confounders and observe the changes of $N$ disentangled vectors accordingly. \nIn this experiment, we set up two different confounders: mass $m$ and damping $d$. Under the DOMINO framework, the context encoder inferred two disentangled context vectors: context 0 and context 1.\nAs shown in Figure 10 and Figure 11 (**see Appendix F.1, page 22, the revised version paper**), context 1 is more related to damping. When the confounders are set as the same mass but different damping, the visualization result of context 1 under different settings are separated clearly from each other, while under the same damping but different mass settings, the visualization result of context 1 is much more blurred from each other. Similarly, context 0 is more related to mass. When the confounders are set to the same damping but different mass, the visualization result of context 0 under different settings is separated clearly from each other, while under the same mass but different damping settings, the visualization result of context 0 is less different from each other.\n\n**Q10. About the training code**\n\nANS:\n\nThe whole training code is available at the following anonymous link: https://anonymous.4open.science/r/DOMINO_NIPS-CEC1/\n", " Thank you very much for recognizing our idea, writing, and presentation. We sincerely thank you for the valuable suggestions. Here, we address your concerns as follows:\n\n**Q1. About the adaptation process and the relationship between dynamics generalization and Meta-RL.**\n\nANS:\n \nWe mention the adaptation process in the preliminaries. We learn a context encoder to capture the environmental information and a context-conditional policy to generate actions in the training process. At test-time, the policy zero-shot adapts to the new MDP under the unseen confounders setting $u_{test}$ conditioned on the inferred context. During the adaptation process, the context encoder maps the sequence of state action pairs in new dynamics into context vectors, and the policy generates actions condition on the learned context. Since the proposed decomposed mutual information optimization framework can be used as a plug-and-play module to combine with the conventional meta-reinforcement learning methods, and the key of this paper is to learn the better context encoder, we have introduced this part relatively little. With your kind suggestion, we have added a description of the adaption process related to it in the revised version.\n\nThe generalization objectives of meta-reinforcement learning include several aspects such as generalization of tasks, generalization of robot dynamics, generalization of environments, etc. Dynamics generalization of reinforcement learning belongs to a branch of meta-reinforcement learning. For example, in PEARL, which is a typical meta-reinforcement learning algorithm, the experiments on Walker-2D-Params domain are to test the performance of dynamics generalization.\n\nBy conditioning on an effective context, Meta-RL policies can easily generalize to new tasks within a few adaptation steps.\nContext-based Meta-RL methods like PEARL (Titled: Efficient Off-Policy Meta-RL via Probabilistic Context Variables) and CCM (Titled: Towards Effective Context for Meta-Reinforcement Learning) then train a policy conditioned on the latent context to improve generalization. 
Both TMCL and CADM can be categorized as context-based meta-reinforcement learning methods.\n\n**Q2: About the novelty and the relationship with previous works**\nANS: The core innovation of this paper is to address the challenges for accurate estimation of MI posed by the combination of multiple confounders. \n\nANS:\n \nMaximizing MI has been verified by several earlier works, such as [1,2,3], to have an improvement in the single confounder setting. CCM [1] adds mutual information optimization and designs an additional exploration policy to collect more elective data. FOCAL [2] introduces contrastive learning method to offline meta-RL. RIA [3] introduces a relational intervention approach to TMCL which also attempts to maximize the MI between context vector and historical trajectories. \n\nHowever, when multiple confounders act together, it is difficult to learn an accurate entangled context variable to cover all the information of multiple confounders. Our theoretical analysis also shows that as the number of confounders increases, InfoNCE may become a very loose lower bound, which poses a challenge for the optimization of MI. Therefore, this paper focuses on addressing such a challenge by decomposed MI optimization. \n\nExperimental Comparison:\n\nDOMINO has designed two parts of experiments combined with the model-based approach and with the model-free approach.\n\nIn the experiments combined with the model-based method, both DOMINO and paper [3] are implemented based on TMCL.\nIn the experiments combined with the model-free approach, both DOMINO and paper [1] are implemented based on PEARL.\n\nIn this paper, we use TMCL and PEARL as the main baseline and add an ablated version that optimizes only one mutual information between the entangled context and the trajectory for comparison, aiming to highlight the effect of the proposed decomposed MI optimization method under the multi-confounded setting.\n\nWith your kind suggestion, we add a comparison experiment to compare DOMINO and RIA. As shown in Figure 8 (**see Appendix E.2 in page21, revised version paper**), DOMINO achieves better generalization performance than RIA and TMCL, especially in complex environments like Halfcheetah-$m$-$d$ and Slim-humanoid-$m$-$d$.\n\n[1] Haotian Fu, Hongyao Tang, Jianye Hao, Chen Chen, Xidong Feng, Dong Li, and Wulong Liu. Towards effective context for meta-reinforcement learning: an approach based on contrastive learning.\n\n[2] Li, L., Huang, Y., Chen, M., Luo, S., Luo, D., & Huang, J. (2021). Provably Improved Context-Based Offline Meta-RL with Attention and Contrastive Learning. arXiv preprint arXiv:2102.10774.\n\n[3] Guo J, Gong M, Tao D. A Relational Intervention Approach for Unsupervised Dynamics Generalization in Model-Based Reinforcement Learning[C]//International Conference on Learning Representations. 2022.", " \n\n**Q3. The number of learned context vectors as prior information to compare with TMCL.**\n\nANS:\n\nGood question! The number of learned context vectors is set as the number of confounders in the environment as a primary hyper-parameter. \n\nHere, we add additional sensitivity test experiments of the hyper-parameter $N$ to solve your concerns. \n\nWe compare the performance of DOMINO with different hyper-parameter $N$, which is equal or not equal to the number of confounders in the environment. 
In this experiment, the confounder is the damping, mass, and a crippled leg (number of confounders is 3), and we compare the performance of DOMINO with different hyper-parameter $N={1,2,3}$. As shown in Figure 9 (**see Appendix E.3, page 21, revised version paper**), even though the hyper-parameter $N$ is not equal to the ground truth value of the confounder number, DOMINO also benefits the context learning. In practice, under a conservative setup of hyper-parameter $N$, DOMINO can also benefit the context learning compared to the baselines like TMCL.\n\nWe acknowledge that the introduction of the prior information is one of the limitations of our paper, and we also explicitly state this in the limitation section. Here, we provide a practical method to estimate the number of confounders. In the absence of a prior, one can consider training multiple context encoders in parallel, and selecting the best $N$ by comparing the accuracy of state transition prediction, etc. \n\nThis paper focuses on verifying that decomposed mutual information optimization has a significant advantage over entangled mutual information optimization when multiple confounders act together. And we will continue to explore the methods without the prior information on confounder numbers in future works.\n", " This paper addresses the problem of learning generalizable context in RL. In particular, it suggests learning disentangled context representation of each confounding in the environment using the proposed model, DOMINO, which optimizes decomposed MI objectives. It adopts the contrastive learning method when learning the disentangled context representation, regarding trajectories sampled from the setting of the same confounding as positive pair and of different confounding as negative pair. The authors also provide a theoretical basis for how optimizing their decomposed MI objective can make $I_{NCE}$ a tighter lower bound by alleviating the underestimation of MI. By learning policy conditioning on the learned context vector, DOMINO can achieve higher generalization performance compared to both model-based and model-free baselines. Strengths:\n\nThe paper is well written and clear to understand. Using contrastive loss when learning disentangled representation of each confounding is novel and intuitive. And it is intriguing to get an idea of sampling negative pairs from different episodes. The experiments are comprehensive and the results are impressive.\n\n\nWeaknesses:\n\nHowever, the proof of Lemma 1 and Theorem 1 lacks mathematical rigor. Also, there is some missing specific information about notations in the proof, thereby undermining the clarity and soundness of the paper (e.g., $w_y$ and $E$). Visualization of the learned context embeddings does not show how effectively each confounding is encoded.\n - In Appendix A,\n - How does the first equility ($q(y | x, y_{2:K}) = p(y) K w_y$) hold?\n - What is $w_y$?\n - What does $E$ mean in $I_{NCE}(x;y | E, K)$?\n- In Theorem 1,\n - Even if the number of confounders increases, the true mutual information $I(c; \\mathcal T)$ does not. Therefore it shows inconsistency to regard $I_{NCE}(c; \\mathcal T)$ and $I_{NCE}(c_i; \\mathcal T)$ as having the same upper bound.\n - The following inequality also holds in the same setting. $\\sum I_{NCE}(c_i;\\mathcal T | K) = I_{NCE}(c;\\mathcal T| K) \\le log K$ \n- In Figure 6,\n - DOMINO encodes the context vector of each confounder mass and damping. 
However, Figure 6 does not show how effectively each confounder was encoded.\n - For example, Setup0 and Setup4 have the same damping condition and Setup0 and Setup3 have different mass and damping conditions. Then shouldn't Setup0 and Setup4 be located closer than Setup0 and Setup3? Yes, the authors adequately addressed the limitations and potential negative social impact of their work.", " This paper studies a contextual reinforcement learning (RL) setting where the environment dynamics are parameterized by independent factors, which the authors refer to as “confounders.” In each episode, the underlying factors can vary. They present a method for contextual meta-reinforcement learning (RL) called DOMINO, which learns to encode the RL agent’s current trajectory into a set of independent context vectors. These independent context vectors can then be used as inputs to the transition model in model-based RL (MBRL) and as an input to the policy in model-free RL, thereby providing the agent with an inferred context for the underlying environment factors in any given episode. Importantly, their method assumes the underlying environment factors are similarly independent. The main contributions of the paper are the method, DOMINO, for learning independent context vectors from the trajectory and their analysis and experimental results demonstrating the favorable properties of this method (including improved empirical performance against baselines learning entangled context vectors), when the underlying independence assumptions are valid. Strengths\n\n- The paper provides a simple method for improving context-aware meta RL in an environment with multiple independent factors of variation that impact the transition dynamics. The method itself is clearly described. This seems to be the first method to directly exploit an explicit assumption of independence among the underlying environment factors of variation.\n- The method performs well against sensible baselines. Importantly the method performs well against an ablation that does not learn disentangled context vectors.\n\nWeaknesses\n\n- The reported results in the Table 1 and 2 have high overlap between the authors’ DOMINO and MINO methods and the baselines. The signficance of these results could be made clearer by reporting the results of a Welch t-test between the proposed method and the baselines.\n- Similarly, the performance comparison plot in Figure 1b should have error bars. It should also state what method of averaging was used for the plotted values\n- The paper can benefit from a full pass to improve the clarity of the writing. There are numerous missing details about basic figures, such as what measure of uncertainty is represented by the error bars for each plot and table. There are also several ambiguous phrasings and sentences with confusing wording. For example\n- A key aspect of this paper is the analysis of InfoNCE as a “loose bound” of the mutual information. However, the authors never define whether this bound is an upper or lower bound. While this detail can be inferred from context, I think it is important to make this point clearer to the reader. 
Relatedly, the definition of “MI underestimation” in L45 is unclear.\n- Given that the independence assumption is core to this work, it is unclear how significant this setting will be in practice and for future work.\n- Moreover, it seems important for the experiments to assess how valid such an independence assumption is in practice, and crucially, what is the price in performance one might expect to pay for making this assumption. An experiment assessing the performance of DOMINO and MINO on a more complex environment whose underlying factors of variation are not mutually independent would improve this paper by providing a more complete picture of the effectiveness of this method.\n- There seems to be an underlying assumption that the N independent context vectors aim to encode information about the underlying factors of variation in the environment. However, this connection is actually never explicitly made in the writing, making the jump from discussing MI in terms of environment factors to context vectors (4.1 to 4.2) unclear.\n- It seems that DOMINO requires setting the number of context vectors N equal to the number of environment factors of variation. In general, we may not know this value exactly. Adding a sensitivity analysis to how dependent the performance is on setting N to this exact value would provide important information on how applicable this method is in practice.\n\nMinor comments:\n\n- L22: “mythologies” should be “morphologies”.\n- L47-48: “First the context encoder embeds the past state-action pairs into disentangled context vectors” is an inaccurate description, as it must first be optimized to do so (as next described in L48-49).\n- This paper could consider citing related work in unsupervised environment design [1,2,3,4] and more generally, RL work in procedurally-generated environments [5,6]. These works are deeply related as they effectively perform meta-RL over a space of environment variations with an implicitly learned context. Ignoring this line of work seems like a significant oversight.\n\nReferences\n\n[1] Dennis et al, 2020. Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design.\n\n[2] Jiang et al, 2021. Prioritized Level Replay.\n\n[3] Jiang et al, 2021. Replay-Guided Adversarial Environment Design.\n\n[4] Parker-Holder et al, 2022. Evolving Curricula with Regret-Based Environment Design.\n\n[5] Raileanu et al, 2021. Decoupling Value and Policy for Generalization in Reinforcement Learning.\n\n[6] Cobbe et al, 2019. Leveraging Procedural Generation to Benchmark Reinforcement Learning. - Could the authors elaborate on their choice of T-MCL as the sole baseline in their MBRL experiments? Further details on this choice would benefit the clarity of the experiments section.\n- Further, since the goal of the method is to perform efficient meta-test adaptation via context vectors, why was PEARL chosen over Varibad, which has been shown to provide much more efficient within-episode adaptation compared to PEARL?\n- It seems that the number of context vectors N must be set to the number of environment factors. Is this understanding correct?\n- This paper assumes the environment confounders impact the transition function, but not the reward function. How do the authors view the role of reward generalization in their work? Can DOMINO be expected to work in settings where the environment confounders also impact the reward function? 
The core assumption of this work also acts as its primary limitation: The environment factors of variation are assumed to be independent, and their number known a priori. The authors should make an effort to emphasize this limitation and to what extent they believe such an assumption of independence may be applicable in practice.", " This paper tackles the problem of generalization in MDPs where the dynamics changes are assumed to be caused by multiple independent factors, denoted as context. The proposed framework (DOMINO) learns a context encoder that maps trajectories to a latent context via decomposed mutual information using noise-contrastive estimation (InfoNCE). The authors combine DOMINO with model-free and model-based RL algorithms, and perform experiments in classic environments, as well as in the Mujoco benchmark, in settings where multiple confounders change simultaneously. Additionally, qualitative visualizations of the latent context vectors are presented using t-SNE. Strengths:\n- The idea of capturing the different confounders that may affect the dynamics of the MDP into different latent contexts is novel and interesting.\n- The experimental results show that the proposed method can, in general, achieve better performance than the state-of-the-art.\n\nWeaknesses:\n- The paper needs improvement regarding the clarity of the mathematical definitions, such as the objective functions.\n- It is not clear whether the improvements are because of the decomposed mutual information framework, or because of other algorithmic improvements (see below).\n Furthermore, I have the following questions and constructive criticisms:\n\n- “Our goal is to learn a generalized context encoder, which is learned in training process by maximizing the expected rewards R_train in seen environments, and zero-shot generalized to unseen environments for a high expected rewards R_test” \nThis sentence is hard to follow. The objective is also not very clearly defined in Eq. 1. For instance, how is the context $c$ generated? How do the confounders $u$ affect the expectation?\n\n- In Section 4.2, it is not clear the difference between the two context encoders, $g_\\phi$ and $h_w$? Importantly, why is $h_w$ necessary? Notice that $h_w$ does not appear in the pseudo-code of the algorithms in the appendix.\n\n- It is not clear what are the parameters being optimized through the loss functions in Eq. 5, 8, and 9. I suggest specifying which parameters ($\\psi$, $\\phi$, etc.) are involved in the gradients of each loss function.\n\n- “The critic function $\\psi(·, ·)$ measures the similarity between inputs by dot product.” Why was dot product chosen as the similarity measure? Can you elaborate on this decision?\n\n- “The results illustrate that DOMINO learns the policy more efficiently and stably than the baselines.” \nIt is not possible to infer that DOMINO learns more “stably” than the baselines. In fact, sometimes it shows more variance (see Fig. 3 - Halfcheetah-m-d) than the baselines.\n\n- Why does MINO perform better than T-MCL? It seems the improvements of MINO (in comparison to T-MCL) are more important than the improvements of DOMINO (in comparison to MINO).\n\n- In Figure 6, it is not clear what each point represents. Given a trajectory, the context encoder outputs N different context vectors. In this figure, did you concatenate all vectors before applying t-SNE? \n\n- The key characteristic of the proposed method is the fact that it should learn disentangled and independent latent contexts. 
However, the visualizations using t-SNE were not able to show that. An important result would be to show that indeed each latent context captures one of the confounders of the environment. This could be shown by varying only one of the confounders and observing whether only one of the latent contexts changes accordingly.\n\n- The authors state in the checklist that they have included the code, data, and instructions needed to reproduce the main experimental results in Appendix D. However, only pseudocode of the algorithms and code for the environment were made available. The actual code of the proposed methods is not available.\n\n- In the paper’s abstract, it is said that the open-sourced code and videos are released on their anonymous homepage. However, there are no videos on this page, and only code for the environments is available, not the algorithms/training code.\n The paper could benefit from a discussion regarding the assumption of independent confounders. For instance, how difficult it would be to adapt the algorithm to the case where we have co-related confounders?", " This paper proposes a decomposed mutual information method to learn disentangled context information, which can generalize reinforcement learning algorithms into unseen environments. The experimental experiments demonstrate that the proposed method can achieve better performance than the previous methods. Strengths:\n1. The writing of this paper is pretty well, and the idea of it is easy to follow.\n2. The figures in this paper are very clear and very well.\n3. The extensive experiments show the effectiveness of the proposed method.\n\nWeakness: \n1. Based on the title, I assume that this study focuses on the meta-reinforcement learning problem. The conventional meta-reinforcement learning methods include an adaptation process, but this paper makes no mention of this process. Additionally, the paper states that it intends to train a general context-encoder to solve the adaptation problem, indicating that the paper's context is the dynamics generalization in reinforcement learning (this paper also mentions it in line 84), which is in contrast to the title of the paper, which refers to meta-reinforcement learning.\n\n2. The second problem of this paper is the novelty. The paper aims to maximize the mutual information between contexts extracted from historical information and the historical trajectories. However, this paper does not make clear the relationship with [1,2,3] which also attempt to maximize the MI between context vector and historical trajectories. Furthermore, this work does not compare the performance with [3] and even does not acknowledge it, despite the fact that [3] focuses on a similar problem to this paper. As a result of the missing contribution and experimental comparisons with [1,2,3], I believe this paper's uniqueness is somewhat limited.\n\n3. The number of learned context vectors $c$ is set as the number of environments in the study, which is the primary hyperparameter of the suggested technique. However, in a real-world setting, the number of environments is not available, making it unfair to compare it to the baseline TMCL, which doesn't rely on such prior information. This increases my concerns about the technical soundness of this paper.\n\nIn conclusion, while the writing and experimental results are excellent, this paper suffers from the aforementioned clarity and novelty issues. 
If the authors address my concerns in their response, I will consider raising my score.\n\n\n------------------------------------------------- After Rebuttal ------------------------------------------------\n\nI think that the additional experimental results and discussion in the revision resolve my concerns about the clarity problem of the submission, so I increase my score from 4 to 6 accordingly.\n\nMinors: I believe that RIA considers context information and constructs confounder sets with multiple confounders, so I believe that RIA should be discussed in the introduction's confounder discussion (Line 42).\n\n[1] Haotian Fu, Hongyao Tang, Jianye Hao, Chen Chen, Xidong Feng, Dong Li, and Wulong Liu. Towards effective context for meta-reinforcement learning: an approach based on contrastive learning. \n\n[2] Li, L., Huang, Y., Chen, M., Luo, S., Luo, D., & Huang, J. (2021). Provably Improved Context-Based Offline Meta-RL with Attention and Contrastive Learning. arXiv preprint arXiv:2102.10774.\n\n[3] Guo J, Gong M, Tao D. A Relational Intervention Approach for Unsupervised Dynamics Generalization in Model-Based Reinforcement Learning[C]//International Conference on Learning Representations. 2022. Please refer to the \"Weakness\" listed above. Please refer to the \"Weakness\" listed above." ]
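The discussion above describes the decomposed objective only in prose (a cosine-similarity critic, one InfoNCE term per disentangled context vector, positives drawn from the same confounder setting, negatives from other episodes in the batch). The snippet below is a minimal, hypothetical sketch of such a per-context InfoNCE loss; the function name, tensor layout, and temperature are illustrative assumptions and are not taken from the authors' released code.

```python
# Hypothetical sketch: one InfoNCE term per disentangled context vector.
# Shapes, names, and the temperature value are assumptions for illustration.
import torch
import torch.nn.functional as F

def decomposed_info_nce(contexts, positives, temperature=0.1):
    """contexts, positives: tensors of shape (N, B, D).
    contexts[i] holds the i-th context vector for each of B episodes;
    positives[i] holds the matching embedding (e.g., from the trajectory
    encoder h_w) under the same confounder setting. The other batch entries
    serve as negatives."""
    n_ctx = contexts.shape[0]
    total = contexts.new_zeros(())
    for i in range(n_ctx):
        c = F.normalize(contexts[i], dim=-1)   # (B, D)
        p = F.normalize(positives[i], dim=-1)  # (B, D)
        # Normalized dot product = cosine similarity, as in the critic function.
        logits = c @ p.t() / temperature       # (B, B)
        labels = torch.arange(c.shape[0], device=contexts.device)
        total = total + F.cross_entropy(logits, labels)
    return total / n_ctx
```

Summing one contrastive term per context vector, instead of a single term on the concatenated context, is the "decomposed" part the rebuttal argues keeps the InfoNCE lower bound from becoming loose as the number of confounders grows.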
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "nips_2022_CJGUABT_COm", "A59VmCvb7uD", "NVvW8P0b2kb", "SsKD54e7cMG", "w87doH_JV3t", "ecPVLuuFS", "_bOgSB4vaC", "0flbimho4X5", "63-Vv8Jz94", "p87kx5ezqV", "J8poTxluEXM", "Ol0zoOhaM4o", "Ol0zoOhaM4o", "rhQabDwjkbv", "0nq3jEUoxaj", "rG068puM45P", "wTJssqx67Bz", "nips_2022_CJGUABT_COm", "udbxDEYNU7O", "udbxDEYNU7O", "CZEWqv0v0ZP", "cmSlk6WSCB8", "KNbR5GdtF6j", "PsNKYf9Eb5C", "nips_2022_CJGUABT_COm", "_bOgSB4vaC", "_bOgSB4vaC", "_bOgSB4vaC", "cn5mV-9jun", "cn5mV-9jun", "cn5mV-9jun", "3aa4rQl67Fj", "3aa4rQl67Fj", "cmSlk6WSCB8", "cmSlk6WSCB8", "nips_2022_CJGUABT_COm", "nips_2022_CJGUABT_COm", "nips_2022_CJGUABT_COm", "nips_2022_CJGUABT_COm" ]
nips_2022_Ddd6FqHXmHA
OpenAUC: Towards AUC-Oriented Open-Set Recognition
Traditional machine learning follows a close-set assumption that the training and test sets share the same label space. Yet in many practical scenarios, it is inevitable that some test samples belong to unknown classes (open-set). To fix this issue, Open-Set Recognition (OSR), whose goal is to make correct predictions on both close-set samples and open-set samples, has attracted rising attention. In this direction, the vast majority of literature focuses on the pattern of open-set samples. However, how to evaluate model performance in this challenging task is still unsolved. In this paper, a systematic analysis reveals that most existing metrics are essentially inconsistent with the aforementioned goal of OSR: (1) For metrics extended from close-set classification, such as Open-set F-score, Youden's index, and Normalized Accuracy, a poor open-set prediction can escape from a low performance score with a superior close-set prediction. (2) Novelty detection AUC, which measures the ranking performance between close-set and open-set samples, ignores the close-set performance. To fix these issues, we propose a novel metric named OpenAUC. Compared with existing metrics, OpenAUC enjoys a concise pairwise formulation that evaluates open-set performance and close-set performance in a coupled manner. Further analysis shows that OpenAUC is free from the aforementioned inconsistency properties. Finally, an end-to-end learning method is proposed to minimize the OpenAUC risk, and the experimental results on popular benchmark datasets speak to its effectiveness.
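For reference, the "concise pairwise formulation" mentioned in this abstract is written out explicitly in the author responses further below; restated in their notation (nothing new is added), it reads

$$\texttt{OpenAUC}(f, r) = \frac{1}{N_k N_u} \sum_{i=1}^{N_k} \sum_{j=1}^{N_u} \mathbb{I}[y_i = h(\boldsymbol{x}_i)] \cdot \mathbb{I}[r(\boldsymbol{x}_j) > r(\boldsymbol{x}_i)],$$

where $\boldsymbol{x}_i$ ranges over the $N_k$ close-set test samples, $\boldsymbol{x}_j$ over the $N_u$ open-set test samples, $h$ is the close-set predictor, and $r$ is the open-set score function: a close/open pair counts only if the close-set sample is classified correctly and ranked below the open-set sample.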
Accept
The paper proposes OpenAUC, which is a novel metric designed specifically for evaluating Open-Set Recognition (OSR) performance. OpenAUC is motivated by a formal analysis on existing OSR evaluation metrics, which suffer from three types of inconsistency properties. Theoretical results show that OpenAUC is consistent with the goal of OSR while free of all identified inconsistency properties. An empirical loss function is developed accordingly that enables model training to optimize the proposed OpenAUC. Overall, the paper is well-written. The proposed OpenAUC metric can potentially benefit future research in OSR as recognized by the reviewers. Authors and reviewers engaged in a productive discussion, which helped to further improve the quality of the paper. The authors are encouraged to address some remaining suggestions from the reviewers, including adding results on other backbones in the experiments and extending the related work by discussing more recent literature in OSR and AUC optimization.
train
[ "76y1P2QMX5C", "IPh0ikjq_-a", "hZGBKBsaeW0", "DsePukpCEox", "fRba-LuY15e", "O0a-GEfZbin", "L5jAUPhKxzX", "IlqeRj0sgez", "LoeRF1bZ8l", "_nXcPdXnAF", "gV7LyA-aXxi", "Y3TyYjJ8x5o", "wxegDpgJBbC", "ew10SzwKNh" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your nice comments! we would like to make the following responses. \n\n> Comment (a): The open-set function r is the exactly same as \"1 - max-of-softmax-score\", right? If so, it is natural to ask whether other methods such as max-of-logit [14] benefit from the OpenAUC optimized network.\n\n**Ans**: Thank you very much for this constructive comment! Both answers are positive. We conduct this experiment on the CUB dataset. As shown in the following table, CE+, the max-of-logit method, also benefits from the proposed optimization objective. \n\n| CUB | Close Acc | AUC(E/M/H)* | OpenAUC(E/M/H) | TNR@95(E/M/H) | macro-F(E/M/H) | micro-F(E/M/H) |\n|:------- |:---------:|:---------------------:|:---------------------:|:---------------------:|:-------:|:-------:|\n| CE+ | 86.2 | 88.3 / 82.3 / 76.3 | 79.8 / 75.4 / 70.8 | 28.4 / 42.1 / 52.3 | 82.6 / 80.3 / 78.3 | 83.3 / 81.6 / 81.4 |\n| CE+OpenAUC | 86.1 | 88.7 / 82.9 / 77.6 | 80.1 / 75.8 / 72.0 | 27.8 / 39.5 / 46.7 | 81.9 / 79.6 / 77.7 | 82.5 / 81.1 / 80.8 |\n\n> Comment (b): Moreover, I encourage the authors to have a well-thought-out discussion in the next version about (1) OpenAUC vs. OSCR w.r.t curves and summary number, (2) operating point for real-world applications and limitations of openAUC.\n\n**Ans**: Thank you very much for this suggestion! We have updated the responses to the latest version. For the sake of your convenience, we attach the corresponding parts right here.\n\nFor (a), i.e., Comment (1) and (2):\n\n**Sec.4.1, Line 188-194.** Compared with the aforementioned metrics, the OSCR curve contains richer information and allows comparing model performances at different operating points. While our goal is to optimize the overall performance of the curve. Hence, it is necessary to find a numeric metric that aggregates the information of the entire curve, such that the models can be trained by optimizing the loss of the metric. To this end, [13] and [14] estimate the area under the OSCR curve by directly calculating the numeric integral with histograms. However, this number is hard to optimize due to multiple non-differential operators such as ranking and counting.\n\n**Sec. 4.2, Line 220-222.** Moreover, Compared with the OSCR curve, OpenAUC enjoys a concise formulation, based on which we can design a differentiable objective function for Empirical Risk Minimization (ERM). We will present the details in Sec.4.4.\n\nFor (b), i.e., Comment (6): \n\n**Sec.6 Broad Impact.** This work provides a novel metric named OpenAUC for the OSR task, as well as its optimization method. We expect our research could promote the research of open-set learning, especially from the respective of model evaluation. Moreover, no metric is perfect, and, of course, it is no exception for OpenAUC. To be specific, OpenAUC summarizes the OTPR performance *under all the OFPR performance*. However, some applications, such as self-driving, require a high recall of open-set. According to Prop.6, only the performance under low OFPR is of interest. In this case, OpenAUC might be biased due to considering irrelevant performances. This might cause potential negative impact concerning safety and security. To fix this issue, optimizing the partial OpenAUC, which summarizes the OTPR performance under some given OFPR performance, might be a better choice. Of course, there is no free lunch. Partial OpenAUC will be more difficult to optimize due to the selection operation. 
Besides, the generalization bound of open-set learning is still an opening problem, and we leave the corresponding analysis of OpenAUC optimization in future work.\n\nBesides, (1) New empirical results are attached in Appendix.D, as well as our observations. (2) The typos in Line 90 and 91 have been revised. (3) The final open-set function is highlighted in Appendix.C. \n\n\n> Comment (c): Update the manuscript.\n\n**Ans**: Thanks for this constructive suggestion! We have updated the manuscript according to the comments, and all the revisions are marked with blue.", " Thank you very much for your nice comment! We have updated the responses to the latest version. To be specific, (1) new empirical results are attached in Appendix.D, as well as our observations. (2) More implementation details, including the new backbone, the generation of open-set pairs, and the efficiency issue, can be found in Appendix.C. (3) Fig.1 has been revised to eliminate the confusing notation. (4) A new section, i.e., Sec.6 is attached to discuss the potential societal impact of OpenAUC. Moreover, the empirical results on all the SBB datasets will be updated in the final version.", " I appreciate the responses and my main concerns have been addressed properly. Hence, I have raised my rating. The complete evaluation on the SSB datasets is suggested to be included in the final version, which will better validate the performance and benefit future works down the line.", " Thanks for the rebuttal. \n\nBecause of the typo of r, I didn't follow the open-set function r. Now I think I understand. In plain language, I think the open-set function r is exactly the same as \"1 - max-of-softmax-score\", right? If so, it is natural to ask whether other methods (other than max-of-softmax) such as max-of-logit [14] benefits from the OpenAUC optimized network. I encourage the authors to run this simple experiment.\n\nMoreover, I encourage the authors to have a well-thought-out discussion in the next version about (1) OpenAUC vs. OSCR w.r.t curves and summary number, (2) operating point for real-world applications and limitations of openAUC.\n\nA small complaint -- NeurIPS allows authors to update manuscript during rebuttal, and doing so helps build trust between authors and reviewers. I suggest authors do so next time. I encourage authors to release code as well.\n\nI maintain my rating as weak accept.", " Thank you for your comments! We would like to make the following response:\n\n> Comment (1): Why OpenAUC is better than calculating the area under the OSCR?\n\n**Ans**: Thank you very much for this constructive comment! We agree with the reviewer that [13,14] also get a numerical metric by calculating the area under the OSCR curve. While the point here is *the way to finish the calculation*. Our final goal is to find a reasonable OSR objective function to optimize directly. To this end, we need to get a simplified version of the metric, so that we can design a differentiable objective function for ERM (Empirical Risk Minimization). The existing studies [13,14] estimate the area under the OSCR curve by directly calculating the numerical integral with histograms, which involves multiple non-differential operators such as ranking and counting. By contrast, OpenAUC can be expressed as the sum of pair-wise loss terms, which enjoys a similar form to AUC. Inspired by the ERM framework of AUC, we can easily construct a differentiable objective function to optimize OpenAUC. 
For completeness, we will update these discussions in the future version.\n\n> Comment (2): The argument in Line188 conflicts with that in [4].\n\n**Ans**: Thanks for your nice question! We've realized that our expression might have induced some misunderstanding. In fact, our argument does not conflict with that in [4]. We also agree that the operation curve contains richer information than a single metric. However, again, our goal is to optimize the overall performance of the curve. Hence, we have to find a numerical metric that aggregates the information of the entire curve, so that the models can be trained by optimizing the loss of the metric. In this sense, our metric is necessary since compatible with its corresponding OFPR-OTPR curve. We will clarify this issue in the future version.", " > Comment (3): It is important to justify the proposed loss function helps, or doesn't decrease much, the accuracies of closed-set classification and open-set detection.\n\n**Ans**: Thanks very much for your constructive suggestion! We conduct an additional experiment on a more challenging dataset, i.e., CUB [14]. According to your suggestion, we present the model performances on multiple metrics such as Close-set Accuracy, AUC, OpenAUC, Error Rate@95%TPR, and Open-set F-score. Note that we report the open-set F-score under the optimal threshold. Besides, we did not analyze Error Rate@95%TPR in Sec.3 since it is a metric for novelty detection, and little OSR work adopted it as a metric. For your convenience, the new results are attached as follows, where (E/M/H) corresponds to the results on the Easy/Medium/Hard split of open-set classes. \n\n| CUB | Close Acc | AUC(E/M/H)* | OpenAUC(E/M/H) | Error@95(E/M/H) | macro-F(E/M/H) | micro-F(E/M/H) |\n|:------- |:---------:|:---------------------:|:---------------------:|:---------------------:|:-------:|:-------:|\n| Softmax | 78.1 | 79.7 / 73.8 / 66.9 | 67.2 / 63.0 / 57.8 | 46.6 / 55.9 / 62.8 | 67.4 / 66.5 / 66.6 | 69.0 / 68.9 / 70.8 |\n| GCPL | 82.5 | 85.0 / 78.7 / 73.4 | 74.7 / 70.3 / 66.7 | 37.0 / 46.8 / 51.3 | 77.6 / 75.4 / 74.0 | 78.4 / 76.8 / 77.4 |\n| RPL | 82.6 | 85.5 / 78.1 / 69.6 | 74.5 / 69.0 / 62.4 | 39.5 / 53.5 / 64.0 | 75.4 / 73.3 / 72.4 | 76.7 / 75.2 / 76.6 |\n| ARPL | 82.1 | 85.4 / 78.0 / 70.0 | 74.4 / 68.9 / 62.7 | 37.6 / 49.9 / 62.7 | 75.3 / 73.1 / 72.2 | 76.6 / 75.0 / 76.5 |\n| CE+ | 86.2 | 88.3 / 82.3 / 76.3 | 79.8 / 75.4 / 70.8 | 28.4 / 42.1 / 52.3 | 82.6 / 80.3 / 78.3 | 83.3 / 81.6 / 81.4 |\n| ARPL+ | 85.9 | 83.5 / 78.9 / 72.1 | 76.0 / 72.4 / 66.8 | 48.7 / 60.6 / 67.8 | 80.8 / 79.0 / 77.3 | 81.7 / 80.4 / 80.4 |\n| Ours | 86.2 | 88.8 / 83.2 / 78.1 | 80.2 / 76.1 / 72.5 | 28.1 / 39.7 / 47.6 | 82.2 / 79.7 / 78.1 | 83.0 / 81.2 / 81.1 |\n\nFrom the results, we have the following observations:\n\n- The proposed method outperforms the competitors on novelty-detection metrics such as AUC and Error Rate@95%TPR, especially on the Medium and Hard splits. Moreover: (1) The improvement on AUC comes from the AUC-based term in the proposed objective, which is consistent with our theoretical expectation. (2) The result on Error Rate validates Prop.6 that optimizing Open-AUC reduces the upper bound of FPR (Recall that $Error Rate \\downarrow = 1 - Acc \\uparrow = 1 - \\frac{TP + TN}{TP + TN + FP + FN}, TPR = \\frac{TP}{TP + FN}, TNR \\uparrow = 1 - FPR \\downarrow = \\frac{TN}{TN + FP}$). \n\n- Our method achieves comparable performances on the close-set accuracy and Open-set F-score. 
This result is reasonable since compared with CE+, no more optimization is conducted on the close-set samples in our new objective function.\n\n- Benefiting from the improvement on open-set samples and the comparable performance on close-set samples, the proposed method achieves the best performance on OpenAUC.\n\n- Another observation is that the Open-set F-score shares similar values for all difficulty splits. Note that the only difference among these splits comes from their open-set data. This phenomenon shows that Open-set F-score cannot differentiate the performance on the open-set. This is inevitable since this metric evaluates the open-set performance only in an implicit manner. Hence, it again validates the necessity to adopt OpenAUC as the evaluation metric.\n\nTo sum up, the empirical results on CUB again speak to the efficacy of OpenAUC and the proposed optimization method. We will update these results in the next version.\n\n> Comment (4): It is not clear what the final open-set function r is in experiments.\n\n**Ans**: Perhaps due to the way of our writing, it is a pity to leave the impression that the open-set function is not well formulated. As described in line 195-197, the final open-set function is defined as $\\min_{k \\in \\mathcal{Y}_k} f(\\boldsymbol{x}_k)$, where $\\forall c \\in \\mathcal{Y}_k, f(\\boldsymbol{x}_c) \\propto \\mathbb{P}[y \\neq c | \\boldsymbol{x}]$. We will highlight this fact in the future version.\n\n\n", " > Comment (5): Line 90 and 91: Is it a typo \"close-set score function r\"? Isn't r an open-set score function as stated in Line86?\n\n**Ans**: Thanks for your careful reading, and the answer is positive. We are sorry for these typos and will correct them in the future version.\n\n> Comment (6): Limitations and potential negative societal impacts of OpenAUC.\n\n**Ans**: Thanks very much for these constructive concerns! OpenAUC summarizes the OTPR performance under *all* the OFPR performance. However, as described in the comment, some applications require a high recall of open-set (i.e., $TPR_{C+1}$). According to Prop.6, only the performance under low OFPR is of interest. In this case, OpenAUC might be biased due to considering irrelevant performances. This might cause potential negative societal impacts. To fix this issue, optimizing the partial OpenAUC, which summarizes the OTPR performance under some given OFPR performance, might be a better choice. Of course, there is no free lunch. Partial OpenAUC will be more difficult to optimize due to the selection operation. Meanwhile, the comment points out that some other applications might favor close-set performance. We argue that OpenAUC is free from this concern since the product formulation of OpenAUC requires that the close-set samples are correctly classified. In other words, a low close-set accuracy will inevitably induce a low OpenAUC. We will update these discussions in the future version.", " Thank you for your comments! We would like to make the following response:\n\n> Comment (1): More related work of the competitors and AUC optimization.\n\n**Ans**: Thank you for this nice suggestion! For detailed information, [a,b,c] might be good references. We will provide a brief review and add some latest literature in the future version.\n\n[a] Chuanxing Geng, Sheng-Jun Huang, Songcan Chen: Recent Advances in Open Set Recognition: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 
43(10): 3614-3631 (2021)\n\n[b] Mohammadreza Salehi, Hossein Mirzaei, Dan Hendrycks, Yixuan Li, Mohammad Hossein Rohban, Mohammad Sabokrou: A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges. CoRR abs/2110.14051 (2021)\n\n[c] Tianbao Yang, Yiming Ying: AUC Maximization in the Era of Big Data and AI: A Survey. CoRR abs/2203.15046 (2022)\n\n> Comment (2): Some minor typos.\n\n**Ans**: Thanks for your careful reading! We will correct these typos in the future version.\n\n> Comment (3): In Appendix, Figure.3 specifies the indices of classes, that is, “FP1”, “FP2”, “FN1” and so on. How do these classes correspond to the notations in the proof?\n\n**Ans**: We are sorry for this confusing presentation. For Fig.3(a), the indices \"1\" and \"2\" corresponds to $h(\\boldsymbol{x}_2)$ and $y_2$, respectively. For Fig.3(b), the index \"1\" corresponds to $y_2$. We will improve the figure in the future version. ", " Thanks for your comments! We would like to make the following responses. For the sake of your convenience, new empirical results are first attached:\n\n| CUB | Close Acc | AUC(E/M/H)* | OpenAUC(E/M/H) | Error@95(E/M/H) | macro-F(E/M/H) | micro-F(E/M/H) |\n|:------- |:---------:|:---------------------:|:---------------------:|:---------------------:|:-------:|:-------:|\n| Softmax | 78.1 | 79.7 / 73.8 / 66.9 | 67.2 / 63.0 / 57.8 | 46.6 / 55.9 / 62.8 | 67.4 / 66.5 / 66.6 | 69.0 / 68.9 / 70.8 |\n| GCPL | 82.5 | 85.0 / 78.7 / 73.4 | 74.7 / 70.3 / 66.7 | 37.0 / 46.8 / 51.3 | 77.6 / 75.4 / 74.0 | 78.4 / 76.8 / 77.4 |\n| RPL | 82.6 | 85.5 / 78.1 / 69.6 | 74.5 / 69.0 / 62.4 | 39.5 / 53.5 / 64.0 | 75.4 / 73.3 / 72.4 | 76.7 / 75.2 / 76.6 |\n| ARPL | 82.1 | 85.4 / 78.0 / 70.0 | 74.4 / 68.9 / 62.7 | 37.6 / 49.9 / 62.7 | 75.3 / 73.1 / 72.2 | 76.6 / 75.0 / 76.5 |\n| CE+ | 86.2 | 88.3 / 82.3 / 76.3 | 79.8 / 75.4 / 70.8 | 28.4 / 42.1 / 52.3 | 82.6 / 80.3 / 78.3 | 83.3 / 81.6 / 81.4 |\n| ARPL+ | 85.9 | 83.5 / 78.9 / 72.1 | 76.0 / 72.4 / 66.8 | 48.7 / 60.6 / 67.8 | 80.8 / 79.0 / 77.3 | 81.7 / 80.4 / 80.4 |\n| Ours | 86.2 | 88.8 / 83.2 / 78.1 | 80.2 / 76.1 / 72.5 | 28.1 / 39.7 / 47.6 | 82.2 / 79.7 / 78.1 | 83.0 / 81.2 / 81.1 |\n\n\\* E/M/H corresponds to the results on the Easy/Medium/Hard split of open-set classes.\n\n> Comment (1): The experimental datasets seem already quite saturated.\n\n**Ans**: Thank you very much for this constructive comment! We conduct an additional experiment on an SSB dataset, i.e., CUB. The above table presents the model performances on multiple metrics such as Close-set Accuracy, AUC, OpenAUC, Error Rate@95%TPR, and Open-set F-score, where (E/M/H) corresponds to the results on the Easy/Medium/Hard split of open-set classes. Note that we report the open-set F-score under the optimal threshold. From the results, we have the following observations:\n\n- The proposed method outperforms the competitors on novelty-detection metrics such as AUC and Error Rate@95%TPR, especially on the Medium and Hard splits. Moreover: (1) The improvement on AUC comes from the AUC-based term in the proposed objective, which is consistent with our theoretical expectation. (2) The result on Error Rate validates Prop.6 that optimizing Open-AUC reduces the upper bound of FPR (Recall that $Error Rate \\downarrow = 1 - Acc \\uparrow = 1 - \\frac{TP + TN}{TP + TN + FP + FN}, TPR = \\frac{TP}{TP + FN}, TNR \\uparrow = 1 - FPR \\downarrow = \\frac{TN}{TN + FP}$). 
\n\n- Our method achieves comparable performances on the close-set accuracy and Open-set F-score. This result is reasonable since compared with CE+, no more optimization is conducted on the close-set samples in our new objective function.\n\n- Benefiting from the improvement on open-set samples and the comparable performance on close-set samples, the proposed method achieves the best performance on OpenAUC.\n\n- Another observation is that the Open-set F-score shares similar values for all difficulty splits. Note that the only difference among these splits comes from their open-set data. This phenomenon shows that Open-set F-score cannot differentiate the performance on the open-set. This is inevitable since this metric evaluates the open-set performance only in an implicit manner. Hence, it again validates the necessity to adopt OpenAUC as the evaluation metric.\n\nTo sum up, the empirical results on CUB again speak to the efficacy of OpenAUC and the proposed optimization method. We will update these results in the next version.\n\n> Comment (2): It would be good to see how the proposed method works on other backbones.\n\n**Ans**: Thanks for this nice suggestion! The experiment above adopts ResNet-50 as the backbone. Due to the time limit, we can only finish the experiments on the CUB dataset. We will add the experiments for all the other datasets in the next version. We hope these results can make the proposed method more convincing.", " > Comment (3): Figure 1 is partly unclear to me. (a) What is the difference between m-f-score and M-f-score? (b) Why OSCR curves are not presented?\n\n**Ans**: Perhaps due to the way of our writing, it is a pity to leave Fig.1 partly confusing. \n\nFor (a), as presented in Eq.(2) and Eq.(3), M-f-score and m-f-score represent Open-set F-score that aggregates Precision and Recall in a *macro* and *micro* manner, respectively. For the sake of your convenience, we attach the equations right here.\n$$ \\texttt{F-score} := 2 \\times \\frac{\\texttt{P}\\_k \\times \\texttt{TPR}\\_k}{\\texttt{P}\\_k + \\texttt{TPR}\\_k},$$\nwhere \n$$\\texttt{P}\\_k := \\frac{1}{C} \\sum\\_{i=1}^{C} \\frac{\\texttt{TP}\\_{i}}{\\texttt{TP}\\_{i}+\\texttt{FP}\\_{i}}, \\texttt{TPR}\\_k := \\frac{1}{C} \\sum\\_{i=1}^{C} \\frac{\\texttt{TP}\\_{i}}{\\texttt{TP}\\_{i}+\\texttt{FN}\\_{i}}$$\nif we aggregate model performances in a *macro* manner, and\n$$\\texttt{P}\\_k := \\frac{\\sum\\_{i=1}^{C} \\texttt{TP}\\_{i}}{\\sum\\_{i=1}^{C}\\left(\\texttt{TP}\\_{i}+\\texttt{FP}\\_{i}\\right)}, \\texttt{TPR}\\_k := \\frac{\\sum\\_{i=1}^{C} \\texttt{TP}\\_{i}}{\\sum\\_{i=1}^{C}\\left(\\texttt{TP}\\_{i}+\\texttt{FN}\\_{i}\\right)}$$\nif the performances are summarized in a *micro* manner. To eliminate confusion, these two metrics will be denoted as macro-F-score and micro-F-score in the next version, respectively.\n\nFor (b), the reason is two-fold: On one hand, Fig.1 aims to illustrate the inconsistency property of F-score, Youden's index, and normalized accuracy, while the OSCR curve does not suffer from an inconsistency property. Moreover, in Fig.1, we plot metric values against thresholds. However, to plot the OSCR curve, we need to plot CCR against FPR. Hence, we cannot plot them in the same figure. \n\n> Comment (4): Suggestion (not weakness): show some demonstrative figures when describing the weaknesses of other methods.\n\n**Ans**: Thanks for this constructive suggestion! We have demonstrated some examples in Appendix (Fig.3). 
To make the propositions easier to understand, we will add more figures and attach them to the main text in the future version.\n\n> Comment (5): How the open-set pairs are generated during the evaluation? Are the same pairs used for all methods? The efficiency issue.\n\n**Ans**: Our response consists of the evaluation on the training set and the test set, respectively.\n\n- When training, open-set samples are unavailable, and thus we adopt the mixup strategy on each batch $B$ to generate open-set samples. Specifically, we shuffle the received batch, which produces a mini-batch $B'$, and then conduct mixup on the pairs in $B \\times B'$, where $\\times$ denotes *pointwise* product of two sets. Finally, the metric is calculated on the pairs in $B \\times \\tilde{B}$, where $\\tilde{B}$ is the batch generated by the mixup operation. Note that we expect the instances in each pair from $B \\times B'$ to have different class labels, so that the mixup examples (i.e., $\\tilde{B}$) can be located somewhere outside the close-set domain. Hence, we eliminate the pairs from the same classes. Note that we only mixup the pairs at the same slot of $B$ and $B'$, and the metric is evaluated on the pairs at the same slot of $B$ and $\\tilde{B}$. Hence, the time complexity is still $O(|B|)$. Empirically, the training time for 600 epochs increases from 16h23min to 16h25min, which is quite efficient. Besides, we fix the random seed to guarantee the same pairs are generated for all methods. \n\n- During the test phase, open-set samples are available. Benefiting from the pairwise formulation, we can calculate OpenAUC efficiently. Specifically, we first mask each close-set sample $\\boldsymbol{x}\\_k$ that has been misclassified on the close-set. Specifically, we have\n$$\\tilde{r}(\\boldsymbol{x}\\_k) \\gets \\begin{cases}\n\\epsilon + \\max\\_{\\boldsymbol{x}\\_u \\in \\mathcal{S}\\_u} r(\\boldsymbol{x}\\_u),& h(\\boldsymbol{x}\\_k) \\neq y_k\\\\\\\\\nr(\\boldsymbol{x}\\_k),& \\text{otherwise}\n\\end{cases}$$ \nwhere $\\mathcal{S}\\_u$ denotes the open-set, and $\\epsilon > 0$ is a small constant. In this way, we have\n$$\\begin{aligned} \n \\texttt{OpenAUC}(f, r) & = \\frac{1}{N\\_k N\\_u} \\sum\\_{i=1}^{N\\_k}\\sum\\_{j=1}^{N\\_u}\\mathbb{I}[y\\_i = h(\\boldsymbol{x}\\_i)] \\cdot \\mathbb{I}[r(\\boldsymbol{x}\\_j) > r(\\boldsymbol{x}\\_i)] \\\\\\\\\n & = \\frac{1}{N\\_k N\\_u} \\sum\\_{i=1}^{N\\_k}\\sum\\_{j=1}^{N\\_u} \\mathbb{I}[\\tilde{r}(\\boldsymbol{x}\\_j) > \\tilde{r}(\\boldsymbol{x}\\_i)] \\\\\\\\\n & = \\texttt{AUC}(\\tilde{r}).\n\\end{aligned}$$\nIn other words, OpenAUC degenerates to the traditional AUC, and common tools such as Scikit-learn can boost the computation. Besides, the pairs are naturally the same for all methods.", " > Comment (6): Briefly mention the future work on generalization bound which is not considered at the moment for OpenAUC.\n\n**Ans**: Generalization analysis for OSR is an appealing but rather challenging direction. To be concrete, existing techniques for generalization analysis are mostly based on the assumption that the training set and the test set are sampled from the same distribution, while it becomes invalid in OSR. Similar challenges appear in the open-set domain adaptation (OSDA) and novelty detection, but all related results require the test samples to be available in the training phase [a,b]. To fix this issue, [c] makes a strong assumption on the open-set distribution and leverages an off-the-shelf result of the density ratio estimation method. 
But how to do it for other methods, in general, remains an opening question. Moreover, our main focus in this paper is to find a proper metric for OSR to guide optimization and training. The generalization analysis is out of our scope.\n\n[a] Si Liu, Risheek Garrepalli, Thomas G. Dietterich, Alan Fern, Dan Hendrycks: Open Category Detection with PAC Guarantees. ICML 2018: 3175-3184\n\n[b] Zhen Fang, Jie Lu, Feng Liu, Junyu Xuan, Guangquan Zhang: Open Set Domain Adaptation: Theoretical Bound and Algorithm. IEEE Trans. Neural Networks Learn. Syst. 32(10): 4309-4322 (2021)\n\n[c] Zhen Fang, Jie Lu, Anjin Liu, Feng Liu, Guangquan Zhang: Learning Bounds for Open-Set Learning. ICML 2021: 3122-3132\n\n> Comment (7): No societal impact is discussed.\n\n**Ans**: Thanks very much for this constructive concern! Indeed, no metric is perfect, and, of course, it is no exception for OpenAUC. Specifically, OpenAUC summarizes the OTPR performance under *all* the OFPR performance. However, in many practical scenarios such as self-driving, only the OTPR performance under *low* OFPR performance is of interest. There might be some negative societal impact concerning safety and security. In this case, OpenAUC might be biased due to considering irrelevant performances. To fix this issue, partial OpenAUC, which summarizes the OTPR performance under some given OFPR performance, might be a better choice. Of course, there is no free lunch. Partial OpenAUC will be more difficult to optimize due to the selection operation. So, we leave it as future work and will include this discussion in the next version.", " This paper introduced a new evaluation metric for OSR, OpenAUC, which jointly measures the binary open-set performance and multi-class closed-set performance, and also introduced a simple OSR method by minimizing the OpenAUC risk with synthetic open-set samples by mixing up features of closed-set samples. Thorough theoretical analysis is presented and promising performance on public datasets is shown. Strengths:\n+ The joint evaluation of open-set and closed-set performance is an important problem for the OSR problem.\n+ Thorough theoretical analysis is given for existing methods and the proposed method; the proposed method is simple and effective. \n+ Good results are obtained on public datasets. \n\nWeaknesses:\n- The experimental datasets seem already quite saturated, except TinyImageNet. The more challenging SSB datasets, introduced in [14] with a particular focus on better evaluating OSR, should be evaluated, to strengthen the paper.\n- Only the VGG32 backbone is used. It would be good to see how the proposed method works on other backbones as well, such as ResNet, ViT.\n- Figure 1 is partly unclear to me. What is the difference between m-f-score and M-f-score? Why OSCR curves are not presented? OSCR is the most relevant metric, and the OSCR curves should be presented.\n\nOne suggestion (not weakness): when describing the weaknesses in the main text on other methods, it would be better to add some demonstrative figures to show some examples and how the proposed method can handle them. \n\nOverall, I hold a positive view on the proposed method and think it can be helpful for future OSR research, especially evaluation, but the experiment parts need improvement, especially on the more challenging datasets. Please refer to the weaknesses above. In addition, \n- The OpenAUC requires measuring pairs of closed and open set samples, which is different from existing methods that only measure individual data points. 
How the pairs are generated during the evaluation? Are the same pairs used for all methods? The possible pairs will be way more than the individual data points? This will significantly increase the evaluation cost from O(n) to O(n^2), thus reducing efficiency. Briefly mentioned the future work on generalization bound which is not considered at the moment for OpenAUC. No societal impact is discussed, and I didn't see any major concerns here. ", " This paper focuses on the evaluation issue for the Open-Set Recognition (OSR) problem. Specifically, the authors point out that existing metrics for OSR are inconsistent with the goal of OSR: some poor open-set predictions can escape from the punishment of classification-based metrics, while novelty detection AUC ignores the close-set performance. In view of this, a novel metric, named OpenAUC, is proposed. Theoretical analysis reveals that OpenAUC overcomes the limitations of the existing metrics. Moreover, an end-to-end learning algorithm is proposed to optimize OpenAUC. Finally, empirical results on six benchmark datasets validate the effectiveness of the proposed method. Strengths:\n The authors make a systematic analysis of existing metrics for Open-set Recognition. Concretely, existing metrics are grouped into classification-based ones and novelty-detection-based ones. And their limitations are collectively categorized into three types of inconsistency properties.\n A novel metric named OpenAUC is proposed for OSR. Compared with existing metrics, OpenAUC (1) evaluates the performance on close-set and open-set in a unified manner; (2) aggregates the performance under different thresholds; (3) enjoys a concise formulation and thus is easier to optimize.\n The empirical results are convincing. On one hand, it validates the inconsistency properties of existing metrics. On the other hand, the effectiveness of the proposed learning method is validated.\n\nWeakness:\nIt is recommended that more related work can be provided, such as the details of the competitors and AUC optimization.\nBesides, there exist some minor issues: It is recommended to use “Eq.(1)” instead of “Equation (1)”. Some punctuations are missing such as the full stops of Eq.(9) and Eq.(12).\n In Appendix, Figure.3 specifies the indices of classes, that is, “FP1”, “FP2”, “FN1” and so on. How do these classes correspond to the notations in the proof such as $y_1$ and $y_2$? It might be a bit confusing, and more clarification is necessary. Yes", " The paper introduces a new metric called OpenAUC as a summary number to jointly measure the closed-set classification accuracy and open-set detection accuracy. It is a threshold-free metric. The paper compares existing metrics used in the literature of open-set recognition and points out their limitations. Further, the paper develops a loss to train neural networks by directly optimizing the OpenAUC metric. Experiments on standard open-set recognition datasets validate the effectiveness of the loss.\n\n Strengths:\n\n- The motivation of the paper is good. It is desired to design a metric for open-set recognition. \n\n- The paper nicely analyzes existing metrics and explain why they fail to jointly measure closed-set and open-set accuracies. \n\n- The developed metric and the derived loss function make sense.\n\n\nWeaknesses:\n\n- One important weakness is that the paper does not justify why OpenAUC is better than OSCR. 
As pointed out in the paper (Line189), calculating the area under the OSCR curve can be a summary number (as done by [13,14]). \n\n- Following the above, Line188 argues that \"a numeric metric is generally necessary for model comparison.\" This argument conflicts that in [4] -- quote here: \"The application of any algorithm to a real world problem involves the selection of an operating point, with the natural choices on a PR curve being either high precision (low number of false positives) or high recall (high number\nof true positives).\" That said, a curve offers richer information and allows comparing methods at different operating points. The paper should discuss this.\n\n- While the derived loss helps train a model that directly optimizes OpenAUC, it should evaluate the trained model using other metrics as well, including OSCR, F-measure, OSCR, closed-set accuracy, and AUROC. Admittedly, these metrics (except OSCR) have issues, given the standard datasets, using all these metrics give a balanced understanding -- it is important to justify the proposed loss function helps, or doesn't decrease much, the accuracies of closed-set classification and open-set detection. \n\n- It is not clear what the final open-set function r is in experiments.\n\n- Line 90 and 91: Is it a typo \"close-set score function r\"? Isn't r an open-set score function as stated in Line86? The paper is above average. As for questions, authors are encouraged to address the weaknesses listed above. Answers in the rebuttal can sway the rating. The paper does not effectively discuss limitations and potential negative societal impacts. The design of the new metric is a summary number that is based on the heuristics (a model should be good at both close-set classification and open-set detection). There are indeed potential limitations and negative impacts. For example, such a summary number makes it non-trivial to select operating points in real-world systems and hide critical failures in the real world. Some applications require high recall of the open-set (e.g., autonomous vehicles) and some other favor high accuracy on the close-set (e.g., image tagging). " ]
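The evaluation-cost question raised in the review above (whether pairwise OpenAUC evaluation pushes the cost from O(n) to O(n^2)) can be made concrete with a short sketch. This is not the authors' implementation; it assumes the common pairwise formulation in which a (closed-set, open-set) pair counts as correct only when the closed-set sample is classified correctly and receives a lower open-set score than the open-set sample. Under that assumption, the double sum below can also be computed with a sorting/rank trick in roughly O(n log n), so explicit pair enumeration is not required in practice.

```python
import numpy as np

def open_auc(closed_scores, closed_correct, open_scores):
    """Illustrative OpenAUC-style estimate (assumed formulation, not the paper's code).

    closed_scores : open-set scores r(x) for closed-set test samples, shape (n_c,)
    closed_correct: 1 if the closed-set classifier predicted the sample correctly, shape (n_c,)
    open_scores   : open-set scores r(x') for open-set test samples, shape (n_o,)
    A pair (x, x') counts as correct iff x is classified correctly and r(x) < r(x').
    """
    closed_scores = np.asarray(closed_scores, dtype=float)
    closed_correct = np.asarray(closed_correct, dtype=float)
    open_scores = np.asarray(open_scores, dtype=float)

    # Naive O(n_c * n_o) pairwise comparison, written for clarity; a sorted-rank
    # computation gives the same value without materializing all pairs.
    ranked_below = (closed_scores[:, None] < open_scores[None, :]).astype(float)
    return float((closed_correct[:, None] * ranked_below).mean())

# Toy example: three closed-set samples (two classified correctly), two open-set samples.
print(open_auc([0.1, 0.4, 0.9], [1, 1, 0], [0.5, 0.8]))  # 4 correct pairs out of 6
```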
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 9, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "DsePukpCEox", "hZGBKBsaeW0", "gV7LyA-aXxi", "L5jAUPhKxzX", "ew10SzwKNh", "ew10SzwKNh", "ew10SzwKNh", "wxegDpgJBbC", "Y3TyYjJ8x5o", "Y3TyYjJ8x5o", "Y3TyYjJ8x5o", "nips_2022_Ddd6FqHXmHA", "nips_2022_Ddd6FqHXmHA", "nips_2022_Ddd6FqHXmHA" ]
nips_2022_csr9uRmTC3f
Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability
Stochastic optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning. Although various algorithms have been extensively studied for AUPRC optimization, the generalization is only guaranteed in the multi-query case. In this work, we present the first trial in the single-query generalization of stochastic AUPRC optimization. For sharper generalization bounds, we focus on algorithm-dependent generalization. There are both algorithmic and theoretical obstacles to our destination. From an algorithmic perspective, we notice that the majority of existing stochastic estimators are biased only when the sampling strategy is biased, and are leave-one-out unstable due to the non-decomposability. To address these issues, we propose a sampling-rate-invariant unbiased stochastic estimator with superior stability. On top of this, the AUPRC optimization is formulated as a composition optimization problem, and a stochastic algorithm is proposed to solve this problem. From a theoretical perspective, standard techniques of the algorithm-dependent generalization analysis cannot be directly applied to such a listwise compositional optimization problem. To fill this gap, we extend the model stability from instancewise losses to listwise losses and bridge the corresponding generalization and stability. Additionally, we construct state transition matrices to describe the recurrence of the stability, and simplify calculations by matrix spectrum. Practically, experimental results on three image retrieval datasets speak to the effectiveness and soundness of our framework.
Accept
All reviewers agree the paper makes novel contributions to AUPRC optimization. It proposes a new batch-based estimator of AUPRC and studies its approximation error. Then it develops a new algorithm for optimizing this estimator. It also establishes the generalization error of the proposed algorithm via a novel listwise stability analysis. It seems that the proposed method is still sensitive to the batch size as shown in the results. The authors are encouraged to compare with [53], which proposes a stochastic algorithm for AP maximization with a convergence guarantee and is not sensitive to the batch size.
train
[ "TP8_p2EiaQr-", "fiqbvA-zrr5", "2zKBGZrfZt8", "9goRKAqkJzu", "8bwgJYFnrdb", "qF_y9WIJ1V1", "kq61ulbapb6" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your time and positive comments on our manuscript! We would like to reply to the following questions:\n### **Q1:**\nThe mechanism of the semi-variance regulation. Why the improvement in R@K is more significant than mAUPRC?\n### **A1:**\nThe semi-variance regulation is motivated by Prop. 2, which says that reducing the score variance will lead to better AUPRC estimation. To explore the effect of semi-variance regulation, we report the standard deviation of positive/negative scores in the validation set of iNaturalist:\n\n| method | pos std | neg std | mAUPRC | R@1 |\n| :------------: | :----: | :----: | :----: | :-----: |\n| w semi-var | 0.10 | 0.05 | 36.16 | 68.22 |\n| w/o semi-var | 0.14 | 0.08 | 35.99 | 67.50 |\n\nIt can be seen that the semi-variance regulation significantly reduces the standard deviation, and improves both mAUPRC and R@1. Since R@K only considers whether there are positive examples in the top-K list, while AUPRC requires a better overall ranking. Therefore, it is more challenging to improve AUPRC, and it seems that the improvement will be less significant in absolute value.\n\n\n### **Q2:**\nThe time consumption of the proposed score interpolation.\n\n### **A2:** \nAlthough the score interpolation is shown as a two-loop process in Alg. 2, obviously it can be accelerated with a parallel implementation. Practically, in our Pytorch implementation, it takes $2.9ms$ to compute the AUPRC loss per iteration, of which only $0.4ms$ is used for the score interpolation. Compared to the time spent on model inference and update (about $526ms$ per iteration), the time consumption of loss calculation and score interpolation is negligible.\n\n### **Q3:** \nWhat's the non-asymptotic approximation error of the stochastic estimator under specific distribution hypotheses?\n\n### **A3:**\nThank you for your helpful suggestions! Besides the asymptotic analysis in our paper, we still have the following non-asymptotic conclusion on the approximation error:\n\n**Proposition 3.** *For any $0 < \\delta < 1$, at least with probability of $1 - \\delta$, we have*\n$$\n\\left|\\mathop{\\hat{\\mathbb{E}}}\\limits\\_{\\pmb{z}\\subseteq\\mathcal{S}}[\\hat{f}(\\pmb{w};\\pmb{z})] - \\widehat{\\text{AUPRC}}^\\downarrow(\\pmb{w};\\mathcal{S})\\right| = \\mathcal{O}\\left(\\sqrt{\\frac{\\log (6{n^+} / \\delta)}{{n^+}}} + 2\\sqrt{\\frac{\\log (6{n^+} / \\delta)}{{n^-}}}\\right).\n$$\n**The proof is provided in the next comment.**\n\nFrom the above Proposition, we can draw the conclusion that the approximation error convergences to zero at order of $\\mathcal{O}\\left(\\frac{1}{\\sqrt{{n^+}}} + \\frac{2}{\\sqrt{{n^-}}}\\right)$. However, such non-asymptotic analysis leaves the variance out, while the asymptotic version explains why the semi-variance term works (see A1 to Reviewer qm2w).", " Thank you for your time and constructive feedback! We will improve our work as outlined below:\n\n### **Q1:** \nFormal presentation of the simplification technique with matrix spectrum.\n\n### **A1:** \nThe simplification technique is used in the proof of Thm. 2 (from Eq. (63) to Eq. (67)). To optimize a compositional problem\n$$\n \\min\\_{\\pmb{w}} f(\\pmb{w}, g(\\pmb{w})),\n$$\na commonly used technique is maintaining an intermediate variable $\\pmb{v} \\approx g(\\pmb{w})$. Consider two datasets $\\mathcal{S},\\mathcal{S}'$ that differ with at most one example. 
Let $\\pmb{w}\\_t, \\pmb{v}\\_t$ to be the model and the intermediate variable generated with the dataset $\\mathcal{S}$ respectively, and similarly $\\pmb{w}\\_t', \\pmb{v}\\_t'$ are generated with $\\mathcal{S}'$.\nWhen analyzing the corresponding stability, we have to bound both $\\\\|\\pmb{w}\\_t' - \\pmb{w}\\_t\\\\|$ and $\\\\|\\pmb{v}\\_t' - \\pmb{v}\\_t\\\\|$. Since the two variables depend on each other, the derivation of the upper bound will be cumbersome. We propose to solve this problem from the recurrence. Formally, if\n$$\n \\left[\\\\|\\pmb{w}\\_{t+1}' - \\pmb{w}\\_{t+1}\\\\|, \\\\|\\pmb{v}\\_{t+1}' - \\pmb{v}\\_{t+1}\\\\|, 1\\right]^\\top\n \\leq \\left(\\pmb{I}\\_3 + \\pmb{M} / t\\right) \\left[\\\\|\\pmb{w}\\_{t}' - \\pmb{w}\\_{t}\\\\|, \\\\|\\pmb{v}\\_{t}' - \\pmb{v}\\_{t}\\\\|, 1\\right]^\\top, \\\\\n \\\\|\\pmb{w}\\_{t\\_0}' - \\pmb{w}\\_{t\\_0}\\\\| = \\\\|\\pmb{v}\\_{t\\_0}' - \\pmb{v}\\_{t\\_0}\\\\| = 0,\n$$\nwhere all elements in $\\pmb{M}$ is non-negative, then we have\n$$\n \\\\|\\pmb{w}\\_{T+1}' - \\pmb{w}\\_{T+1}\\\\| \n \\leq [1\\ 0\\ 0]\\Lambda\\ diag\\left((T')^{\\lambda\\_1},(T')^{\\lambda\\_2},(T')^{\\lambda\\_3}\\right)\\Lambda^{-1}\\ [0\\ 0\\ 1]^{\\top},\n$$\nwhere $T' = T / (t\\_0-1)$, $\\lambda\\_{1,2,3}$ are the eigenvalues of $M$, and each column of $\\Lambda$ is the corresponding eigenvector. In this way, the stability for compositional optimization algorithms can be obtained by analyzing the spectrum of transition matrices.\n\n### **Q2:** \nWhy is the condition $n^{+} / (n^{+} + n^{-}) = \\pi$ hard to satisfied in practice?\n\n### **A2:** \nThe sampling strategy depends on the specific tasks, and the assumption of the sampling rate might limit the generality of the AUPRC optimization algorithm. Here we provide two examples:\n\n**Example 1.** Data distributions in some tasks like retrieval and medical diagnosis are largely skewed, e.g., for class \\#2588 in iNaturalist, $\\pi \\approx 1.2\\times 10^{-4}$. Satisfying the condition $n^{+} / (n^{+} + n^{-}) = \\pi$ requires a batch-size of over $10\\times 10^{4}$, which is neither feasible nor necessary, especially for deep models.\n\n**Example 2.** We first briefly introduce a common sampling setting in retrieval tasks: a mini-batch usually contains multiple queries and corresponding positive examples. Given a query, the negative examples are formed from positive examples of other queries. When the priors of queries are different, it will be hard to control the number of examples to satisfy the condition for all queries.\n\n### **Q3:** \nIs it possible to use techniques like variance reduction to avoid the $\\mathcal{O}(1/N^{+})$ term in Thm. 3?\n\n### **A3:** \nThe $\\mathcal{O}(1 / N^{+})$ term sources from the non-linearity of $\\nabla f(\\pmb{w}\\_t;\\pmb{z}\\_{i\\_t}, \\cdot)$ (see Eq. (72), Eq. (79) and Lem. 4). Therefore, even if $\\phi(h\\_{\\pmb{w}}(\\pmb{z}^+))$ is an unbiased estimation of $h\\_{\\pmb{w}}(\\mathcal{S}^+)$, the stochastic gradient might still be biased. Recent work on bilevel optimization has utilized variance reduction to handle similar problems, so it is feasible to solve the issue with similar techniques. However, it will make the stability analysis much more complicated, and we have to explore this in future work.", " We sincerely thank you for your time and efforts! 
We would like to clarify the following issues:\n\n### **Q1:** \nExperiment details of the competitors.\n\n### **A1:** \nWe reimplement all the competitors on the same codebase to ensure a consistent setting in terms of model structure, data preprocessing and augmentation, learning rate schedule, testing pipeline, etc. The unique hyperparameters of competitors follow the optimal settings of the original papers. Moreover, the optimizers used are slightly different: following previous work, Adam is used to train the competitors, while ours is trained with SGD to ensure consistency with our theoretical analysis. We also provide the results of ours trained with Adam in Tab. 2, which shows no significant difference. Therefore, the comparisons are fair, and we will update these details in the latest version.\n\n\n### **Q2:** \nThe reason for choosing the one-side Huber loss as a surrogate loss.\n\n### **A2:** \nOn one hand, one-side Huber loss can be viewed as a smooth variant of the Hinge loss, such that Assumption 2 (i.e., the objective function $F$ is $L$-smooth) holds. On the other hand, compared to the square loss, one-side Huber loss is less sensitive to outliers, making the learning more robust.\n\n\n### **Q3:** \nWhy the performance gain is more significant in iNaturalist than SOP?\n### **A3:** \nIn fact, the performance gain of all AUPRC-based methods (e.g., SmoothAP [9], DIR [57], FastAP [12]) is more significant in iNaturalist than SOP (See Tab. 1). As far as we know, the main reason is two-fold: first, the number of positive examples per query is much larger ($56.7$ in iNaturalist v.s. $5.3$ in SOP on average). Therefore, according to the definition of AUPRC, the weights of positive examples should be more discriminative in datasets like iNaturalist, while pairwise losses like contrastive loss ignore this factor. Second, the scale of iNaturalist is larger, thus it is less possible to overfit the training set of iNaturalist than SOP.\n\n\n### **Q4:** \nHow to determine the prior $\\pi$ in the image retrieval task?\n\n### **A4:** \nWe count the number of examples in each class and estimate the prior $\\pi$ with the frequency. Assuming that the training set is i.i.d. sampled from the true distribution, such an estimation of $\\pi$ is unbiased and consistent.", " In the proof of Prop. 
2, we denote\n$$\n X^c\\_{{n^-}} = \\hat{\\mathbb{E}}\\_{\\pmb{x}\\sim\\pmb{z}^-}\\left[\\ell\\_1\\left(c - h\\_{\\pmb{w}}(\\pmb{x})\\right)\\right],~~~~\n Y^c\\_{{n^+}} = \\hat{\\mathbb{E}}\\_{v\\sim\\pmb{v}}\\left[\\ell\\_2\\left(c - v\\right)\\right].\n$$\nThen the approximation error is decomposed into\n$$\n\\begin{aligned}\n &\\mathop{\\hat{\\mathbb{E}}}\\limits\\_{\\pmb{z}\\subseteq\\mathcal{S}}[\\hat{f}(\\pmb{w};\\pmb{z})] - \\widehat{\\text{AUPRC}}^\\downarrow(\\pmb{w};\\mathcal{S}) \\\\\\\\\n =& \\underbrace{\\mathop{\\hat{\\mathbb{E}}}\\limits\\_{\\pmb{z}, c\\sim h\\_{\\pmb{w}}(\\pmb{z}^+)}\\left[\n \\frac{(1-\\pi) X^c\\_{{n^-}}}{(1-\\pi) X^c\\_{{n^-}} + \\pi Y^c\\_{{n^+}}} - \\frac{(1-\\pi) X^c\\_{{n^-}}}{(1-\\pi) \\mu\\_{c,1} + \\pi Y^c\\_{{n^+}}}\n \\right]}\\_{(a)} \\\\\\\\\n &+ \\underbrace{\\mathop{\\hat{\\mathbb{E}}}\\limits\\_{\\pmb{z}, c\\sim h\\_{\\pmb{w}}(\\pmb{z}^+)}\\left[\\frac{(1-\\pi) X^c\\_{{n^-}}}{(1-\\pi) \\mu\\_{c,1} + \\pi \\mu\\_{c,2}} - \\frac{(1-\\pi) \\mu\\_{c,1}}{(1-\\pi) \\mu\\_{c,1} + \\pi \\mu\\_{c,2}}\n \\right]}\\_{(b)} \\\\\\\\\n &+ \\underbrace{\\mathop{\\hat{\\mathbb{E}}}\\limits\\_{\\pmb{z}, c\\sim h\\_{\\pmb{w}}(\\pmb{z}^+)}\\left[\\frac{(1-\\pi) X^c\\_{{n^-}}}{(1-\\pi) \\mu\\_{c,1} + \\pi Y^c\\_{{n^+}}} - \\frac{(1-\\pi) X^c\\_{{n^-}}}{(1-\\pi) \\mu\\_{c,1} + \\pi \\mu\\_{c,2}}\n \\right]}\\_{(c)},\n\\end{aligned}\n$$\n\nwhere $\\mu\\_{c,1},\\mu\\_{c,2}$ are the mean values of $X^c\\_{{n^-}}$ and $Y^c\\_{{n^+}}$, respectively.\n\nNext, we focus on the term $(a)$. Under the assumption of Prop. 2, $X^c\\_{{n^-}}$ can be viewed as an average of i.i.d. variables, thus according to Hoeffding's inequality, for any $\\epsilon > 0$ we have\n$$\n \\mathbb{P}\\left(\\left|X^c\\_{{n^-}} - \\mu\\_{c,1}\\right| \\geq \\epsilon\\right) \\leq 2\\exp\\left(-\\frac{2{n^-} \\epsilon^2}{B\\_{\\ell\\_1}^2}\\right).\n$$\nTherefore, for any $c$, $0 < \\delta < 1$, with probability at least $1 - \\delta$, we have\n$$\n\\begin{aligned}\n &\\left|\\frac{(1-\\pi) X^c\\_{{n^-}}}{(1-\\pi) X^c\\_{{n^-}} + \\pi Y^c\\_{{n^+}}} - \\frac{(1-\\pi) X^c\\_{{n^-}}}{(1-\\pi) \\mu\\_{c,1} + \\pi Y^c\\_{{n^+}}}\\right| \\\\\\\\\n \\leq& \\left|\\frac{(1-\\pi)^2 X^c\\_{{n^-}}}{\\left((1-\\pi) X^c\\_{{n^-}} + \\pi Y^c\\_{{n^+}}\\right)\\left((1-\\pi) \\mu\\_{c,1} + \\pi Y^c\\_{{n^+}}\\right)}\\right|\\cdot \\left|X^c\\_{{n^-}} - \\mu\\_{c,1}\\right| \\\\\\\\\n \\leq& \\frac{1}{\\mu\\_{c,1}} \\cdot \\left|X^c\\_{{n^-}} - \\mu\\_{c,1}\\right| \\\\\\\\\n \\leq& \\sqrt{\\frac{B\\_{\\ell\\_1}^2 \\log \\frac{2}{\\delta}}{2\\mu\\_{c,1}^2 {n^-}}}\\\\\\\\\n \\leq& \\sqrt{\\frac{B\\_{\\ell\\_1}^2 \\log \\frac{2}{\\delta}}{2\\mu\\_{1}^2 {n^-}}},\n\\end{aligned}\n$$\nwhere $\\mu\\_1 = \\inf\\_{c}\\ \\mu\\_{c,1}$. If we further assume that $X^c\\_{{n^-}}$ is independent w.r.t. 
different $c$, then considering all positive $c$, with probability at least $1 - \\delta / 3$, we have \n$$\n |(a)| \\leq \\sqrt{\\frac{B\\_{\\ell\\_1}^2 \\log \\frac{6{n^+}}{\\delta}}{2\\mu\\_{1}^2 {n^-}}} = \\mathcal{O}\\left(\\sqrt{\\frac{\\log (6{n^+} / \\delta)}{{n^-}}}\\right),\n$$\nand similarly\n$$\n |(b)| = \\mathcal{O}\\left(\\frac{\\log (6{n^+} / \\delta)}{{n^-}}\\right), ~~ |(c)| = \\mathcal{O}\\left(\\sqrt{\\frac{\\log (6{n^+} / \\delta)}{{n^+}}}\\right).\n$$\nTo sum up, with probability at least $1 - \\delta$ we have \n$$\n \\left|\\mathop{\\hat{\\mathbb{E}}}\\limits\\_{\\pmb{z}\\subseteq\\mathcal{S}}[\\hat{f}(\\pmb{w};\\pmb{z})] - \\widehat{\\text{AUPRC}}^\\downarrow(\\pmb{w};\\mathcal{S})\\right| \\leq |(a)| + |(b)| + |(c)| = \\mathcal{O}\\left(\\sqrt{\\frac{\\log (6{n^+} / \\delta)}{{n^+}}} + 2\\sqrt{\\frac{\\log (6{n^+} / \\delta)}{{n^-}}}\\right).\n$$", " The paper proposes a novel framework that optimizes AUPRC in an end-to-end manner. The main idea is inspired by the theoretical properties of the objective function and the stochastic algorithm. The authors show that the objective function is asymptotically unbiased by approximation error analysis, and the proposed stochastic optimization algorithm has a generalization guarantee. Their experiments demonstrate the proposed framework work well on image retrieval datasets. I consider this work novel and sound in three aspects:\n1) The proposed stochastic estimator solves the estimation bias issue. The instability is mitigated by an auxiliary vector estimating positive scores. Sufficient theories and numerical experiments illustrate the proposed method.\n2) It is the first work studying the algorithm-dependent generalization of stochastic AUPRC optimization. It is a challenging topic involving the list-wise problem and compositional problem.\n3) The main claims are properly verified on both simulation and real-work data.\n\nThe main weakness of this work is missing explanations/analyzes of some techniques:\n1) In table 2, the model with the semi-variance term has higher R@K, but similar mAUPRC. I’m interested in the mechanism of the semi-variance regulation.\n2) The time consumption of the proposed score interpolation is $O(N^+)$. It might slow down the training process.\n\nTo sum up, this paper looks solid on both technical and theoretical parts. The presentation is overall well organized. Therefore, I recommend accepting this paper.\n I hope the authors can clarify some more in-depth analysis mentioned in the weaknesses part. Besides, the stochastic estimator is only proved to be asymptotically unbiased, but the order is still unclear. I suggest the authors to provides the asymptotic order. It might require data distribution hypotheses, as what the authors did in simulation experiments. Yes.", " This work proposes a stochastic algorithm for AUPRC optimization based on a sampling-rate-invariant unbiased stochastic estimator. The authors study the theoretical properties including the approximation error and generalization bound. Based on the theoretical results, they propose a semi-variance regular term to further improve the performance. The algorithm is applied to the image retrieval task. Strengths:\n- The weaknesses of some previous works are clearly identified and the proposed method is tailored to alleviate them. Both theoretical analysis and simulation experiments are provided to support the soundness.\n- Formal theoretical interpretation of the components of the proposed method. 
This may inspire future work along this line.\n- The extensive experiments have validated the proposed method. The ablation studies clearly present the effect of each component.\n\nWeaknesses:\n- The experiment details of the competitors need to be more specific to ensure fair comparisons.\n- How the surrogate loss $\\ell_1$ is chosen seems unclear. The theoretical results hold as long as it is an upper bound of $\\ell_{0,1}$, why not use commonly used surrogate losses like square loss? Please see the weaknesses for my main concerns. Some minor issues are given as follows:\n- The performance gain is more significant in iNaturalist than SOP. What’s the difference between these two datasets?\n- How to determine the prior $\\pi$ in the image retrieval task?\n The limitations and potential negative societal impact have been described.", " In this paper, the authors aim to explore the properties of the AUPRC stochastic optimization. They provide two main theoretical results: the unbiasedness of stochastic estimators and the generalization of the optimization algorithm. To develop the algorithm-dependent generalization, they extend the model stability to list-wise loss. Experiments are conducted on three datasets. The theoretical results of this paper are solid in the following aspects:\n1. It’s interesting and challenging to study the algorithm-dependent generalization of AUPRC optimization. This work fills the gap in the list-wise loss stability analysis.\n2. The authors provide a useful tool to analyze the convergence/stability of compositional optimization problems. Although transition matrices are widely used in complicated processes, it is novel and effective to simplify the calculation with the matrix spectrum.\n3. By jointly considering the convergence and the generalization, this work provides guidance to find the trade-off between convergence and generalization of AUPRC optimization.\n4. The main results are well presented and clearly proved.\n\nMy main concern is the presentation of some key techniques can be further improved. While the proposed techniques like simplification with matrix spectrum sound novel and reasonable to me, they haven’t been formally present in the main paper. Presenting it briefly in the main paper allows researchers to reuse these techniques on other problems.\n\nOverall, this paper addresses some important theoretical issues of AUPRC optimization, and I tend to accept this paper.\n 1. In section 3.2 (P1), it is claimed that $n^+ / (n^+ + n^-) = \\pi$ is hard to satisfied in practice. I think this condition can be achieved by changing the sampling rate during training, could you provide more explanations?\n2. In my view, the $O(1/N^+)$ term is caused by the estimation of $h(S^+)$. Is it possible to use acceleration techniques in optimization, like variance reduction, to avoid this problem? Yes" ]
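The matrix-spectrum simplification sketched in the rebuttal above (the answer on formally presenting the technique) can be checked numerically. The snippet below is only a toy illustration with a made-up transition matrix M, not constants derived from the paper: it compares the unrolled recurrence u_{t+1} <= (I + M/t) u_t against the closed-form spectral expression [1, 0, 0] Lambda diag((T')^{lambda_i}) Lambda^{-1} [0, 0, 1]^T with T' = T / (t_0 - 1), and the spectral expression should upper-bound the unrolled value.

```python
import numpy as np

def spectral_bound(M, T, t0):
    """Closed-form bound from the rebuttal, assuming M is diagonalizable with
    non-negative entries: [1 0 0] @ Lam @ diag((T')**lam) @ Lam^{-1} @ [0 0 1]^T."""
    lam, Lam = np.linalg.eig(M)
    D = np.diag((T / (t0 - 1)) ** lam)
    out = np.array([1.0, 0.0, 0.0]) @ Lam @ D @ np.linalg.inv(Lam) @ np.array([0.0, 0.0, 1.0])
    return float(np.real(out))

def unrolled_bound(M, T, t0):
    """Unroll u_{t+1} = (I + M/t) u_t from u_{t0} = [0, 0, 1] and return the first
    coordinate, i.e., the accumulated bound on the model difference at step T+1."""
    u = np.array([0.0, 0.0, 1.0])
    for t in range(t0, T + 1):
        u = (np.eye(3) + M / t) @ u
    return float(u[0])

# Hypothetical transition matrix, for illustration only.
M = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.4, 0.2],
              [0.0, 0.0, 0.0]])
print(unrolled_bound(M, T=1000, t0=10), "<=", spectral_bound(M, T=1000, t0=10))
```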
[ -1, -1, -1, -1, 8, 7, 7 ]
[ -1, -1, -1, -1, 4, 4, 5 ]
[ "8bwgJYFnrdb", "kq61ulbapb6", "qF_y9WIJ1V1", "8bwgJYFnrdb", "nips_2022_csr9uRmTC3f", "nips_2022_csr9uRmTC3f", "nips_2022_csr9uRmTC3f" ]
nips_2022_MbCAOMGsZXC
Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training
Masked Autoencoders (MAE) have shown great potentials in self-supervised pre-training for language and 2D image transformers. However, it still remains an open question on how to exploit masked autoencoding for learning 3D representations of irregular point clouds. In this paper, we propose Point-M2AE, a strong Multi-scale MAE pre-training framework for hierarchical self-supervised learning of 3D point clouds. Unlike the standard transformer in MAE, we modify the encoder and decoder into pyramid architectures to progressively model spatial geometries and capture both fine-grained and high-level semantics of 3D shapes. For the encoder that downsamples point tokens by stages, we design a multi-scale masking strategy to generate consistent visible regions across scales, and adopt a local spatial self-attention mechanism during fine-tuning to focus on neighboring patterns. By multi-scale token propagation, the lightweight decoder gradually upsamples point tokens with complementary skip connections from the encoder, which further promotes the reconstruction from a global-to-local perspective. Extensive experiments demonstrate the state-of-the-art performance of Point-M2AE for 3D representation learning. With a frozen encoder after pre-training, Point-M2AE achieves 92.9% accuracy for linear SVM on ModelNet40, even surpassing some fully trained methods. By fine-tuning on downstream tasks, Point-M2AE achieves 86.43% accuracy on ScanObjectNN, +3.36% to the second-best, and largely benefits the few-shot classification, part segmentation and 3D object detection with the hierarchical pre-training scheme. Code is available at https://github.com/ZrrSkywalker/Point-M2AE.
Accept
This paper proposes Point-M2AE, a multi-scale masked autoencoder (MAE) pre-training framework for self-supervised learning of 3D point clouds. This is a generalization of the existing 2D-MAE framework to the 3D point cloud domain. The proposed Point-M2AE introduces a U-Net-like transformer and a multi-scale masking strategy to generate consistent visible regions across scales. Extensive experiments are conducted on various downstream tasks to validate the power of the proposed method. All reviewers think that the current paper presents a novel framework, making an important contribution to point cloud representation. It also has sufficient empirical results to demonstrate the performance of the proposed model. The feedback from the authors also addresses the major concerns of the reviewers. After reading the reviewers’ comments and the authors’ replies, the AC recommends accepting the paper.
train
[ "a87lj0vHEUQ", "smtfNcXAotK", "FOf0SPN2bsfs", "iSIUXmL-zLE", "zzcL6W9rBIS", "HXbqeYdA69y", "UoG_wGDQ23h", "N28EVxYMt17", "VjMzVDtyE7p", "9jxCa8LrMEA" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nThanks again for your insightful comments and valuable time in reviewing our paper. We have provided the corresponding responses to your concerns, and added them in ***the revised supplementary material*** accordingly, which are highlighted in blue.\n\nGiven the discussion phase is quickly passing, we wonder if our responses address your concerns. If you have any further questions, we are more than happy to discuss them.\n\nLooking forward to your reply.\n\nBest, Authors", " We sincerely thank your helpful suggestions, and address the concerns as follows:\n\n>**Q1: Frozen encoder for other tasks.**\n\nThank you for the suggestion. Prior works only test the frozen encoder by training a linear SVM on ModelNet40 for synthetic shape classification, and it is more reasonable to show the learned representation for other 3D tasks. We experiment our Point-M2AE and Point-BERT with their frozen encoders on three other downstream datasets: real-world shape classification on ScanObjectNN, part segmentation on ShapeNetPart, and few-shot classification (5-way 20-shot) on ModelNet40. For real-world and few-shot classification, we append a learnable classification head of linear projection layers to the pre-trained encoder. For part segmentation, we make the segmentation decoders of both Point-M2AE and Point-BERT unfrozen, whose architectures are the same as the fully-unfrozen fine-tuning experiments. As Point-BERT does not provide the pre-training approach or weights for 3D object detection, we cannot compare their detection performances here.\n\nThe results are presented in the following table. With the frozen encoder, Point-M2AE consistently outperforms Point-BERT on all tasks, e.g., +5.5\\% on ModelNet40 and +2\\% on 5-way 20-shot classification. With the 85.6\\% mIoU_I, Point-M2AE's frozen encoder even performs comparably to the fully fine-tuned Point-BERT. The results fully demonstrate our pre-training has learned better and more general point cloud representations than Point-BERT does.\n\n||Frozen Encoder|ModelNet40 |ScanObjectNN |ShapeNetPart |5-way 20-shot|\n|---|---|---|---|---|---|\n|Point-BERT |Yes |87.4 |75.6 |84.8 |97.0 ± 2.3|\n|Point-M2AE |Yes |**92.9**|**78.3**|**85.6**|**97.2 ± 2.1**|\n|||+5.5\\% |+2.7\\% |+0.8\\% |+0.2\\%|\n|Point-BERT |No |93.2 |83.1 |85.6 |96.3|\n|Point-M2AE |No |**94.0**|**86.4**|**86.5**|**98.3**|\n|||+0.8\\% |+2.7\\% |+0.9\\% |+2.0\\%|\n\n>**Q2: More unlabeled data for pre-training.**\n\nThank you for the suggestion. The great advantage of unsupervised learning is to utilize large-scale unlabeled data. We first categorize the current available datasets according to the types of point clouds below.\n\n\n|Datasets|ShapeNet |ModelNet40 |ScanObjectNN |ScanNetV2|\n|---|---|---|---|---|\n|Point Clouds |Synthetic shapes |Synthetic shapes |Real-world shapes |Real-world Scenes|\n|Training Samples |57,448 |9,843 |11,416 |1,201|\n\n\nThe default setting of prior works is to pre-train on the synthetic ShapeNet and fine-tune on the others. 
We here incorporate more unlabeled data for pre-training Point-M2AE and present the results in the table below.\n\n|ShapeNet |ModelNet40 |ScanObjectNN |ScanNetV2 |ModelNet40 |ScanObjectNN|\n|---|---|--|--|--|--|\n|✔️ |- |- |- |92.9 |86.4|\n|✔️ |✔️ |- |- |**93.1**|86.5|\n|✔️ |- |✔️ |- |92.3 |87.1|\n|✔️ |- |- |✔️ |91.2 |86.8|\n|✔️ |✔️ |✔️ |✔️ |92.6 |**87.6**|\n\nConsidering different types of point clouds, We adopt two evaluation metrics, classification with a linear SVM on ModelNet40, and fine-tuning on ScanObjectNN, which respectively reflect the pre-trained representations for synthetic and real-world point clouds. As shown in the table, with more pre-training point clouds of the same type, the downstream performance can be largely improved.\n\n1) If the synthetic ModelNet40 is integrated with synthetic ShapeNet, the classification accuracy of the linear SVM test on ModelNet40 is boosted to 93.1\\%. Also, pre-trained by more 3D-shape point clouds, the classification accuracy of fine-tuning on ScanObjectNN with real-world shapes can be slightly improved.\n\n2) If we incorporate the real-world ScanObjectNN or ScanNetV2 during pre-training, the ModelNet40 scores would be slightly harmed due to the domain gap, but the ScanObjectNN scores are both boosted for training more real-world data.\n\n3) If all datasets are utilized as pre-training data, the classification accuracy of fine-tuning on ScanObjectNN can be largely improved to 87.6\\%. This demonstrates the learning capability of Point-M2AE if more pre-training data is available.", " >**Q3: The influence of different amount of pre-training data.**\n\nTo verify the importance of training data amount, we fix to use only synthetic shapes and sample 20\\%, 40\\%, 60\\%, and 80\\% of ShapeNet data for pre-training. We also adopt the linear SVM on ModelNet40 and fine-tuning on ScanObjectNN as the evaluation metrics. As reported in the table below, more pre-training data contributes to better downstream performance, which accords with the intuition.\n\n\n|Data Amount |0\\% |20\\% |40\\% |60\\% |80\\% |100\\%|\n|--|-|-|-|-|-|-|\n|ModelNet40 |0 |89.7 |90.8 |91.6 |92.1 |**92.9**|\n|ScanObjectNN |83.9 |84.2 |85.1 |84.9 |85.7 |**86.4**|\n\n\n>**Q4: Difficulties of introducing MAE into 3D point clouds.**\n\nThanks for the suggestion. We will improve the introduction part in the revised paper. There are four main difficulties of directly transferring MAE from 2D images to 3D point clouds.\n\n>>**1. The irregular data form of point clouds.**\n\n2D images are grid-based data, whose pixels have spatially regular arrangements. By this, 2D MAE can naively divide the image into non-overlapping patches and randomly mask some of them for reconstruction. In contrast, 3D point clouds are permutation-invariant and are irregularly distributed in 3D space. How to convert point clouds into multiple discrete tokens that can be masked and reconstructed is an important challenge to be tackled. For this, we utilize the widely adopted Farthest Point Sampling (FPS) to obtain the token centers and adopt $k$-NN to aggregate neighboring features as the token features. The FPS makes the point tokens evenly scatter in the space and has minimum overlaps, which prevents information leakage between masked and unmasked tokens. Also, the $k$-NN ensures that each masked token only requires to reconstruct its neighboring points, creating a properly challenging pretext task.\n\n>>**2. 
The local-to-global relations of 3D structures.** \n\nUnlike 2D images, it is critical to understand the relations between local parts and the overall 3D shape, which have geometric and semantic dependence. For example, the network can recognize an airplane starting from its wings, or segment the wing's part from the airplane's global feature.\nHowever, 2D MAE directly downsamples the image into a low-resolution feature map and adopt a non-hierarchical transformer to process. Therefore, we propose a hierarchical MAE architecture unique for 3D point clouds. Our Point-M2AE has multiple stages that progressively encodes different point cloud scales, thus better encoding the local-to-global relations.\n\n>>**3. How to mask multi-scale point cloud?**\n\nAs 2D MAE only has a single image scale, it only needs a random mask over the transformer. For our Point-M2AE, as illustrated in the main paper, we are required to generate multi-scale masks that ensure the visible regions to be consistent across scales. This is for preserving complete local patterns and enabling coherent learning of the encoder. Otherwise, the detailed 3D geometry would be lost and the encoder would `see' different unmasked parts of the point clouds at different scales, which severely harms the performance (92.9\\% $\\rightarrow$ 88.4\\% of Linear SVM on ModelNet40).\n\n>>**4. How to capture fine-grained 3D structures?**\n\nThe fine-grained information of 3D structures are significant for downstream 3D dense prediction tasks, e.g., part segmentation. Except for the multi-scale architecture and masking, we further add skip connections between the encoder and decoder, which has not been tried before on 2D MAE. This can complement the fine-grained point cloud features to the decoder and improve the performance of Point-M2AE.\n\nIn summary, our Point-M2AE considers challenges that are distinct for 3D representation learning, and introduces specific designs accordingly.\n", " We sincerely thank your constructive advice, and address the reviewer's concerns as follows:\n\n>**Q1: The framework compared to PointNet++.**\n\nAs PointNet++ laid the groundwork for hierarchical point cloud processing, nearly all later point-based works inherited its multi-scale framework and inserted more advanced geometry extractors on top of it, such as DGCNN, CurveNet, PointMLP, etc. However, our Point-M2AE extends the multi-scale architecture for a totally different task, point cloud masked autoencoding. The distinct differences are as follows:\n\n>>**1. A new task.**\n\nAlthough PointNet++ has been explored for 3D multi-scale feature extraction, for masked autoencoding on point cloud pre-training, we are the first to successfully learn representations with a multi-scale architecture. Even for masked autoencoding on 2D images, there is no prior work to adopt a multi-scale transformer or an encoder-decoder with skip connections prior to NeurIPS's submission deadline. \n\n\n>>**2. Architecture differences.**\n\nDirectly using PointNet++-like hierarchical transformer cannot achieve competitive performance as shown in the ablation study of the main paper (Tables 7 and 8). Only with our proposed modifications, the classification accuracy of ModelNet40 dataset with a Linear SVM can be boosted from 88.4\\% to 92.9\\%.\nThe modifications are as follows:\n\n1) We propose the multi-scale masking and the mask-guided token merging/propagating modules. 
As our mask is back-projected across scales via neighbor indices, the merging/propagating process are required to be unified with the indices. Otherwise, it would cause inconsistent visible regions across scales and information leakage between masked and unmasked tokens.\n\n2) During self-supervised pre-training, we utilize the hierarchical decoder with skip connections and token upsampling. For fine-tuning on part segmentation task, we design a new decoder as illustrated in Section 4.2 of the paper. It contains no skip connections or hierarchical upsampling, but directly upsamples multi-scale features into 2,048 points, which is different from PointNet++'s decoder.\n\n3) The skip connections are only utilized to supplement fine-grained features for the ***visible tokens*** in the decoder, since our encoder only processes the unmasked tokens. In contrast, PointNet++ processes all input points and constructs the connections from every pair between the encoder and decoder.\n\n>>**3. Different implementation details.**\n\nBesides the modifications above, we also have different implementation details:\n\n1) PointNet++ adopts mini-PointNet to aggregate local point features, but our Point-M2AE utilizes transformers with local spatial self-attention mechanisms.\n\n2) PointNet++ uses ball query to search neighboring points for downsampling, but we utilize k-NN algorithm.\n\n3) Our decoder has one fewer stage than the encoder to build a properly challenging pretext task for pre-training, but PointNet++ has the same stage numbers for the encoder and decoder.", " \n\n>**Q2: Explanation of the scale number.**\n\nThank you for the suggestion. We will add more detailed explanations for the number of scales in the revised paper. We have conducted ablation study of different scale numbers in Table 2 of the original supplementary material, and we also list the results in the following tables. Here are two points about the choice of scale number:\n\n>>**1. Scale number depends on the input point number.**\n\nOur multi-scale masking back-projects the visible regions by the neighbor indices, which actually have some overlapping for the k-NN of neighboring points. If the input number is fixed, too many scales would accumulate to more visible regions and finally cause most tokens to be visible at the lowest 1-st scale. Then, the reconstruction pretext task becomes less challenging and will quickly converge, by which the encoder cannot learn robust point cloud representation. In the paper, we adopt 2,048 input points for pre-training, and the best scale number is 3. \nIf we adopt 4-scale Point-M2AE, we report the visible point numbers of different scales in the following table. As shown, even if the mask ratio is 80\\% at the highest 4-th scale (6/32 visible), the back-projected 2-nd scale (164/256 visible) and 1-st scale (458/512 visible) nearly have no masked points.\n\n| |1-st scale |2-nd scale |3-rd scale |4-th scale|\n|---|---|---|---|---|\n|All Tokens |512 |256 |64 |32|\n|Visible Tokens |458 |164 |27 |6|\n\n\nWe then experiment with 1,024 and 4,096 input points in the following tables, where more points can perform better with more scales.\n\n|Scale |Points |Acc (\\%)|\n|---|---|---|\n|**2** |**1024** |**92.3**|\n|3 |1024 |91.9|\n|**3** |**2048**|**92.9**|\n|4 |2048 |90.4|\n|4 |4096 |92.8|\n|**5** |**4096** |**93.1**|\n\n\n>>**2. 
The stage number of decoder is better to be one fewer than the scale number.**\n\nSuppose we build an S-scale Point-M2AE, the encoder and decoder are better to be $S$-stage and (S-1)-stage, respectively. As explained in Section 3.3 of the main paper, the (S-1)-stage of the decoder (corresponds to the 2-nd scale of the point cloud) can already well represent the overall 3D shape and simultaneously preserve enough local patterns, referring to the visualization in Figure 3's $P_2$ of the paper. If reconstructing from this (S-1)-stage, the pretext task can be more challenging without fine-grained cues of $P_1$. Further upsampling point clouds into $1$-st scale at the decoder's S-th stage would only bring extra computation budget and harm the representation learning by a too simple pretext task. We experiment with equal stage numbers of encoder and decoder in the table below.\n\n|Encoder |Decoder |GPU Mem. |Acc (\\%)|\n|---|---|---|---|\n|**3** |**2** |**19 GiB** |**92.9**|\n|3 |3 |24 GiB |90.7|\n\n\n\n>**Q3: The notation conflict of $T_1^v$.**\n\nSorry for the misunderstanding. For simplicity, we abuse the notation of the point tokens of the s-th stage as $T_s^v$, regardless whether they are before or after the s-th stage encoder, since the $s$-th stage encoder would not change the scales of tokens.\n\n>**Q4: The voting strategy of Point-BERT.**\n\nThe paper of Point-BERT indeed does not mention the voting strategy, but marks this point in its official GitHub repository. In the \"Pretrained Models\" section of the README, we can observe \"Acc. (vote)\" at the second table's header. It shows that Point-BERT's results on ModelNet40 of 93.24 (1k points), 93.48 (4k points), and 93.76 (8k points) are all obtained based on voting. More specifically, the code for voting is at line 382 at ***Point-BERT/tools/runner_BERT_finetune.py*** of its repository. Therefore, we also report our classification accuracy, 94.0\\% with 1k points, on ModelNet40 by the same voting strategy as Point-BERT for fair comparison. \n\nTo conduct voting, the trained model is utilized to predict the test set for 10 times by default. In each time, the test point clouds are randomly transformed by scaling and translation. After that, the classification logits for each test point cloud are integrated by max pooling, which can increase the model's inference robustness.", " We sincerely thank your insightful comments, and address the concerns as follows:\n>**Q1: Comparison to similar works with masked modeling on point clouds.**\n\nThanks for mentioning the related work. We will cite and discuss them in our revised paper. \n\nFu et al. [1], Liu et al. [2], and Pang et al. [3] also conduct point cloud pre-training via masking, which are ***concurrent works*** to ours, but use different strategies for masked modeling.\n\n>>**Comparison to Fu et al [1]:** \n\n1) **Different pre-training strategies.** Following Point-BERT, [1] utilizes BERT-style pre-training. It is not a masked autoencoder (MAE) and different from our MAE-style pre-training. Such BERT style predicts the masked token encoded by an independently trained tokenizer, while our MAE style directly reconstructs the masked points' raw 3D coordinates, which is simpler and more efficient. \n\n2) **Less self-supervisory signals.** [1] consists of two complicated losses, a masked modeling loss and a contrastive loss for different sub-sets of point clouds. 
Our Point-M2AE only requires the simple reconstruction loss and achieves better performances.\n\n>>**Comparison to Liu et al [2]:** \n\n1) **Different pre-training strategies.** [2] proposed a masked discrimination (MD) pre-text task that conducts binary classification to judge if a point token is masked. It adopts binary focal loss for self-supervision and is different from our MAE-style pre-training that reconstructs masked coordinates.\n\n>>**Comparison to Pang et al [3]:** \n\n1) **Hierarchical architectures.** [3] also adopts MAE-style pre-training but utilizes a plain transformer-like 2D MAE without 3D specific modifications. Our Point-M2AE adopts a hierarchical encoder-decoder with skip connections and local attention to better capture local-to-global 3D geometries. \n\n2) **Multi-scale Masking strategy.** [3] adopts the vanilla random masking, but we introduce a multi-scale masking to generate consistent visible region across scales. It can largely boosts the performance as shown in Table 7 of the main paper (88.4 $\\rightarrow$ 92.9 for Linear SVM on ModelNet40).\n\n>>**Comparison to Pellis et al [4]:**\n1) **Different tasks.** We target on the self-supervised point cloud pre-training, but [4] solves semantic segmentation specially for heritage point clouds without pre-trained 3D networks. Thus, [4] cannot be compared with our method for pre-training. \n\n2) **Different usage of masks.** [4] leverages the masks to project labeled multi-view images into point clouds, while we utilize masks for pre-training via masked autoencoding.\n\nWe compare the characteristics and performances of [1], [2], [3] on different tasks in the following table.\n'Linear SVM' denotes the linear evaluation on ModelNet by SVM, and '5-way 20-shot' denotes the few-shot classification on 5-way 20-shot ModelNet40. The last three lines represent the three fine-tuning experiments on three datasets.\n\n||Point-M2AE|Point-BERT|[1]|[2]|[3]|\n|---|---|---|---|---|---|\n|Pre-training Style|MAE|BERT|BERT |MD |MAE|\n|Hierarchical |Yes |No |No |No |No|\n|Attention Scope |Local |Global |Global |Global |Global|\n|Masking |Multi-scale |Random |Random |Random |Random|\n|Linear SVM |**92.9**| 87.4 |92.1 |- |-|\n|5-way 20-shot |**98.3** |96.3 |97.0 |97.2 |97.8|\n|ModelNet40 |**94.0**|93.2 |93.6 |93.8 |93.8|\n|ScanObjectNN |**86.4**|83.1 |83.2 |84.3 |85.2|\n|ShapeNetPart |**86.5**|85.6 |86.0 |86.0 |86.1|\n\n>**Q2: Visual interpretation of local spatial attention in ablation study.**\n\nWe visualize the attention weights with and without the local attention in ***Figure 5 of the newly-revised supplementary material.*** As shown in the figure, with the local attention, the query point (marked by star) only has large attention values within a local spatial range (marked by yellow dotted circles), other than scattering over the entire 3D shape (marked by yellow arrows). This enables each point to concentrate more on neighboring local features in early stages for capturing and encoding detailed structures.\n\nReferences\n\n[1] POS-BERT: Point Cloud One-Stage BERT Pre-Training. arXiv 2022.\n\n[2] Masked Discrimination for Self-Supervised Learning on Point Clouds. arXiv 2022.\n\n[3] Masked Autoencoders for Point Cloud Self-supervised Learning. ECCV 2022.\n\n[4] An Image-Based Deep Learning Workflow for 3D Heritage Point Cloud Semantic Segmentation. 
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences-ISPRS Archives 46.2/W1-2022 (2022): 429-434.", " >**Q3: More description of fine-grained information.**\n\nThanks for the valuable suggestion. The fine-grained information of the point clouds refers to the exquisite 3D structures and subtle geometric variations, e.g., thin branches of a plant, fingers of a human, engines of a plane, etc. It can be better revealed by high-resolution points and are more significant to some dense 3D tasks, e.g., part segmentation. However, the plain transformer of vanilla MAE is a non-hierarchical architecture and directly downsamples the input point cloud into low-resolution cubes, which would severely blur such details. \n\nTherefore, our Point-M2AE adopts a pyramid encoder-decoder transformer with multi-scale representations. In this way, the network can well capture the fine-grained features from the reserved fine-grained 3D structures in the early stages and gradually aggregates them into high-level semantics.\n\nTo better ease the understanding, we show the differences between multi-scale (hierarchical) and single-scale (non-hierarchical) architectures by two visualizations in ***the newly-revised supplementary material***. In ***Figure 6***, we visualize the extracted point features and the reconstructed masked point clouds during pre-training. Compared to the non-hierarchical network, the hierarchical one shows higher feature responses in the fine-grained structures and reconstructs details more accurately. In ***Figure 7***, we visualize the extracted point features and the segmentation results in downstream part segmentation task. Likewise, the multi-scale architecture predicts more fine-grained part labels for the objects.\n\n>**Q4: EMD Loss.**\n\nThanks for this suggestion. Except for the Chamfer Distance (CD) loss with L2 norm, we further evaluate the L1-norm CD loss, EMD loss, and their combinations with a Linear SVM on ModelNet40. As shown in the table below, the original L2-norm loss performs better than all other compared losses. \n\n\n|L2-norm CD |L1-norm CD |EMD |Linear SVM|\n|---|---|---|---|\n|✔️ |- |- |**92.9**|\n|- |✔️ |- |91.1|\n|- |- |✔️ |91.9|\n|✔️|- |✔️ |92.4|\n|- |✔️ |✔️ |91.3|\n\nWe denote the reconstructed and ground-truth point sets as $S_1$ and $S_2$. Compared to EMD loss that requires an optimal mapping for every point between $S_1$ and $S_2$, L2-norm CD loss only optimizes the separate pair-wise distances and is thus more robust to the variation of 3D structures. Compared to L1-norm CD loss, L2 norm of Euclidean Distances can better depict spatial distribution and pay more attention to the far away points.\n\n>**Q5: Miss figure tag.**\n\nThanks for pointing out. We will correct the tag in the revised supplementary material.", " This paper uses MAE to learn the point cloud features without fixed topology. Its multi-scale architecture can capture features at different resolutions for guidance. The pyramid architecture also facilitates the network to extract high-level features, and the self-attention mechanism adopted by this network also controls the focus of the network. The complementary skip connections also better connect the global and local features to guide each other. And it also produces competitive results in experiments. - Originality: The main idea of the proposed approach is to use the masked autoencoders. 
To my limited knowledge about it, this is novel and thus the originality is reasonable.\n- Quality:While the approach seems reasonable and the experimental results look promising, I have the following concerns(See Questions) about the paper.\n- Clarity:This manuscript is clearly written.\n- Significance:There are not many works in the field of point cloud feature learning by masked autoencoders, so I think this paper makes a positive contribution. 1) There are some works in the field that apply masked autoencoders in point clouds except Point-BERT, but I find that the authors miss these works[1][2][3][4], and I want to see more discussion from the authors about some of these out-picked works, and what this paper amounts to in terms of improvements to these precursor works.\n2) I hope that the authors can add visual interpretation of the results of the Local Spatial Self-Attention module in their ablation experiments.\n3) The authors keeps stressing that their model can pay more attention to fine-grained information, but there are no more descriptions about fine-grained information except \\textbf{Figure 2} in \\textbf{Appendix C} (which only shows the point cloud images with different scales).\n4) Although the authors have achieved good results in the experiments after freezing the encoder for the relevant downstream tasks, have you analyzed the influence of the CD loss you selected for the training of the model's results? Now it seems that the decoder is driven only by CD Loss. Whether to use EMD as loss in the experiments or combine the two as compound loss? Will the results still be as good as now? I'd like to see the quantitative results if possible.\n5) Miss the figure tag in \\textbf{Appendix C. Line 68}: “As shown in Figure []”.\n\nIn the current version, there are some issues as well. I look forward to the response by the authors. For now, I would recommend a borderline accept rating for the paper.\n\n[1]Fu, Kexue, et al. \"POS-BERT: Point Cloud One-Stage BERT Pre-Training.\" arXiv preprint arXiv:2204.00989 (2022).\n\n[2]Liu, Haotian, Mu Cai, and Yong Jae Lee. \"Masked Discrimination for Self-Supervised Learning on Point Clouds.\" arXiv preprint arXiv:2203.11183 (2022).\n\n[3]Pang, Yatian, et al. \"Masked autoencoders for point cloud self-supervised learning.\" arXiv preprint arXiv:2203.06604 (2022).\n\n[4]Pellis, Eugenio, et al. \"An Image-Based Deep Learning Workflow for 3D Heritage Point Cloud Semantic Segmentation.\" International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences-ISPRS Archives 46.2/W1-2022 (2022): 429-434. Yes, I don't see any potential negative social impact of this work.", " This work proposes a multi-scale masked autoencoder for pre-training on point clouds. It designs a pyramid architecture for hierarchical encoder and decoder to progressively model 3D point clouds. A novel multi-scale masking and a back-project strategy are proposed for generating consistent visible regions across scales for point clouds. Extensive experiments demonstrate that the proposed method outperforms other pre-training methods on the linear SVM classification task and downstream tasks. Strengths:\n1. This paper is generally technically sound.\n2. The design of the masking and back-project strategy is novel and fits well on the 3D point clouds pretraining task.\n3. The extensive experiments verify the effectiveness of the proposed approach.\n\nWeakness:\n1. 
The overall framework seems to be simply a variant of PoinNet++ with Transformer Encoder, especially the idea of the Skip Connection and Point Token Upsampling, hence the framework seems to have insufficient technical innovations.\n2. It can be noted that the U-Net like network demonstrated in Figures 2 and 3 have arbitrary multiple scales (at least greater than three). However, the actual experiments have only 2-3 scales and do not give an adequate explanation for the choice of the number of scales.\n\n \n\n 1. Why does the final experiment use only 2-3 scales and what about more? Please explain further the motivation for the choice of the number of scales.\n2. In Sec 3.2 ‘Token Embedding and Merging’, “After that, we obtain the initial point tokens T_1^v for the 1-st stage of the encoder” seems to be conflict with the Figure 2 where the T_1^v is the output of the 1-st stage of the encoder.\n3. In Sec 4.2 ‘Shape Classification’, “We follow Point-BERT to use the voting strategy [29] for fair comparison on ModelNet40…”, as the Point-BERT does not seem to mention that it uses a voting strategy and dose not cite the paper [29]. Please explain further the voting strategy.\n\n NA", " This paper extends the MAE to deal with irregular 3d point clouds. It proposes a multi-scale u-net like encoder-decoder to learn both fine-grained local shape information and global semantic information. The network is self-supervised trained by masking the input point cloud with multi-scale mask strategy. Experiments are conducted on ModelNet40 with the learned representation combined with SVM, achieving good classification results which is comparable to the SOTA supervised methods. The effectiveness of the proposed method is also validated by fine-tuning on the down-stream tasks such as classification, part segmentation and 3d object detection, and observed improvements over existing methods. Positives\n+: The paper studies an important problem in deep learning and computer vision community. The proposed method is a reasonable extension of MAE to deal with point clouds.\n+: Good results were obtained by the proposed method on different tasks.\n+: Detailed ablation study to validate the effectiveness of each component.\n\nNegatives\n-: It would be better if the authors can also provide the results of using the frozen encoder for other tasks like part segmentation and 3d object detection, so as to show that the conclusion about the learned representation is general enough.\n-: As the advantage of self-supervised learning is to harvest from a large amount of unlabeled data, the paper lacks results about using more dataset for pre-training, for instance, using all three kinds of datasets (ShapeNet, ModelNet, ScanNet) together. This is not only helpful for readers to know the potential of the proposed method, but also can be interested to see if the proposed method can deal with different kinds of data together to boost the performance.\n-: The influence of different amount of pre-training examples should also be included in the ablation study.\n-: For the paper writing, it is unclear about the difficulties of introducing the MAE into 3d point cloud. It seems that there are MAE methods for 2D images, then why not we use it for 3d point, and so, the paper makes it done. Therefore, the contribution of this paper is not significant.\n please see the above negative points. yes" ]
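The voting strategy asked about in the review above, and described in the authors' reply earlier in this thread (predict the test set 10 times under random scaling and translation, then max-pool the classification logits), is a form of test-time augmentation. The sketch below is an illustrative reimplementation rather than the authors' code; `model` stands for any point cloud classifier returning per-class logits, and the augmentation ranges are placeholder values.

```python
import torch

@torch.no_grad()
def vote_predict(model, points, num_votes=10, scale_range=(0.8, 1.25), shift_range=0.1):
    """Test-time voting (illustrative): run the model on `num_votes` randomly
    scaled/translated copies of each point cloud and max-pool the logits.

    points: (B, N, 3) tensor of test point clouds.
    Returns: (B,) tensor of predicted class indices.
    """
    model.eval()
    pooled = None
    for _ in range(num_votes):
        scale = torch.empty(points.size(0), 1, 1, device=points.device).uniform_(*scale_range)
        shift = torch.empty(points.size(0), 1, 3, device=points.device).uniform_(-shift_range, shift_range)
        logits = model(points * scale + shift)          # (B, num_classes)
        pooled = logits if pooled is None else torch.maximum(pooled, logits)
    return pooled.argmax(dim=-1)
```

Since only the output logits are aggregated, this strategy adds no parameters and only multiplies inference cost by the number of votes.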
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2022_MbCAOMGsZXC", "9jxCa8LrMEA", "9jxCa8LrMEA", "VjMzVDtyE7p", "VjMzVDtyE7p", "N28EVxYMt17", "N28EVxYMt17", "nips_2022_MbCAOMGsZXC", "nips_2022_MbCAOMGsZXC", "nips_2022_MbCAOMGsZXC" ]
nips_2022_u4ihlSG240n
OmniVL: One Foundation Model for Image-Language and Video-Language Tasks
This paper presents OmniVL, a new foundation model to support both image-language and video-language tasks using one universal architecture. It adopts a unified transformer-based visual encoder for both image and video inputs, and thus can perform joint image-language and video-language pretraining. We demonstrate, for the first time, such a paradigm benefits both image and video tasks, as opposed to the conventional one-directional transfer (e.g., use image-language to help video-language). To this end, we propose a \emph{decoupled} joint pretraining of image-language and video-language to effectively decompose the vision-language modeling into spatial and temporal dimensions and obtain performance boost on both image and video tasks. Moreover, we introduce a novel unified vision-language contrastive (UniVLC) loss to leverage image-text, video-text, image-label (e.g., image classification), video-label (e.g., video action recognition) data together, so that both supervised and noisily supervised pretraining data are utilized as much as possible. Without incurring extra task-specific adaptors, OmniVL can simultaneously support visual only tasks (e.g., image classification, video action recognition), cross-modal alignment tasks (e.g., image/video-text retrieval), and multi-modal understanding and generation tasks (e.g., image/video question answering, captioning). We evaluate OmniVL on a wide range of downstream tasks and achieve state-of-the-art or competitive results with similar model size and data scale.
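To make the unified vision-language contrastive (UniVLC) idea in the abstract concrete, here is a heavily simplified sketch of a bidirectional contrastive loss in which any label-equivalent samples (e.g., a visual input and its paired caption, or inputs sharing a class label turned into a text prompt) count as positives, in the spirit of UniCL-style objectives. The function name, shapes, positive-mask construction, and temperature are assumptions for illustration only; this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def unified_contrastive_loss(visual_emb, text_emb, pos_mask, temperature=0.07):
    """Simplified bidirectional contrastive loss with many-to-many positives.

    visual_emb: (B, D) image or video-clip embeddings
    text_emb:   (B, D) caption or class-prompt embeddings
    pos_mask:   (B, B) float mask, pos_mask[i, j] = 1 if visual i and text j
                form a valid pair (same caption or same class label).
    """
    v = F.normalize(visual_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.t() / temperature           # (B, B) cosine similarities

    log_prob_v2t = logits.log_softmax(dim=1)   # visual -> text direction
    log_prob_t2v = logits.log_softmax(dim=0)   # text -> visual direction

    # Average the log-likelihood over all positives of each anchor.
    pos_per_row = pos_mask.sum(dim=1).clamp(min=1)
    pos_per_col = pos_mask.sum(dim=0).clamp(min=1)
    loss_v2t = -(pos_mask * log_prob_v2t).sum(dim=1) / pos_per_row
    loss_t2v = -(pos_mask * log_prob_t2v).sum(dim=0) / pos_per_col
    return 0.5 * (loss_v2t.mean() + loss_t2v.mean())
```

The only difference from a standard image-text contrastive loss is that the positive mask is no longer the identity matrix, which is what lets supervised image-label and video-label data be folded into the same objective.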
Accept
After the authors’ rebuttal and a long discussion between reviewers and authors, the paper unanimously receives positive ratings thanks to its reasonable proposed ideas and thorough experimental evaluation. The camera-ready version may need to be updated to fully reflect the reviewers’ comments and the authors’ answers to them.
train
[ "nYqwlUBKlVI", "ZIfNK8iuV6", "w1VRCf9ZP0V", "TFylbKhtkjC", "7MYTJ8PiWj", "0PW8dXYL9JK", "9W2eiUcEde8", "rxn7sYxSw2x", "fsd9trNF8Yb", "Pf3Wt915eCGW", "BB1DOws3kxf", "yaYZtrdpnZ1", "0PZydnhoyTi", "h1aMacnU5R74", "oBjCUagS_Lve", "gH7PhVI-4Zu", "vdzHGhjgWgS", "C3SXGGQbbGf", "FEENJNLySr", "hfx1QwdfuGq", "aMrASo_ScJf", "VvtFH5Je5F8", "XeXtqYvmEth", "nLUKAMW89Qp" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer cPTx, thanks for your effort again! We are happy that our rebuttal well addressed your concerns!", " Thanks for your rebuttal, I appreciated the newly added experiments. \nThe comparison becomes stronger now. My concerns about NoCaps are addressed.\nI have updated my ratings. ", " Hi cPTx, sorry for disturbing you again and really thanks for your effort! Considering the reviewer-author discussion time window will be closed in 15 hours, can you help read our response to your new questions at your earliest convenience? We hope our response can address your only last two concerns and are also happy to discuss with you further.\n", " Hi Reviewer 5rq1, thanks again for your effort. Do you have any more concerns? As the reviewer-authors discussion window will be closed in about 18 hours, please raise your questions as soon as possible if existed. We are happy to answer them! Grateful for your help!", " **Q2: NoCaps experiments are invalid?**\n\nAnswer: Great question! We agree that \"the initial proposal of NoCaps may not allow the introduction of extra image-text pairs\" when it was first proposed in year 2018, however, Vision-Language Pertraining (VLP) has been rapidly developing in recent years and it is a common practice to evaluate the pretrain-then-finetune model on NoCaps validation for VLP methods now. **All baselines [1][2][3][4][5] except OFA (results not reported) in Table 6 follow the same setting as our OmniVL**, i.e., use image-text pairs during pretraining to evaluate the pretraining performance. Especially ***note that although VinVL reports the results without pre-training, the numbers in our paper are reproduced by LEMON[3] via finetuning from the released checkpoints (which is pre-trained on the combined datasets including 5.65M images, 2.5M QAs, 4.68M captions and 1.67M pseudo-captions)***. In this sense, our comparison is at least fair.\n\n\n[1] Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts.\n\n[2] VinVL: Revisiting Visual Representations in Vision-Language Models.\n\n[3] Scaling up vision-language pre-training for image captioning.\n\n[4] Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation.\n\n[5] SimVLM: Simple Visual Language Model Pretraining with Weak Supervision.\n\nFeel free to discuss with us about any new concerns that may obstacle you from changing your decision into accept! We are grateful for your efforts! \n\nWe notice you gave relatively low scores in terms of **Soundness**, **Presentation**, and **Contribution**, do you have specific concerns in terms of these aspects. Also feel free to raise any suggestions for them, we are happy to polish our paper with another revision or in the camera ready version.", " Dear reviewer cPTx,\n\nConsidering the reviewer-author discussion time window will be closed soon, we first answer your questions based on our understanding. Feel free to raise any more questions if our answers do not exactly fit your questions.\n\n**Q1: Comparisons are not fair/convincing?**\n\nAnswer: To answer this question more clearly, we split it into three sub-questions:\n\n1) For the comparison with FLAVA, our comparison is **totally fair and convincing**. Even though FLAVA is pretrained on a larger data than ours, our OmniVL outperforms FLAVA on most tasks/datasets. 
For example, on image-text retrieval task, our OmniVL achieves better results than FLAVA on both COCO and Flickr under zero-shot and fine-tuning settings (see answer to your question 2 in the rebuttal above). For linear probing results (please see all results in the supplementary material), even though FLAVA performs better than our OmniVL on Food101, DTD and Flowers datasets, our OmniVL outperforms FLAVA on CIFAR10, CIFAR100 and Pets datasets. And the average performance on these six datasets of our OmniVL is **even slightly better** than FLAVA(**86.7 vs 86.3**). In this sense, even under this unfair setting (FLAVA uses larger dataset than ours), our OmniVL is already better than FLAVA. By pretraining FLAVA on the same 14M image-text data (fair setting, FLAVA$\\_\\{14M\\}$), we demonstrate our OmniVL outperforms FLAVA on most datasets. Detailed table is below.\n\n| Method | Food101 | CIFAR10 | CIFAR100 | Pets | DTD | Flowers | Avg |\n|:---|:---:|:---:|:---:|:---:| :---:| :---:| :---:| \n| FLAVA | 85.2 | 90.4 | 76.2 | 82.3 | 74.2 | 92.7 | 83.5 |\n| FLAVA$\\_\\{14M\\}$ | **88.5** | 92.9 | 77.7 | 84.8 | **77.3** | **96.4** | 86.3 |\n| OmniVL | 87.4 | **96.2** | **83.2** | **87.1** | 76.5 | 89.8 | **86.7** |\n\n2) For the comparison with OFA, as you said, our OmniVL and OFA **target different research perspective, therefore it is indeed hard or even impossible to compare them under an exactly fair setting**. In details, OFA targets multi-task unification in the image/text domain, including detection, captioning, VQA, Image Infilling, visual grounding, and so on. On contrary, our OmniVL mainly proposes to unify image-language and video-language pretraining. Technically, they are orthogonal efforts that study different things. If we involve other tasks beyond image-text matching and image captioning, then the comparison is unfair for us because our OmniVL does not use the pretrainging data of such tasks. To summarize, only involving image-text matching and image captioning may not look like a totally fair/convincing setting, but we have tried our best to do that. Based on the fact that OFA and our OmniVL are orthogonal effort, we would like to ask you to understand our situation. \n\n3) For the scalability of large pre-training data, **we have already conducted the corresponding experiments in Table 4 of our main paper**. By scaling the pretraining image-text pair data from 4M to 14M, we have observed significant performance gain, which well demonstrates the scalability of our method. We understand pretraining on a even larger scale like FLAVA, OFA, or even SIMVLM may make the results more solid, but our GPU resources cannot support to finish such experiments in a short time (e.g., the estimated training time on FLAVA 70M data will take more than three weeks). Considering most research teams like us do not have many gpus like big companies like google (SIMVLM), facebook (FLAVA) and Alibaba (OFA), we hope you can understand it. But we may emphasize that **14M and 4M image-text data settings are the most common setting for the research community in this direction**. For example, baseline methods including UNITER, OSCAR, UNIMO, VLMO, ALBEF and BLIP all follow such settings. More importantly, **the performance gain by scaling image-text data from 4M to 14M can well demonstrate the scalability of our OmniVL**. 
But anyway, if pretraining on even larger data is the only factor that obstacle you from recommending acceptance, we promise to add such results into the camera ready version (more time left for us).\n\n\n", " Before we answer your questions, may we ask two quick questions?\n\n1. We have reported the results of FLAVA on 14M image-text data, could you please provide some details on why you think the comparison with FLAVA is not convincing enough?\n\n2. Does \"extra image-text pairs\" in your second question refer to Image-text pairs adopted in pretraining?", " Dear authors,\nThanks for your response.\nMy concerns with Q1, Q4, Q5 are addressed. However, for Q2 and Q3, I still have doubts about the comparison.\n\n**[Comparisions are not fair]**\n\nI agree that some comparisons are not fair because those methods are trained on different datasets (e.g., SimVLM on 1.8B) but the comparisons with OFA and FLAVA are not convincing.\nFor example, pre-training OFA on only image-text matching and image captioning objectives is not a correct way to use OFA because it is a multi-tasking model. Removing other objectives would definitely compromise its performance.\nSo here is another question, is this approach scalable to large pre-training data? If so, could you report the performance pre-training with FLAVA's data or OFA's data?\n\n**[NoCaps experiments are invalid]**\n\nIn addition, as for the NoCaps experiments, I do not think it is the correct way to do NoCaps experiments.\nIf you are familiar with this dataset, you will know the initial proposal of NoCaps does not allow the introduction of extra image-text pairs [1]. Most methods (e.g., VinVL) followed this rule. However, this work adopted the pre-trained and then fine-tuned model on COCO to evaluate on NoCaps. So the experiments on NoCaps are invalid.\n\nAnyway, I appreciate the authors' efforts in the additional experiments.\nI am glad to raise my rating but I still lean to reject.\n\n\n[1] Agrawal, H., Desai, K., Wang, Y., Chen, X., Jain, R., Johnson, M., ... & Anderson, P. (2019). Nocaps: Novel object captioning at scale. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 8948-8957).", " Hi Reviewer 5rq1, thanks for your response! You really asked a very good & interesting question! To answer this question in detail, we would like to first discuss the two metrics \"CIDEr\" and \"SPICE\". \"CIDEr\" is a n-gram based metric, which measures the word overlap between generated and reference captions. That is to say, it will be higher if the predicted captions and reference captions have the exactly same words. This property also makes it sensitive to n-gram overlap, thus cannot well evaluate cases where two sentences are semantically similar but with different words. On the contrary, the \"SPICE\" metric[1] measures the semantical similarity of scene graphs constructed from predicted and reference caption but not n-gram, therefore it shows better correlation with human judgement (More detailed can be found in [1]). Therefore, the close performance in terms of SPICE indicates our model can generate **semantically similar results** as BLIP.\n\nAnswer for sub-question 1: Based on the above analysis, the large performance gap in terms of the CIDEr metric and similar SPICE performance indicates that our model generates semantically similar captions but with different words. 
The possible reason may be that we conduct pretraining on more diverse datasets, i.e., involving both image-language data and video-language data, rather than only image-language data as BLIP. Since we cannot see the groundtruth captions during evaluation (submitted to the server), we are unable to prove our hypothesis by checking detailed examples.\n\nAnswer for sub-question 2: To be honest, this question is very interesting but difficult to answer. Indeed, we also observe similar phenomenon in recent SOTA methods in the NoCaps official leaderboard, i.e., achieving better results (in terms of CIDEr) on near and out of domain splits than in-domain split on the test set. For example, the CIDEr of Rank-1st method GiT on in-domain and near-domain are 122.40, 123.92 respectively, the CIDEr of Rank-2nd method CoCa on in-domain, near-domain and out-domain are 117.90, 120.73 and 121.69 respectively. We also discussed this phenomenon with experts of image and video captioning tasks, but cannot find very concrete explanations because the groundtruth captions of the test set are not accessible. We guess the possible reason may come from two aspects: 1) There is a tradeoff between overfitting and generalization, i.e., good generalization performance on near-domain and out-domain will lead to slightly worse results on in-domain split. 2) The three splits of the test set may have different annotation distribution.\n\n\n[1] Spice: Semantic propositional image caption evaluation", " Hi Authors,\n\nThanks for addressing my comments and for your patience in my response. I am fine with your response to Q1 and Q3, but have some follow up questions on Q2.\n\n1. Yes, the spice compared with BLIP is 15 vs. 15.1. However, there is a larger gap on CIDEr: 104.6 vs. 111.3.\n2. I actually was referring to why does OmniVL do better on near and out of domain splits than it does on in domain samples, since those are typically the highest performing split (regardless of the other models you compare to). Is your explanation for this because it's only fine-tuned on COCO?", " Hi reviewer VLaG, thanks for your confirming! Really grateful!", " Hi Reviewer cPTx, since the reviewer-authors discussion time window will be closed soon, can you help read the rebuttal as soon as possible so that we can address any remaining concerns you may have. Grateful for your effort!", " Hi,\n\nThe provided response answer my questions. Thanks!", " Hi Reviewer VLaG,\n\nWe are very grateful for your efforts and positive feedbacks. Can you help check our response and see whether your questions are well answered? We are happy to discuss with you about any remaining questions you may still have.", " Dear reviewer 5rq1,\n\nWe would like to thank you again for your effort and positive feedback. Can you help find time to take a look at the response and check whether your questions are well answered. We are very happy to discuss with you and provide further clarification for any new question. ", " Hi Reviewer cPTx, thanks for your effort and suggestions! We have addressed the concerns in the above rebuttal, can you help take a look and see whether your concerns are well addressed? We are happy to answer any question you may still have. ", " Thank you very much for recognizing the importance of our work. We are encouraged that the reviewer think our paper is well-written and our results are very promising. We also thank the reviewer for the valuable comments about the ablation part and have polished our paper based on the suggestions. 
Below are the detailed response.\n\n**Q1: Missing references.**\n\nAnswer: Thanks for pointing this out, we have cited and discussed these papers in the revision.\n\n**Q2: More details for the data used in Table 9.**\n\nAnswer: Thanks for raising this question! For a fair comparison, we use both visual-label data and vision-language data here. The detailed settings are listed below.\n\n| Pretraining | # Image-Text | # Image-Label | # Video-Text | # Video-Label |\n|:---|:---:|:---:|:---:|:---:|\n| Video-only | | | 2.5M | 0.3M |\n| Image-only | 14M | 1.3M | | |\n| Joint | 14M | 1.3M | 2.5M | 0.3M |\n| Img2vid | 14M | 1.3M | 2.5M | 0.3M |\n| Decoupled Joint | 14M | 1.3M | 2.5M | 0.3M |\n\n***We have added these details in the supplementary material and the corresponding description in the main paper***.\n\n**Q3: Does the Table 9 experiments assume a zero-shot setup, or the model is fine-tuned on the downstream task? If it is finetuned, can you also provide numbers without pre-training?**\n\nAnswer: Thanks for the great question! The results in Table 9 are evaluated under a fine-tuning setting. We provide the numbers without pretraining below. By comparison, we can see that pretraining on large-scale data could improve the results on all types of downstream tasks, and our proposed decoupled joint pretraining brings the most significant performance improvement. ***We have integrated these results into the Table 9 of the revision and added corresponding text description.***\n\n| Pretraining | COCO TR@1 | COCO IR@1 | MSRVTT (ret) | COCO (cap) B@4 | COCO (cap) C | VQA | MSRVTT(QA) |\n|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| Without Pretraining | 37.1 | 28.5 | 9.6 | 27.4 | 80.0 | 39.51 | 36.6 |\n| Decoupled Joint | 82.1 | 64.8 | 47.8 | 39.8 | 133.9 | 78.33 | 44.1 |\n\n**Q4: I find a bit strange the fact that \"Joint\" pre-training performs worse than \"Image-only\" and also that \"img2vid\" brings an improvement while \"joint\" doesn't. Do you have any insights about this? Also, I would suggest including some of the \"img2vid\" results in the main paper.**\n\nAnswer: Really great question! The possible reason is that, as video data contain complex spatial-temporal information, it is difficult (or needs very long pretraining epochs) for the model to converge by performing video pretraining (or joint pretraining) from scratch. This is partially reflected by the bad results of \"Video-only\" pretraining. By contrast, \"img2vid\" first performs image pretraining before conducting the video pretraining, which provides a strong initialization by first learning the spatial representation well and thus makes the model easier to learn in the following video pretraining. ***We have followed your suggestion and moved the \"img2vid\" results into the main paper***.\n\n**Q5: The decoupled learning seems to bring a quite significant gain. I would emphasize this aspect a bit more.**\n\nAnswer: Thanks for your great suggestion! ***We have added more discussion and emphasis in the revision***.\n\n**Q6: Can you give more details about the \"$\\*$\" (or 14M$\\^\\{\\*\\}$ ) from the tables. From my understanding 14M means that the model was pre-trained on 14M text-image pairs as well as on text-video data. Is this correct? If yes, please make it more obvious in the paper. Also, if this is indeed the case, I think it would be fair to include the numbers only with the 14M samples to have a clear comparison on the amount of performance the video data brings.**\n\nAnswer: Yes, your understanding is correct! 
**14M$\\^\\{\\*\\}$** denotes extra video data is used (as mentioned in line 217, we have highlighted it in the revision Line 210). Actually, we have shown the results with 14M image only data in Table 9 (denoted as **Image-only**) and supplementary material.", " Thank you for your valuable feedback. **We have addressed all the concerns and added missing comparisons in the revision**. We hope it can convince you and change the decision. \n\n**Q1: The so-called decoupled paradigm separates the image-language pretraining and video-language joint pretraining. The two-stage pre-training is a bit complicated. End-to-end pre-training is easier to train.**\n\nAnswer: Great question! We know one-stage end-to-end pre-training may look easier to train, but the complex spatial-temporal information within videos indeed makes it difficult to learn video and video-language representation from scratch. It is not only inefficient but also ineffective. The ineffectiveness is reflected by the results in Table 9, where the proposed decoupled joint pretraining outperforms both ***Joint*** Pretraining (which is trained end-to-end) and ***Video-only*** Pretraining (which is trained end-to-end) by a large margin. The proposed decoupled joint pretraining is also the **key** to make image-language and video-language benefit each other. The inefficiency can be partially reflected by the recent work TimeSformer[1] and video self-supervised pretraining work VideoMAE [2], which requires more than 800 training epochs.\n\nWe would also like to point out that, stage-wise pretraining has been widely adopted by other VL pretraining methods, e.g., FLAVA, VLMO, and e.t.c, due to its simplicity in implementations as well as its effectiveness.\n\n[1] Is Space-Time Attention All You Need for Video Understanding?\n\n[2] Masked Autoencoders As Spatio-temporal Learners.\n\n**Q2: Several important baselines are omitted. FLAVA serves as an important foundation model but there are few comparisons. Why? I found that FLAVA outperforms Omnivl on Food101, DTD, and Flowers.**\n\nAnswer: Since FLAVA only reports the zero-shot image-text retrieval results in their paper, and it doesn't support image captioning (lacking a text decoder for text generation), we don’t compare with it for these two tasks.\n\nTo make a comparison with FLAVA, we evaluate the fine-tuning results of FLAVA on our own, and compare the image-text retrieval performance of FLAVA and OmniVL on both COCO (top) and Flickr (down) under zero-shot and fine-tuning settings. The results below demonstrate OmniVL outperforms FLAVA by large margins on both datasets.\n\n| Method | FT (TR@1/5/10) | FT (IR@1/5/10) | ZS (TR@1/5/10) | ZS (TR@1/5/10) |\n|:---|:---:|:---:|:---:|:---:|\n| FLAVA | 61.5 / 82.1 / 89.6 | 50.1 / 74.4 / 83.2 | 42.7 / 76.8 / - | 38.4 / 67.5 / - |\n| OmniVL | 82.1 / 95.9 / 98.1 | 64.8 / 86.1 / 91.6 | 71.8 / 90.6 / 95.0 | 56.4 / 80.8 / 87.8 |\n\n\n| Method | FT (TR@1/5/10) | FT (IR@1/5/10) | ZS (TR@1/5/10) | ZS (TR@1/5/10) |\n|:---|:---:|:---:|:---:|:---:|\n| FLAVA | 85.4 / 95.7 / 98.3 | 73.2 / 92.7 / 95.5 | 67.7 / 94.0 / - | 65.2 / 89.4 / - |\n| OmniVL | 97.1 / 99.8 / 100.0 | 87.5 / 97.5 / 99.0 | 87.8 / 98.3 / 99.2 | 76.1 / 92.5 / 95.4 |\n\nAlthough FLAVA outperforms OmniVL on Food101, DTD, and Flowers in terms of linear probing, we argue that their pretraining data (70M image-text data) are much larger than ours. 
For fair comparisons, we pretrain FLAVA (we adopt the implementation in [torchmultimodal](https://github.com/facebookresearch/multimodal/tree/main/examples/flava)) on the 14M image-text data (denoted as FLAVA$\\_\\{14M\\}$) and evaluate the results using linear probing on Food101, CIFAR10, CIFAR100, Pets, DTD, and Flowers. The results below demonstrate that using the same amount of pretraining data, OmniVL beats FLAVA on most datasets. (Due to space limit, we put the linear probing comparison results with FLAVA in the supplementary material but will move to the main paper in the camera ready version, which has an additional content page space.)\n\nWe also compare OmniVL with FLAVA$\\_\\{14M\\}$ on COCO retrieval through finetuning. We see from the last two columns below that OmniVL outperforms FLAVA$\\_\\{14M\\}$ clearly.\n\n\n| Method | Food101 | CIFAR10 | CIFAR100 | Pets | DTD | Flowers | COCO FT TR@1 | COCO FT IR@1 |\n|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| FLAVA$\\_\\{14M\\}$ | 85.2 | 90.4 | 76.2 | 82.3 | 74.2 | 92.7 | 57.4 | 48.7 |\n| OmniVL | 87.4 | 96.2 | 83.2 | 87.1 | 76.5 | 89.8 | 82.1 | 64.8 |", " **Q3: Also, for image captioning tasks, the state-of-the-art is already 140+. Although different model sizes and pre-training data are used, the authors should compare with more recent methods (like SimVLM, OFA).**\n\nAnswer: We don’t include SIMVLM and OFA for comparisons on image caption because the pretraining data that they adopt are far larger than ours (1.8 B / 21.4 M v.s. 14 M). Even so, OmniVL still achieves comparable or even better results on some metrics, e.g., SPICE, on the COCO Caption dataset. \n\nFor fair comparisons, we pretrain OFA on the 14M image-text data with the officially released codebase (with only image-text matching and image captioning objectives, denoted as OFA$\\_\\{14M\\}$), the results (row 2 in the table below) demonstrate that using the same amount of pre-training data, OmniVL performs better than OFA measured by all the metrics. Note that we don't show the results of SIMVLM on 14M data since they haven't released their code. \n\n| Method | # Pre Img-Txt | B@4 | METEOR | CIDER | SPICE |\n|:---|:---:|:---:|:---:|:---:|:---:|\n| SIMVLM | 1.8B | 39.0 | 32.9 | 134.8 | 24.0 |\n| OFA$\\_\\{14M\\}$ | 14M | 38.7 | 30.6 | 130.5 | 23.5 | \n| OFA | 21.4M | 41.0 | 30.9 | 138.2 | 24.2 |\n| OmniVL | 14M | 39.8 | 31.2 | 133.9 | 24.2 |\n\nAdditionally, with less pretraining data, OmniVL outperforms SIMVLM and OFA on visual question answering, which also validates the effectiveness of our method.\n\n| Method | # Pre Img-Txt | test-dev | test-std |\n|:---|:---:|:---:|:---:|\n| SIMVLM | 1.8B | 77.9 | 78.1 |\n| OFA | 21.4M | 78.0 | 78.1 |\n| OmniVL | 14M | 78.3 | 78.4 |\n\nWe have included all the new comparison results in the uploaded revision.\n\n**Q4: Imagenet ommited for linear probing.**\n\nAnswer: Great question! We omitted ImageNet for linear probing evaluation because it is included in our UniVLC pretraining data, in that case, it is unfair to compare the linear probing performance of OmniVL with other methods.\n\n**Q5: Incorrect description about FLAVA in Section2.**\n\nAnswer: Thanks for pointing it out! We have fixed this issue in the uploaded revision.", " Thank you for acknowledging the importance of our work and sharing the valuable feedback! Below, we answer the questions one by one.\n\n**Q1: More discussion about limitation and negative societal impact.**\n\nAnswer: Thanks for your suggestion! 
We have added more discussion about limitation and negative societal impact in the revision. Here are updated section: ***Although our model has achieved superior results on a wide range of downstream tasks, it still lacks of the commonsense reasoning capability required by some visual-language interaction tasks (e.g., visual/video question answering). It also needs better architecture design to enable the zero-shot capability for visual question answering and few-shot task customization capability like GPT-3. From the societal impact perspective, since our model is pretrained on the large-scale web-crawled data which may contain some toxic language or bias, and it is not easy to explicitly control the model output, much attention should be paid to ensure responsible model deployment.***\n\n**Q2: Why does OmniVL perform more poorly on the in-domain subset of the NoCaps captioning task?**\n\nAnswer: As mentioned in Line 259, we adopted the fine-tuned model on COCO to evaluate on NoCaps, and the results in Table 6 demonstrate that OmniVL achieves better performance on the overall dataset. Considering the in-domain subset of NoCaps share a similar set of object classes with COCO, we think the slightly worse results (15 v.s. 15.1 in terms of SPICE) compared with BLIP might possibly result from the fact that OmniVL doesn’t overfit on COCO, producing decent results on both COCO and the in-domain split of NoCaps. More importantly, the resulting model does generalize to new domains---we observe significant performance gains produced by OmniVL compared to alternative methods on near-/out-of domain splits. This suggests that OmniVL is able to balance the tradeoff between overfitting and generalization. This is particularly encouraging given that generalization ability is arguably the gold standards to evaluate the performance of machine learning models.\n\n\n**Q3: UniVLC is a straightforward extension of UniCL loss.**\n\nAnswer: Thanks. In this paper, we aim to achieve unification in three dimensions: modality, functionality, and pretraining corpus. To this end, we build upon UniCL and make the following changes to fully utilize both webly-crawled visual-text data and human-annotated visual-label data. First, we maintain memory banks to store the most recent visual vectors and text vectors from momentum encoders, which facilitates us to enjoy a large batch size for contrastive learning. Second, we further extend its scope to cover video data. We agree that this extension is very intuitive and straightforward, but we are the first that demonstrates its effectiveness in unifying image-language and video-language pretraining.", " We thank all reviewers for their valuable comments. We are happy all reviewers think our paper addressed an important research topic with good motivation. We are also encouraged that Reviewer 5rq1 think our work is an obvious next step toward general multimodal models, and Reviewer VLaG thinks our paper is well-written and overall message is well understood.\n\n**We have revised our paper and updated it in the system**. 
In the revision, 1) We have followed the suggestion of Reviewer 5rq1 and added more in-depth discussion of limitation and negative societal impact; 2) We have addressed the main concerns of Reviewer cPTx by adding comparisons with more SOTA methods include FLAVA, SIMVLM and OFA, and fixed the imprecise description in Section 2; 3) Based on the suggestion of Reviewer VLaG, we have added the missing references and the ablation result without pre-training in Table 9, moved the \"img2vid\" results into main paper, and emphasized decoupled learning more. ", " This paper proposes OmniVL, a vision-language foundation model that supports both image-language and video-language pretraining in a unified framework. The model allows for evaluation of vision-only tasks, multimodal retrieval, and multimodal generation based tasks like captioning and VQA. The model achieves comparable or SOTA performance across many benchmark datasets. Strengths\n- The proposed framework unifies different modality objectives, downstream task formulations, and input data types.\n- Given the current move toward \"foundation\" models, this work seems like an obvious next step toward general multimodal models that are performant in a variety of downstream tasks, and is new to incorporate video data at the same time.\n- There are comprehensive experiments and performance is either comparable or SOTA on some downstream evaluation tasks.\n- The decoupled pretraining is a sensible idea and has a large impact on performance given the ablation study\n\nWeaknesses\n- While the paper states that their unified vision-language contrastive (UniVLC) is a novel contribution, it is a straight forward extension of the existing UniCL loss. \n - Why does OmniVL perform more poorly on the in-domain subset of the NoCaps captioning task? I don't think the authors adequately addressed limitations of the approach or negative societal impact. Computational resources were broadly mentioned, but the societal consequences sentences are hand wavy and it gives the impression the authors didn't think critically about this.", " The authors propose a new foundation model for image-language and video-language tasks.\nFirst, it applies a single unified transformer-based visual encoder for both image and video inputs. Second, they propose a decoupled joint pre-training of image-language and video-language to effectively leverage spatial and temporal dimensions.\nThird, they propose a unified vision-language contrastive loss to leverage different sources of data.\nThe authors argue they achieve a good performance on many different types of tasks.\n Strengths:\n\t1. The motivation is good. The authors aim to build a foundation model for image-language and video-language tasks, which is a popular research topic.\n\t\n\nWeaknesses:\n\t1. The so-called decoupled paradigm separates the image-language pretraining and video-language joint pretraining. The two-stage pre-training is a bit complicated. End-to-end pre-training is easier to train.\n\t2. Several important baselines are omitted. For example, FLAVA serves as an important foundation model but there are few comparisons. Why? I found that FLAVA outperforms Omnivl on ood101, DTD, and Flowers. Also, for image captioning tasks, the state-of-the-art is already 140+. Although different model sizes and pre-training data are used, the authors should compare with more recent methods (like SimVLM [1], OFA [2]).\n\t3. Several important datasets are omitted. 
For example, Imagenet is an important dataset for linear probing but this paper did not report its performance. \n\t4. Because a lot of important baseline models and datasets are missing, the comparison is not comprehensive.\n\n[1] SimVLM: Simple Visual Language Model Pretraining with Weak Supervision\n[2] OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework \t1. The performance comparison is confusing. Why do you report the performance of SimVLM in table7 but do not report it in table6? As we know, SimVLM is able to do both VQA and captioning. This is common for other baselines in other tables. \n\t2. In Section2, the authors claim FLAVA is concurrent work, which is not true. FLAVA is proposed last year and their results should be included and discussed.\n\n 1. include more studies and conduct a fair and comprehensive comparison", " The paper proposes a unified transformer based model for Image-Language and Video-Language tasks by performing both image-language and video-language pretraining. The authors propose a decoupled joint pretraining image-language and video-language in order to obtain a boost in performance on multiple downstream tasks. The model is pretrained in a decoupled way, where first it is pretrained on image-language and then on video-language different from prior works. Moreover, the authors propose a new loss function that leverages image-text, video-text, image-label and video-label information. Finally, the authors test their proposed approach on multiple benchmarks. Strengths\nThe paper tackles an important problem and proposes an interesting system that achieves good results. The overall paper is fairly well written and the overall message is well understood.\n\nWeaknesses\nWhile the overall idea is interesting and the results look promising, I feel like the ablation part of the paper could be more comprehensive. So, I have several questions related to these ablations (please see below)\n\nMissing citations\n* Miech, Antoine, et al. \"Thinking fast and slow: Efficient text-to-visual retrieval with transformers.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n* Croitoru, Ioana, et al. \"Teachtext: Crossmodal generalized distillation for text-video retrieval.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n* Bogolin, Simion-Vlad, et al. \"Cross Modal Retrieval with Querybank Normalisation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. 1. Can you give more details on the data used for the experiments from Table 9? How many samples are there? Do you use the \"label\" anotation?\n\n2. Does the Table 9 experiments assume a zero-shot setup, or the model is fine-tuned on the downstream task? If it is finetuned, can you also provide numbers without pre-training?\n\n3. I find a bit strange the fact that \"Joint\" pre-training performs worse than \"Image-only\" and also that \"img2vid\" brings an improvement while \"joint\" doesn't. Do you have any insights about this? Also, I would suggest including some of the \"img2vid\" results in the main paper.\n\n4. The decoupled learning seems to bring a quite significant gain. I would emphasize this aspect a bit more.\n\n5. Can you give more details about the \"*\" from the tables. From my understanding 14M* means that the model was pre-trained on 14M text-image pairs as well as on text-video data. Is this correct? 
If yes, please make it more obvious in the paper. Also, if this is indeed the case, I think it would be fair to include the numbers only with the 14M samples to have a clear comparison on the amount of performance the video data brings. The limitations are briefly discussed while the potential societal impact is left for future studies." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "ZIfNK8iuV6", "rxn7sYxSw2x", "rxn7sYxSw2x", "fsd9trNF8Yb", "rxn7sYxSw2x", "rxn7sYxSw2x", "rxn7sYxSw2x", "yaYZtrdpnZ1", "Pf3Wt915eCGW", "hfx1QwdfuGq", "0PZydnhoyTi", "gH7PhVI-4Zu", "vdzHGhjgWgS", "nLUKAMW89Qp", "VvtFH5Je5F8", "FEENJNLySr", "nLUKAMW89Qp", "XeXtqYvmEth", "XeXtqYvmEth", "VvtFH5Je5F8", "nips_2022_u4ihlSG240n", "nips_2022_u4ihlSG240n", "nips_2022_u4ihlSG240n", "nips_2022_u4ihlSG240n" ]
nips_2022_MK_130d4Y0
EcoFormer: Energy-Saving Attention with Linear Complexity
Transformer is a transformative framework for deep learning which models sequential data and has achieved remarkable performance on a wide range of tasks, but with high computational and energy cost. To improve its efficiency, a popular choice is to compress the models via binarization, which constrains the floating-point values to binary ones so as to significantly reduce resource consumption thanks to cheap bitwise operations. However, existing binarization methods only aim at minimizing the information loss for the input distribution statistically, while ignoring the pairwise similarity modeling at the core of the attention mechanism. To this end, we propose a new binarization paradigm customized to high-dimensional softmax attention via kernelized hashing, called EcoFormer, to map the original queries and keys into low-dimensional binary codes in Hamming space. The kernelized hash functions are learned to match the ground-truth similarity relations extracted from the attention map in a self-supervised way. Based on the equivalence between the inner product of binary codes and the Hamming distance, as well as the associative property of matrix multiplication, we can approximate the attention in linear complexity by expressing it as a dot product of binary codes. Moreover, the compact binary representations of queries and keys in EcoFormer enable us to replace most of the expensive multiply-accumulate operations in attention with simple accumulations, saving a considerable on-chip energy footprint on edge devices. Extensive experiments on both vision and language tasks show that EcoFormer consistently achieves performance comparable to standard attention while consuming far fewer resources. For example, based on PVTv2-B0 and ImageNet-1K, EcoFormer achieves a 73% reduction in on-chip energy footprint with only a slight performance drop of 0.33% compared to the standard attention. Code is available at https://github.com/ziplab/EcoFormer.
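The abstract's central computational point — that binarized queries and keys plus the associativity of matrix multiplication yield linear-complexity attention — can be illustrated with a small sketch. The code below uses a fixed random projection followed by a sign function (whereas the paper learns the kernelized hash functions from attention scores), maps the codes to {0, 1} so that similarities are non-negative, and applies the (KᵀV)-first ordering; it is a toy illustration of the complexity argument under these stated assumptions, not the authors' exact formulation.

```python
import torch

def binary_linear_attention(q, k, v, proj, eps=1e-6):
    """Toy linear attention with binary hash codes for queries and keys.

    q, k: (N, D) queries and keys; v: (N, Dv) values.
    proj: (D, b) projection defining b hash bits (random here; learned via
          kernelized hashing in the paper).
    """
    # Binary codes in {0, 1}^b; their dot product counts agreeing "on" bits,
    # so all pairwise similarities are non-negative attention weights.
    phi_q = (torch.sign(q @ proj) + 1.0) / 2.0               # (N, b)
    phi_k = (torch.sign(k @ proj) + 1.0) / 2.0               # (N, b)

    # Associativity: phi_q @ (phi_k^T @ v) costs O(N * b * Dv) instead of
    # materializing the N x N attention matrix.
    kv = phi_k.t() @ v                                       # (b, Dv)
    normalizer = phi_q @ phi_k.sum(dim=0, keepdim=True).t()  # (N, 1)
    return (phi_q @ kv) / (normalizer + eps)

N, D, Dv, b = 4096, 64, 64, 16
q, k, v = torch.randn(N, D), torch.randn(N, D), torch.randn(N, Dv)
out = binary_linear_attention(q, k, v, proj=torch.randn(D, b))
print(out.shape)  # torch.Size([4096, 64])
```

Because the codes are binary, the products against them reduce to selective accumulations in hardware, which is where the claimed multiplication and energy savings come from.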
Accept
The paper after rebuttal addresses several of the limitations of the first submission (mainly the lack of positioning in the rich existing literature). The strength of the paper resides in its holistic approach to the ("yet another") efficient attention mechanism, evaluating and discussing trade-offs between accuracy, compute, energy, and silicon-area use. The main limitations are the limited novelty and the rather thin experimental validation: no SOTA baselines on ImageNet, and LRA not being very correlated with real-task accuracy. Overall, I recommend that the paper be accepted based on its technical merit.
train
[ "b9e4vnBdlPR", "1qgwimKOVk", "52Z4Skec2oS", "N1ceu5aE7vj", "-rsJyQ48coQ", "rOlJjXV_Vo", "_kVcygdjc35", "K1yx3iLGW0c", "vQo7xYIcJ3U", "upxPvn_UeKO", "NVDIVFp2yBV", "9KMjYLmH5Ux", "FDyN9Q7F9wJ", "mCY_ZVqAaCq", "7ODSGDPNTTW" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your feedback and suggestions! We are happy to address your questions and appreciate the valuable comments.", " Thanks for your feedback and suggestions! We feel glad to address your questions and appreciate the constructive reviews for improving our work.", " Thank you for the clarifications. I am satisfied with the responses and maintain my original score.", " Thank the authors for their response. I agree with the authors that bit fusion can be a potential implementation of the proposed scheme. Moreover, the proposed scheme seems to provide significant energy savings even when considering the access energy of off-chip memory. I will raise my rating to borderline accept.", " Dear Reviewer BP1F,\n\nAs the rebuttal discussion is about to end soon, please don’t hesitate to let us know if there are still some concerns/questions. We have addressed your primary concerns regarding comparing with binary Transformers and made Eq. (11) easier to understand. We sincerely thank you again for your great efforts in reviewing this paper.\n\nBest regards,\n\nAuthors of #231", " Dear Reviewer THJi\n\nWe sincerely thank you again for your great efforts in reviewing this paper. We have addressed your major concerns regarding the actual demonstration on existing hardware backends (GPU and BitFusion). Please don’t hesitate to let us know if there are still some concerns/questions.\n\nBest regards,\n\nAuthors of #231", " We sincerely thank all reviewers for their valuable comments. \n\n## Novelty\nAll reviewers recognize the novelty of our method\n\n* \"*This idea of applying kernelized hash functions … is very interesting. I believe this proposal helps to further minimize the computational cost of MHSA and is considered as a new contribution to the field.*\" (Reviewer THJi)\n* \"*The idea of using learnable kernelized hashing to preserve similarity relations of binary codes is novel.*\" (Reviewer M9L7)\n* \"*The motivation of this work is promising and binarizing the Transformers through kernelized hashing functions in a self-supervised way is novel.*\" (Reviewer BP1F)\n\n## Promising Results\nAll reviewers agree that \n\n* \"*The results are also encouraging.*\" (Reviewer THJi)\n* \"*The results are encouraging and demonstrate the effectiveness of the proposed methods on the vision tasks.*\" (Reviewer M9L7)\n* \"*Experimental results on ImageNet show high efficiency compared with the baselines.*\" (Reviewer BP1F)\n\n## Summary of changes\nWe have revised our submission and summarized our updates as follows:\n\n1. We have provided more throughput results on a GPU. (Reviewer THJi)\n2. We have removed the area cost and shown the actual demonstration of different attention mechanisms in terms of energy and latency on BitFusion in the revision. (Reviewer THJi)\n3. We have provided more comparisons with the non-kernel-based efficient attention scheme. (Reviewer THJi) \n4. We have conducted more experiments in terms of long-context scenarios on the Long-Range Arena benchmark dataset. (Reviewer M9L7)\n5. We have provided more details on how to calculate the number of additions and multiplications. (Reviewers M9L7 and BP1F)\n6. We have provided more discussions on LSH-based efficient attention methods. 
(Reviewer M9L7)", " Thanks for your valuable comments.\n\n**Q1.** No comparisons with binary transformers, such as [A].\n\n**A1.** As mentioned in L129-133 and Section 3.3, we have discussed many binary transformers, including BinaryBERT [2] and BiBERT [47], which may not well preserve the similarity relations among tokens. We do not compare with [A] since [A] is **an ArXiv paper online on 05/25/2022, which is well after the submission deadline**. Compared with these methods, our EcoFormer customizes a new binarization paradigm to softmax attention from the kernelized hashing perspective, which helps to preserve the pairwise similarity while significantly saving energy cost. \n\nTo demonstrate this, as mentioned in L296-307 (L288-299 in the previous version), we have compared our EcoFormer with Quant-EcoFormer which uses the same binary quantization method as BinaryBERT and BiBERT except that we do not use any advanced training scheme such as knowledge distillation. We have shown the results in Table 4 (Table 3 in the previous version). From the table, our EcoFormer consistently outperforms Quant-EcoFormer on different frameworks while saving more energy. To make it clear, we have changed \"Quant-EcoFormer\" to \"Bi-EcoFormer\" in our revised manuscript.\n\n**Q2.** How to calculate the number of additions and multiplications?\n\n**A2.** As mentioned in L249-250, we calculate the number of additions and multiplications to measure the model complexity following [54]. Specifically, we calculate floating-point operations (FLOPs) following [62], where we count the multiply-accumulate operations for all layers. In this case, each multiply-accumulate operation consists of an addition and a multiplication. We also count the multiplications in the scaling operations. Therefore, our baseline MSA has more multiplications than additions. For our EcoFormer, as mentioned in L70-71, we can replace most of the floating-point multiplications in attention with simple additions. Therefore, there are more additions than multiplications in our EcoFormer. We have included these descriptions in Section A of the supplementary file.\n\n**Q3.** In line 202, $c$ is set to $⌈log_2⁡b⌉$, then the inequation in line 201 could be 0, right?\n\n**A3.** Thanks for pointing it out. In our revision, we set $c$ to $⌈log_2⁡ (b+1)⌉$.\n\n**Q4.** Sec 4.2 seems to have some inconsistent sizes of matrices or vectors, which makes this key part hard to follow and misunderstand. For example, in Eq. (11), the LHS $H({\\bf q})$ should be a $b \\times 1$ vector as you defined at the beginning of this subsection, but RHS is $N \\times b$ after the matrix multiplication.\n\n**A4.** We would like to clarify that our notations do not have inconsistent sizes. In Eq. (11), $H({\\bf q})$ is a matrix with a shape of $N \\times b$ since ${\\bf q} = [{\\bf q}_1, \\cdots, {\\bf q}_N]^{\\top}$ is a matrix with a shape of $N \\times D_p$ , where ${\\bf q}_i \\in \\mathbb{R}^{D_p}$ is a vector as defined at the beginning of Section 4.2. To avoid misunderstanding, in our revision, we have changed ${\\bf q}$ to ${\\bf Q}$, ${\\bf G}$ to ${\\bf g}({\\bf Q})$ and expanded Eq. 
(11) to\n\n$ H({\\bf Q})= \\left[ h_1({\\bf Q}), \\cdots, h_b({\\bf Q}) \\right] = \\left[ \\begin{array}{c}\n h_1({\\bf q}_1), \\cdots, h_b({\\bf q}_1) \\\\\\\\\n \\cdots~\\cdots \\\\\\\\\n h_1({\\bf q}_N), \\cdots, h_b({\\bf q}_N)\n\\end{array}\n\\right] =\n\\mathrm{sign}\\left({\\bf g}({\\bf Q}){\\bf A}\\right).$\n\n**Q5.** The additional parameters of weights $a$ are optimized iteratively in the training process, the computation cost and time cost might still be a problem.\n\n**A5.** As mentioned in L240-241, we only learn the hash functions per $\\tau$ epoch to prevent the prohibitive computational cost. Heuristically, as mentioned in L265-266, we find that setting $\\tau$ to 30 achieves good performance with only a small amount of additional cost (e.g., 156 seconds for PVTv2-B4). Moreover, we target at improving the inference efficiency, rather than the training efficiency.\n\n**Reference**\n\n[A] BiT: Robustly Binarized Multi-distilled Transformer. arXiv 2022.", " **Q5.** More discussions with the related methods [E][F][G].\n\n**A5.** Thanks for your suggestions. We have included the following discussions in the related work.\n\n*Reformer [30], **SMYRF [E], Fast Transformers [F] and LHA [G]** restrict the attention to the most similar token pairs via hashing and reduce the computational complexity to ${\\mathcal O}(N \\log N)$. In contrast, our EcoFormer learns kernel-based hash functions using attention scores to map the queries and keys into compact similarity-preserving binary codes in Hamming space, which is energy-efficient and in linear complexity ${\\mathcal O}(N)$. With the low-dimensional binary queries and keys, our EcoFormer is able to replace most of the multiplications with simple accumulations.* \n\n**Reference**\n\n[A] Long Range Arena: A Benchmark for Efficient Transformers. ICLR 2020.\n\n[B] Long-short transformer: Efficient transformers for language and vision. NeurIPS 2021.\n\n[C] Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention. AAAI 2021.\n\n[E] SMYRF: Efficient Attention using Asymmetric Clustering. NeurIPS 2020.\n\n[F] Fast Transformers with Clustered Attention. NeurIPS 2020.\n\n[G] Sparse Attention with Learning to Hash. ICLR 2022.\n", " Thanks for your constructive comments.\n\n**Q1.** More results on natural language processing, such as Long-range arena [A].\n\n**A1.** To evaluate the performance of different methods under long-context scenarios, we train our EcoFormer on two tasks, **Text** and **Retrieval** from the Long Range Arena (LRA) benchmark [A] following the settings of [B]. Our implementations are based on the released code of [C]. From Table III, our EcoFormer achieves comparable performance with much lower energy consumption. For example, on **Text**, compared with standard Transformer, our method saves around 94.6% multiplications and 93.7% additions as well as 94.5% energy consumption, which is more efficient than the existing attention mechanisms. These results further justify the effectiveness of our EcoFormer under long-context scenarios.\n\n**Q2.** How to calculate the number of additions and multiplications?\n\n**A2.** As mentioned in L249-250, we calculate the number of additions and multiplications to measure the model complexity following [54]. Specifically, we calculate floating-point operations (FLOPs) following [62], where we count the multiply-accumulate operations for all layers. In this case, each multiply-accumulate operation consists of an addition and a multiplication. 
We also count the multiplications in the scaling operations. Therefore, our baseline MSA has more multiplications than additions. For our EcoFormer, as mentioned in L70-71, we can replace most of the floating-point multiplications in attention with simple additions. Therefore, there are more additions than multiplications in our EcoFormer. We have included these descriptions in Section A of the supplementary file.\n\n**Q3.** Why not compare with Adder Attention [54] on DeiT?\n\n**A3.** First, as mentioned in L51-62, our EcoFormer focuses on improving the efficiency of attention via kernelized hashing for the long sequence scenario. Therefore, we do not apply our EcoFormer to DeiT due to the short sequence length of 196. Second, as the source code of Adder Attention is unavailable, we are unable to compare EcoFormer with Adder Attention on PVTv2 [63] and Twins [10].\n\n**Q4.** Why was the ImageNet model not trained from scratch?\n\n**A4.** We follow the standard training settings in binarization literature [40,73]. It has been well explored that on large-scale dataset, fine-tuning from the pre-trained model helps to reduce the performance drop due to the gradient approximation for the non-differentiable $\\rm{sign}$ function as explained in L167-169.\n\nTo demonstrate the effect of training from scratch, we apply our EcoFormer to PVTv2-B0 as well as PVTv2-B1. We follow the experimental settings mentioned in L253-267 (L252-266 in the previous version) except that we train the model from scratch with 300 epochs. The initial learning rate is set to $2.5 \\times 10^{-4}$. From Table IV, our method achieves comparable performance while significantly reducing the computational complexity and energy consumption. The accuracy drop from discretization can be mitigated by more advanced optimization methods as discussed in L126-129. We have put the results in Section E of the supplementary file.\n\nTable IV. Performance comparisons of different methods on ImageNet-1K. All the models are trained from scratch. The number of multiplications, additions, and energy consumption are calculated based on an image resolution of 224 × 224.\n\n| Model | Method | #Mul. (B) | #Add. (B) | Energy (B pJ) | Top-1 Acc. (%) |\n|:--------:|:------:|:---------:|:---------:|:-------------:|:--------------:|\n| PVTv2-B0 | MSA | 2.02 | 1.99 | 9.3 | 69.72 |\n| | Ours | **0.54** | **0.56** | **2.5** | **68.70** |\n| PVTv2-B1 | MSA | 5.02 | 5.00 | 23.1 | 78.34 |\n| | Ours | **2.03** | **2.09** | **9.4** | **77.49** |", " **Q4.** Comparisons with other non-kernel-based efficient attention schemes (e.g., [B]).\n\n**A4.** We have compared two non-kernel-based efficient attention methods (Linformer [61] and Reformer [30]) in Table 6 (Table 5 in the previous version). From the table, our EcoFormer saves much more additions, multiplications, and energy consumption. Besides, we also compare our EcoFormer with Combiner [B] on two tasks, **Text** and **Retrieval** from the Long Range Arena (LRA) benchmark dataset [56]. From Table III, our EcoFormer achieves comparable performance to the other methods while significantly reducing the computational complexity and energy cost. For example, on **Text**, compared with standard Transformer, our method saves around 94.6% multiplications and 93.7% additions as well as 94.5% energy consumption, which is more efficient than the existing efficient attention mechanisms. These results further justify the effectiveness of our EcoFormer under long-context scenarios.\n\nTable III. 
Performance comparisons of different methods on Long Range Arena (LRA). We report the classification accuracy (%) for **Text** as well as **Retrieval** and the average accuracy across two tasks. Bi-EcoFormer denotes that we use binary quantization [26] instead of our proposed hash functions to obtain binarized queries and keys relying on EcoFormer. $^{*}$ denotes that we obtain the results from the original paper.\n\n| Model | #Mul. (B) | #Add. (B) | Energy (B pJ) | Text (4K) | Retrieval (4K) | Average |\n|:------------------:|:---------:|:---------:|:-------------:|:---------:|:--------------:|:---------:|\n| Transformer | 4.63 | 4.57 | 21.25 | 64.87 | 79.62 | 72.25 |\n| Performer [9] | 0.83 | 0.84 | 3.83 | 64.82 | 79.08 | 71.95 |\n| Linformer [61] | 0.81 | 0.81 | 3.74 | 57.03 | 78.11 | 67.57 |\n| Reformer [30] | 0.54 | 0.54 | 2.49 | 65.19 | 79.46 | 72.33 |\n| Combiner$^{*}$ [B] | 0.51 | 0.51 | 2.34 | 64.36 | 56.10 | 60.23 |\n| BiQuant-EcoFormer | 0.39 | 0.67 | 2.03 | 64.68 | 75.91 | 70.30 |\n| **EcoFormer** | **0.25** | **0.29** | **1.17** | 64.79 | 78.67 | 71.73 |\n\n**Reference**\n\n[A] Bit fusion: Bit-level dynamically composable architecture for accelerating deep neural network. ISCA 2018.\n\n[B] Combiner: Full Attention Transformer with Sparse Computation Cost. NeurIPS 2021.", " Thanks for your constructive comments.\n\n**Q1.** I would like to know if the proposed kernelized hashing attention runs faster, slower, or comparable to kernel-based linear attention methods on GPUs or TPUs.\n\n**A1.** Our proposed EcoFormer runs faster than the kernel-based linear attention on a GPU. To demonstrate this, we measure the throughput of different methods on a single NVIDIA RTX 3090 GPU. We compare our EcoFormer with the standard multi-head self-attention (MSA) and kernel-based linear attention (KLA) [9]. From Table I, KLA shows higher throughput than MSA, while our EcoFormer achieves even faster throughput than KLA, thanks to the reduced feature dimensions ($b$ vs. $D_p$) of queries and keys (Line 190). With efficient accumulation implementation, the throughput of our EcoFormer can be further improved, which will be explored in the future. We have included these results and the corresponding discussions in Section D of the supplementary material.\n\nTable I. Throughput of different methods on ImageNet-1K. MSA denotes the standard multi-head self-attention and KLA represents the kernel-based linear attention. The throughput is measured with a mini-batch size of 32 and an image resolution of 224×224 on a single NVIDIA RTX 3090 GPU.\n\n| Model | Method | Test Throughput (images/s) |\n| ----------- | ------ | :--------------------------: |\n| PVTv2-B0 | MSA | 850 |\n| | KLA [9] | 1166 |\n| | **Ours** | **1379** |\n| PVTv2-B1 | MSA | 621 |\n| | KLA | 769 |\n| | **Ours** | **874** |\n| PVTv2-B2 | MSA | 404 |\n| | KLA | 444 |\n| | **Ours** | **483** |\n| Twins-SVT-S | MSA | 426 |\n| | KLA | 489 |\n| | **Ours** | **576** |\n\n**Q2.** I am not entirely clear about how the energy consumption is calculated. But I think the authors only calculated the energy consumption of arithmetic operations and ignored the energy consumption of DRAM accesses.\n\n**A2.** We would like to clarify that our EcoFormer only targets at **on-chip** efficiency as mentioned in L16-19, L70-71 and L205-207. Therefore, we only calculate the theoretical energy consumption of arithmetic operations following [54]. 
The energy consumption of **off-chip** DRAM access is dependent on many system-level factors, such as data reusing, partition and scheduling, which is beyond the scope of the manuscript. \n\nTo demonstrate the actual latency and energy consumption with DRAM access taken into consideration, we also test different methods on a simulator of BitFusion [A], a bit-flexible microarchitecture synthesized in 45 nm technology. All our experiments on BitFusion, including our EcoFormer and the other efficient attention mechanisms, use an attention layer with an embedding dimension of 32. We use the sequence length of 3,136 from the first stage of PVTv2 [63]. From Table II, our EcoFormer shows much lower latency and energy than the other efficient attention methods, which further verifies the advantage of our EcoFormer. We have included the results and corresponding descriptions in Section 5.3 of the revision.\n\nTable II. Latency and energy comparisons with different attention methods. We measure the latency and energy of an attention layer with a mini-batch size of 16, a sequence length of 3,136, and an embedding dimension of 32 on a BitFusion [A] simulator.\n| Method | Latency (ms) | Energy (pJ) |\n|:-----------:|:-------------:|:-----------:|\n| Transformer | 0.0036 | 85,692.18 |\n| Performer [9] | 0.0019 | 41,113.64 |\n| Linformer [61] | 0.0018 | 45,770.61 |\n| Reformer [30] | 0.0024 | 57,305.47 |\n| **EcoFormer** | **0.0010** | **24,990.75** |\n\n**Q3.** I am not convinced by the area saving reported in the paper as you cannot simply calculate the area of the accelerator design by multiplying the area of the circuits with the number of operations. I think it makes much more sense to claim better performance on a specific hardware backend rather than area savings.\n\n**A3.** We agree. The on-chip area cost depends on many factors, such as area reusing, memory bandwidth, the data flow of the accelerator, etc. Therefore, we remove the on-chip area cost in our revised manuscript. As mentioned in A2, we instead show the latency and energy on a simulator of BitFusion [A]. From Table II, our EcoFormer significantly saves energy cost and accelerates the inference speed.", " This paper proposes an efficient attention mechanism with a linear complexity with respect to the context length and only requires bitwise operations. The resulting model is called EcoFormer. Compared to existing kernel-based linear attention approaches, EcoFormer applies the kernelized hash functions to directly map queries and keys into binary codes to further reduce the computational complexity using floating-point values. Strengths: This idea of applying kernelized hash functions to existing multi-headed self-attention (MHSA) to achieve linear complexity and eliminate the requirement for floating-point arithmetic operations is very interesting. I believe this proposal helps to further minimize the computational cost of MHSA and is considered as a new contribution to the field. The paper is well written and provides a good discussion of the background and related work. The results are also encouraging, as shown in Table 2, where EcoFormer is able to achieve similar quality (i.e., accuracy) to the ordinary MHSA with significantly fewer multiplication and addition operations. 
Overall, I like the proposed idea and enjoy reading the paper, but cannot recommend accepting the paper because of the following weaknesses.\n\nWeaknesses: My main concern with this paper is whether the savings claimed in the paper can be achieved with existing hardware backends. The savings in terms of computational cost, energy consumption, and area are purely theoretical and are not supported by any actual demonstration.\n1) Performance: The actual performance of the proposed kernelized hashing attention is not reported in wall clock time and is not compared with the ordinary MHSA as well as the kernel-based linear attention methods. Many previous works on efficient attention have demonstrated a significant performance improvement compared the ordinary MHSA when the context length is long. For EcoFormer, I hope to understand more about the actual implementation of the proposed attention mechanism. Specifically, I would like to know if the proposed kernelized hashing attention runs faster, slower, or comparable to kernel-based linear attention methods on GPUs or TPUs.\n2) Energy: I am not entirely clear about how the energy consumption is calculated. But I think the authors only calculated the energy consumption of arthemitch operations and ignored the energy consumption of DRAM accesses. However, as pointed out by the TPU-v4 paper [1], the energy consumption of DRAM access dominates the energy consumption of the TPU accelerator chip because DRAM access is over 1000 times more expensive than half-precision multiplication. (Please see Table 2 of the paper). Therefore, I think the authors should claim savings in energy consumption by taking the DRAM access energy into account.\n3) Area: I am not convinced by the area saving reported in the paper as you cannot simply calculate the area of the accelerator design by multiplying the area of the circuits with the number of operations. The size of the processing element array/compute engine is determined by the memory bandwidth and the data flow of the accelerator to ensure that the accelerator is not constrained by memory or computation in most cases. I think it makes much more sense to claim better performance on a specific hardware backend rather than area savings.\n\n[1] Ten Lessons From Three Generations Shaped Google’s TPUv4i, ISCA'21\n I would like the authors to respond to the main weaknesses mentioned above. In addition, I have a question about how the proposed EcoFormer compares to other non-kernel-based efficient attention schemes (e.g., [2]).\n\n[2] Combiner: Full Attention Transformer with Sparse Computation Cost, NeurIPS'21 N/A", " This paper proposes a learnable kernelized hashing to binarize the queries and keys such that the hamming distance between binary codes matches the dot-product similarity relations in the original attention maps. The hashing functions use self-supervised learning using the targets from the most similar and dissimilar pair of queries and keys. The paper then exploits the equivalence between the hamming distance and dot product of binary codes to approximate the attention in linear complexity with respect to the sequence length. More importantly, the binarized codes allow replacing the power-hungry floating-point multiplications with simple additions and subtractions. Experiments on vision tasks show that the proposed method results in 50-80% energy and chip area savings for a minor accuracy loss.\n **Strengths**\n - The paper is well written. 
The language is clear and the flow of logic makes the paper easy to understand.\n - The idea of using learnable kernelized hashing to preserve similarity relations of binary codes is novel. The results are encouraging and demonstrate the effectiveness of the proposed methods on the vision tasks.\n\n**Weakness** \n - The paper demonstrates the efficacy of the proposed methods on vision tasks. I am concerned about the accuracy on natural language processing and other tasks involving sparse attention patterns where the sensitivity of the binarized attention codes could be affected. Long-range arena [1] could be a good benchmark to test the accuracy for various tasks.\n - The paper currently reports the number of additions and multiplications. However, a discussion on how these numbers were derived is missing. It would be good to include this (possibly in the supplementary). \n - The paper mentions that the Adder Attention [2] results in a drastic performance drop. However, the paper doesn't offer any evidence of this in a comparable setting. From the Adder Attention paper, the DeiT-B model with additions can drop the top-1 accuracy from 81.8% to 80.4%, similar to the accuracy drop for the proposed method with the Twins-SVT-S model. Thus, a fair comparison would be to use identical networks to compare the accuracy and performance benefits.\n\n[1] Long Range Arena: A Benchmark for Efficient Transformers\n\n[2] Adder Attention for Vision Transformer **Questions and Suggestions**\n - In lines 254-257, the paper describes the setup for the ImageNet experiments. The proposed method starts from the weights of the pre-trained MSA model and fine-tunes it for 30 epochs. In contrast, for CIFAR experiments, the models were trained from scratch. Why was the ImageNet model not trained from scratch?\n - Besides Reformer, other works have looked at LSH-based schemes to identify similar pairs of queries and keys. A discussion contrasting the current work from these could be included in the related works:\n - SMYRF: Efficient Attention using Asymmetric Clustering (NeurIPS 2020)\n - Fast Transformers with Clustered Attention (NeurIPS 2020) \n - Sparse Attention with Learning to Hash (ICLR 2022) These are discussed.", " This paper proposes a new binarization framework to customize high-dimensional softmax attention and reduce computation complexity in Transformers. Specifically, it utilizes the kernelized hashing to map the original queries and keys into low-dimensional binary codes in Hamming space. The kernelized hashing function is learned to maximize the similarities between the binarized codes and ground-truth relations in the original attention maps. Experiments on classification tasks show it achieves comparable performance with much fewer multiplications and additions. Strengths:\n1. The motivation of this work is promising and binarizing the Transformers through kernelized hashing functions in a self-supervised way is novel.\n2. In this paper, authors explicit the idea and methods clearly and easy to follow how they solve the whole optimization problem.\n3. Experimental results on ImageNet show high efficiency compared with the baselines.\n\nWeaknesses:\n1. Sec. 4.2 seems to have some inconsistent size of matrices or vectors, which makes this key part hard to follow and misunderstanding.\n2. There are some related works using binarization strategy in Transformers, too, including the latest BiT [1]. etc. However, there is no comparison with these methods.\n3. 
In the experiments section, the tables show the number of additions and multiplications, but the details of how they are calculated are missing.\n\n[1] Liu, Zechun, et al. \"BiT: Robustly Binarized Multi-distilled Transformer.\" arXiv preprint arXiv:2205.13016 (2022). 1. In line 202, $c$ is set to $\\lceil \\log_2 b \\rceil$, then the inequality in line 201 could be 0, right? \n2. The notations in Sec. 4.2 are confusing and difficult to understand, for example, in Eq. (11), the LHS $H(q)$ should be a bx1 vector as you defined at the beginning of this subsection, but the RHS is Nxb after the matrix multiplication. How does this happen?\n Since the additional weight parameters $a$ are optimized iteratively during training, the computation and time costs might still be a problem." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "52Z4Skec2oS", "N1ceu5aE7vj", "upxPvn_UeKO", "9KMjYLmH5Ux", "7ODSGDPNTTW", "FDyN9Q7F9wJ", "nips_2022_MK_130d4Y0", "7ODSGDPNTTW", "mCY_ZVqAaCq", "mCY_ZVqAaCq", "FDyN9Q7F9wJ", "FDyN9Q7F9wJ", "nips_2022_MK_130d4Y0", "nips_2022_MK_130d4Y0", "nips_2022_MK_130d4Y0" ]
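The EcoFormer exchange above (reviewer summary and author rebuttal) rests on the equivalence between the Hamming distance and the dot product of binary codes, which is what lets attention similarities be computed with additions and subtractions instead of floating-point multiplications. The snippet below is a minimal illustrative sketch of that identity in Python; it is not taken from the paper or its code, and the code length `b` is an arbitrary placeholder.

```python
# Illustrative sketch (not the authors' code): for codes in {-1, +1}^b,
# the dot product is an affine function of the Hamming distance.
import numpy as np

rng = np.random.default_rng(0)
b = 16                                  # code length, e.g. number of hashing bits (placeholder)
q = rng.choice([-1, 1], size=b)         # binarized query code
k = rng.choice([-1, 1], size=b)         # binarized key code

hamming = int(np.sum(q != k))           # number of disagreeing bits
dot = int(q @ k)                        # ordinary dot product
assert dot == b - 2 * hamming           # the equivalence exploited for linear attention
print(hamming, dot)
```

In a hardware setting the same quantity can be obtained from XOR plus popcount on the bit representation of the codes, which is the source of the multiplication savings discussed in the rebuttal.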
nips_2022_9uRS5ysgb9
Zero-Shot Video Question Answering via Frozen Bidirectional Language Models
Video question answering (VideoQA) is a complex task that requires diverse multi-modal data for training. Manual annotation of question and answers for videos, however, is tedious and prohibits scalability. To tackle this problem, recent methods consider zero-shot settings with no manual annotation of visual question-answer. In particular, a promising approach adapts frozen autoregressive language models pretrained on Web-scale text-only data to multi-modal inputs. In contrast, we here build on frozen bidirectional language models (BiLM) and show that such an approach provides a stronger and cheaper alternative for zero-shot VideoQA. In particular, (i) we combine visual inputs with the frozen BiLM using light trainable modules, (ii) we train such modules using Web-scraped multi-modal data, and finally (iii) we perform zero-shot VideoQA inference through masked language modeling, where the masked text is the answer to a given question. Our proposed approach, FrozenBiLM, outperforms the state of the art in zero-shot VideoQA by a significant margin on a variety of datasets, including LSMDC-FiB, iVQA, MSRVTT-QA, MSVD-QA, ActivityNet-QA, TGIF-FrameQA, How2QA and TVQA. It also demonstrates competitive performance in the few-shot and fully-supervised setting. Our code and models are publicly available at https://github.com/antoyang/FrozenBiLM.
Accept
The submission introduces a zero-shot VQA model that combines frozen video and bidirectional language models by training additional projection and adaptor layers. The method significantly outperforms related previous work that uses only uni-directional language models. While the approach is somewhat incremental technically, reviewers found the results to be significant and thought that the main claims of the paper are well supported by thorough ablations. There are no significant concerns raised by the reviews, and overall this is solid work, so I recommend acceptance.
train
[ "CMTc8d1OZR", "yHcrV_mWLxY", "ooAxjkEcTqq", "RdET692ma4X", "g1B5n0Sd_c", "nrBbq_Agt_1", "vCruQ2zvQq", "9jCdQXk4vcA", "hkdciq9JU1Q", "BeuJpAWzhE7", "eSctdZRa-v" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed response and the extra experiments on the template design and image VQA. After reading all the reviews and discussions, I am happy to support this paper for acceptance.", " Thanks for the clarification and insights. The additional results and discussions on \"Multiple mask tokens decoding\" is thorough and convincing. The gain is indeed incremental; however, it's an interesting future direction for potential improvements. Please do include these additional results in the final version.", " Thanks for responding to my questions. I agree that BiLM significantly differs from autoregressive language models in terms of training and inference. And I am looking forward to seeing the future work of incorporating the speech part. I am inclined to let this paper be accepted.", " We thank the reviewers for their helpful comments. We appreciate encouraging remarks that our idea of tackling zero-shot video question answering with frozen bidirectional language models is elegant (MVzv) and convincing (daCv). Reviewers also found that our paper is well written and flows well (bWDb, MVzv, daCv). They acknowledged that our experiments adequately support the claims from the paper (MVzv, daCv) and that our results are significant to the community (MVzv). These results include a comprehensive ablation study (bWDb) and state-of-the-art results on a wide range of zero-shot/fully-supervised video question answering benchmarks (bWDb, daCv).\n\nWe provide detailed answers to the comments and questions from each reviewer in the different author responses and will modify our paper accordingly.\n", " > i) In line 217, it is evident that a simple averaging over multiple tokens does not preserve the semantic structure of the label/phrase (e.g., \"man riding horse\" vs. \"horse riding man\"). Hence, the resulted model could suffer in the wild when rare events/relationships happen (e.g., \"horse riding man\"). It would be great if the authors can shed some lights on possible solutions and perhaps other strategies they have attempted in the experiments to alleviate this issue. One of the related works is [r5].\n\nTo improve the modeling of multi-token answers of FrozenBiLM for open-ended VideoQA, we have taken inspiration from [A] and performed zero-shot VideoQA inference by using multiple mask tokens decoded in parallel. Then, for each video-question pair, we did one forward pass through the model per possible number of mask tokens (typically, 1 to 5) in order to score all possible answers in vocabulary A. The score of a given answer was then obtained by multiplying the probability of its individual tokens, possibly normalized by its number of tokens. We observed that such a decoding strategy did not significantly improve the accuracy of our model. This may be due to the fact that the current open-ended VideoQA datasets [29, 94, 96, 103] contain a great majority of short answers, e.g. 99% of the answers in the MSRVTT-QA test set are one-token long with our tokenizer. We report in Table T3 below detailed results with this inference strategy compared with the inference strategy used in the paper. Additionally, a possible solution to further improve the decoding in this alternative scheme is to increase the length of the masked spans at pretraining, as in [B]. We thank Reviewer daCv for suggesting a relevant reference [r5], which provides another potential solution to score multi-token answers in our framework. 
We will include this discussion in the paper.\n\n[A] Jiang et al., X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models, EMNLP 2020. \n[B] Joshi et al., SpanBERT: Improving Pre-training by Representing and Predicting Spans, TACL 2020.\n\n| Inference Strategy | LSMDC | iVQA | MSRVTT-QA | MSVD-QA | ActivityNet-QA | TGIF-QA | \n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| Average token embeddings | **51.5** | 26.8 | 16.7 | 33.8 | 25.9 | 41.9 | \n| Multiple mask tokens | 51.0 | **27.0** | **17.1** | **34.4** | **26.1** | **42.0** | \n\nTable T3 - Impact of the inference strategy on the zero-shot open-ended VideoQA performance.\n\n> ii) Considering that not all existing methods have speech transcripts as model input and the fact that transcripts are extremely informative esp. on instructional videos, the comparison seems unfair (e.g., in Tab. 6). Please comment on this and also denote methods with speech input in the result tables.\n\nAs evaluated in Section 4.2, in the zero-shot setting, speech is particularly helpful on the How2QA and TVQA benchmarks, while it has a low impact on the other benchmarks. We will denote methods with speech input in Tables 5 and 6. In detail, no method uses speech as input in Table 5; in Table 6, the methods using speech as input are HCRN [42] on TVQA, HERO [51] on How2QA and TVQA, MERLOT [104] on TVQA, and RESERVE [105] on TVQA. In addition, in Tables T4 and T5 below we compare our method to state-of-the-art methods in the fair setup, when no speech input is used. These results show the effectiveness of our approach even when speech is not provided as input. We will include these results in Tables 5 and 6 in the paper.\n\n| Method | LSMDC | iVQA | MSRVTT-QA | MSVD-QA | ActivityNet-QA | TGIF-QA | How2QA | TVQA |\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| SoTA | 31.0 [105] | 13.3 [97] | 5.8 [105] | 13.5 [97] | 12.3 [97] | 3.6 [68] | **53.1** [97] | 26.1 [68] |\n| FrozenBiLM (Ours) | **50.9** | **26.2** | **16.9** | **33.7** | **25.9** | **41.9** | 41.9 | **29.7** |\n\nTable T4 - Comparison to the state of the art for zero-shot VideoQA without speech as input.\n\n| Method | LSMDC | iVQA | MSRVTT-QA | MSVD-QA | ActivityNet-QA | TGIF-QA | How2QA | TVQA |\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n| SoTA | 53.7 [19] | 35.4 [97] | 46.8 [90] | 48.3 [90] | 41.4 [104] | **69.5** [104] | **85.3** [97] | 44.2 [52] |\n| FrozenBiLM (Ours) | **58.6** | **39.7** | **47.0** | **54.4** | **43.2** | 68.6 | 81.5 | **57.5** |\n\nTable T5 - Comparison to the state of the art for fully-supervised VideoQA without speech as input.", " We thank Reviewer daCv for providing constructive comments.\n\n> Despite that adapting a frozen language model for VL tasks is not new (e.g., [88]), nor is adaptor-based transfer learning for Transformers (e.g., [26] and LoRA [r1]), the paper studies this simple yet important baseline on the challenging problem of VideoQA. \n\nWe wish to point out that bidirectional masked language models (BiLM) significantly differ from autoregressive language models [88] in terms of attention masking, training, and inference. As far as we know, our work is the first to explore the zero-shot capabilities of frozen BiLM for vision-language tasks.\n\n> Note that some relevant concurrent work such as Flamingo [r2] is missing in the related work, considering its relevance and the similar model design. 
Other related work on zero-shot/few-shot VideoQA include [r3] [r4].\n\nWe thank reviewer daCv for providing these additional references that we will cite and discuss in the final version. We wish to kindly point out that [r2] and [r4] are concurrent works where [r4] was released on arXiv after the NeurIPS submission deadline. Both [r2] and [r4] leverage autoregressive language models for few-shot vision-language tasks while our work shows the benefit of using bidirectional masked language models (BiLM). We emphasize that BiLM significantly differ from autoregressive language models in terms of attention masking (bidirectional vs unidirectional), training (masked language modeling loss vs unidirectional language modeling loss with teacher forcing), and inference (predicting masked tokens vs left-to-right autoregressive generation). In particular, we argue that BiLM have sufficient natural language generation ability to perform VideoQA, and that their output can be better constrained to provide concise answers. Empirically, we have demonstrated that BiLM achieve superior performance and parameter efficiency compared to autoregressive language models, see Section 4.3. \nThe work [r3] is related to ours but is focused on retrieval tasks and does not address video question answering. In particular, [r3] considers the multiple-choice video-to-text retrieval task of MSR-VTT as VideoQA. In this setting, the model is not provided with natural language questions. We will include this discussion to the related work section of the paper. ", " We thank Reviewer MVzv for providing a thoughtful review.\n\n> The drop in performance when removing the suffix is surprising. Does something similar happen when removing the prefix ([CLS])?\n\nWe hypothesize that the suffix helps the model to provide a concise answer, as the suffix is placed next to the right of the mask token that is used to predict the answer. We do not observe an important drop in performance when removing the [CLS] token during zero-shot VideoQA inference, e.g., the accuracy on MSVD-QA slightly drops from 33.8% to 33.2%.\n\n> How important is the prompt design for the zero-shot performance? Have you tried with different prompt templates?\n\nTo further investigate the prompt design, we have explored replacing the words “Question”, “Answer” and “Subtitles” by “Q”, “A” and “S”, respectively, in the templates described in Section 3.3. This change did not impact the zero-shot VideoQA accuracy, however, completely removing “Question”, “Answer”, “Subtitles” and “is it” in the templates resulted in a significant drop of performance. We report detailed results in Tables T1 and T2 below. We conclude that it is important to have tokens that link the different textual inputs. We will add and discuss these results in the final version of the paper.\n\n| Template | iVQA | MSRVTT-QA | MSVD-QA | ActivityNet-QA | TGIF-QA |\n|:---:|:---:|:---:|:---:|:---:|:---:|\n| [CLS] Question: <Question>? Answer: [MASK]. Subtitles: <Subtitles> [SEP] | 26.8 | **16.7** | **33.8** | **25.9** | **41.9** |\n| [CLS] Q: <Question>? A: [MASK]. S: <Subtitles> [SEP] | **27.4** | 16.2 | 32.5 | 25.5 | **41.9** |\n| [CLS] <Question>? [MASK]. <Subtitles> [SEP] | 23.1 | 13.6 | 28.0 | 21.6 | 25.2 |\n\nTable T1 - Impact of the prompt on zero-shot open-ended VideoQA performance.\n\n| Template | How2QA | TVQA |\n|:---:|:---:|:---:|\n| [CLS] Question: <Question>? Is it ’’<Answer Candidate>”? [MASK]. Subtitles: <Subtitles> [SEP]” | **58.4** | **59.7** |\n| [CLS] Q: <Question>? 
Is it ’’<Answer Candidate>”? [MASK]. S: <Subtitles> [SEP]” | 57.7 | 58.2 |\n| [CLS] <Question>? <Answer Candidate>? [MASK]. <Subtitles> [SEP] | 47.6 | 55.0 |\n\nTable T2 - Impact of the prompt on zero-shot multiple-choice VideoQA performance.\n\n> Have you tried using the model for image VQA?\n\nWe have evaluated our pretrained model on the VQAv2 validation set in the zero-shot setting, i.e., without any supervision of visual questions and answers. Frozen [88] achieves 29.5% accuracy in this setting using an autoregressive language model. In comparison, our FrozenBiLM model is 7 times smaller than Frozen and achieves 45.0% accuracy. We conclude that our model can perform competitively on the image-VQA tasks despite being tailored for videos. We will add these results to the final version of the paper. ", " We thank Reviewer bWDb for providing feedback.\n\n> The idea of utilizing a frozen LM module is not new (e.g. https://arxiv.org/abs/2106.13884 which is a paper using autoregressive LMs to perform image-text few-shot learning). Although this method mainly focuses on VideoQA and BiLM, its novelty is still limited. However, this paper has a good engineering value, as no one has tried to use a frozen BiLM on VideoQA tasks. This paper is the first to do that can achieve surprisingly good results.\n\nUsing frozen autoregressive language models for few-shot/zero-shot vision-language tasks has indeed been previously explored [17, 65, 88, 99], as acknowledged in Section 2. However, we wish to emphasize that bidirectional masked language models (BiLM) significantly differ from autoregressive language models in terms of training and inference (predicting masked tokens vs left-to-right autoregressive generation). To accommodate for these differences, we have designed a specific inference strategy tailored for zero-shot VideoQA using BiLM, see Section 3.3. As far as we know, our work is the first to explore the zero-shot capabilities of frozen BiLM for vision-language tasks. We hope that our work raises more interest in exploiting BiLM for few-shot/zero-shot learning of vision and language tasks, in analogy to the literature dedicated to the few-shot capabilities of BiLM in natural language processing [62, 73, 74, 84]. In addition, we also demonstrate the benefits of freezing a BiLM in supervised settings, while current fully-supervised state-of-the-art approaches for VideoQA typically train a BiLM end-to-end [19, 45, 51, 90, 97, 104, 105]. Finally, we also evaluate the video-conditioned fill-in-the-blank task which is challenging for autoregressive language models, and show we can achieve state-of-the-art results on LSMDC.\n\n> Is it possible to adapt a pretrained speech module to this framework? As audio is crucial information in videos. Merely adding ASR transcripts to the model can not fully preserve the audio information. If we can incorporate pretrained models like wav2vec, this direction would be much more interesting.\n\nAudio is indeed a valuable source of information in videos that goes beyond speech transcripts. For example, hearing a camel grunting in the fourth example of Figure 3 could help answering the question. Extending our FrozenBiLM model with audio input is certainly possible as our architecture is agnostic to the added modality (here vision) to the language model. For example, one could linearly project features extracted from a pretrained audio encoder (e.g. wav2vec) to the token embedding space before feeding them to the frozen language model. 
Note that we would also need a pretraining dataset with videos that contain audio like HowTo100M [64] or YT-Temporal-1B [104, 105] instead of the audio-less WebVid10M dataset used in our work. We leave this interesting direction to future work. ", " This paper proposed to leverage the power of a frozen bidirectional LM to tackle the zero-shot video question-answering task. Adapters and projection layers are added to the model as the only trainable weights, so it's a very lightweight optimization. Also, the frozen backbones enable us to fine-tune the model without forgetting the knowledge of the pretrained backbones, so it can achieve much better performance compared to fine-tuning the whole model. The proposed model achieves state-of-the-art performance on many zero-shot VideoQA datasets. The experiments in the fully-supervised setting also show that it's possible to apply the same method with much training data and achieve state-of-the-art results. Strengths:\n- The paper is well-written. The flow is easy to follow and the method is clear and simple.\n- The authors carefully design the prompting templates for using the frozen model out-of-the-box.\n- The ablation study is comprehensive, we can clearly see that a frozen LM and adapters and the speech modality are necessary to to achieve good performance.\n- The result shows the effectiveness of this proposed method: it can outperform a fine-tuning baseline or an autoregressive baseline by a large margin.\n- The same method can also be applied to the fully-supervised setting and it can achieve state-of-the-art results when compared with other supervised baselines.\n\nWeaknesses:\n- The idea of utilizing a frozen LM module is not new (e.g. https://arxiv.org/abs/2106.13884 which is a paper using autoregressive LMs to perform image-text few-shot learning). Although this method mainly focuses on VideoQA and BiLM, its novelty is still limited. However, this paper has a good engineering value, as no one has tried to use a frozen BiLM on VideoQA tasks. This paper is the first to do that can achieve surprisingly good results. - Is it possible to adapt a pretrained speech module to this framework? As audio is crucial information in videos. Merely adding ASR transcripts to the model can not fully preserve the audio information. If we can incorporate pretrained models like wav2vec, this direction would be much more interesting. The authors adequately addressed the limitations and potential negative societal impact in the Conclusion section.", " The paper addresses zero-shot video question answering by leveraging pre-trained language models and visual encoders. The proposed model encodes videos with CLIP and projects them into language. The language is encoded with a bidirectional language model (e.g. BERT) and trained with a masked language modeling objective. The paper only re-trains the visual projection and adapter layers on a large video-captions dataset, and it shows that freezing the language model and the visual encoder leads to better results than re-training everything. Extensive experiments are conducted and results are reported on 8 videoQA datasets. - The paper is well-motivated, well-written, and easy to understand. The ideas proposed in the paper (joining pre-trained models together and use them for zero-shot videoQA without retraining) are simple and elegant. Multiple experimental results and ablations on 8 datasets support the hypothesis of the paper.\n\n- The results of the paper are significant to the community: \n 1. 
Using frozen language models (with no visual information training at all) can lead to better performance than fine-tuning such models on visual-language datasets probably due to catastrophic forgetting.\n 2. Smaller language models like BERT and its variants can outperform larger language models based on GPT.\n 3. Models pre-trained for different tasks can be made to work together by only fine-tunning some adapter layers.\n - The drop in performance when removing the suffix is surprising. Does something similar happen when removing the prefix ([CLS])?\n- How important is the prompt design for the zero-shot performance? Have you tried with different prompt templates?\n- Have you tried using the model for image VQA? Already discussed in the paper.", " This paper mainly concerns the problem of zero-shot video question answering (VideoQA). It proposes FrozenBiLM, which performs video-text(-speech) pre-training over a frozen language model to enable it with visual understanding capability while preserving its language generalization ability. The resulted model could perform zero-shot VideoQA out of the box. The paper also demonstrates the capacity of FrozenBiLM in terms of few-shot/fully-tuned learning. New SotAs are set on a wide range of mainstream VideoQA benchmarks. Despite that adapting a frozen language model for VL tasks is not new (e.g., [88]), nor is adaptor-based transfer learning for Transformers (e.g., [26] and LoRA [r1]), the paper studies this simple yet important baseline on the challenging problem of VideoQA. The overall idea is convincing and well-executed. The experimental results are solid and adequately support the claims from the paper w.r.t. the importance of freezing the language model and adaptors on cross-domain transfer learning. The paper flows well and is easy to follow.\n\nNote that some relevant concurrent work such as Flamingo [r2] is missing in the related work, considering its relevance and the similar model design. Other related work on zero-shot/few-shot VideoQA include [r3] [r4].\n\n[r1] Hu et al., LoRA: Low-Rank Adaptation of Large Language Models. arXiv 2021.\n\n[r2] Alayrac et al., Flamingo: a visual language model for few-shot learning. arXiv 2022.\n\n[r3] Xu et al., Videoclip: Contrastive pre-training for zero-shot video-text understanding. EMNLP 2021.\n\n[r4] Wang et al, Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners. arXiv 2022. Please address the following questions:\n\ni) In line 217, it is evident that a simple averaging over multiple tokens does not preserve the semantic structure of the label/phrase (e.g., \"man riding horse\" vs. \"horse riding man\"). Hence, the resulted model could suffer in the wild when *rare* events/relationships happen (e.g., \"horse riding man\"). It would be great if the authors can shed some lights on possible solutions and perhaps other strategies they have attempted in the experiments to alleviate this issue. One of the related works is [r5].\n\nii) Considering that not all existing methods have speech transcripts as model input and the fact that transcripts are extremely informative esp. on instructional videos, the comparison seems unfair (e.g., in Tab. 6). Please comment on this and also denote methods with speech input in the result tables.\n\n[r5] Salazar et al., Masked Language Model Scoring. ACL 2020. Appears to be sufficient." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "vCruQ2zvQq", "nrBbq_Agt_1", "9jCdQXk4vcA", "nips_2022_9uRS5ysgb9", "eSctdZRa-v", "eSctdZRa-v", "BeuJpAWzhE7", "hkdciq9JU1Q", "nips_2022_9uRS5ysgb9", "nips_2022_9uRS5ysgb9", "nips_2022_9uRS5ysgb9" ]
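The FrozenBiLM discussion above centers on answering questions through masked language modeling: a [MASK] token stands in for the answer, and candidate answers are ranked by their probability at that position (multi-token answers can be scored by multiplying per-mask probabilities). The snippet below is a hypothetical, text-only sketch of that scoring mechanic with an off-the-shelf masked LM via the `transformers` library; the model name, prompt template, and candidate list are placeholders, and the actual FrozenBiLM model additionally conditions on video and subtitle tokens.

```python
# Hypothetical sketch (not the authors' code): rank candidate answers by their
# probability at a [MASK] position, mimicking the "Question: ... Answer: [MASK]." template.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def score_single_token_answers(question, candidates):
    """Rank single-token candidate answers by their probability at the [MASK] slot."""
    prompt = f"Question: {question}? Answer: {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        probs = model(**inputs).logits[0, mask_pos].softmax(-1)   # (vocab_size,)
    cand_ids = tokenizer.convert_tokens_to_ids(candidates)        # one id per candidate word
    scored = {c: probs[i].item() for c, i in zip(candidates, cand_ids)}
    return sorted(scored.items(), key=lambda kv: -kv[1])

# Multi-token answers would use several [MASK] tokens and multiply the per-position
# probabilities (optionally length-normalized), as in the rebuttal's Table T3 discussion.
print(score_single_token_answers("what is the man riding", ["horse", "bicycle", "boat"]))
```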
nips_2022_qVtbqSwOxy6
Align then Fusion: Generalized Large-scale Multi-view Clustering with Anchor Matching Correspondences
Multi-view anchor graph clustering selects representative anchors to avoid full pair-wise similarities and therefore reduce the complexity of graph methods. Although widely applied in large-scale applications, existing approaches do not pay sufficient attention to establishing correct correspondences between the anchor sets across views. To be specific, anchor graphs obtained from different views are not aligned column-wisely. Such an Anchor-Unaligned Problem (AUP) would cause inaccurate graph fusion and degrade the clustering performance. Under multi-view scenarios, generating correct correspondences could be extremely difficult since anchors are not consistent in feature dimensions. To solve this challenging issue, we propose the first study of the generalized and flexible anchor graph fusion framework termed Fast Multi-View Anchor-Correspondence Clustering (FMVACC). Specifically, we show how to find anchor correspondence with both feature and structure information, after which anchor graph fusion is performed column-wisely. Moreover, we theoretically show the connection between FMVACC and existing multi-view late fusion and partial view-aligned clustering, which further demonstrates our generality. Extensive experiments on seven benchmark datasets demonstrate the effectiveness and efficiency of our proposed method. Moreover, the proposed alignment module also shows significant performance improvement applying to existing multi-view anchor graph competitors indicating the importance of anchor alignment. Our code is available at \url{https://github.com/wangsiwei2010/NeurIPS22-FMVACC}.
Accept
All reviewers agree that this paper is innovative and well-written, so I recommend acceptance.
train
[ "pjfcPGSA_n", "KgNmZ-8kr1r", "om9Mo9QBjc7", "TZEHN71GfF6", "RVMFvzkE27N", "TvPwDkQRS7u", "ZTZGPnzZxP", "uVQV8IILv4Z", "bABADCSpOLG", "40AV9J-7qRR" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I agree that the idea of recognizing the multi-view anchor alignment problem has been overlooked in the existing literature which can benefit further large-scale multi-view clustering research.\n\nI think the authors have addressed all of my concerns. I will keep my score and suggest acceptance.", " Dear Reviewers, Area Chairs, Senior Area Chairs and Program Chairs,\n\nWe sincerely thank the efforts and constructive comments you have made for this paper. The reviewers put forward many insightful questions and valuable suggestions towards improving our paper.\n\nIn the rebuttal phase, we provided detailed responses to all reviewers' comments point by point, hoping to address the issues raised by reviewers Rr2L, U2ZL and nLLC, including more detailed comparison of existing partially-aligned multi-view setting (PVC), illustration of experimental results and added experiments of mentioned SOTA algorithms. Moreover, we indicate that the **Anchor-Unaligned Problem (AUP)** naturally exists in large-scale multi-view clsutering with view-independent anchors.\n\nThe discussion period is coming to an end, and we are actively awaiting for further discussion from Reviewers. If you have any questions, we are happy to discuss them with you at any time.\n\nThanks & Regards,\n\nAuthors of paper-228.", " Thanks for the detailed reply and new results, I satisfy with the response and would keep my recommendation. ", " Q4:The authors should survey not only multi-graph clustering models but also with other multi-view approaches in the part of related work.\n\nA4:Thanks for your constructive suggestions. In the introduction and related work sections, we enlarge our literature review with more strategies:multi-view NMF and multi-view ensemble clustering. Multi-view NMF adopts matrix factorization to learn the shared spaces with designed properties (i.e. sparse and low-rank) [1][2]. Multi-view ensemble clustering aims to fuse multiple partition results into a more robust and impressive clustering results [3,4]. We will update our 'Related work' section with the two afore-mentioned styles of multi-view clustering.\n\n[1]Zhao H, Ding Z, Fu Y. Multi-view clustering via deep matrix factorization[C]//Thirty-first AAAI conference on artificial intelligence. 2017.\n\n[2]Brbić M, Kopriva I. Multi-view low-rank sparse subspace clustering[J]. Pattern Recognition, 2018, 73: 247-258.\n\n[3]Tao Z, Liu H, Li S, et al. From ensemble clustering to multi-view clustering[C]//IJCAI. 2017.\n\n[4]Liu H, Wu J, Liu T, et al. Spectral ensemble clustering via weighted k-means: Theoretical and practical evidence[J]. IEEE transactions on knowledge and data engineering, 2017, 29(5): 1129-1143.\n\nQ5:The paper needs careful proofreading and some typos should be corrected, e.g. Eq.(3). Definitions for the constraints should be presented.\n\nA5:Thanks for the constructive comments. We have carefully revised our paper to make the new version more readable. Please check our revised manuscript.\n", " Thank you very much for your helpful suggestions and interest in our learning problem.We have taken the constructive comments and suggestions into consideration and the detailed responses to the comments and the corresponding revisions are summarized below.\n\nQ1:. The complexity analysis is missing and I suggest the authors should also list with existing baselines (i.e, SFMC and LMVSC) both space and time complexity.\n\nA1:Thanks for your advice. The detailed complexity analysis is provided at section A.4 in Appendix. 
\n\n**Space Complexity:** The space complexity of our FMVACC is $\\mathcal{O}(md+ mnv+ m^2v)$. In our algorithm, $m \\ll n$ and $d \\ll n$. Therefore, the space complexity of FMVACC is $\\mathcal{O}(n)$. \n\n**Time complexity:** The total computational complexity of FMVACC is $\\mathcal{O}(nmd+nm^2+m^2d+m^3)$, which is linear complexity $\\mathcal{O}(n)$.\n\nAfter the optimization, we perform SVD on $\\mathbf{Z}$ to obtain the spectral embedding and output the discrete clustering labels by $k$-means. The post-process needs $\\mathcal{O}(nm^2)$, which is also linear complexity respecting to samples. In total, our algorithm achieves MVC with both linear space and time complexity, which demonstrates the efficiency of FMVACC. \n\n| Methods | Space Complexity | Time Complexity |\n|:--------:|:----------------:|:---------------:|\n| MSC-IAS | $\\mathcal{O}(vn^2)$ | $\\mathcal{O}(n^3)$ |\n| AMGL | $ \\mathcal{O}(v n^2 + nk)$ | $\\mathcal{O}(n^3)$ |\n| PMSC | $ \\mathcal{O}(vn^2+vnk)$ | $\\mathcal{O}(n^3)$ |\n| RMKM | $ \\mathcal{O}((n+d)m)$ | $\\mathcal{O}(ndk)$ |\n| BMVC | $\\mathcal{O}(nm+nk)$ | $\\mathcal{O}(nmv+nkm+nk)$ |\n| LMVSC | $ \\mathcal{O}(vk(n + d))$ | $\\mathcal{O}(nm^3)$ |\n| SFMC | $\\mathcal{O}(nmv)$ | $\\mathcal{O}(nm^3+v^3)$ |\n| Proposed | $\\mathcal{O}(md+ mnv+ m^2v)$ | $\\mathcal{O}(nmd+(n+d)m^2+m^3)$ |\n\n\nwhere $n$,$v$,$m$ are the numbers of samples views and anchors among each view.$d = \\sum_{i}^{v} d_i$ is the sum of view dimensions. \n\nQ2:For better representation, Figure (e)(j)(o) in Figure 3 are the obtained permutation matrices and more discussions can be introduced.\n\nA2: Thanks for the comments. In Fig.3 , the three figures (e)(j) and (o) are the obtained permutation matrices by solving Eq. (9) for LMVSC, Our algorithm and SFMC respectively. The two figures (e) and (j) clearly show the obtained correspondences between the two views for ours and LMVSC. The anchors order (1234) in view 1 match with anchor order(4321) where cna be verified in the obtained anchor graphs in the two views. Moreover, since SFMC adopts the samples with same indexes as anchors, the idea permutation matrix is identity matrix $\\mathbf{I}_m$ where also verified by figure (o). The results clearly illustrate the effectiveness of our proposed alignment module.\n\nQ3:Although the settings are a little different, the authors could discuss the differences between PVC and the proposed method in details. How can they apply to PVC settings?\n\nA3: We summarize the difference between our method and PVC as follows:\n\n1.**Problem is different:** PVC is proposed to handle with **unmapped** multi-view data where some correspondences have been given as supervision signals. Our paper aims at solving the large-scale multi-view anchor-unaligned issue. \n\n\n2.**Setting is different:** PVC proposes to capture sample mapping with the supervision of some existing known correspondence. Hence it is regarded as a **semi-supervised(some mappings are known)**. However, our paper is designed to solve the anchor-unaligned issue in **multi-view multi-view graph clustering with anchors**.\n\n3.**Method is different:** PVC firstly projects multi-dimensional data into the latent common space by auto-encoder network. However, ours is totally unsupervised and seeks correspondences among each original view. Ours combines feature and structure correspondence to fuse multi-view anchor graphs.\n\n\n**How to extend Our FMVACC to PVC problem:** We can easily show that PVC is also a special case of our proposed FMVACC. 
Taking the $n$ samples as anchors in each view, the single-view anchor graph extends to the full graph with size of $n \\times n$. Then the unmapped data in PVC can be solved with the structural loss in our formulation where some entries of the permutation matrix $\\mathbf{P}$ have been set as 1 by the pre-known correspondences.\n\n\n", " We thank Reviewer U2ZL for careful reading and constructive suggestions. We have taken the constructive comments and suggestions into consideration and the detailed responses to the comments and the corresponding revisions are summarized below.\n\nQ1: Although the proposed method focuses on multi-view graph method, the literature review can be enlarged with other strategies of multi-view clustering approaches, for example, multi-view NMF and multi-view ensemble clustering. Some details should be clarified in Table 2 with representative large-scale multi-view methods. RMKM is not a graph method and should be introduced in the related work sections.\n\nA1: Thanks for your suggestions. In the introduction and related work sections, we enlarge our literature review with more strategies:multi-view NMF and multi-view ensemble clustering. Multi-view NMF adopts matrix factorization to learn the shared spaces with designed properties (i.e. sparse and low-rank)[1][2]. Multi-view ensemble clustering aims to fuse multiple partition results into a more robust and impressive clustering result[3,4]. We will update our 'Related work' section with the two afore-mentioned styles of multi-view clustering.The chosen algorithms in Table 2 (RMKM, BMVC, LMVSC and SFMC) are designed for large-scale multi-view clustering tasks. RMKM [5] adaptively learns the clustering indicator matrix via following k-means formulation and avoid the huge computational burden. \n\n[1]Zhao H, Ding Z, Fu Y. Multi-view clustering via deep matrix factorization[C]//Thirty-first AAAI conference on artificial intelligence. 2017.\n\n[2]Brbić M, Kopriva I. Multi-view low-rank sparse subspace clustering[J]. Pattern Recognition, 2018, 73: 247-258.\n\n[3]Tao Z, Liu H, Li S, et al. From ensemble clustering to multi-view clustering[C]//IJCAI. 2017.\n\n[4]Liu H, Wu J, Liu T, et al. Spectral ensemble clustering via weighted k-means: Theoretical and practical evidence[J]. IEEE transactions on knowledge and data engineering, 2017, 29(5): 1129-1143.\n\n[5]Cai X, Nie F, Huang H. Multi-view k-means clustering on big data[C]//Twenty-Third International Joint conference on artificial intelligence. 2013.\n\nQ2: The authors should illustrate how to initialize single-view anchors since initializations are important. \n\nA2: Different with SFMC [6] sampling multi-view samples with same indexes as anchors to avoid AUP issue, we propose to generate **flexible** anchors among each view. The anchors are firstly initialized with $k$-means centroids and then refined by the optimization algorithm 1.\n\n[6]Li X, Zhang H, Wang R, et al. Multiview clustering: A scalable and parameter-free bipartite graph fusion method[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 330-344.\n\nQ3: There are some formatting inconsistencies in the text that the authors should check them carefully, i.e. AUP.\n\nA3: Thanks for your careful reading. We will correct the typos in the revised version.\n\n\nQ4: I suggest the authors can compare the performance of some latest multi-view scalable graph clustering methods [7][8]. 
[7] is an extension version of LMVSC mentioned in the manuscript and [8] utilizes NMF into multi-view anchor graph study.\n\nA4:Thanks for your constructive comments. We have reviewed the mentioned two papers[7,8]. [7] generate anchors in each view and then obtain a unified low-rank anchor graph. It is noticeable that [7] still suffer form AUP issue since anchors are independeent among each views. [8] constructs individual anchor graphs and then adopts the matrix factorization to get the clustering labels. We also conduct comparison experiments and show the results as follows,\n\n| ACC | UCI-Digit | BDGP | SUNRGBD | MNIST | YTF-10 | YTF-20 |\n|-------------|-----------|-----------|-----------|-----------|-----------|-----------|\n| FMCNOF | 54.85 | 31.08 | 19.67 | 78.29 | 43.42 | 38.61 |\n| MSGL | 72.18 | 53.17 | 19.61 | **97.68** | 70.68 | 63.54 |\n| Proposed | **83.59** | **59.51** | **21.95** | 94.39 | **74.80** | **71.32** |\n| NMI(mean) | UCI-Digit | BDGP | SUNRGBD | MNIST | YTF-10 | YTF-20 |\n| FMCNOF | 57.17 | 10.29 | 15.66 | 74.49 | 39.15 | 45.45 |\n| MSGL | 74.83 | 26.98 | **26.21** | 93.72 | 70.68 | 63.54 |\n| Proposed | **81.13** | **35.55** | 19.22 | **95.11** | **76.53** | **77.89** |\n\nFrom the table, our proposed method also enjoys more preferable task performance on the benchmark datasets. It is expected we can update our anchor generation strategy to further improve the clustering performance in future work.\n\n[7]Structured Graph Learning for Scalable Subspace Clustering: From Single View to Multiview. IEEE TCYB, 2021. \n\n[8]Fast Multi-View Clustering via Nonnegative and Orthogonal Factorization. IEEE TIP, 2021.", " We thank Reviewer Rr2L for identifying the contribution of AUP problems in large-scale MVC. We have taken the constructive comments and suggestions into consideration and the detailed responses to the comments and the corresponding revisions are summarized below.\n\nQ1:The authors do not clearly define the experimental settings in the real-world multi-view datasets and how they tune the parameters. Do these datasets still suffer from the AUP problem?\n\nA1: Thanks for the comments. The **Anchor-Unaligned Problem (AUP) naturally** exists in large-scale multi-view datasets. To improve the efficiency of multi-view graph clustering methods, some anchors/landmarks are selected to replace the $n \\times n$ large graph into small anchor graphs $ n \\times m$. However, the anchors sets across multiple views are unaligned (Shown in Fig. 3 and 4) and therefore leads to incorrect anchor graph fusion.\n\nMoreover, we emphasize the difference between ours and PVC[1]/MVC-UM[2] as follows:\n\n1.**Problem is different:** PVC and MVC-UM are proposed to handle with **unmapped** multi-view data where the multi-view data are disordered. However, this paper aims at solving the large-scale multi-view anchor-unaligned issue. When existing fast large-scale multi-view clustering methods generate anchor independently among each view, the **Anchor-Unaligned Problem (AUP) naturally** exists.\n\n\n2.**Setting is different:** PVC[1] proposes to capture sample mapping with the supervision of some existing known correspondences. Hence it is regarded as a **semi-supervised(some mappings are known)**. However, our paper is designed to solve the **unsupervised large-scale multi-view clustering with anchors**.\n\n3.**Method is different:** [1] and [2] firstly projects multi-dimensional data into the latent common space by neural networks or matrix factorization. 
However, ours is totally unsupervised and seeks correspondences among each original view. Our proposed method combines both feature and structure correspondence to fuse multi-view anchor graphs. Extensive experiments on benchmark datasets demonstrate the effectiveness and our proposed multi-view anchor correspondence framework. Moreover, the proposed anchoralignment module also shows significant performance improvement applying to existing multi-view anchor graph competitors indicating the importance of anchor alignment. \n\n\n[1]Huang Z, Hu P, Zhou J T, et al. Partially view-aligned clustering[J]. NIPS2020, 33: 2892-2902.\n\n[2]Yu H, Tang J, Wang G, et al. A Novel Multi-View Clustering Method for Unknown Mapping Relationships Between Cross-View Samples[C]//KDD2021: 2075-2083.\n\nQ2:The visualizations of the anchor graphs on UCI digits show little difference between the aligned and unaligned cases.\n\nA2: Thanks for your careful reading. In $LMVSC_{Unaligned}$ and $LMVSC_{aligned}$ on UCI digits, you can find that the biggest value of similarity values have increased form 0.7 to 0.9, and the Aligned version contains much less noises than unaligned based on correct anchor graph fusion (ACC 63.37% to 80.87%). We can also find similar phenomenon with our proposed approach and conclude that **aligned** fusion can reduce the noise and enhance clustering performance. \n\nQ3:The reviewer suggests improving Fig.2 for better clarity. Current figures cannot deliver the main idea of the proposed method.\n\nA3: Thanks for the suggestion. We will rearrange the lines and give optimization goals on the right of Figure 2 to better illustrate our idea. \n\nQ4:What is the difference between Eq.8 and Eq.9?Detailed proof is expected.\n\nA4: Thanks for the comment. The difference between Eq. (8) and Eq. (9) is the constraint on the matrix $\\mathbf{P} \\in \\mathbb{R}^{m \\times m}$. In Eq. (8), $\\mathbf{P}$ is the permutation matrix where $\\mathbf{P} \\in \\{0,1\\}^{m \\times m}$. Then, we relax the constraint into its convex hull, the Birkhoff polytope with double stochastic region $\\mathbf{P} \\mathbf{1} =\\mathbf{1}, \\mathbf{P}^{\\top}\\mathbf{1}=\\mathbf{1}, \\mathbf{P} \\in [0,1]^{m \\times m}$. Please check our revision in the 196-th line of the manuscipt. \n\nQ5:In training, the permutation matrix is computed by solving Eq.9, which is actually relaxed. How to get the permutation matrix (binary value) in the inference period?\n\nA5:After obtaining the matrix $\\mathbf{P}$ by solving Eq. (9), we apply the Sinkhorn operator to $\\mathbf{P}$ and get the binary permutation matrix [3]. We will add the illustration in the revision version.\n\n[3]Mena G, Belanger D, Linderman S, et al. Learning Latent Permutations with Gumbel-Sinkhorn Networks[C]//International Conference on Learning Representations. 2018.", " This paper focuses on multi-view graph clustering. The author studied a practical problem in multi-view graph clustering, i.e., anchor-unaligned problem. To solve this, the author propose a new anchor graph fusion framework including an anchor alignment module to solve AUP. Experiments on several datasets show reasonable performance improvement and effectiveness of the proposed method. Strengths:\n1. The motivation is strong and practical.\n2. The solution to obtain the permutation matrix is interesting, which is proved effective theoretically and experimentally.\n3. This paper is the first work to study the AUP in multi-view graph clustering.\n\nWeaknesses:\n1. 
The reviewer suggests improving Fig.2 for better clarity. Current figures cannot deliver the main idea of the proposed method.\n2. The authors do not clearly define the experimental settings in the real-world multi-view datasets and how they tune the parameters. Do these datasets still suffer from the AUP problem? \n3. The visualizations of the anchor graphs on UCI digits show little difference between the aligned and unaligned cases, which is less convincing. 1. What is the difference between Eq.8 and Eq.9?Detailed proof is expected.\n2. In training, the permutation matrix is computed by solving Eq.9, which is actually relaxed. How to get the permutation matrix (binary value) in the inference period? N/A", " This paper focuses on the large-scale multi-view anchor graph fusion strategies where the anchor graphs in individual views are not naturally aligned. The authors discover this phenomenon and provide a matching framework to address this issue. The paper is well-written and easy to read. The idea is well-motivated with clear contributions of illustrating necessity of multi-view anchor alignment. The comprehensive experimental results not only show performance improvements with the proposed FMVACC but also enhances existing large-scale baselines with more flexibility, indicating the effectiveness of multi-view anchor alignment. Strength:\n1.\tThe paper is well-written and easy to read. The idea of recognizing the multi-view anchor alignment problem has been overlooked in the existing literature which it can benefit further research and community.\n2.\tThe authors propose the first study of generalized flexible multi-view anchor graph framework. Different from sampling fixed anchors for AUP problem in SOTA, a matching framework has been made with more flexibility and performance improvement. \n3.\tThe effectiveness of the alignment module has been proved both on simulated and real-world datasets. Moreover, existing baselines enjoy considerable performance improvement with the module.\n4.\tThe method can be applied to large-scale scenarios where the scalability is proved by theory analysis and experiments.\n5.\tExperiments compared with recent works and ablation studies are well presented. \n\nWeakness:\n1.\tAlthough the proposed method focuses on multi-view graph method, the literature review can be enlarged with other strategies of multi-view clustering approaches, for example, multi-view NMF and multi-view ensemble clustering.\n2.\tThe authors should illustrate how to initialize single-view anchors since initializations are important.\n3.\tThere are some formatting inconsistencies in the text that the authors should check them carefully, i.e. AUP.\n\n 1.Some details should be clarified in Table 2 with representative large-scale multi-view methods. RMKM is not a graph method and should be introduced in the related work sections.\n2.The authors should illustrate how to initialize single-view anchors since initializations are important.\n3.There are some formatting inconsistencies in the text that the authors should check carefully, i.e. AUP and \\mathcal for complexity analysis.\n4.Some other strategies of multi-view clustering should be discussed in the introduction part, for example, multi-view NMF and multi-view ensemble clustering.\n5.I suggest the authors can compare the performance of some latest multi-view scalable graph clustering methods [1][2]. 
[1] is an extension version of LMVSC mentioned in the manuscript and [2] utilizes NMF into multi-view anchor graph study.\n[1]Structured Graph Learning for Scalable Subspace Clustering: From Single View to Multiview. IEEE TCYB, 2021.\n[2]Fast Multi-View Clustering via Nonnegative and Orthogonal Factorization. IEEE TIP, 2021.\n Yes. The authors have adequately addressed the limitations and potential negative societal impact of their work.", " This paper studies an important problem, i.e., anchor-based large-scale multi-view clustering-Anchor Unaligned problem (AUC). The authors review existing multi-view anchor fusion strategies and propose to establish alignment between multi-view anchor sets. The results are promising, and the effectiveness of anchor alignment has been proved. Strength:\n1.\tThe idea is interesting and novel. Figure 1 and Table 1 illustrate the issue of anchor-unaligned problem for existing multi-view anchor graph clustering.\n2.\tThis paper introduces a flexible anchor graph fusion framework termed FMVACC to tackle the AUP problem, which is a generalized large-scale multi-view anchor graph study.\n3.\tThe authors solve the unmeasurable multi-dimensional anchor matching problem by introducing two parts: feature and structure correspondences in Algorithm 2.\n4.\tThe experimental results seem promising. The effectiveness of the proposed anchor-aligned module has been proved in Table 1/3 and Figure 3/4.\n5.\tThe code is available and easy to reproduce.\n\nWeakness:\n1.\tThe complexity analysis is missing and I suggest the authors should also list with existing baselines (i.e, SFMC and LMVSC) both space and time complexity.\n2.\tThe paper needs careful proofreading and some typos should be corrected.\n3.\tFor better representation, Figure (e)(j)(o) in Figure 3 are the obtained permutation matrices and more discussions can be introduced.\n 1. Although the settings are a little different, the authors could discuss the differences between PVC and the proposed method in details. How can they apply to PVC settings?\n2. The paper needs careful proofreading and some typos should be corrected, e.g. Eq.(3). Definitions for the constraints should be presented.\n3.The complexity analysis is missing and I suggest the authors should also list with existing baselines (i.e, SFMC and LMVSC) both space and time complexity.\n4. Figure (e)(j)(o) in Figure 3 are the obtained permutation matrices and more discussions can be introduced.\n5. The authors should survey not only multi-graph clustering models but also with other multi-view approaches in the part of related work.\n YES" ]
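As a concrete illustration of the binarization step described in A5 above (relaxed doubly stochastic $\mathbf{P}$ to hard permutation), a minimal sketch follows. It is not the authors' code: the temperature `tau`, the iteration count, and the final Hungarian rounding via `scipy.optimize.linear_sum_assignment` are assumptions about how the Sinkhorn operator of [3] is commonly combined with a rounding step at inference time.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sinkhorn(log_alpha, n_iters=50):
    # Alternately normalize rows and columns in log space so that
    # exp(log_alpha) converges to a doubly stochastic matrix.
    for _ in range(n_iters):
        log_alpha = log_alpha - log_alpha.max(axis=1, keepdims=True)  # numerical stability
        log_alpha = log_alpha - np.log(np.exp(log_alpha).sum(axis=1, keepdims=True))
        log_alpha = log_alpha - np.log(np.exp(log_alpha).sum(axis=0, keepdims=True))
    return np.exp(log_alpha)

def binarize(P_relaxed, tau=0.05):
    # Sharpen the relaxed solution with a low temperature, then round the
    # resulting doubly stochastic matrix to a hard permutation matrix.
    S = sinkhorn(np.log(P_relaxed + 1e-9) / tau)
    rows, cols = linear_sum_assignment(-S)  # maximize the total assignment score
    P_hard = np.zeros_like(P_relaxed)
    P_hard[rows, cols] = 1.0
    return P_hard

P_relaxed = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3],
                      [0.2, 0.2, 0.6]])
print(binarize(P_relaxed))  # for this toy input the result is the identity permutation
```

With a small `tau` the Sinkhorn output is already close to binary, so the Hungarian step mostly snaps it to exact 0/1 values.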
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "TvPwDkQRS7u", "nips_2022_qVtbqSwOxy6", "TZEHN71GfF6", "RVMFvzkE27N", "40AV9J-7qRR", "bABADCSpOLG", "uVQV8IILv4Z", "nips_2022_qVtbqSwOxy6", "nips_2022_qVtbqSwOxy6", "nips_2022_qVtbqSwOxy6" ]
nips_2022_dozWFpOJcOD
RLIP: Relational Language-Image Pre-training for Human-Object Interaction Detection
The task of Human-Object Interaction (HOI) detection targets fine-grained visual parsing of humans interacting with their environment, enabling a broad range of applications. Prior work has demonstrated the benefits of effective architecture design and integration of relevant cues for more accurate HOI detection. However, the design of an appropriate pre-training strategy for this task remains underexplored by existing approaches. To address this gap, we propose $\textit{Relational Language-Image Pre-training}$ (RLIP), a strategy for contrastive pre-training that leverages both entity and relation descriptions. To make effective use of such pre-training, we make three technical contributions: (1) a new $\textbf{Par}$allel entity detection and $\textbf{Se}$quential relation inference (ParSe) architecture that enables the use of both entity and relation descriptions during holistically optimized pre-training; (2) a synthetic data generation framework, Label Sequence Extension, that expands the scale of language data available within each minibatch; (3) ambiguity-suppression mechanisms, Relation Quality Labels and Relation Pseudo-Labels, to mitigate the influence of ambiguous/noisy samples in the pre-training data. Through extensive experiments, we demonstrate the benefits of these contributions, collectively termed RLIP-ParSe, for improved zero-shot, few-shot and fine-tuning HOI detection performance as well as increased robustness to learning from noisy annotations. Code will be available at https://github.com/JacobYuan7/RLIP.
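Since the abstract above is compact, a minimal sketch of the core cross-modal alignment idea may help: encoded entity/relation label texts act as classifier weights for the decoded visual features, and extra label texts drawn from outside the minibatch play the role of the expanded negatives referred to as Label Sequence Extension. The sketch is purely illustrative, not the released implementation; the tensor shapes, names, temperature, and the plain cross-entropy standing in for the paper's cross-entropy/focal losses are all assumptions.

```python
import torch
import torch.nn.functional as F

def relational_contrastive_loss(rel_feats, text_embeds, targets, tau=0.07):
    """Match decoded relation (or entity) features against label-text embeddings.

    rel_feats:   (N, D) decoded features from the detection decoder.
    text_embeds: (L, D) language-model embeddings of in-batch label texts plus
                 label texts sampled from outside the batch (extra negatives).
    targets:     (N,)   index of the matching label text for each feature.
    """
    rel_feats = F.normalize(rel_feats, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    logits = rel_feats @ text_embeds.t() / tau  # label texts act as classifier weights
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for detector / text-encoder outputs.
N, L, D = 8, 32, 256
feats = torch.randn(N, D, requires_grad=True)
texts = torch.randn(L, D, requires_grad=True)
loss = relational_contrastive_loss(feats, texts, torch.randint(0, L, (N,)))
loss.backward()  # gradients flow into both the visual and the language branches
```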
Accept
This paper proposes a free-form relational language-image pre-training strategy for HOI detection that demonstrates advantageous performance in zero-shot and few-shot settings. All reviewers give consistent positive scores after the discussion phase. The authors have added more experiments on COCO, different backbones, and pre-training datasets, and additional quantitative and qualitative analyses supporting the claimed motivations are also presented. Given the good insights and performance of the proposed RLIP, the meta-reviewers thus recommend accepting this paper.
train
[ "vDnLFsSRHM3", "o9WY5a6Vkz", "WPF9Cu-rZbB", "ZBA-rt9urlX", "Hqnf52jd34f", "xmBATmLpk1p", "-Hu-PVfkLYB6", "ZjqTlwGsx5X", "65TjhInlmmE", "r9EszBO1IME3", "KHRMn28J_XT", "iVzx1vmNwZ5", "FAbYq0yfExo", "-sVn1hTOTJv", "mOtc1qJ9ODB", "RmJBg29XtbWH", "2GV0YnWQDMi", "7JUh8jxc6P", "SiYJ9a797MCO", "XBaS6Vnan6-", "1s-t4-U0KBG", "459_oI38wZk", "tH4Uu6qcK4W", "XHP4aR07vEY", "EO9kCSjJ-gY", "hL0EmVH3E0", "LAzp5br9CW8", "vBW5KiYxzHd", "r0X9ieSkI3G" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Chairs and Reviewers,\n\nAs the discussion period comes to an end, we want to present a brief summary of our rebuttal and discussions with the reviewers for further reference.\n\nFirst of all, we thank all reviewers for their efforts and valuable comments. We are encouraged that the reviewers found RLIP shows strong zero-shot and few-shot performance (Reviewer VZWS, Reviewer 7c46, Reviewer 4wiK), has a non-trivial idea/motivation (Reviewer 4wiK, Reviewer MVep), has thorough experiments&analysis (Reviewer MVep) and is well written (Reviewer 4wiK). We are also glad to receive positive feedback from reviewers that our responses and paper revision have addressed their concerns.\n_____________________\nSecondly, building on the rebuttal and discussions, we summarize this paper RLIP as follows:\n- **Observation:** Prior works have demonstrated the benefits of effective architecture design and integration of relevant cues for HOI detection.\nAmong all these methods, object detection had been a de-facto pre-training paradigm for HOI detection, while they under-explored the pre-training of relation inference ability.\n- **Motivation:** We could leverage the dataset annotated with free-form texts, use relations as a pre-training signal, and transfer the relation inference ability to downstream tasks in various settings.\n- **Methodology:** We propose RLIP for transferring relation inference ability. To be more specific, we propose ParSe to facilitate fine-grained entity- and relation-level contrastive learning, a synthetic language data generation framework to improve contrastive learning, and mechanisms to account for relational semantic ambiguity and noise.\n- **Experiments:** Extensive experiments and analysis are performed to prove RLIP's superiority over object detection, relation detection and modulated detection [1] pre-training, and its consistent boost for zero-shot, few-shot and fine-tuning HOI detection performance as well as increased robustness to learning from noisy annotations.\n- **Future Works:** As CLIP [2] became the milestone for image classification/retrieval and GLIP [3] for object detection, we expect future works will follow RLIP to improve upon it. We expect RLIP can scale up in a semi-supervised manner combined with GLIP. 
Moreover, we expect the incorporation of relational pre-training could also benefit visual question answering [4] and visual reasoning [5,6], where relation inference counts.\n\n_____________________\nThirdly, we make a brief summary of the key revisions that we made to the main paper and Supplementary Material (SuppM), and detail how they resolve reviewers' concerns:\n\n**[Additional experiments and analysis]**\n- **Results with uni-modal relation detection pre-training** in fully-finetuned and few-shot settings in Table 1 and Table 3 of the main paper and also in Table 3 of the SuppM, demonstrating RLIP's superiority over it using the same VG dataset and annotations;\n- **Comparison with previous zero-shot methods using COCO for pre-training** in Table 2 of the main paper, proving the usefulness of ParSe and RLIP in zero-shot setting;\n- **Comparison with previous methods using different backbones and pre-training datasets** in Table 2 and Table 3 of the SuppM, proving ParSe's superiority across various backbones and pre-training datasets;\n- **Detailed zero-shot performance, and qualitative and quantitative analysis for reasons why RLIP can perform zero-shot HICO detection** from Line 164 to Line 223 of the SuppM;\n- **RLIP's robustness towards upstream data distributions (semantic diversity)** in Table 14 of the SuppM;\n- **Similarity analysis between in-batch labels and out-of-batch labels** in Table 10 of the SuppM, proving the usefulness of contrastive learning for relations and entities;\n- **Design choice of distance function in RPL** in Table 8 of the SuppM;\n- **Successful and failed case analysis, and corresponding future work** from Line 237 to Line 253 of the SuppM;\n- **Potential future works to scale up dataset size and boost performance** in Sec A.2 of the SuppM.\n\n**[More descriptive revisions]** \nWe revised the main paper to have a more precise contribution claim of ParSe, add more pre-training details for object detection and relation detection, add related works and correct several typos.\n\n_____________________\nThanks again for their efforts to review and give further feedback on RLIP, enabling us to improve our submission.\n\nYours sincerely, \nAuthors of Paper226\n\n**Reference**: \n[1] MDETR-modulated detection for end-to-end multi-modal understanding, ICCV 2021. \n[2] Learning transferable visual models from natural language supervision, ICML 2021. \n[3] Grounded language-image pre-training, CVPR 2022. \n[4] Vqa: Visual question answering, CVPR 2015. \n[5] CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning, CVPR 2017. \n[6] Gqa: A new dataset for real-world visual reasoning and compositional question answering, CVPR 2019. ", " Thank you, authors, for your elaborate rebuttal. I have read the other reviewer's comments and the author's rebuttal, which addresses most of my concerns. Therefore, I would like to raise my score to 6.", " Dear Reviewer MVep,\n\nThanks for your appreciation for this detailed analysis. We have revised our Supplementary Material to include a new part **Probing into reasons for the verb zero-shot performance**, starting from Line 176. Hopefully, we will include part of this analysis into the additional page of the main paper if it is accepted. 
We hope the current revision has addressed all your concerns, and we'd appreciate it if you also think it further improves upon the last version.\n\nYours sincerely, \nAuthors of Paper226 ", " Dear Reviewer VZWS,\n\nThanks for your careful comments and your appreciation for our work. We have revised our paper and added the experiments and analysis concerning \n1. more thorough comparisons with CDN in Table 2 of the Supplementary Material, with corresponding analysis starting from Line 99 to Line 103;\n2. the influence of upstream data distributions in Table 14 of the Supplementary Material, with corresponding analysis starting from Line 224 to Line 236;\n3. more successful and failure case visualization, and potential future works in Figure 5 of the Supplementary Material, with corresponding analysis starting from Line 237 to Line 253; \n\nCurrently, all of your concerns can be resolved in the revised version of the paper and Supplementary Material. We want to leave a gentle reminder that the discussion period is closing. We would appreciate your feedback to make sure that our responses and revisions have resolved your concerns, or whether there is a leftover concern that we can address to ensure a quality work.\n\nYours sincerely, \nAuthors of Paper226", " Thanks for the reply, I think this discussion is helpful and inspiring for readers.", " Thanks, this analysis helps a lot to probe the performance improvement.", " We are encouraged that you raised the score, and we thank you for your appreciation for RLIP.\nWe think RLIP can contribute to the interaction/relation detection task as CLIP does to the retrieval/classification task and GLIP does to the object detection task.\nWith respect to your suggestions in the last response, we do the following editing (We put most of them temporarily in the Supplementary Material, but we will squeeze part of them to the main paper if it is accepted because of the additional one page.)\n1. we add the Computational Overhead of the Subject and Object Query Pairing in the Supplementary Material.\n2. We add the Design Choice of distance function in the Supplementary Material.\n3. We add the Robustness towards different backbones in the Supplementary Material.\n4. We add fairer comparisons of Zero-shot detection results in the Table 2 of the main paper.\n5. We add Similarity Analysis Between In-batch Labels and Out-of-batch Labels in the Supplementary Material.\n\n**[Concern1]** **With respect to the remaining concern about the contribution claim**, to present it more precisely and distinguish it from previous two-stage methods, we make some revisions in Line 11, Line 54, Line 62 and Line 107, aiming to emphasize the contribution in the family of holistically optimized models.\n\n**[Concern2]** **With respect to the discussion about real zero-shotness**, we appreciate your insights. Indeed, analyzing the data leakage may pave the path for potential research. Thus, we provide further analysis:\n\n***First of all***, we want to explore whether the boost stems from the mounting dataset size or the quantity of dataset annotations. Since it is a bit intractable for uni-modal pre-training models to perform zero-shot (NF) evaluation, we mainly present fully fine-tuning results. To answer this question, we need to conduct experiments to control the usage of annotations as well. 
Thus, we conduct uni-modal relation detection pre-training that also adopts relation annotations and then compare the fully fine-tuned results: \n\n| Method | Detector | Data | PT Paradigm | PT \\#Epochs | Rare | Non-Rare | Full |\n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |\n| ParSeD | DDETR | VG | OD | 50 | 19.59 | 25.03 | 23.78 |\n| ParSeD | DDETR | VG | Relation Detction | 50 | 21.36 | 29.27 | 27.45 |\n| RLIP-ParSeD | DDETR | VG | RLIP | 50 | 24.45 | 30.63 | 29.21 ||\n\nComparing OD and relation detection pre-training, we the observe relatively better performance of relation detection pre-training (23.78->27.45) because of the usage of relation annotations. It can also be interpreted as a more aligned pre-training with the downstream task. However, it's still inferior to RLIP (27.45 < 29.21 on Full and a wider gap on Rare 21.36 < 24.45). \nRLIP and relation detection pre-training both incorporate relations as a pre-training signal, while the latter is sub-optimal due to the semantic similarity of the free-form text labels, which can be solved by pre-training with language supervision [2,3].\n\nRecall the fact we provided in the last response: among the 2,203 HOI verb annotations contained in VG, 30 HOI verbs do not have an annotation. Note that we use strict string matching to find annotations that may omit some grammatical variations of words, but this is a common phenomenon in natural language, and it is a bit intractable to avoid all variations. While the zero-shot results on HICO-DET indicate that mAP for the 30 verbs is 5.56, and mAP for the remaining 87 verbs is 18.12. If we are using a uni-modal pre-training (object detection or relation detection), we will fail to predict the verbs (all verbs or unseen verbs) without external information introduced.\n\n| Dataset | \\#images | HOI verb annos | HOI verb annos' ratio | Imbalance ratio |\n| ----- | :-----: | :-----: | :-----: | :-----: |\n| VG | 108K | 2,203 | 1.47\\% | 304 |\n\nThe above facts indicate that language-image pre-training still possesses its superiority over uni-modal pre-training even if controlling the variable of dataset size and annotations.", " ***Secondly***, we want to explore where the boost stems from by incorporating language-image pre-training. Back to the zero-shot analysis provided above, among the 30 verbs, they have diverse performances. \nAs shown in Figure 4 in the Supplementary Material, we can observe that some verbs have decent results.\n\nIn the following part, we would exemplify why zero-shot verbs can have decent performance and where the ability of zero-shot inference stems from.\n\nFor example, \"pay\" has the highest performance among verbs not seen by VG. \nIn the main paper, we present the conditional query generation that constrains the verb inference to be related to subjects and objects, providing verb inference with a conditional context. \nThus, to analyze how this ability of verb zero-shot inference emerges, we need to consider the subject and object context as they are essential to predict the verb in ParSe. \nFor the verb \"pay\" in HICO-DET, there is only one possible triplet annotated, \"person pay parking meter\". \nThen, we want to answer, \"is there any triplet annotated with similar or identical subjects and objects that transfer the inference ability to 'pay'?\"\nAiming to answer this, we firstly find triplets annotated with similar subjects and objects in VG. 
For the subjects, we heuristically select ones whose textual descriptions have any one of the following strings: *man, woman, person, friend, guy, dude, human, people, driver, passenger, hand, limb*. For the objects, we heuristically select ones whose textual descriptions have the string: *parking meter*. \nBy this processing, only 13 triplets are selected. Building on these, we report the verb distribution of the selected triplets. We rank the verbs in ascending order of Euclidean distance of this verb to \"pay\". (The Cosine distance can also output similar rankings.)\n\n| Verb | putting money in | collecting money at | puts change into | repairing | checking | next to | leaning | ... | \n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |\n| Count | 1 | 1 | 1 | 1 | 1 | 1 | 1 | ... |\n| Euclidean | 11.56 | 11.70 | 13.34 | 14.21 | 15.16 | 16.12 | 16.13 |... |\n| Cosine | 0.4560 | 0.4576 | 0.3108 | 0.2554 | 0.1583 | 0.0709 | 0.0165 | ... |\n\nWe append more examples in Table 12 of the Supplementary Material to demonstrate this phenomenon is prevailing. From this table, we can see that the verbs quantitatively closer (in Euclidean distance or in Cosine distance) to \"pay\" have similar semantic meanings to \"pay\", shown by their lexical variants or grammatical variants (e.g., \"putting money in\" has similar meanings to \"pay\"). In short conclusion, with the assistance of the sequential inference structure of verbs, we think that the zero-shot inference ability in RLIP is not from the scale of annotations, but the ability to transfer the verb inference knowledge from semantically similar annotations. This analysis also accords with previous papers [2,3] that semantic diversity is important as it introduces large-scale potential annotations, ensuring a model transfers well to different data distributions.\n\n***Thirdly***, to demonstrate quantitatively how RLIP pre-trains the model to perform zero-shot, we resort to the Uniformity metric introduced in [10]. Uniformity is a metric to assess a model's generalization in contrastive learning. We detail the calculation of this metric in Line 349 of the main paper. In this case, since label textual embeddings serve as a classifier in RLIP, we calculate the Uniformity of the seen verbs, unseen verbs and all verbs, aiming to observe how the generalization changes before and after RLIP, and how the generalization varies between seen verbs and unseen verbs. The results are shown in the table below (Lower is better):\n\n| Verb Set | Seen (87) | Unseen (30) | All (117) |\n| ----- | :-----: | :-----: | :-----: |\n| Before RLIP | -0.00367 | -0.00436 | -0.00388 |\n| After RLIP | -3.73780 | -3.59457 | -3.71330 |\n\nAs can be seen from the table, Uniformity values are all high before RLIP. It means that the representations before RLIP are compactly distributed, serving as an awful classifier. However, after RLIP is performed, the seen 87 verbs have a distinctively lower Uniformity value, corresponding with the decent zero-shot performance. Similarly, the 30 unseen verbs and the combination of 117 verbs also have excellent Uniformity values, contributing to the unseen zero-shot performance. Through this quantitative observation, we think that from the perspective of representations, RLIP contributes to the real zero-shotness. 
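For readers who want to reproduce this kind of check, a small sketch of the Uniformity computation is given below. Because the rebuttal only points to Line 349 of the main paper for the exact formula, the sketch falls back on the standard definition from the alignment/uniformity analysis of [10] (the log of the mean pairwise Gaussian potential over L2-normalized embeddings), and the choice t = 2 is an assumption.

```python
import torch
import torch.nn.functional as F

def uniformity(embeddings, t=2.0):
    # Standard form from [10]: log E[exp(-t * ||z_i - z_j||^2)] over distinct
    # pairs of L2-normalized embeddings; lower means more uniformly spread.
    z = F.normalize(embeddings, dim=-1)
    sq_dists = torch.cdist(z, z, p=2).pow(2)
    off_diag = ~torch.eye(z.size(0), dtype=torch.bool)
    return sq_dists[off_diag].mul(-t).exp().mean().log()

# Toy check mirroring the table above: near-collapsed embeddings give a value
# close to 0, while well-spread embeddings give a strongly negative value.
collapsed = torch.randn(1, 64).repeat(117, 1) + 1e-3 * torch.randn(117, 64)
spread = torch.randn(117, 64)
print(uniformity(collapsed).item(), uniformity(spread).item())
```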
\n\nFrom all the above analysis, we think that the zero-shotness may not be caused by the mounting dataset size or annotations, but stem from the generalization in representations obtained by pre-training with language supervision. ", " **[Question]** **With regard to the potential bias in CLIP-style models**, we think this can be caused by the collected data. Usually, to pre-train a large-scale language-image pre-training model, data is from diverse sources, and the quantity of them can also be varied. This bias will be obvious when we adapt this pre-trained model to downstream tasks especially when the task is specialized [2,3], because under this circumstance, the data distributions of upstream and downstream tasks are misaligned (also termed natural distribution shifts in CLIP [2]). \n\nOne reason that text suffers more bias might be caused by the redundancy in free-form texts [4]. Thus, a reasonable way to tackle this **during upstream pre-training** is to manually filter the texts and change textual distribution manually. Due to the limited usage of language syntax in image-level language-image pre-training, it's intuitive to use filtered bag-of-words to replace the original texts [4]. (RLIP might be a good way to incorporate syntax information as it has the subject-predicate-object structure.) Also, we can perform semi-aligned learning [4,5] by adding external images without texts, which can serve as a way to align with the downstream datasets. However, we tend to think a pre-training model should be as comprehensive as possible. Thus, the misalignment can also be tackled in the downstream transfer.\n\nWhen **adapting the pre-trained models to downstream datasets**, there are several possible ways to overcome the potential bias from the pre-trained models. The most naive one is to perform fully fine-tuning, which is also the one adopted in this work. This paradigm adapts all the parameters to downstream tasks, aiming to transfer to their distributions, while it could be over-parameterized when the downstream dataset is in a small scale. The second way is using prompt tuning [2,3,6,7], which adds a language context to the textual embeddings to align with the downstream datasets. This language context can be fine-grained and detailed natural language [2,3], continuous learnable vectors [3,6] and image-conditioned continuous learnable vectors [7]. The third way is to use adapters [8,9] to adapt features to downstream distributions. The last two methods will be two potential future works for RLIP to efficiently adapt to downstream tasks.\n\nWe hope the above answer could address all your concerns. We thank you for your timely follow-ups and look forward to your reply.\n\n**Reference**: \n[1] https://visualgenome.org/api/v0/api_home.html \n[2] Learning transferable visual models from natural language supervision, ICML 2021. \n[3] Grounded language-image pre-training, CVPR 2022. \n[4] A fistful of words: Learning transferable visual models from bag-of-words supervision, arXiv 2021. \n[5] Self-training with Noisy Student improves ImageNet classification, CVPR 2020. \n[6] Learning to prompt for vision-language models, IJCV 2022. \n[7] Conditional Prompt Learning for Vision-Language Models, CVPR 2022. \n[8] Parameter-efficient transfer learning for NLP, ICML 2022. \n[9] Clip-adapter: Better vision-language models with feature adapters, arXiv 2021. \n[10] Understanding contrastive representation learning through alignment and uniformity on the hypersphere, ICML 2020. 
", " Dear Reviewer 7c46,\n\nThanks for your careful comments and your appreciation for our work. We have revised our paper and added the experiments and analysis of incorporating VG and COCO datasets for previous methods into the Supplementary Material. Currently, all of your concerns can be resolved in the revised version of the paper and Supplementary Material. We want to leave a gentle reminder that the discussion period is closing. We would appreciate your feedback to make sure that our responses and revisions have resolved your concerns, or whether there is a leftover concern that we can address to ensure a quality work.\n\nYours sincerely, \nAuthors of Paper226", " Thank you for raising your score and your appreciation for RLIP.\nWe have made several revisions to the main paper and Supplementary Material as you suggested:\n\n1. we add pre-training details (epochs) for object detection and relation detection pre-training;\n2. we add relation detection pre-training results on HICO-DET and V-COCO in Table 1;\n3. we add few-shot transfer results with relation detection pre-training on HICO-DET in Table 3;\n4. we add GEN-VLKT into the related work;\n5. we polish the limitation part in the Supplementary Material to include potential research directions to scale up datasets and boost performance.\n\nWith respect to the remaining concern about ParSe, we want to emphasize the starting point of RLIP, which leads to the design departure from GEN-VLKT. GEN-VLKT aims to simplify association, thus Position Guided Embedding and two groups of queries are designed to index queries and decode for entities; also, it aims to transfer knowledge from CLIP, thus an object-verb coupled classifier is designed to inject textual representations augmented by manual prompts and distill the knowledge from the CLIP visual encoder, **while RLIP targets a language-image pre-training method aligned with HOI detection**. In light of this idea, we want to achieve fine-grained cross-modal alignment (i.e., matching each textual concept with corresponding visual representations rather than matching one image with one sentence), which can be beneficial to a detection task [1,2]. Thus, we design ParSe to facilitate the instantiation of RLIP. The entity-level cross-modal alignment is more flexible and apt to expand to large-scale triplets. We hope these two research directions can both motivate further research.\n\nThanks again for your sincere suggestions and your timely follow-ups.\n\n**Reference**: \n[1] Grounded language-image pre-training, CVPR 2022. \n[2] X-DETR: A Versatile Architecture for Instance-wise Vision-Language Tasks, ECCV 2022.", " Overall, my main concerns are basically addressed. I appreciate the responses and efforts of the authors. Though I still have some concerns about the novelty and claim discussions, I think the revised paper would give the community some good results and inspiration. I raise my rating to 5.", " Q7: addressed.\n\nQ8: addressed. Thanks for the detailed responses, please add them to the paper if possible.", " Q4: Thanks for the additional results and the efforts. My concern is addressed, please add these to the paper.\n\nQ5: This new table depicts the comparison very clearly. Concern addressed.\n\nQ6: Thanks. Indeed, old detector-based two-stage methods have a different context from recent Transformer-based e2e methods. However, they are different at the implementation level instead of the theoretical level. 
I still suggest the authors revise the contribution claim about the disentangled representation which may cause misleading.", " Thanks for the detailed responses from the authors. I will first respond to my concerns one by one:\n\nQ1: Addressed. Thanks, please add this clarification if possible in the main text or suppl.\n\nQ2: Thanks for the comparison between the labels from the two datasets. CLIP paves a new way for our community, it is indeed hard to compare what the model sees in training and whether this will affect the inference in transfer learning. However, this does not mean we do not need to consider this problem. Recently, many analysis papers are also proposed to see what CLIP or similar big models learn in the training with mixed large-scale datasets. I still believe it is essential to give this comparison and corresponding discussion about the possible data leak in the paper.n The reason is very simple: we should know what these v-l models learn and why they perform well, is the improvement from the increasing training data with possible label population or the fancy general representation? And what we can do in the future to go further, as the collection of more data is harder and harder. Besides, some works also find that CLIP-style works usually have an obvious bias toward text more than visual images/videos. This is very interesting and is also related to my question. \n\nQ3: thanks for the response, please add it to the ablations.", " Thanks for your response. The experimental results and analysis based on the VG relationship pre-training have verified the effectiveness of the proposed RLIP module and sufficiently solved my main concerns. The authors may replace the original results with VG detection pre-training with such results for a fair comparison in the revised version. However, the concern about the novelty of ParSe remains unsolved for me. Though some implementation details differ from GEN, the core idea is the same. Therefore, I tend to change my rating from 4 BR to 5 BA.", " We thank Reviewer MVep for their valued feedback, and are encouraged that they find the relational contrastive learning valuable and non-trivial, the model design reasonable, and our ablation studies and analysis extensive. To address their concerns, we present more explanations concerning the h-o pairing (Q1), zero-shot analysis and fairer zero-shot comparisons (Q2, Q5), more experiments about the choice of distance function (Q3), more experiments to compare performance with CDN and QAHOI (Q4), contribution analysis with previous methods (Q6), the necessity of ParSe (Q7) and similarity analysis of positive and negative prompts (Q8).\n\n\n>**Q1**: Efficiency of the sequential h-o pairing.\n\n**A1**: The pairing of humans and objects are performed by index-matching as is stated in Line 132. Thus, we pair humans and objects with identical indices (e.g., the first decoded feature from the subject queries and the first decoded feature from the object queries are paired.). Due to the simplicity of this matching strategy, the cost is trivial ($\\mathcal{O}(1)$ cost) compared to the overall overhead during model inference.\n\n>**Q2**: As the pre-training using VG, many zero-shot relations of HICO-DET maybe not zero-shot no more, as there may be many similar and even same relations in VG for training. So a detailed analysis should be taken to probe the real \"zero-shotness\".\n\n**A2**: First of all, we wish to re-clarify the meaning of zero-shot in our context. 
As is stated in Line 52, what we mean by zero-shot is that we assess the model without fine-tuning to assess the generalization of a pre-training model to unseen distributions, following CLIP. \nAs noted by the reviewer, in language-image pre-training, it is almost impossible to avoid all similar annotations as the dataset is annotated with free-form texts. Indeed, we want to benefit from this kind of annotation.\nAs GLIP [1] states, we should scale up visual concepts with massive image-text data to ensure a good transferability in language-image pre-training. Even prior to the emergence of CLIP, the use of semantic embeddings and knowledge graph to transfer to zero-shot learning [2] and few-shot learning [3] is also one of many trends.\n\nSecondly, we provide some analysis to give a sense of the verb overlap of HICO with VG. Note that in the table below, we use ``relationship aliases'' [10] to obtain as many HOI verb annotations from VG as possible.\n\n| Dataset | \\#images | HOI verb annos | HOI verb annos' ratio | Imbalance ratio |\n| ----- | :-----: | :-----: | :-----: | :-----: |\n| VG | 108K | 2,203 | 1.47\\% | 304 |\n\nWe can see from the table that in VG, we have only 2,203 HOI verb annotations even when considering relationship aliases, which is about 1.47\\% of the number of relationship annotations in HICO-DET. 30 HOI verbs do not have a single annotation, and 45 HOI verbs have 5 or fewer annotations. In RLIP-ParSe (COCO+VG), we observe that mAP for the 30 verbs is 5.56, and mAP for the remaining 87 verbs is 18.12. If we use a uni-modal relation detection pre-training, the result for the 30 verbs degrades to zero.\nIn light of this, we conjecture that existing relations can transfer their knowledge to the inference of non-existing relations in HOI detection.\n\n\n>**Q3**: In Eq. 5, is the Euclidean distance the best choice? How about the others like cosine distance?\n\n**A3**: During the design of our model, We tried two distance measuring function Euclidean distance and Cosine distance. The zero-shot (NF) results of Cosine distance using RLIP-ParSeD is shown in the table below.\n\n| Distance Metric | $\\eta$ | Rare | Non-Rare | Full |\n| ----- | :-----: | :-----: | :-----: | :-----: |\n| Cosine | 0.3 | 11.21 | 12.53 | 12.23 |\n| Cosine | 0.4 | 11.92 | 12.82 | 12.61 |\n| Cosine | 0.5 | 11.76 | 12.71 | 12.49 |\n| Cosine | 0.6 | 11.30 | 12.22 | 12.01 |\n| Euclidean | 0.3 | 12.30 | 12.81 | 12.69 |\n\nWe observe that Euclidean distance is slightly better (the last row of results is selected in the paper). Since both methods have similar computational overhead, in the paper, we choose the Euclidean distance and provide the sensitivity analysis of $\\eta$ in the Supplementary Material.", " >**Q4**: Performance comparisons with CDN. Results with ResNet101 (CDN) and Swin-Tiny (QAHOI).\n\n**A4**: With respect to CDN, the performance of CDN-B (27.55 33.05 31.78) using 12 decoding layers in total, which is twice as large as our method. Thus, in the paper, we compare ParSe with CDN-S that has 6 decoding layers. 
\nAlso, to compare more thoroughly with CDN and QAHOI, we perform more experiments to demonstrate the effectiveness of the uni-modal detection pipeline ParSe (in the below, PT denotes pre-training).\n\n| Method | Backbone |\\#Decoding layers | PT paradigm | PT data | \\#Tuning epochs | Rare/Non-Rare/Full |\n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | \n| CDN-L | ResNet101 | 12 | OD | COCO | 90+10 | 27.19 / 33.53 / 32.07 |\n| ParSe | ResNet101 | 6 | OD | COCO | 90 | 28.59 / 34.01 / 32.76 |\n| QAHOI | Swin-T | 6 | - | - | 150 | 22.44 / 30.27 / 28.47 |\n| ParSe | Swin-T | 6 | - | - | 60 | 23.77 / 31.40 / 29.65 |\n| ParSe | Swin-T | 6 | - | - | 150 | 25.76 / 31.84 / 30.44 |\n\n\nAs the table indicates, ParSe outperforms CDN-L with half the number of decoding layers and a single-stage fine-tuning with a clear gain (+0.69 mAP on Full set). When compared to QAHOI, ParSe improves by 1.18 mAP on Full set with only two fifths number of fine-tuning epochs. If using the same number of epochs, ParSe can surpass it by 1.97 mAP on Full set and more improvement on the Rare set (+3.32mAP).\n\n>**Q5**: Is the zero-shot setting fair (tab 2) as the proposed method can use extra data and the other methods do not?\n\n**A5**: To clarify the improvements yielded by our model, we conduct more experiments with ParSe pre-trained on COCO for a fair comparison. We present the results below and also update them into the paper.\n\n| Zero-shot | Method | Data | Unseen | Seen | Full |\n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: |\n| UC-RF | VCL | COCO | 10.06 | 24.28 | 21.43 |\n| UC-RF | ATL | COCO | 9.18 | 24.67 | 21.57 |\n| UC-RF | FCL | COCO | 13.16 | 24.23 | 22.01 |\n| UC-RF | ParSe | COCO | 18.53 | 32.21 | 29.06 |\n| UC-RF | RLIP-ParSe | COCO+VG | 19.19 | 33.35 | 30.52 |\n| UC-NF | VCL | COCO | 16.22 | 18.52 | 18.06 |\n| UC-NF | ATL | COCO | 18.25 | 18.78 | 18.67 |\n| UC-NF | FCL | COCO | 16.22 | 18.52 | 18.06 |\n| UC-NF | ParSe | COCO | 19.65 | 24.50 | 23.38 |\n| UC-NF | RLIP-ParSe | COCO+VG | 20.27 | 27.67 | 26.19 |\n\nFrom this table, we observe that ParSe still outperforms previous methods by a significant margin. By performing RLIP building on ParSe, the relationship knowledge benefits seen combinations more. We conjecture that after fine-tuning on the downstream datasets, the model gradually loses the compositionality of language, leaving more relation inference ability instead to boost the seen performance.\n\n>**Q6**: Many previous methods (iCAN [4], TIN [5], etc) adopted the separated representations for a human, object, and relations, which needs analysis.\n\n**A6**: The motivation of our paper is to align the pre-training stage with the downstream fine-tuning with the assistance of language-image pre-training using a novel pre-training signal. Thus, to make the contrastive learning fine-grained, we propose to decouple the representations of the triplets. Although previous methods [4,5,6,7] can also have decoupled representations, they usually i) adopt off-the-shelf object detectors to extract visual features for other post-processing steps, and ii) are equipped with multiple branches. Especially the first characteristic makes it underperform, as in a complex reasoning problem, a holistic end-to-end optimized model can better adapt its features to the task itself [8]. With the help of DETR and Deformable DETR, we can leverage this insight to design ParSe, a much neater pipeline compared to previous methods [4,5,6,7]. 
From the perspective of performance, ParSe surpasses them by a significant margin. From the above analysis, we think that ParSe contributes a better baseline model to the research community.", " >**Q7**: L88: Rendering it sub-optimal for RLIP. Why, please give a more detailed discussion.\n\n**A7**: Here, what we mean by ``sub-optimal'' is that during contrastive learning, we hope the alignment of visual representations with the textual descriptions can be fine-grained and one-to-one matching as it could benefit the downstream detection task [8,9,10]. To be more specific, we design the model to align at the entity and relation levels. If we adopt previous models like QPIC [11] and CDN [12], we will fail to achieve this alignment. Besides, this alignment can benefit the detection task as it also prevents multi-task learning for a given decoded feature in a position-sensitive task like detection [13]. We present the ablation of this decoupled design to observe its contribution (also in the Supplementary Material):\n\n| ParSe Architecture | Coupling | Rare | Non-Rare | Full |\n| ----- | :----- | :-----: | :-----: | :-----: |\n| - | coupled subject, objects and relations [10] | 23.18 | 31.45 | 29.55 |\n| w/ Se | coupled subject and objects [11] | 25.58 | 32.50 | 30.91 |\n| w/ ParSe | fully decoupled | 26.36 | 33.41 | 31.79 |\n\nWe can see that by decoupling the representations, we could boost performance gradually. Thus, we think that previous methods are sub-optimal for RLIP.\n\n\n>**Q8**: How similar of the pos and neg prompts? Need an analysis.\n\n**A8**: First of all, since \"prompt\" does not appear in our paper, we interpret the positive and negative prompts as in-batch labels and out-of-batch labels (we invite the reviewer to respond in case we have misunderstood their question.)\n\nThe in-batch labels are aggregated from images' annotations, and the out-of-batch labels are sampled from the whole dataset, which does not overlap with in-batch labels. Since the contrastive loss optimizes to push away the negative textual labels, we can observe the change of the similarities of the negative and positive labels. To be more specific, we simulate the training process by out-of-batch sampling, and observe the change of similarities by calculating the average pairwise distance of the positive labels to the negative labels. We mainly compare the object and relation similarity based on the RoBERTa model before and after RLIP pre-training. The results are shown in the table below. (Cos and Euc denote using Cosine distance and Euclidean distance as a distance metric.)\n\n| Model | Object (Cos) | Relation (Cos) | Object (Euc) | Relation (Euc) |\n| ----- | :----- | :-----: | :-----: | :-----: |\n| Before RLIP | 0.9991 | 0.9986 | 0.2502 | 0.3156 |\n| After RLIP | 0.0084 | 0.0208 | 18.1943 | 16.9177 |\n\nFrom this table, we can see that the cosine similarity decreases, and Euclidean distance increases. Note that before RLIP, the discrimination ability of text embeddings are poor, which corresponds with previous work [14].\nThis indicates whichever distance function we adopt (Cosine or Euclidean distance) and whichever kind of feature we observe (object or relation), the similarity between in-batch labels and out-of-batch labels decreases after performing RLIP. This enables the language model to adapt well to the visual representations and serve as a good classifier.\n\n**Reference**: \n[1] Grounded language-image pre-training, CVPR 2022. 
\n[2] Zero-shot recognition via semantic embeddings and knowledge graphs, CVPR 2018. \n[3] Semantic relation reasoning for shot-stable few-shot object detection, CVPR 2021. \n[4] ican: Instance-centric attention network for human-object interaction detection, arXiv 2018. \n[5] Transferable interactiveness knowledge for human-object interaction detection, CVPR 2019. \n[6] Deep contextual attention for human-object interaction detection, ICCV 2019. \n[7] Learning to Detect Human-Object Interactions, WACV 2018. \n[8] MDETR-modulated detection for end-to-end multi-modal understanding, ICCV 2021. \n[9] RegionCLIP: Region-based Language-Image Pretraining, CVPR 2022. \n[10] How Much Can CLIP Benefit Vision-and-Language Tasks? ICLR 2022. \n[11] Qpic: Query-based pairwise human-object interaction detection with image-wide contextual information, CVPR 2021. \n[12] Mining the benefits of two-stage and one-stage HOI detection, NeurIPS 2021. \n[13] Revisiting the sibling head in object detector, CVPR 2020. \n[14] Representation degeneration problem in training natural language generation models, ICLR 2019. ", " We thank the reviewer for their valued feedback and are encouraged that they find our idea and motivation reasonable, our method easily adaptable to various settings and our writing generally clear. To address their concerns, we present more analysis to compare to previous work (Q1), propose methods to potentially address limited application scenarios (Q2), provide more details and observations to clarify the rationality of performance (Q3) and provide thorough baselines to demonstrate the usefulness of RLIP (Q4).\n\n\n>**Q1**: Comparisons with GEN-VLKT [1], CDN [2] and PST [3].\n\n**A1**: We thank the reviewer for highlighting this reference. As noted by the reviewer, GEN-VLKT represents concurrent work (GEN-VLKT is submitted to arXiv on March 26th 2022 while the NeurIPS abstract deadline was 16th May, 2022.). We will include this in our related work and clarify our differences here. \n\nWhile GEN-VLKT mimics the image representations from CLIP, RLIP starts from **a different perspective to directly transfer relation inference ability** from a dataset annotated with free-form relation texts to downstream tasks. Compared to CLIP pre-trained on 400 million image-text pairs, our model utilizes a much smaller dataset (108K) to perform language-image pre-training at the level of entities and relations to align with the downstream task. It can potentially be combined with GLIP [7] to scale up. The proposed aligned pre-training enables the model to have the ability of zero-shot HOI detection without any fine-tuning, exhibit good performance when data is scarce and be robust towards relation label noise. When considering only the architectural designs employed for triplet detection, ParSe and GEN share similar architectures but also have several differences: **i)** when transferring knowledge, ParSe targets an HOI classifier that is relation-object disentangled, aiming to align with the decoupled representations and the contrastive loss, which enables an extension to identify as many combinations as possible. 
By contrast, GEN couples the relations and objects into an interaction classifier in order to utilize the off-the-shelf CLIP text encoder; **ii)** To infer relations, ParSe iteratively decodes relation features for the last layer's output from Parallel Entity Inference for all the Sequential Relation Inference layers, while GEN decodes relations for every subject and object pair from Instance Decoder by one-layer decoding; **iii)** ParSe does not have Position-Guided Embeddings. We believe that the index-matching adopted by ParSe can optimize the queries to decode for related subjects and objects.\n\nWith respect to comparisons with CDN, RLIP is motivated by language-image pre-training. The triplet detection structure ParSe builds on CDN as stated in the paper and further improves upon it with more decoupled representations for better contrastive learning, because\n**i)** from a perspective of language-image pre-training, we can align the representations of individual entities to their corresponding texts; \n**ii)** from a perspective of uni-modal detection, entity detection is position-sensitive [4], and thus, avoiding multi-task learning can improve the performance [2]. \n\nWith respect to PST, its overall design follows PPDM [5] with a DETR instantiation, which performs parallel decoding for Sum and Part queries, while ParSe is a sequential structure. Due to PST's parallel structure, PST has to design a factorized self-attention layer to further enhance part-level learning and extra modules to enhance part-sum interaction. In comparison, ParSe is simpler (with Parallel Entity Inference and Sequential Relation Inference). Also, the generation of relation queries differs significantly because in ParSe, relation queries are conditionally generated by subjects and objects, which injects stronger priors into the relation inference stage. ParSe shows much stronger performance (+7.86 mAP on Full set) over it.\n\nAll in all, we think RLIP represents a substantial departure from existing work such as CDN and PST because of the new pre-training signal and paradigm we propose.\nWe believe this which can benefit the research community, not limited to HOI but also VQA or visual reasoning where relations can potentially help.", " >**Q2**: Limited Application Scenarios.\n\n**A2**: This is the first work to directly incorporate relations as a language-image pre-training signal, which underpins further relational pre-training research and motivates more application scenarios with scarce data and free-form text inputs. Although existing relation annotations are limited (as we note in limitations), we do not anticipate that this will remain the case. Indeed, we hope that our work will inspire future work to focus on this problem and dataset contributions will follow. \nBesides, we provide ways to scale up datasets as future works.\nFor example, we could reuse a grounding dataset with entities annotated. Then, a language processing tool like spaCy [6] can be adopted as a tool to obtain their relations from captions. Even if we do not have subjects and objects but only image-caption pairs, we can combine the use of spaCy and methods like GLIP [7] to create abundant triplet annotations. 
Based on the analysis, we think our method is still promising and inspiring, paving a path for further research.\n\n\n>**Q3**: Missing details about OD+VG, and being confused about its performance since the performance has dropped a lot when performing OD pre-training on VG.\n\n**A3**: In the paper, to accelerate the convergence of a detection model, we use Deformable DETR (DDETR) as a base detector to perform this series of experiments and use the common training settings (50 epochs).\n\nThe ability of relation detection is comprised of two parts: object detection ability and relation inference ability. \nThe widely-adopted COCO pre-training contributes to a good object detection foundation (abundant object annotations with identical classes to HICO-DET and VCOCO), thus previous methods can perform well. \nHowever, the relation inference ability is under-explored.\n\nVG dataset is adopted for its potential relation inference foundation, though it's annotated with very noisy free-form texts.\nBy only performing OD pre-training on VG, we try to uncover the object detection foundation of VG for HOI detection. \nAccording to the results in the table below, we can see that VG has an inferior object detection foundation for HOI downstream tasks (note: PT stands for pre-training).\n\n| Method | Detector | Data | PT Paradigm | PT \\#Epochs | Rare | Non-Rare | Full |\n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |\n| ParSeD | DDETR | VG | OD | 50 | 19.59 | 25.03 | 23.78 |\n| ParSeD | DDETR | COCO | OD | 50 | 22.23 | 31.17 | 29.12 |\n| RLIP-ParSeD | DDETR | VG | RLIP | 50 | 24.45 | 30.63 | 29.21 |\n| RLIP-ParSeD | DDETR | COCO+VG | RLIP | 50 |24.67 | 32.50 | 30.70 |\n\n\nWe give more statistical data as an intuitive observation. Note that in the table below, we use ``object aliases'' [9] to obtain as many HOI object annotations from VG as possible.\n\n| Dataset | \\#images | HOI object annos | Imbalance ratio |\n| ----- | :-----: | :-----: | :-----: |\n| COCO | 118K | 860K | 1308 |\n| VG | 108K | 518K | 69318 |\n\nWe can conclude from the table that COCO has more images, more HOI object annotations, and a much smaller annotation imbalance ratio [10]. \nBesides, the object annotations in VG are relatively of poor quality [11]. \nFurthermore, due to the semantic ambiguity of free-form texts, one-of-N objectives are sub-optimal to optimize the OD detection model on VG.\nConsidering all the factors mentioned above, we can conjecture that the low HOI performance of OD pre-training on VG is reasonable. \n\nThe COCO+VG model is a simple tryout to transfer part of the object detection ability in COCO to RLIP on VG, serving as a remedy for VG's relatively low object detection foundation. \nFurther research can focus on more elaborate transferring methods, which can be formulated as a semi-supervised task.", " >**Q4**: Missing important baseline.\n\n**A4**: To verify the effectiveness of the proposed RLIP, we provide further results by performing relation detection pre-tarining on VG. The results are shown in the table below. 
(PT denotes pre-training.)\n\n| Method | Detector | Data | PT Paradigm | PT \\#Epochs | Rare | Non-Rare | Full |\n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |\n| ParSeD | DDETR | VG | OD | 50 | 19.59 | 25.03 | 23.78 |\n| ParSeD | DDETR | VG | Relation Detction | 50 | 21.36 | 29.27 | 27.45 |\n| RLIP-ParSeD | DDETR | VG | RLIP | 50 | 24.45 | 30.63 | 29.21 |\n| RLIP-ParSeD | DDETR | COCO+VG | RLIP | 50 |24.67 | 32.50 | 30.70 |\n\nComparing OD and relation detection pre-training, we observe relatively better performance of relation detection pre-training (23.78->27.45) because of the usage of relation annotations. It can also be interpreted as a more aligned pre-training with the downstream task. However, it's still inferior to RLIP (27.45<29.21). RLIP and relation detection pre-training both incorporate relations as a pre-training signal, while the latter treats every class as a one-hot vector and optimize the model with one-of-N objectives for both the verbs and entities. This paradigm can be sub-optimal due to the semantic similarity of the free-form text labels [8]. Besides, we can not ignore the fact that 30 verbs do not appear in the VG dataset. Thus, we may fail to perform zero-shot (NF) evaluation with uni-modal relation detection pre-training during application, which proves the importance and practicality of RLIP.\n\nApart from the above experiments, we provide more results with relation detection pre-training upon COCO initialization.\n\n| Method | Detector | Data | PT Paradigm | PT \\#Epochs | Rare | Non-Rare | Full |\n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |\n| ParSe | DETR | COCO+VG | Relation Detction | 150 | 26.00 | 33.40 | 31.70 |\n| RLIP-ParSe | DETR | COCO+VG | RLIP | 150 | 26.85 | 34.63 | 32.84 | \n\nWe can see from the Table that even when providing a good foundation for object detection, adopting RLIP still surpasses relation detection pre-training, which further demonstrates the usefulness of RLIP.\n\n**Reference**: \n[1] GEN-VLKT: Simplify Association and Enhance Interaction Understanding for HOI Detection, submitted to arXiv on Mar. 26th 2022. \n[2] Mining the Benefits of Two-stage and One-stage HOI Detection, NeurIPS 2021. \n[3] Visual Relationship Detection Using Part-and-Sum Transformers with Composite Queries, ICCV 2021. \n[4] Revisiting the sibling head in object detector, CVPR 2020. \n[5] Ppdm: Parallel point detection and matching for real-time human-object interaction detection, CVPR 2020. \n[6] spaCy: Industrial-strength Natural Language Processing in Python, GitHub. \n[7] Grounded language-image pre-training, CVPR 2022. \n[8] Learning transferable visual models from natural language supervision, ICML 2021. \n[9] https://visualgenome.org/api/v0/api_home.html \n[10] Equalization loss for long-tailed object recognition, CVPR 2020. \n[11] Scene graph generation by iterative message passing, CVPR 2017.", " We thank reviewer 7c46 for their valued feedback.\nWe are encouraged that they find the zero-shot, few-shot and fine-tuning performance significant compared to previous methods.\nTo address their concerns, we present more thorough experiments and analysis to show the contribution of ParSe (Q1) and demonstrate the superiority of RLIP even if comparing with previous methods adopting the VG dataset (Q2).\n\n>**Q1**. Technical significance of the proposed ParSe model compared to the existing architectures. 
How do ParSe and ParseD show favorable performance compared to the previous method without RLIP?\n\n**A1**: The motivation for the design of ParSe is to achieve better language-image contrastive learning. \nFor a fine-grained task like HOI detection, we need to align visual representations and textual representations if performing language-image contrastive learning. \nIf we use previous methods like QPIC [1] or CDN [2], we need to align visual representations of (or some subset of) the subject, object and relation triplets with their corresponding textual representations, which will result in inferior performance. \nAs the representations for the detection task are position-sensitive [3], the model can achieve better performance by avoiding decoded queries to perform multi-task learning.\nSince we use Cross-Entropy loss and Focal loss as contrastive losses, which are identical to the ones we use in fine-tuning the uni-modal ParSe, we can observe the superiority of this design directly by observing the uni-modal fine-tuning results (these also appear in the Supplementary Material).\n\n| ParSe Architecture | Coupling | Rare | Non-Rare | Full |\n| ----- | :----- | :-----: | :-----: | :-----: |\n| - | coupled subject, objects and relations [1] | 23.18 | 31.45 | 29.55 |\n| w/ Se | coupled subject and objects [2] | 25.58 | 32.50 | 30.91 |\n| w/ ParSe | fully decoupled | 26.36 | 33.41 | 31.79 |\n\nWe can see from the above table that by ablating the decoupled design itself, we could gradually improve the performance. \nFrom the perspective of visualizations in Figure 3 in the main paper, we can observe that ParSe attends to distinct regions where appropriate to better represent the target triplets.\n\n>**Q2**. In order to more comprehensively validate the effectiveness of the proposed pre-training method with external data, it would be better to compare with the existing methods when using VG data as well.\n\n**A2**: We think it to be a useful suggestion. To have a fairer comparison with previous methods, we adopt CDN [2] as a base method and then add the VG dataset to its pre-training stage. Since RLIP also adopts the relation annotations in VG, we also try to include these annotations in the uni-modal pre-training. Thus, we resort to relation detection on VG. To be more specific, we perform uni-modal relation detection pre-training by using linear classifiers for verbs and entities rather than matching with texts. The results are shown in the table below.\n\n| Method | Detector | Data | PT Paradigm | PT \\#Epochs | Rare | Non-Rare | Full |\n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |\n| CDN | DETR | COCO+VG | Relation Detction | 150 | 25.65 | 32.75 | 31.12 |\n| ParSe | DETR | COCO+VG | Relation Detction | 150 | 26.00 | 33.40 | 31.70 |\n| RLIP-ParSe | DETR | COCO+VG | RLIP | 150 | 26.85 | 34.63 | 32.84 | \n\nWe can see from the table that by using uni-modal relation detection pre-training, CDN still trails RLIP-ParSe with the same number of epochs of pre-training and fine-tuning, which shows the effectiveness of RLIP. Even if comparing it with ParSe using relation detection pre-training, we can still observe an improvement of ParSe over CDN, demonstrating the usefulness of decoupling triplet representations. \n\n**Reference**: \n[1] Qpic: Query-based pairwise human-object interaction detection with image-wide contextual information, CVPR 2021. \n[2] Mining the benefits of two-stage and one-stage HOI detection, NeurIPS 2021. 
\n[3] Revisiting the sibling head in object detector, CVPR 2020. ", " We thank reviewer VZWS for their valued comments.\nWe are encouraged that they find our work novel.\nTo address their concerns, we present more thorough experiments and analysis to compare with previous methods (Q1), showcase the robustness of the model (Q3), and clarify the performance improvement (Q2).\nWe will also include more failure cases in the Supplementary Material as suggested.\n\n\n>**Q1**: The proposed method does not show a strong improvement over the CDN method for fully-supervised HOI detection method for both VCOCO and HICO-DET datasets.\n\n**A1**: The design of ParSe aims to facilitate Relational Language-Image Pre-training by decoupling the representations of <subject, relation, object> triplets.\nThis decoupled design is also beneficial to uni-modal HOI detection.\nTo see this, we can compare with CDN [1] using the same number of decoder layers, where we observe that ParSe outperforms CDN across different backbones. \n\n| Method | Backbone | PT Data | \\#Tuning epochs | Rare | Non-Rare | Full |\n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |\n| CDN | ResNet-50 | COCO | 90+10 | 27.39 | 32.64 | 31.44 |\n| ParSe | ResNet-50 | COCO | 90 | 26.36 | 33.41 | 31.79 |\n| CDN | ResNet-101 | COCO | 90+10 | 27.19 | 33.53 | 32.07 |\n| ParSe | ResNet-101 | COCO | 90 | 28.59 | 34.01 | 32.76 |\n\nTo better understand how this decoupled design helps uni-modal HOI detection, we also ablate the design itself (note that CDN uses dynamic re-weighting as a two-stage tuning method---this is a good design but we focus on the model design here), the result of which is shown below (also in the Supplementary Material).\n\n| ParSe Architecture | Coupling | Rare | Non-Rare | Full |\n| ----- | :----- | :-----: | :-----: | :-----: |\n| - | coupled subject, objects and relations [2] | 23.18 | 31.45 | 29.55 |\n| w/ Se | coupled subject and objects [1] | 25.58 | 32.50 | 30.91 |\n| w/ ParSe | fully decoupled | 26.36 | 33.41 | 31.79 |\n\nWe can see that by decoupling the representations, we boost performance. \n\nWith respect to the performance on V-COCO, if we use COCO pre-training, we can have a clear gain (61.7 $\\rightarrow$ 62.5 in Scenario1 and 63.8 $\\rightarrow$ 64.8 in Scenario2).\nHowever, when using RLIP on VG, the gain is small. \nWe attribute this to the reduced domain alignment of COCO pre-training, because V-COCO is a dataset based on COCO images, thus using COCO pre-training is more favorable than RLIP on VG (this observation is noted in Line 284).\n\n>**Q2**. 
In fact, the performance of the method goes down when including COCO dataset for RLIP compared to just the OD pre-training method using ParSe indicating overfitting of the pre-training method when including COCO.\n\n**A2**: To clarify any potential misunderstanding, we report results below.\nWhen using COCO with RLIP, performance always surpasses using OD pre-training on COCO.\nThe comparison is shown in the Table below.\n\n| Data | PTP | Method | Detector | Rare | Non-Rare | Full |\n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |\n| COCO | OD | ParSeD | DDETR | 22.23 | 31.17 | 29.12 |\n| COCO+VG | RLIP | RLIP-ParSeD | DDETR | 24.67 | 32.50 | 30.70 |\n| COCO | OD | ParSeD | DDETR | 26.36 | 33.41 | 31.79 |\n| COCO+VG | RLIP | RLIP-ParSeD | DDETR | 26.85 | 34.63 | 32.84 |\n\nWe can see that when we use the same base detector, RLIP can benefit from COCO initialization (potentially because COCO initialization provides a good object detection foundation).\nThis gap widens when data becomes more scarce (i.e. in the few-shot setting).", " >**Q3**. Did the authors try any dataset other than VG with similar entity description to showcase the robustness of the model?\n\n**A3**: While this is certainly a good suggestion, there are limited datasets available with relation annotation of comparable size.\nWe therefore take a different approach to showcase RLIP's robustness across different upstream dataset distributions. \n\nWe do this by significantly altering the distribution of VG annotations and assessing the influence on RLIP's performance. \nTo this end, first note that since VG is human-annotated with free-form text, it is extremely long-tailed. \nWe alter its distribution by dropping tail object classes and verb classes to create a dataset with limited semantic diversity.\nConcretely, we drop object classes whose instance counts are fewer than 1,000 and relation classes whose instance counts are fewer than 500. \nWe pre-train RLIP on the resulting dataset and then perform zero-shot (NF) evaluation on HICO-DET. \nThe results are shown in the Table below. (Obj, rel and annos denote object, relation and annotations respectively.)\n\n| Method | Data | Obj classes | Obj annos | Rel classes | Rel annos | Rare | Non-Rare | Full |\n| ----- | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |\n| ParSeD | VG | 100,298 | 3.80m | 36,515 | 1.99m | 12.30 | 12.81 | 12.69 |\n| ParSeD | VG- | 497 | 1.73m | 151 | 1.27m | 9.45 | 12.13 | 11.51 |\n\nWe observe from this table that despite a very significant change to the training distribution, performance on the Full set drops only moderately.\nWe do, however, witness a relatively larger decline on the Rare set due to the lack of semantic diversity in the modified data.\nThis finding accords with the observations of GLIP [3].\nTo make full use of language-image pre-training, semantic diversity is important which can ensure a good domain transfer as is indicated by CLIP and GLIP [3,4].\n\n\n**Reference**: \n[1] Qpic: Query-based pairwise human-object interaction detection with image-wide contextual information, CVPR 2021. \n[2] Mining the benefits of two-stage and one-stage HOI detection, NeurIPS 2021. \n[3] Grounded language-image pre-training, CVPR 2022. \n[4] Learning transferable visual models from natural language supervision, ICML 2021. ", " -- The paper proposes Relational Language-Image Pre-training(RLIP) a pre-training paradigm for Human-Object Interaction (HOI) Detection. 
This paper posits that this pre-training methodology aims to overcome the gaps left by Object Detection pre-training methodologies that are not fully tailored towards HOI Detection. \n\n-- The paper presents multiple modules to achieve this pre-training objective. Parallel entity detection and Sequential relation inference (ParSe) proposes a DETR-like training architecture that employs query groups for Subject and Object representation and further query groups conditional on these for modeling Relational representations. To enable contrastive learning during training, the paper proposes Label Sequence Extension (LSE), which performs out-of-batch sampling to improve the quality of negative samples. Finally, to address label noise and ambiguity in relationship modeling, the paper proposes employing Relational Quality Labels (RQL) and Relational Pseudo-Labels (RPL) modules during training.\n\n-- The paper reports the benefits of the pre-training methodology for HOI detection methods in fine-tuned, zero-shot and few-shot settings. + The paper leverages the entity descriptions of the Visual Genome (VG) dataset efficiently to improve performance on HOI detection, one of the challenging but relevant problems for scene understanding. The proposed ParSe method leverages the relational entities with context along with subject and object entities to improve <subject-object-relation> prediction.\n\n+ The paper proposes modules to refine the label noise and semantic ambiguity in the VG dataset using the RQL and RPL modules, which is a relevant problem for the VG dataset.\n\n\n+ The paper presents strong results for zero-shot HOI detection over other SoTA approaches, demonstrating the effectiveness of the pre-training method for zero-shot detection. The paper also shows the contribution of each of the modules through ablation studies.\n\n- The proposed method does not show a strong improvement over the CDN method for fully-supervised HOI detection on both the VCOCO and HICO-DET datasets. In fact, the performance of the method goes down when including the COCO dataset for RLIP compared to just the OD pre-training method using ParSe, indicating overfitting of the pre-training method when including COCO.\n\n- The improvement in the zero-shot setting could be down to the strong prior in the distribution provided by the VG dataset, since the zero-shot formulation is based on an OOD rather than an Unseen setting. Including another dataset with triplet entities of Subject, Object, Relation might showcase the robustness of the method to different data distributions.\n\n- The method ranks low in novelty since it reuses many of the existing HOI detection methodologies and adapts them for pre-training using the VG dataset. -- Did the authors try any dataset other than VG with similar entity descriptions to showcase the robustness of the model?\n\n-- The authors could try to include more qualitative examples in the supplementary material to showcase success and failure cases.\n\n-- There are a couple of small changes that could be made in writing\n\n - #L57 'negatives samples' -> 'negative samples'\n - #L144 'We next similarly' -> Next, we similarly -- There are no suggestions on the societal impact front --", " For Human-Object Interaction (HOI) detection, the authors propose Relational Language-Image Pre-training (RLIP), a strategy for contrastive pre-training that leverages both entity and relation descriptions.
\nTo make effective use of such pre-training, they make three technical contributions: \n(1) a new Parallel entity detection and Sequential relation inference (ParSe) architecture that enables the use of both entity and relation descriptions during pre-training; \n(2) a synthetic data generation framework, Label Sequence Extension, that expands the scale of language data available within each minibatch; \n(3) ambiguity-suppression mechanisms, Relation Quality Labels, and Relation Pseudo-Labels, to mitigate the influence of ambiguous/noisy samples in the pre-training data. \nThrough extensive experiments, they demonstrate the benefits of these contributions, collectively termed RLIP-ParSe, for improved zero-shot, few-shot, and fine-tuning HOI detection performance as well as increased robustness to learning from noisy annotations.\n Strength\n\n- The authors show the effectiveness of the proposed method on various setups such as few-shot and zero-shot setups.\n\n- The proposed ParSe model consistently shows favorable performance compared to the existing methods.\n\nWeakness\n\n- The original contribution of the ParSe model is unclear. In Sec.3, the authors should further emphasize the technical significance of the proposed ParSe model compared to the existing architectures. From the current explanation, it is hard to see the novelty of the proposed model.\n\n- In Table 1, in order to more comprehensively validate the effectiveness of the proposed pre-training method with external data, it would be better to compare with the existing methods when using VG data when using VG data as well.\n\n\n========== ------- Comments after the rebuttal ------======== \n\nI have read the other reviewer's comments and the author's rebuttal, which addresses most of my concerns. Therefore, I would like to raise my score.\n In Table 1, even without the RLIP paradigm, how do ParSe and ParseD show favorable performance compared to the previous method? The authors addressed the limitations of the proposed method and the potential negative social impact of their work.", " This paper aims to adopt the language-image contrastive pre-training techniques to boost the performance and robustness of the Human-object Interaction (HOI) detection task. For this purpose, this paper first modifies the conventional DETR-based HOI detection framework by decoupling detection and interaction classification and disentangling the subject and object queries. Then, this paper converts the entity labels and relation labels into text and embeds them into a latent space, and constructs contrastive pairs in or out of batches. Extensive experiments have been conducted in HICO-Det and V-COCO datasets from regular, few-shot, and zero-shot settings. Strengths:\n- The idea and motivation of this paper are generally reasonable.\n- The proposed method can be easy to adapt zero-shot, few-shot, and regular HOI detection tasks.\n- The writing of this paper is generally clear and the proposed method is simple and easy to follow. \n\nWeaknesses\n- The proposed architecture is not new in the HOI detection area, which combines the CDN [1] and PST [2], and the idea is entirely the same as the proposed architecture GEN in the recent paper GEN-VLKT [3]. Though GEN-VLKT is a CVPR 2022 paper, it has been uploaded in Arxiv and released code in Mar. 2022. Thus the author may present a detailed discussion with this paper.\n\n- Limited Application Scenarios. 
The proposed pre-training techniques still require relation triplets annotations, which are expensive to obtain.\n\n- Missing details about OD+VG. The DETR requires a long time to converge, so the author should report the number of epochs to pre-train the DETR in VG. The significant performance gap between OD and RLIP with VG may come from the insufficient pre-training for OD+VG.\n\n- Missing important baseline. The OD+VG baseline only trains a detector in VG with the detection annotation, but RLIP adopts the relation and detection annotations in the VG simultaneously. Thus a fair baseline is to directly to pre-train the ParSe with the VG visual relationship detection based on the COCO pre-trained model.\n\n- An additional large-scale dataset with a minor improvement. Equipped with COCO+VG pre-training, the performance only improved by 1.05 mAP (31.79->32.84).\n\n[1] Mining the Benefits of Two-stage and One-stage HOI Detection. NIPS 2021.\n[2] Visual Relationship Detection Using Part-and-Sum Transformers with Composite Queries. ICCV 2021.\n[3] GEN-VLKT: Simplify Association and Enhance Interaction Understanding for HOI Detection. Arxiv Mar. 2022. - My main concerns and questions lie in the 'Missing important baseline' in the weaknesses. The author should provide the results with the mentioned setting for a fair comparison to verify the effectiveness of the proposed RLIP.\n\n- Confused by the OD+VG performances. VG is a large-scale visual relationship detection dataset, but the performance has dropped a lot when pre-training in this dataset. A good choice may be to train DETR in VG first and then train in COCO because the object labels of HICO-Det are the same as COCO. A detailed analysis of VG should be included. The authors provided limitations in supplementary materials.", " This work proposes a visual-language pre-training method for HOI detection. Following MDETR and CDN, a pipeline named ParSe is proposed to implement contrastive learning upon humans, objects, and relations. Correspondingly, a pos-neg language labels scheme and a language ambiguity suppression method are proposed for the pre-training. On commonly-used benchmarks, the proposed method is compared with SOTAs on both supervised and zero-shot settings. Pros:\n+ The HOI contrastive learning is an interesting topic and would advance the HOI detection, considering the possibility of open-vocabulary visual-language learning.\n\n+ Some reasonable model designs are proposed to handle the pos-neg label scheme, language label processing, etc.\n\n+ Extensive ablation studies are conducted and presented. Some analysis is useful for follow-ups.\n\n+ Borrowing the idea from MDETR into the v-l HOI contrastive learning is non-trivial.\n\nCons:\n- First, is the novelty of the first contribution. As mentioned by the authors too, they follow CLIP, CDN, and MDETR to build the ParSe model. Though under the v-l setting, few novel designs are proposed to bring new insights. \n\n- Second, the experiment. In Tab 1, the comparison between methods is mainly within the setting using Res-50 which is reasonable, but for CDN, some results are missing: CDN-B with Res-50: 27.55 33.05 31.78 (which is comparable with the proposed method 26.36 33.41 31.79 in COCO as PT data); and many previous works are also not compared here. If compared with the above result of CDN, it seems marginal improvement is achieved. Though performance does not mean all, considering the similar structure of the proposed method with CDN, concern raises. 
Meanwhile, if using Res-101 and Swin-tiny, how would ParSe perform, and does it have an advantage compared with CDN and QAHOI as the same series methods. Moreover, ParSe can use extra data in the pre-training, but few improvements are achieved. \n\n- In the discussion of the first contribution, in fact, many previous methods (iCAN, TIN, etc) adopted the separated representations for a human, object, and relations. Even using a transformer, the differences and similarities should also be discussed. And CDN is another case, which is followed by this work. Thus, I suggest revising the part about the first contribution.\nAs for the relation contrastive learning, there are some recent works like Contrastive Visual and Language Translational Embeddings for Visual Relationship Detection, Unsupervised Vision-Language Parsing: Seamlessly Bridging Visual Scene Graphs with Language Structures via Dependency Relationships (just for your information, does need to discuss as they are too new).\nL84: open-vocabulary recognition remains underexplored. This point is also open to discussion, as VCL, FCL, Detecting Unseen Visual Relations Using Analogies, etc. also have considered open h-v-o scenarios.\nL88: rendering it suboptimal for RLIP. Why, please give a more detailed discussion.\n\n- Method part is somewhat hard to follow as the dense information presented is a short section. And many adopted methods are just given a citation without a brief introduction, which makes the reading interrupted.\n\n- Prompt diversity needs a detailed analysis. \n\n- Is the zero-shot setting fair (tab 2) as the proposed method can use extra data and the other method do not? Please clarify this.\n\nTypo: L63 Parse --> ParSe 1. Efficiency of the sequential h-o pairing? \n\n2. How similar of the pos and neg prompts? Need an analysis.\n\n3. As the pre-training using VG, many zero-shot relations of HICO-DET maybe not zero-shot no more, as there may be many similar and even same relations in VG for training. So a detailed analysis should be taken to probe the **real \"zero-shotness\"**.\n\n4. In Eq. 5, is the Euclidean distance the best choice? How about the others like cosine distance? N/A." ]
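The replies above describe aligning decoupled visual triplet representations (subject, object, relation queries) with textual label embeddings, trained with cross-entropy-style contrastive losses. As an editorial illustration of that label-matching step, a minimal PyTorch-style sketch follows; it is not the authors' implementation (the real method additionally uses focal loss, Label Sequence Extension, and relational quality/pseudo-labels), and all function and tensor names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def align_queries_to_text(query_feats, text_embeds, target_idx, temperature=0.07):
    """Match decoded (e.g., relation) query features to label text embeddings.

    query_feats: (n, d) visual features from decoupled relation queries.
    text_embeds: (c, d) embeddings of the candidate free-form label texts.
    target_idx:  (n,) index of the ground-truth label text for each query.
    Cosine-similarity logits are trained with cross-entropy, so each query is
    pulled toward its own label text and pushed away from the other labels.
    """
    q = F.normalize(query_feats, dim=-1)
    t = F.normalize(text_embeds, dim=-1)
    logits = q @ t.t() / temperature   # (n, c) similarity logits
    return F.cross_entropy(logits, target_idx)
```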
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "nips_2022_dozWFpOJcOD", "LAzp5br9CW8", "xmBATmLpk1p", "hL0EmVH3E0", "65TjhInlmmE", "ZjqTlwGsx5X", "iVzx1vmNwZ5", "iVzx1vmNwZ5", "iVzx1vmNwZ5", "LAzp5br9CW8", "RmJBg29XtbWH", "SiYJ9a797MCO", "SiYJ9a797MCO", "7JUh8jxc6P", "2GV0YnWQDMi", "459_oI38wZk", "r0X9ieSkI3G", "r0X9ieSkI3G", "r0X9ieSkI3G", "vBW5KiYxzHd", "vBW5KiYxzHd", "vBW5KiYxzHd", "LAzp5br9CW8", "hL0EmVH3E0", "hL0EmVH3E0", "nips_2022_dozWFpOJcOD", "nips_2022_dozWFpOJcOD", "nips_2022_dozWFpOJcOD", "nips_2022_dozWFpOJcOD" ]
nips_2022_1tIUqrUuJxx
Dynamic Graph Neural Networks Under Spatio-Temporal Distribution Shift
Dynamic graph neural networks (DyGNNs) have demonstrated powerful predictive abilities by exploiting graph structural and temporal dynamics. However, the existing DyGNNs fail to handle distribution shifts, which naturally exist in dynamic graphs, mainly because the patterns exploited by DyGNNs may be variant with respect to labels under distribution shifts. In this paper, we propose to handle spatio-temporal distribution shifts in dynamic graphs by discovering and utilizing {\it invariant patterns}, i.e., structures and features whose predictive abilities are stable across distribution shifts, which faces two key challenges: 1) How to discover the complex variant and invariant spatio-temporal patterns in dynamic graphs, which involve both time-varying graph structures and node features. 2) How to handle spatio-temporal distribution shifts with the discovered variant and invariant patterns. To tackle these challenges, we propose the Disentangled Intervention-based Dynamic graph Attention networks (DIDA). Our proposed method can effectively handle spatio-temporal distribution shifts in dynamic graphs by discovering and fully utilizing invariant spatio-temporal patterns. Specifically, we first propose a disentangled spatio-temporal attention network to capture the variant and invariant patterns. Then, we design a spatio-temporal intervention mechanism to create multiple interventional distributions by sampling and reassembling variant patterns across neighborhoods and time stamps to eliminate the spurious impacts of variant patterns. Lastly, we propose an invariance regularization term to minimize the variance of predictions in intervened distributions so that our model can make predictions based on invariant patterns with stable predictive abilities and therefore handle distribution shifts. Experiments on three real-world datasets and one synthetic dataset demonstrate the superiority of our method over state-of-the-art baselines under distribution shifts. Our work is the first study of spatio-temporal distribution shifts in dynamic graphs, to the best of our knowledge.
Accept
The paper addresses spatio-temporal distribution shifts in dynamic graphs by discovering and utilizing invariant patterns, i.e., structures and features whose predictive abilities are stable across distribution shifts. The paper is an early attempt to address distribution shifts in dynamic graphs, which is an interesting and important problem. The experiments show the effectiveness of the proposed methods on synthetic and real-world graphs. The authors are strongly encouraged to add more discussion on the experiments and baselines, related work, definitions of 'ego-graph', 'distribution shifts', 'invariant and variant structural patterns', computational complexity, and the other clarifications requested by the reviewers in the final version.
train
[ "OTXJw4S6ZGZ", "1HLyYoLO9l9", "KgEybT3zjO", "HX3HQ0SV8D", "Be13RQ55AvO", "7HprgfhVUhu", "XjgVTbz8wTY", "PA0Y4Utyceh" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Q1: The invariant pattern can be further divided into the time dependent and time independent ones.**\n\nA1: Thank you for your comment. We agree that further analyzing and dividing the invariant patterns as you suggest is an interesting idea. As our paper is the first work on studying spatio-temporal distribution shifts in dynamic GNNs, we leave such further explorations as promising future works. \n\n**Q2: The distribution shift in the experimental datasets are manually conducted, it would be better to have some automatically designed mechanism.**\n\nA2: Thank you for your suggestions. We agree that automatically designed mechanism to add distribution shifts are important for research on the generalization of dynamic GNNs. However, since there is currently no prior work on designing the generation mechanism of distribution shift on dynamic graphs, we propose this manual distribution shift in experiments, which we believe is simple yet reasonable. The experimental results also suggest that such manual distribution shifts can differentiate the generalization ability of different models to a certain extent. We leave designing more automated generation mechanisms for distribution shifts on dynamic graphs as future works. \n\n**Q3: Can the proposed model be generalized to the continuous dynamic graph?**\n\nA3: Thank you for your question. We agree that continuous dynamic graph is also an important research problem. As the first work to study spatio-temporal distribution shifts in dynamic GNNs, we currently focus on conducting experiments in discrete dynamic graphs. One possible extension of our method to continuous dynamic graphs may be adopting a continuous time-encoding technique and a continuous dynamic graph predictor, which we leave as future explorations.\n\n**Q4:What is the variant and invariant pattern in dynamic graph? Is there any common understanding rather than the specific graph type?**\n\nA4: Thank you for your question. Invariant patterns generally refer to parts of the data that are sufficiently predictive, whose relationships with labels are stable across distribution shifts. For dynamic graphs, we define invariant patterns as subsets of ego-graphs across time stamps whose predictivity to labels are stable across time periods and graph communities. Here we also provide some conceptual examples. In road networks, for example, two traffic jams in different places and times may happen simultaneously by chance or there can be causal relations, e.g., the road structure let one traffic jam to block other roads and inevitably lead to another traffic jam. Only the latter case forms invariant patterns and can be used for stable predictions. Take recommendation systems for another example. Users' purchase of a sequence of items may be correlational or there can exist stable and invariant patterns, e.g., first buy a main product and then buy the accessories of the main product. In the case study shown in Appendix C.5, we show that DIDA can summarize invariant patterns in the temporal and neighborhood structure to capture the users' interests in shopping and make predictions of future interactions by matching the summarized recent interests, leading to better generalization abilities. \n\n", " **Q1: Strict proof or detailed illustration to show why spatio-temporal intervention works.**\n\nA1: Thank you for your comment. We would like to clarify how our proposed spatio-temporal intervention works. 
In short, our proposed method utilizes the do-calculus from causal theory to cut the backdoor path from variant patterns to labels and help the model to focus on the invariant patterns to labels. Based on the invariance literature, our proposed method can alleviate the harm of variant patterns under spatio-temporal distribution shifts and improve the generalization ability. We provide these analyses and some background knowledge of causal theory in Appendix B. We also give a case study to illustrate that DIDA learns to exploit invariant patterns to make predictions in Appendix C.5. We agree that strict theoretical analyses could further enhance our paper. Considering that this is the first work in studying spatio-temporal distribution shifts in dynamic graphs, we leave such explorations as promising future works.\n\n**Q2: Computational complexity is not discussed in the main contents.**\n\nA2: Thank you for your comment. Following your suggestions, we analyze the computational complexity of our proposed method as follows. Denote $|V|$ and $|E|$ as the total number of nodes and edges in the graph, respectively, and $d$ as the dimensionality of the hidden representation. The spatio-temporal aggregation has a time complexity of $O(|E|d+|V|d^2)$. The disentangled component adds a constant multiplier $2$, which does not affect the time complexity of aggregation. Denote $|E_p|$ as the number of edges to predict and $|S|$ as the size of the intervention set. Our intervention mechanism has a time complexity of $O(|E_p||S|d)$ in training, and does not put extra time complexity in inference. Therefore, the overall time complexity of our method is $O(|E|d+|V|d^2 + |E_p||S|d)$. Notice that $|S|$ is a hyper-parameter and is usually set as a small constant. In summary, our proposed method has a linear time complexity with respect to the number of nodes and edges, which is on par with the existing dynamic GNNs. Empirically, we also find that our intervention mechanism does not put much extra computational costs as shown in Appendix C.3. We will add this discussion in the revised version.\n\n**Q3: In equation (6), why are the expressions for m_i and m_v identical?**\n\nA3: Thank you for your comment. In the main paper, Eq. (6) is \n\n$$\\mathbf{m}_{I}=\\operatorname{Softmax}\\left(\\frac{\\mathbf{q} \\cdot \\mathbf{k}^{T}}{\\sqrt{d}}\\right) $$\n\n$$\\mathbf{m}_{V}=\\operatorname{Softmax}\\left(-\\frac{\\mathbf{q} \\cdot \\mathbf{k}^{T}}{\\sqrt{d}}\\right)$$\n, where it should be noticed that $\\mathbf{m}_V$ and $\\mathbf{m}_I$ differ in a minus sign in the Softmax function. Our design objective is to let dynamic neighbors with higher attention scores be in the invariant patterns, and let those with lower attention scores be in variant ones. Therefore, the invariant and variant patterns have a negative correlation and capture complementary information. \n\n**Q4: Failing to find the accurate definition of 'ego-graph', 'distribution shifts', 'invariant and variant structural patterns' , etc. As a result, it is not easy to understand this paper correctly without reading several previous papers.**\n\nA4: Thank you for your comments. We clarify these concepts as follows. 'distribution shifts' describes that the training and testing data distributions are inconsistent, i.e. $p_{train}(\\mathbf{X},\\mathbf{Y})\\neq p_{test}(\\mathbf{X},\\mathbf{Y})$, so that minimizing empirical risks in the training datasets may not lead to good results in the test datasets. 
'invariant patterns' generally refer to sufficiently predictive parts of the data whose relationships with labels across distribution shifts are stable. For dynamic graphs, we define 'invariant structural patterns' as a subset of ego-graphs across time stamps whose predictive patterns to labels are stable across time periods and graph communities, while 'variant structural patterns' are the complement of invariant structural patterns so that their relationships are unstable, i.e., spurious correlations. An ego-graph is formally defined as $\\mathcal{G}_v=(\\mathbf{X}_v,\\mathbf{A}_v)$ where $\\mathbf{A}_v$ is the adjacency matrix including all edges in node $v$'s $L$-hop neighbors $\\mathcal{N}_v$ (where $L$ is an arbitrary integer) and $\\mathbf{X}_v$ includes the features of nodes in $\\mathcal{N}_v$. We will clarify these expressions in the revised version. \n", " **Q1: More related works**\n\nA1: Thank you for your suggestion. We will add more related works in the revised version.\n\n**Q2: Better to give the details for the baselines (e.g., the differences with the proposed model), as well as more details for the datasets (e.g., give a statistic table) in the main paper.**\n\nA2: Thank you for your comments. We agree that incorporating these details can further improve our paper. However, due to the page limit, currently we are only able to include them in the appendix. We will reorganize and move them from the appendix into the main paper when the page limit permits.\n", " **Q1.1:Why IRM and GroupDRO achieve inferior performance under the \"w/ DS\" setting?**\n\nA1.1: Thank you for your question. IRM and GroupDRO rely on ground-truth environment labels to achieve OOD generalization. Since they are unavailable for real dynamic graphs, we follow the literature and use random environment labels for IRM and GroupDRO in our experiments. The inferior performance indicates that IRM and GroupDRO cannot generalize well without accurate environment labels, which verifies that lacking environmental labels is a key challenge for handling distribution shifts of dynamic graphs. We will add this discussion in the revised version.\n\n**Q1.2:Why the compared methods show different trends on the real-world and synthetic datasets, e.g., GCRN performs quite well on the synthetic datasets?**\n\nA1.2: Thank you for your question. Compared to real-world datasets, synthetic datasets have manually designed distribution shifts. A plausible reason for the inconsistent performance of GCRN is that the model manages to capture the manually designed distribution shift in synthetic graphs, but fails to tackle the more complex distribution shifts in real-world datasets. We will add this discussion in the revised version.\n\n**Q2:As compared to [18], what is the advantage of the proposed method?**\n\nA2: Thank you for your comment. EERM [18] proposes multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments so that the model can extrapolate from a single observed environment. It shows a strong generalization ability in node-level predictions. However, EERM is designed for static graphs, and can not be directly applied to dynamic graphs where spatial-temporal distribution shifts exist. In comparison, our method is specially designed for dynamic graph and achieve strong performance of tackling spatio-temporal distribution shifts on dynamic graphs. 
We will add this discussion in the revised version.\n", " This work studies the spatio-temporal distribution shift issue of dynamic graph neural networks. To pursue the robustness of DyGNNs, the authors proposed a specific invariant learning method and conducted experiments on both real-world and synthetic datasets. Strong points:\n1. This paper reveals the impact of distribution drift in DyGNNs, which forms a new research problem.\n2. This paper presents a new method for training distributionally robust DyGNNs.\n3. Extensive experiments validate the effectiveness of the proposed method.\n\nWeak points:\n1. The experiment results need more explanations. For instance, why IRM and GroupDRO achieve inferior performance under the \"w/ DS\" setting. Why the compared methods show different trends on the real-world and synthetic datasets, e.g., GCRN performs quite well on the synthetic datasets.\n2. As compared to [18], what is the advantage of the proposed method.\n 1. The experiment results need more explanations. For instance, why IRM and GroupDRO achieve inferior performance under the \"w/ DS\" setting. Why the compared methods show different trends on the real-world and synthetic datasets, e.g., GCRN performs quite well on the synthetic datasets.\n2. As compared to [18], what is the advantage of the proposed method.\n No.", " This paper investigates graph neural networks on dynamic graphs, especially under spatio-temporal distribution shifts. The authors recognize that distribution shift is an important factor for dynamic graph embedding, which is not well-handled by the existing approaches. To address this, the authors propose a novel model named DIDA to handle spatio-temporal distribution shifts in dynamic graphs by discovering and fully utilizing invariant spatio-temporal patterns. Experiments on four datasets demonstrate the effectiveness of the proposed model. Strengths:\n\n1. The paper is well-written and easy to follow.\n2. Dealing with the spatio-temporal information on dynamic graphs from the perspective of discovering and utilizing invariant patterns might, I feel, be an effective direction.\n3. The experiments are sufficient to demonstrate the performance of the proposed model.\n\nWeaknesses:\n\n1. The related studies discussed in Related Work are not quite sufficient. I suggest the authors cite and discuss more.\n2. It is better to give the details for the baselines (e.g., the differences with the proposed model), as well as more details for the datasets (e.g., give a statistic table) in the main paper. Please see the weaknesses. None", " This paper introduces a method of dynamic graph neural networks with a spatio-temporal intervention mechanism. Strength:\n(1) the empirical study shows considerable improvement over existing methods.\n(2) Innovative use of attention layers to capture spatio-temporal information.\n\nWeakness:\n(1) no strict proof or detailed illustration to show why spatio-temporal intervention works.\n(2) computational complexity is not discussed in the main contents. In equation (6), why are the expressions for m_i and m_v identical? 
As a result, it is not easy to understand this paper correctly without reading several previous papers.", " This paper studies the problem of spatio-temporal distribution shift in dynamic graphs.\nBy disentangling the patterns in dynamic graphs into invariant and variant ones, the invariant patterns are utilized for stable prediction and the impact of distribution shift can be reduced.\nAlthough the distribution shift has been widely studied in the literature on computer vision and natural language processing, the authors made early attempts in the dynamic graph.\nI am overall positive about this work. Strengths\n\n[+] The distribution shift in the dynamic graph is critical and the authors made the early attempts on this topic, which should be encouraged in the community.\n\n[+] The proposed solution is technical sound, the disentanglement as well as the causal spatio-temporal intervention mechanism can satisfy the requirements.\n\n[+] The experiments are extensive and the results are encouraging.\n\nWeaknesses\n\n[-] The invariant pattern is assumed to be dependent on the time. From my opinion of view, the invariant pattern can be futher divided into the time dependent and time independent ones.\n\n[-] The distribution shift in the experimental datasets are manually conducted,it would be better to have some automatically designed mechanism. 1. Can the proposed model generalized to the continuous dynamic graph?\n2. What is the variant and invariant pattern in dynamic graph? Is there any common understanding rather than the specific graph type? The core definition of variant and invariant are not well explained, which limits the generalization and scalability of the proposed method.\n" ]
[ -1, -1, -1, -1, 6, 7, 5, 7 ]
[ -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "PA0Y4Utyceh", "XjgVTbz8wTY", "7HprgfhVUhu", "Be13RQ55AvO", "nips_2022_1tIUqrUuJxx", "nips_2022_1tIUqrUuJxx", "nips_2022_1tIUqrUuJxx", "nips_2022_1tIUqrUuJxx" ]
nips_2022_V0GwAmDclY
Mix and Reason: Reasoning over Semantic Topology with Data Mixing for Domain Generalization
Domain generalization (DG) enables generalizing a learning machine from multiple seen source domains to an unseen target one. The general objective of DG methods is to learn semantic representations that are independent of domain labels, which is theoretically sound but empirically challenged due to the complex mixture of common and domain-specific factors. Although disentangling the representations into two disjoint parts has been gaining momentum in DG, the strong presumption over the data limits its efficacy in many real-world scenarios. In this paper, we propose Mix and Reason (MiRe), a new DG framework that learns semantic representations via enforcing the structural invariance of semantic topology. MiRe consists of two key components, namely, Category-aware Data Mixing (CDM) and Adaptive Semantic Topology Refinement (ASTR). CDM mixes two images from different domains in virtue of activation maps generated by two complementary classification losses, making the classifier focus on the representations of semantic objects. ASTR introduces relation graphs to represent semantic topology, which is progressively refined via the interactions between local feature aggregation and global cross-domain relational reasoning. Experiments on multiple DG benchmarks validate the effectiveness and robustness of the proposed MiRe.
Accept
Some of the reviewers had concerns about novelty, and one of the reviewers was worried about the care taken in training a baseline. However, another reviewer has a strong positive opinion of the work, and I believe the authors have made a good effort in the rebuttal to address the concerns about baselines. I am recommending acceptance, but I expect the authors to release code to allow further scrutiny w.r.t. the baselines.
train
[ "855brQq01nk", "hXnb747Gj5", "VWPk2P2okb", "dWMpRJCMSss", "S2Z7cRsjYD", "h9ubU-rWJTe", "0bEcBQXYkem", "xpXcCGcxGjgo", "BLlgIBSSKvv", "--9i3pChYHZ", "mu0MlNypcjl", "eqp6GICYoGg", "J-uJR5axuE-E", "mYc9QRSbGv4", "AHOInLLhnC4", "xoOy7AgJQsk", "_HI031sf3GH", "6A7Tnlfyq-K", "Bd4y922KDc", "xI84Kufi91h", "IoGjQ2WD_Eg", "Hpru00A7Uh6", "3gj03t8gHcw" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer GJ2F,\n\nWe are wondering if you could give some comments and final thoughts about our previous discussion. Please let us know if you have further follow-up discussions. We would be immensely grateful if you could raise the rating to reflect the contributions of our paper. \n\nThank you very much.\n\nBest regards,\n\nAuthors", " We thank Reviewer Ywq6 for the prompt response and engagement. Much appreciated. Please let us know if you have further comments or suggestions that have an influence on the final rating.\n", " Thank you for your time reviewing our paper and for your valuable discussions. We answer your additional questions as follows.\n\n> **Q1. The results of ERM on PACS and VLCS datasets in Part2.**\n\nAccording to the the reported results of a number of previous DG works and the careful checking of our source codes many times, we believe that the ERM results in our paper are reasonable. We do appreciate it if the reviewer could double-check his/her codes, e.g., whether the parts of MixStyle [1] are commented out (L197, L200, and L203 in https://github.com/KaiyangZhou/Dassl.pytorch/blob/master/dassl/modeling/backbone/resnet.py). Noting that the original training-validation split (https://drive.google.com/drive/folders/0B6x7gtvErXgfUU1WcGY5SzdwZVk?resourcekey=0-2fvpQY_QSyJf2uIECzqPuQ) provided by [2] is used. Also, we used the last-step checkpoint to perform an evaluation on the target domain and obtained the following results, which have a negligible difference compared to the training-domain validation setting (i.e., select the model that exhibits the best performance in the validation dataset).\n\n| Method | Art | Cartoon | Photo | Sketch | Avg |\n| ------------------- | :--- | ------- | :---- | ------ | ---- |\n| DeepAll (last step) | 78.9 | 75.6 | 96.2 | 67.0 | 79.4 |\n\nWe provided the trained models and corresponding training logs in this anonymous link (https://drive.google.com/file/d/188MlrDgywunE8nDtD4o0BaxMlIGtPJ01/view?usp=sharing).\n\nOn the PACS dataset, we used the source codes of MixStyle and trained the model for 50 epochs. Then, we used the last-step checkpoint and obtained the following results.\n\n| Method | Art | Cartoon | Photo | Sketch | Avg |\n| -------- | :--- | ------- | :---- | ------ | ---- |\n| MixStyle | 82.1 | 78.6 | 96.9 | 73.3 | 82.7 |\n\n> **Q2. Do you keep the training parameters of DeepAll the same as MiRe's? I believe MiRe's hyperparameters are fully tuned, but I am not sure about your DeepAll training strategy.**\n\nWe are sure that the training parameters of DeepAll are the same as MiRe's after a careful check.\n\n***\n\n[1] Domain Generalization with MixStyle. In ICLR, 2021.\n\n[2] Deeper, broader and artier domain generalization. In ICCV, 2017. ", " Thanks for the authors' responses. The authors have addressed my main concern regarding Part 1. However, I am still doubtful about the results of ERM on PACS and VLCS datasets in Part2. I trained the model for 50 epochs and used the last-step checkpoint (same as [1]). For the ResNet-18 backbone, test domain accuracies are 82.7% on PACS and 74.5% on VLCS, respectively, without any complex data augmentation. For the ResNet-50 backbone, my result on the PACS dataset is consistent with the reported result in [2] with training-domain validation set as model selection.\n\nDo you keep the training parameters of DeepAll the same as MiRe's? I believe MiRe's hyperparameters are fully tuned, but I am not sure about your DeepAll training strategy.\n\n[1] K. Zhou, Y. Yang, Y. Qiao, and T. 
Xiang, “Domain Generalization with MixStyle,” in ICLR, 2021.\n\n[2] I. Gulrajani and D. Lopez-Paz, “In Search of Lost Domain Generalization,” in ICLR, 2021.", " I appreciate your answers to other questions. But I think 1/8 is definitely a hyperparameter even if you do not tune it (then other people can, right?).\n\nThere should be an ablation study in the future (not in this rebuttal given so limited time left) to validate metrics other than cosine.\n\nAppreciate your memory and flops comparison table.", " > **Q1. From the table of cropping 1/8 results, it seems 1/6 can also work well (only .1 difference). Therefore, 1/8 is a hyperparameter, right? But in your response to Q5, you said that 1/8 is not a hyperparameter, which seems in contradictory. Can you explain this?**\n\nThank you for pointing out this problem. In experiments, we set the cropping ratio to a fixed value and **do not tune it on any datasets**. Thus, we said it is not a hyperparameter in our original design. In our response, following your suggestions, we evaluated the robustness of the proposed method to the variations of this parameter. To avoid the potential confusion or even contradiction, this parameter will be seen as a hyperparameter in our final version.\n\n> **Q2. My question \"why adopting cos(⋅,⋅)\" distance is still not answered.**\n\nWe apologize for missing this question. Computing the similarity of two prototypes (under the presence of domain shifts) in the shared latent space based on cosine distance is a common method since cosine distance is domain-unrelated and insensitive to the feature dimension.\n\n> **Q3. Regarding the computational concern, why not show the time complexity and memory consumption, which will be more obvious.**\n\nThanks for your kind advice. As suggested, we compare the proposed method to several state-of-the-art DG methods in terms of number of parameters (\\#params.) and FLOPs. We report the comparison results on the PACS dataset (ResNet-18) as follows.\n\n| Method | \\#params. | FLOPs |\n| ------------ | :-------- | ----- |\n| SagNet [1] | 22.7M | 4.3G |\n| MixStyle [2] | 11.2M | 2.1G |\n| EFDM [3] | 11.2M | 1.9G |\n| Ours | 12.2M | 2.3G |\n\n***\n\n**Reference:**\n\n[1] Reducing Domain Gap by Reducing Style Bias. In CVPR, 2021.\n\n[2] Domain Generalization with MixStyle. In ICLR, 2021.\n\n[3] Exact Feature Distribution Matching for Arbitrary Style Transfer and Domain Generalization. In CVPR, 2022.", " > **Q1. I think you should avoid the usage of \"data-dependent\" since this method is definitely data-dependent. Echoing my previous response, this approach will fail if applied to non-image samples. Thus, it is data-dependent. Specifically, why do you think data-dependent is not good? I do not agree that previous data-dependent approaches are data-dependent is not bad.**\n\nThanks for your kind advice. In our context (L36 in the main paper), data-dependent means that the types of spurious correlations may be distinct across different datasets. More specifically, different datasets usually have different domain-specific factors, and the potential spurious correlations between these factors and the semantic label may be distinct across datasets. Thus, we call it ''data-dependent spurious correlations''. Here, this word might lead to some misunderstandings and we will remove it in the final version.\n\n**Q2. 
With regards to diversity, I think you should go beyond images and think more boldly: where does the diversity come from and how we can respond to them?**\n\nThank you for this great question. Let us use sound separation as an example. Individual sounds are usually mixed with background noises and it is prohibitively difficult to directly disentangle them without prior knowledge of the source characteristics [1]. In this case, the idea of CDM (i.e., putting the target object in different backgrounds via recombination) can also applied to improve the diversity of training data and thus benefit the task of (unsupervised) sound separation. \n\n\n\n[1] Unsupervised Sound Separation Using Mixtures of Mixtures. In NeurIPS, 2020. ", " We understand the reviewer's concern and believe that the results on DomainBed (in the supplementary) is capable of addressing this potential defect. On the other hand, we would like to raise the reviewer's attention that some of state-of-the-art methods only compare to small number of baseline methods. For example, EFDM [1] compares to three baseline methods and SagNet [2] compares five baseline methods. By contrast, in our main paper, we try to include almost all types of DG methods, resulting in the relatively different comparison methods on different datasets. \n\n***\n\n**Reference:**\n\n[1] Exact Feature Distribution Matching for Arbitrary Style Transfer and Domain Generalization. In CVPR, 2022.\n\n[2] Reducing Domain Gap by Reducing Style Bias. In CVPR, 2021.", " Thank you for the responsive reply. We hope these discussions could help the reviewer reconsider the score and would be very happy to answer any further questions. We answer your additional questions as follows.\n\n> **Q1. The idea of \"Mixing data in virtue of the complementary effect of between class and domain labels\" is not explored before. But the motivation is still not clear since this method cannot be generic: it can only be applied to images, while other mixup-based approaches can be applied to all domains of data. Therefore, I question the applicability of this approach. If the inputs are time series or natural language, then this approach will fail.**\n\n* We agree that CDM is originally designed for vision tasks, such as image classification (natural and medical images in the main paper) and semantic segmentation (urban scene understanding in the supplementary). In this regard, experiments in the main paper have extensively evaluate the effectiveness of CDM. Due to the significant difference between image and other modalities (such as time series and natural language), it is prohibitively difficult for us to explore CDM in broader ranges. In fact, we do not claim that our CDM is a variant of mixup and can be applicable to different modalities. Instead, our motivation is to solve the potential spurious correlations in **images** (object recognition tasks), which has been clearly stated and verified in our introduction and experiments. \n\n* On the other hand, the logic that 'the applicability of a DG approach is questionable as it cannot applied to different modalities' also goes for a number of state-of-the-art methods. For example, EFDM [1] proposes to match the empirical Cumulative Distribution Functions of image features. SagNet [2] disentangles style encodings from class categories to prevent style biased predictions and focus more on the contents of images. 
JiGen [3] solves the task of object recognition across domains by introducing self-supervised signals regarding how to solve a jigsaw puzzle on the same images. Similarly, our paper focuses on the task of **object recognition** across domains, which is an important yet challenging problem for the DG community. \n\n* At a last point, the motivation of mixing data for addressing domain generalized object recognition tasks is clear and has been demonstrated in our main paper. First, conventional DG methods strive to enforce domain- or class-wise invariance but are susceptible to include some misleading spurious correlations as the complex combinations of domain-specific and common factors lack in-depth exploration. Second, many recent efforts are devoted to disentangles common and domain-specific factors but face critical challenges in the real-world cases (cf. L34-52). \n\nOverall, the proposed CDM focuses on the task of object recognition across domains and can be applied to both natural and medical images. \n\n> **Q2. Indeed, when applying mixup to two samples, we often do not operate on the label space; instead, we use the loss mixup. This can be seen a trick.**\n\nVanilla mixup aims to conduct data interpolation via convex combinations of pairs of examples and **their labels**. By contrast, the proposed CDM targets on generating diverse training samples by replacing the background of a certain image with a randomly cropped patch from other images but keeps its object label fixed. \n\n***\n\n**Reference:**\n\n[1] Exact Feature Distribution Matching for Arbitrary Style Transfer and Domain Generalization. In CVPR, 2022.\n\n[2] Reducing Domain Gap by Reducing Style Bias. In CVPR, 2021.\n\n[3] Domain Generalization by Solving Jigsaw Puzzles. In CVPR, 2019.", " I appreciate the author's response. But I think a fair comparison will be comparing with all common benchmarks, instead of specific choice on different datasets. Thus, it is better to put them all with the same baselines. I also appreciate the explanation on using mixstyle codebase.", " 1. I think you should avoid the usage of \"data-dependent\" since this method is definitely data-dependent. Echoing my previous response, this approach will fail if applied to non-image samples. Thus, it is data-dependent. Specifically, why do you think data-dependent is not good? I do not agree that previous data-dependent approaches are data-dependent is not bad.\n2. With regards to diversity, I think you should go beyond images and think more boldly: where does the diversity come from and how we can respond to them?", " 1. From the table of cropping 1/8 results, it seems 1/6 can also work well (only .1 difference). Therefore, 1/8 is a hyperparameter, right? But in your response to Q5, you said that 1/8 is not a hyperparameter, which seems in contradictory. Can you explain this?\n2. My question \"why adopting $cos(\\cdot, \\cdot)$\" distance is still not answered.\n3. Regarding the computational concern, why not show the time complexity and memory consumption, which will be more obvious.\n4. I appreciate the response to the hyperparameter robustness.", " Thanks for this detailed feedback! I still have the following questions:\n\n1. The idea of \"Mixing data in virtue of the complementary effect of between class and domain labels\" is not explored before. But the motivation is still not clear since this method cannot be generic: it can only be applied to images, while other mixup-based approaches can be applied to all domains of data. 
Therefore, I question the applicability of this approach. If the inputs are time series or natural language, then this approach will fail.\n2. Indeed, when applying mixup to two samples, we often do not operate on the label space; instead, we use the loss mixup. This can be seen a trick.", " Thank you for your insightful review and for recognizing our novel contributions. Below please find our point-by-point response to your comments.\n\n> **Q1. Would it be possible to show some visualization on what the \"semantic topology\" looks like or what was learned by the ASTR module?**\n\nWe greatly appreciate this suggestion, and we have added some visualization results to the revised supplementary. As shown in Fig. 2 of the supplementary, we visualize the relations (edge weights) of semantic anchors (nodes) on the PACS dataset. From the figure, we can observe that those semantically similar anchors will be assigned with larger weights, while those dissimilar anchors will be assigned with small weights. More importantly, such relations will be generalizable across domains, fitting the human intelligence that humans are talented at comparing and reasoning when learning new concepts.\n\n> **Q2. In Table 7, only ASTR is evaluated. Does CDM also work for the medical datasets?**\n\nCDM requires that the images should be comprised of the foreground objects and the background. In that sense, not all medical datasets satisfy this requirement as some medical images do not have explicit lesion regions. For example, in the task of tuberculosis diagnosis, attributes like atelectasis and pulmonary cavitation cannot be characterized by class and domain labels. By contrast, tasks like benign-malignant classification of pulmonary nodules in chest CT images that have explicit lesion regions could be used to evaluate our CDM. We will try to include more medical datasets in the final version. \n\n> **Q3. Regarding the use of Grad-CAM, does it always give accurate masks? Are there any failure cases? And what happens if Grad-CAM doesn't produce accurate masks?**\n\nGrad-CAM aims to provide the attentive regions of domain and class classification losses. Here, we leverage their complementary effects to depict the complete foreground regions. However, we cannot guarantee that Grad-CAM will always give accurate masks. As the proposed CDM explicitly mixes the foreground and background regions of different images, the potential inaccurate masks could increase the diversity of the generated data. The ablation of CDM (cf. Table 5) and its combination with prior augmentation-based works (cf. Table 6) provide empirical support, verifying that CDM is robust and scalable to different domain generalization scenarios. ", " We thank the reviewer for helping improve our paper and appreciate that they recognized the novelty and value of our work. We address the concerns as follows and will revise the manuscript accordingly. We sincerely hope that the reviewer raises any further questions if he/she is still confused by our answers.\n\n> **Q1. The standard deviation (std) over 10 runs is missing. Considering most datasets are small, such as PACS and VLCS, it is more convincing to report the average accuracy with std for comparing model performances.**\n\nThanks for your kind advice. We have included results of std (over 10 runs) in Table 1-4 in the revised manuscript. We summarize the results as follows: \n\n**Table 1. 
Domain Generalization results on PACS benchmark.**\n\n| Art | Cartoon | Photo | Sketch | Avg |\n| ------------ | :----------- | ------------ | :----------- | ---- |\n| 84.6$\\pm$0.5 | 79.5$\\pm$0.4 | 96.8$\\pm$0.2 | 78.4$\\pm$1.0 | 84.8 |\n\n**Table 2. Domain Generalization results on VLCS benchmark.**\n\n| VOC | LabelMe | Caltech | Sun | Avg |\n| ------------ | :----------- | ------------ | :----------- | ---- |\n| 70.3$\\pm$0.3 | 63.6$\\pm$0.7 | 96.2$\\pm$0.4 | 69.4$\\pm$0.7 | 74.9 |\n\n**Table 3. Domain Generalization results on Office-Home benchmark.**\n\n| Art | Clipart | Product | Real | Avg |\n| ------------ | :----------- | ------------ | :----------- | ---- |\n| 60.2$\\pm$0.8 | 53.2$\\pm$0.9 | 75.1$\\pm$0.6 | 76.4$\\pm$0.6 | 66.2 |\n\n**Table 4. Domain Generalization results on DomainNet benchmark.**\n\n| Clipart | Infograph | Quickdraw | Painting | Real | Sketch | Avg |\n| ------------ | :----------- | ------------ | ------------ | ------------ | :----------- | ---- |\n| 64.7$\\pm$1.0 | 27.8$\\pm$1.2 | 53.1$\\pm$0.8 | 16.4$\\pm$1.5 | 64.1$\\pm$0.6 | 52.3$\\pm$0.9 | 46.4 |\n\nFrom the table, we can see that the proposed method is insensitive to the random seed on most benchmark datasets, revealing the efficacy and robustness of our data mixing and structural relation modeling modules.\n\n> **Q2. Some opinions in this paper are weak and unconvincing.**\n\n> Q2.1: Domain-wise invariance cannot guarantee generalizable representations. The authors argue that such invariance may be susceptible to including some misleading spurious correlations. This situation may occur in simulated data, where some semantically independent properties exist across all source domains. However, spurious correlation is difficult to hold simultaneously across all source domains in real-world cases, so it is reasonable to expect feature extractors to learn more semantic information through domain-invariant representation learning.\n\nWe greatly appreciate this suggestion, and we have tune-downed our claim in the updated manuscript. Our key insight is that domain-wise and category-wise invariance are pairwise (one-vs-one) alignment strategies, which cannot explore the complex many-vs-many interactions, i.e., the relations of different semantic categories. In essence, the classical one-vs-one alignment (such as adversarial training) can be seen as the simplest case of relational reasoning. Most conventional DG methods assume that perfect alignment equals to precise knowledge transfer, while the many-vs-many relations between different entities are ignored. Moreover, alignment-based approaches naturally neglect the intra-domain relations as no entities can be explicitly aligned within each domain. In this regard, our work gives a hint to bridge the gap between alignment-based and relational reasoning based DG by jointly modeling the intra-domain and inter-domain topological relations. \n\n> Q2.2: The widely-adopted style-content-separation idea may fail to extract true semantic factors. The authors’ basis is that the activation map induced by domain classification does not focus on the background. This phenomenon is easy to understand because the style of foreground objects is also related to domain classification. However, I think that whether domain classification focuses on the background or not has no direct relationship with whether style-content-separation helps extract semantic information.\n\nApologies for the misunderstanding caused by this sentence and we have modified the argument in question. 
We agree that whether domain classification focuses on the background or not has no direct relationship with whether style-content-separation helps extract semantic information. Here, we aim to highlight that the activation map induced by domain classification does not focus on the background. ", " > **Q3. In Table 1 – Table 6, the authors should report DeepALL results that are implemented by themselves with the same training strategy as the proposed method. It is unfair to compare other methods with the extremely low DeepALL baseline, especially in Table 5. According to my experiments, it is easy for DeepALL to achieve an average accuracy above 82% with the ResNet-18 backbone on the PACS dataset. Experiments on DomainBed also report excellent performances of DeepALL (ERM).**\n\nThanks for your kind advice. According to our experiments, DeepAll which achieves an average accuracy above 82% with the ResNet-18 backbone on the PACS dataset is very likely to use the test domain for model selection (oracle). In our experiments, we strictly follow the train and val splits established by previous methods to conduct model selection, i.e., training-domain-validation setting. We summarize our DeepAll results as follows: \n\n**Table 1. Domain Generalization results on PACS benchmark.**\n\n| Art | Cartoon | Photo | Sketch | Avg |\n| ------------ | :----------- | ------------ | :----------- | ---- |\n| 77.0$\\pm$0.3 | 74.8$\\pm$0.5 | 95.8$\\pm$0.1 | 70.0$\\pm$0.6 | 79.4 |\n\n**Table 2. Domain Generalization results on VLCS benchmark.**\n\n| VOC | LabelMe | Caltech | Sun | Avg |\n| ------------ | :----------- | ------------ | :----------- | ---- |\n| 71.8$\\pm$0.6 | 61.1$\\pm$0.4 | 95.8$\\pm$0.2 | 62.5$\\pm$0.6 | 72.8 |\n\n**Table 3. Domain Generalization results on Office-Home benchmark.**\n\n| Art | Clipart | Product | Real | Avg |\n| ------------ | :----------- | ------------ | :----------- | ---- |\n| 59.4$\\pm$0.4 | 48.0$\\pm$1.1 | 72.7$\\pm$0.5 | 75.3$\\pm$0.4 | 63.9 |\n\n**Table 4. Domain Generalization results on DomainNet benchmark.**\n\n| Clipart | Infograph | Quickdraw | Painting | Real | Sketch | Avg |\n| ------------ | :----------- | ------------ | ------------ | ------------ | :----------- | ---- |\n| 61.8$\\pm$0.6 | 20.2$\\pm$0.7 | 45.0$\\pm$1.3 | 14.3$\\pm$1.0 | 57.3$\\pm$0.7 | 44.8$\\pm$0.6 | 40.6 |\n\nThe above results have been added to the revised manuscript. ", " We thank the reviewer for the provided comments. After reading the comments seriously, we are afraid that the reviewer has probably misunderstood the contributions and the novelty of our paper. We will try our best to eliminate the misunderstandings via the following responses, and sincerely hope that the reviewer raises any further questions if he/she is still confused by our answers.\n\n> **Q1. Technically, the method is not novel. It is a combination of Grad-Cam, Mixup, and GCN. I admire the application of these existing techniques. But there is no further insight and motivation is not strong.**\n\nWe would like to clarified that our contributions do not lie in the combination of these existing techniques. We emphasize the novelty and significance of our paper from the following two aspects:\n\n* The proposed CDM is novel in terms of its motivation and implementation. Mixing data in virtue of the complementary effect of between class and domain labels is novel and has never been explored in DG before as far as we know (also pointed out by **Reviewer Rsk7**). 
To achieve this goal, we introduce Grad-Cam to obtain the attentive regions of different classification losses and properly fuse them for depicting the complete foregrounds. Then, we mix the foreground and background of two images from different domains, which is significantly different from the classic mixup [1]. First, in contrast to [1] which requires a pre-defined balancing parameter, our CDM leverages the computed activation maps to guide the data fusion process in an adaptive manner. Second, CDM does not linearly combine the corresponding one-hot label of samples when mixing two samples. Instead, we still use the category label of the foreground object. The goal of this operation is to reduce the background bias, i.e., the potential spurious correlations between foreground and background regions. Finally, we have empirically verified that CDM is compatible with existing augmentation-based methods (e.g., MixStyle [2] and EFDM [3]) to further boost their performance (cf. Table 6 in main paper).\n* A long-standing and challenging problem for DG (also for domain adaptation) is what should be transferred across domains. Prior works, such as adversarial training and statistic matching, focus on enforcing *one-vs-one* invariance, i.e., the representations of samples from the same semantic category should be invariant across domains. In this paper, we propose the concept of structural (*many-vs-many*) invariance on the top of semantic topology, i.e., the semantic relations between different categories should also be maintained across domains. This is exactly one of the motivations and the novelties of this paper (also pointed out by **Reviewer Rsk7 and GJ2F**) and has been clearly presented in our original submission. Technically, we instantiate the semantic topology as relation graphs and design the ASTR module to progressively refine the representations (node) and the topological relations (edge) of semantic anchors. The local feature aggregation and global cross-domain relational reasoning modules are proposed to perceive and maintain structural semantic relations. Here, spectral graph convolution [4] is introduced to perform graph representation learning. In a nutshell, instead of the simple application of GCN, our contributions lie in the instantiation and exploration of semantic topology via graphical structures. ", " > **Q2. In figure 3, why only cropping 1/8 of the original image? There lacks motivation.**\n\nIn experiments, we found that the generalization performance is insensitive to the ratio of cropping. We empirically verified this point by conducting experiments on PACS and Office-Home. \n\n| cropping value | PACS | Office-Home | Avg |\n| -------------- | :--- | ----------- | :--- |\n| 1/4 | 83.9 | 65.8 | 74.9 |\n| 1/6 | 84.5 | 66.2 | 75.4 |\n| 1/8 | 84.8 | 66.2 | 75.5 |\n| 1/10 | 84.1 | 66.0 | 75.1 |\n| 1/12 | 84.3 | 66.1 | 75.2 |\n| 1/14 | 84.5 | 66.2 | 75.4 |\n| 1/16 | 84.0 | 66.1 | 75.1 |\n\nFrom the table, we can see that varying the cropping value in range [1/4, 1/16] just incurs a maximum variation of 0.6% in average classification accuracy. The justification is that images that serve as the backgrounds are first processed by Gaussian smoothing, making the them less informative compared to that of foreground images.\n\n> **Q3. Similarly, in Eq. 4, why adopting $cos(f,c_j)/2$? How did the term $1/2$ come up? This also goes to Eq. 6.**\n\nThe term $1/2$ in Eq. 
4 is used to average the local (the relations between each instance and all semantic anchors) and global (the relations among different semantic anchors) similarity. Similarly, term $1/2$ in Eq. 6 is used to average the cross-domain and cross-model similarity. Both local *vs.* global and cross-domain *vs.* cross-model are two orthogonal yet complementary parts. Here, we opt for a simple and commonly-used way to integrate these complementary parts.\n\n> **Q4. The second part of the method is too complicated and computationally expensive for multiple domains. Thus I fear the comparison to other methods is not fair.**\n\nIn general, most cross-domain constraints in DG are first designed for two source domains and then extended to multiple domains. In our method, the second part introduces relation graphs to characterize the semantic topology in the embedding space. Here, we directly build the relation graphs on the top of prototype features and do not introduce any additional projection modules. Moreover, as the number of nodes of the relation graph equals to the number of classes in each dataset, the constructed graph is relatively small. The computational cost of edge connections (i.e., Eq. 4 and Eq. 6) is negligible due to the limited number of graph nodes. Thus, we believe the proposed ASTR is not computationally expensive. Also, considering the size and computational cost of the constructed relation graph, ASTR does not inferior to its competitors in terms of simplicity.\n\n> **Q5. Regarding the reproducibility, this approach has introduced many extra hyperparameters to be tuned, to name a few, the hyperparameters in Grad-Cam, threshold in Eq. 1, 1/8 in Figure 3, 1/2 in Eq. 4 and 6, the Mixup hyperparameter in Figure 3, $\\xi$ in Eq. 7, and $\\lambda$ in Eq. 8. Given that there are so many hyperparameters, I highly doubt the reproducibility of this approach.**\n\n**(i)** To the best of our knowledge, there is no hyper-parameter in Grad-Cam that should be tuned. **(ii)** 1/8 in Figure 3 and 1/2 in Eq. 4 and 6 are default designs and cannot be seen as hyper-parameters in our method. **(iii)** The mixup parameter is computed from the activation maps and cannot be seen as a hyper-parameter in our method. It is an adaptive parameter and do not require pre-definition. **(iv)** To this end, we think there are three hyperparameters that need to be discussed, i.e., threshold in Eq. 1, $\\xi$ in Eq. 7 and $\\lambda$ in Eq. 8. We will discuss the robustness of these hyperparameters in the following question.\n\n> **Q6. Furthermore, the robustness of the hyperparameters should be reported. Thus, not only accuracy, but also the variation.**\n\nWe greatly appreciate this suggestion, and we have provided experimental results regarding the robustness of the hyperparameters in the revised supplementary.\n\n***\n\n**Reference:**\n\n[1] mixup: Beyond empirical risk minimization. In ICLR, 2018.\n\n[2] Domain Generalization with MixStyle. In ICLR, 2021.\n\n[3] Exact feature distribution matching for arbitrary style transfer and domain generalization. In CVPR, 2022.\n\n[4] Semi-supervised classification with graph convolutional networks. In ICLR, 2017.", " > **Q1. In L36, how to evaluate that MiRe does not have the \"data-dependent'' spurious correlations? 
I fear that MiRe is still a data-driven method and it heavily rely on the Mixup operation of the fore-background.**\n\nWe have highlighted that spurious correlations can be alleviated but is difficult to be absolutely eliminated (footnote of the second page). In that sense, we cannot guarantee that MiRe does not have the \"data-dependent'' spurious correlations in many real-world cases. To solve this issue, we introduce the concept of semantic topology that is robust to domain variations, implicitly reducing the bias towards data-dependent spurious correlations.\n\n> **Q2. In L42, authors claimed that previous methods require \"some distribution of values for an attribute\". I would assume MiRe does not. However, the adoption of Grad-Cam indeed assumes that the input data is image with clear fore and background information. Thus, it is an overclaim.**\n\nWe agree that the adoption of Grad-Cam assumes that the input data is an image with clear fore and background information. In fact, the attribute here refers to the middle-level features which humans use to describe objects. We greatly appreciate this suggestion, and we have tune-downed our claim in the updated submission, as shown in blue lines in the revised manuscript. \n\n> **Q3. In L44, I fail to find the answers to the question: \"how many factors of the latent factors\".**\n\nSorry for the confusion. This sentence is not a question. Here, we aim to highlight that in real-world cases, we cannot explicitly know the latent feature is composed of how many factors, and thus disentanglement-based methods cannot work well. \n\n> **Q4. In L132, why do existing data augmentation approaches lack diversity? Any support?**\n\nThe scope of this statement is limited to existing augmentation-based DG methods [1-4]. As stated in the main paper, these methods typically randomly change the image style (e.g., color and texture) while maintaining the content unchanged. In other words, their augmentation strategies rely on manipulating the low-level feature statistics. By contrast, our CDM directly mixes the foreground and background of images from different domains. we have tune-downed our claim in the updated submission, as shown in blue lines in the revised manuscript. \n\n***\n\n**Reference:**\n\n[1] Reducing domain gap by reducing style bias. In CVPR, 2021.\n\n[2] Addressing model vulnerability to distributional shifts over image transformation sets. In ICCV, 2019. \n\n[3] Exact feature distribution matching for arbitrary style transfer and domain generalization. In CVPR, 2022.\n\n[4] Domain generalization with mixstyle. In ICLR, 2021. ", " > **Q1. The experiments seem extensive. But after a careful check, I found that there is no reason to compare with different SOTA results in different datasets (e.g., the comparison methods in PACS and Office-Home are different). Plus, the model selection strategy is also different for each dataset. There is no support or explanation in the paper to illustrate why this is done.**\n\nAs stated in Section 4.2, to facilitate a fair and comprehensive comparison, we compare the proposed method to five types of state-of-the-art DG methods (almost cover all types of DG methods in the deep learning era). Please refer to Section 2 for the taxonomy of DG methods. For each type, we select 2 or 3 representative methods. Although PACS and Office-Home are two widely-used DG benchmarks, not all comparison methods have reported results on both datasets. 
As a consequence, we compare different SOTA results in different datasets, making sure that we could cover baseline methods as much as possible. In the meantime, these comparison methods follow the Train and Val splits established by several earlier works (cf. Section 4.1), and thus we strictly follow the mainstream settings (including their model selection strategies) to conduct our experiments. In our paper, we highlight the model selection strategy in the caption of each table so as to remind the readers how these comparison methods conduct their experiments.\n\nGiven the above discussions, we argue that both the comparison methods and model selection strategy used in our main paper are reasonable and fair. Moreover, we have provided all these details in our original submission. \n\n>**Q2. It is unknown where the authors obtain the results for comparison methods for the main paper. Then, if you can use DomainBed, why not just use it as the main benchmark?**\n\nIn our main paper, we follow MixStyle [1] (https://github.com/KaiyangZhou/mixstyle-release) to conduct experiments, including data preparation, model training, and selection. To facilitate a fair comparison, we select comparison methods that have open-source codes and make sure that the results for these comparison methods are obtained under the same evaluation protocol.\n\nWe have also reported the results on DomainBed [2] benchmark (cf. Table 1 in the supplementary). However, as we all know, DomainBed is computationally expensive and requires about 4,142 models per DG algorithm, and thus we fail to finish all experiments before the main paper deadline. We are very willing to move the results based on DomainBed to the main paper in the revised version if it could help the readers to better understand our contributions.\n\n***\n\n**Reference:**\n\n[1] Domain Generalization with MixStyle. In ICLR, 2021.\n\n[2] In Search of Lost Domain Generalization. In ICLR, 2021.", " The paper proposes Mix and Reason, a new domain generalization approach consisting of two components. The first component, category-aware data mixing, basically performs data augmentation by replacing the background of an image with a random patch of another image, aided by binary masks produced by Grad-CAM. The second component, adaptive semantic topology refinement, is built using a graph neural network, which aims to learn semantic features through interactions among source domains. The effectiveness of the approach is demonstrated on a number of commonly-used domain generalization benchmarks. **Originality**: From the technical point of view, the proposed ideas, including the data mixing strategy and the graph neural network-based module, are novel. It's interesting that Grad-CAM can be used in this way for data augmentation.\n\n**Quality**: Overall the quality is good. The main experiments are comprehensive and solid for justifying the technical contributions. The ablation studies are also sufficient. It's worth mentioning that the datasets cover a wide range of tasks including generic object recognition and medical image analysis. However, the paper misses more in-depth analysis on \"how\" the graph neural network-based ASTR module improves the performance. The quantitative results are sufficient to justify the effectiveness but a more scientific analysis is needed to help readers understand how the \"topology\" is built by the neural network and in what way it helps learn structural invariance. 
In other words, the concept of semantic topology is new and sounds \"big\" but there is no concrete studies on how this concept works. (Maybe use a theory or visualize what was learned by the model?)\n\n**Clarity**: The paper is well-written.\n\n**Significance**: The paper provides new insights on how data augmentation can be combined with graph neural networks to improve domain generalization. Moreover, the results demonstrate that each of the two components alone could be useful to the community: the data mixing strategy works with some existing methods like MixStyle and EFDM, while the graph neural network can be used for medical image analysis. - Would it be possible to show some visualization on what the \"semantic topology\" looks like or what was learned by the ASTR module?\n- In Table 7, only ASTR is evaluated. Does CDM also work for the medical datasets?\n- Regarding the use of Grad-CAM, does it always give accurate masks? Are there any failure cases? And what happens if Grad-CAM doesn't produce accurate masks?\n\n-------- Post-rebuttal updates --------\n\nThe authors' responses have resolved all the reviewer's concerns.\n\nAs mentioned in the 1st round review, the reviewer appreciated the contributions and believed the proposed approach and insights will be useful to the community (the reviewer's stance remains the same). The added materials during the rebuttal have significantly strengthened the motivation of the paper and made the story more convincing.\n\nThe reviewer has also read other reviewers' comments as well as the authors' responses, and found no critical issues that could lead to rejection.\n\nOverall, the reviewer likes the contributions of the paper and believes they would be of interest to the domain generalization community. Therefore, the reviewer strongly recommends acceptance and stands firmly on this side. Currently there is no discussion about the limitations of the approach. Perhaps there are some limitations related to the use of Grad-CAM (see the Questions section).", " This paper proposes Category-aware Data Mixing (CDM) to augment data on the data level and Adaptive Semantic Topology Refinement (ASTR) to maintain an invariant semantic topology of classes across different domains. The method (combination of CDM and ASTR) achieves excellent results on multiple DG datasets. Besides, experiments also show that CDM can bring performance gains to several other DG methods. Strengths (originality, quality, clarity and significance):\n1. As far as I know, the originality is good.\n2. The proposed method is technically sound. The experimental results compared with other DG methods is excellent.\n3. This paper is generally easy to follow.\n4. The idea of maintaining semantic topology of classes is novel.\n\nWeaknesses:\n1. The standard deviation (std) over 10 runs is missing. Considering most datasets are small, such as PACS and VLCS, it is more convincing to report the average accuracy with std for comparing model performances.\n2. Some opinions in this paper are weak and unconvincing:\n- Domain-wise invariance cannot guarantee generalizable representations. The authors argue that such invariance may be susceptible to including some misleading spurious correlations. This situation may occur in simulated data, where some semantically independent properties exist across all source domains. 
However, spurious correlation is difficult to hold simultaneously across all source domains in real-world cases, so it is reasonable to expect feature extractors to learn more semantic information through domain-invariant representation learning.\n- The widely-adopted style-content-separation idea may fail to extract true semantic factors. The authors’ basis is that the activation map induced by domain classification does not focus on the background. This phenomenon is easy to understand because the style of foreground objects is also related to domain classification. However, I think that whether domain classification focuses on the background or not has no direct relationship with whether style-content-separation helps extract semantic information.\n3. In Table 1 – Table 6, the authors should report DeepALL results that are implemented by themselves with the same training strategy as the proposed method. It is unfair to compare other methods with the extremely low DeepALL baseline, especially in Table 5. According to my experiments, it is easy for DeepALL to achieve an average accuracy above 82% with the ResNet-18 backbone on the PACS dataset. Experiments on DomainBed also report excellent performances of DeepALL (ERM). Please see weaknesses for my concerns. The authors didn't address the limitations and potential negative societal impact of their work.\n- It is better to point out potential limitations, e.g., the proposed method (ASTR) can't work when the domain label is unavailable.\n- I didn't see potential negative societal impact.", " This paper introduces a method called MiRe, for Mix and reason over semantic topology, to do domain generalization. The motivation is that existing work typically ignores the semantic information of the inputs and further lack topological structure, which authors claim to be consistent across domains. MiRe consists of two steps: background (other domains) + foreground (current domain) Mixup to generate new samples in different backgrounds, and graph network to mine the neighbor class feature information. Experiments are done in various benchmark datasets and demonstrates the efficacy of MiRe. \n\n=======\n\nPost rebuttal: I really appreciate authors' efforts in providing detailed rebuttal, which resolves most of my concerns. I hope that the future version can include the modifications w.r.t. all comments. But please understand that everyone has a standard for accepting a paper. To me, my main concern is that the novelty is not that \"sparking\" and the method is extremely complicated with so many hyperparameters to tune. Thus, I could not give you high scores. However, I will not stand against rejection. I will give you a 4 accordingly. I'm trying to be nice to you even if my papers still do not get any responses... ### Strength\n\n1. The idea of bringing Mixup operation to domain generalization and further enhance its semantic representation is interesting.\n2. The writing is sound, with interesting figures to show the ideas.\n3. The experiments are extensive, with different comparison methods.\n\n### Weakness\n\n#### 1. Overclaims or several claims are not evaluated.\n\nThere are many claims in this paper and I fail to find supports for some of them. Thus, this paper could have some overclaims:\n- In L36, how to evaluate that MiRe does not have the \"data-dependent'' spurious correlations? 
I fear that MiRe is still a data-driven method and it heavily rely on the Mixup operation of the fore-background.\n- In L42, authors claimed that previous methods require \"some distribution of values for an attribute\". I would assume MiRe does not. However, the adoption of Grad-Cam indeed assumes that the input data is image with clear fore and background information. Thus, it is an overclaim.\n- In L44, I fail to find the answers to the question: \"how many factors of the latent factors\".\n- In L132, why do existing data augmentation approaches lack diversity? Any support?\n\n#### 2. Methodology.\n\n- Technically, the method is not novel. It is a combination of Grad-Cam, Mixup, and GCN. I admire the application of these existing techniques. But there is no further insight and motivation is not strong. Plus, there are the following issues:\n- In figure 3, why only cropping 1/8 of the original image? There lacks motivation.\n- Similarly, in Eq. 4, why adopting $\\operatorname{cos}(f, c_j) / 2$? How did the term $1/2$ come up? This also goes to Eq. 6.\n- The second part of the method is too complicated and computationally expensive for multiple domains. Thus I fear the comparison to other methods is not fair.\n- Regarding the reproducibility, this approach has introduced many extra hyperparameters to be tuned, to name a few, the hyperparameters in Grad-Cam, $\\operatorname{threshold}$ in Eq. 1, $1/8$ in Figure 3, $1/2$ in Eq. 4 and 6, the Mixup hyperparameter in Figure 3, $\\xi$ in Eq. 7, and $\\lambda$ in Eq. 8. Given that there are so many hyperparameters, I highly doubt the reproducibility of this approach.\n- Furthermore, the robustness of the hyperparameters should be reported. Thus, not only accuracy, but also the variation.\n\n#### 3. Experiments.\n\n- The experiments seem extensive. But after a careful check, I found that there is no reason to compare with different SOTA results in different datasets (e.g., the comparison methods in PACS and Office-Home are different). Plus, the model selection strategy is also different for each dataset. There is no support or explanation in the paper to illustrate why this is done.\n- It is unknown where the authors obtain the results for comparison methods for the main paper. Then, if you can use DomainBed, why not just use it as the main benchmark?\n\n#### References\n\nOverall, the references are good. But you can also cite more recent works from the following DG survey articles:\n\n[1] Zhou et al. Domain generalzation in vision: a survey.\n\n[2] Wang et al. Generalizating to unseen domains: a survey on domain generalization.\n\n#### Minor comments\n\nTypos:\n- L26, \"has attract\".\n- L137, \"we propose develop\". See above comments. N/A" ]
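To make the category-aware data mixing (CDM) step debated in the reviews and rebuttals above more concrete, the following is a minimal, hypothetical PyTorch sketch. It assumes a precomputed fused Grad-CAM map for the foreground image; the function name `cdm_mix`, the fixed corner crop (the rebuttal describes a randomly cropped patch), the default threshold, and the use of average pooling as a stand-in for the Gaussian smoothing mentioned by the authors are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the category-aware data mixing (CDM) step discussed above:
# the background of one image is replaced by a smoothed patch cropped from an image
# of another domain, while the foreground class label is kept unchanged.
import torch
import torch.nn.functional as F

def cdm_mix(img_fg, img_bg, cam_fg, threshold=0.5, crop_ratio=1/8, blur_kernel=9):
    """img_fg, img_bg: (C, H, W) tensors; cam_fg: (H, W) fused Grad-CAM map in [0, 1]."""
    _, H, W = img_fg.shape
    # 1. Binarize the fused activation map to obtain a foreground mask.
    mask = (cam_fg > threshold).float().unsqueeze(0)                      # (1, H, W)
    # 2. Crop a small patch from the other-domain image (fixed corner here for brevity),
    #    smooth it, and resize it to serve as the new background.
    ph, pw = int(H * crop_ratio), int(W * crop_ratio)
    patch = img_bg[:, :ph, :pw].unsqueeze(0)                              # (1, C, ph, pw)
    patch = F.avg_pool2d(patch, blur_kernel, stride=1, padding=blur_kernel // 2)
    background = F.interpolate(patch, size=(H, W), mode="bilinear",
                               align_corners=False)[0]                    # (C, H, W)
    # 3. Keep the foreground pixels, replace the background pixels;
    #    the class label of img_fg is reused as-is (no label mixing).
    return mask * img_fg + (1 - mask) * background
```

The 1/8 default mirrors the crop ratio discussed in the rebuttal, whose ablation suggests the result is insensitive to this choice once the background patch is smoothed.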
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "dWMpRJCMSss", "S2Z7cRsjYD", "dWMpRJCMSss", "xoOy7AgJQsk", "h9ubU-rWJTe", "eqp6GICYoGg", "mu0MlNypcjl", "--9i3pChYHZ", "J-uJR5axuE-E", "xI84Kufi91h", "Bd4y922KDc", "6A7Tnlfyq-K", "_HI031sf3GH", "IoGjQ2WD_Eg", "Hpru00A7Uh6", "Hpru00A7Uh6", "3gj03t8gHcw", "3gj03t8gHcw", "3gj03t8gHcw", "3gj03t8gHcw", "nips_2022_V0GwAmDclY", "nips_2022_V0GwAmDclY", "nips_2022_V0GwAmDclY" ]
nips_2022_wOUH1VQ9Rcj
Independence Testing-Based Approach to Causal Discovery under Measurement Error and Linear Non-Gaussian Models
Causal discovery aims to recover causal structures generating the observational data. Despite its success in certain problems, in many real-world scenarios the observed variables are not the target variables of interest, but the imperfect measures of the target variables. Causal discovery under measurement error aims to recover the causal graph among unobserved target variables from observations made with measurement error. We consider a specific formulation of the problem, where the unobserved target variables follow a linear non-Gaussian acyclic model, and the measurement process follows the random measurement error model. Existing methods on this formulation rely on non-scalable over-complete independent component analysis (OICA). In this work, we propose the Transformed Independent Noise (TIN) condition, which checks for independence between a specific linear transformation of some measured variables and certain other measured variables. By leveraging the non-Gaussianity and higher-order statistics of data, TIN is informative about the graph structure among the unobserved target variables. By utilizing TIN, the ordered group decomposition of the causal model is identifiable. In other words, we could achieve what once required OICA to achieve by only conducting independence tests. Experimental results on both synthetic and real-world data demonstrate the effectiveness and reliability of our method.
Accept
The paper considers structure recovery when there is a causal DAG on variables X where the causal mechanisms are linear but the exogenous noise variables are non-Gaussian (a setting similar to the standard prior work LiNGAM). However, each variable is not observed directly but only through a measurement corrupted by independent noise. The authors show that by using independence tests between transformations of the variables, one can recover the ordered group decomposition of the graph. I think the identifiability result is novel. Reviewers are overall positive. The main concerns were: a) the reviewer with the lowest score objects mainly that causal sufficiency has not been spelled out; b) another concern is that the theoretical results do not clearly reference the assumptions they rely on. These two issues are not major, and I suggest the authors pay attention to them and comprehensively list all assumptions clearly upfront in the camera ready.
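The meta review summarizes the core mechanism: independence tests between linear transformations of the measured variables are informative about the ordered group decomposition. Below is a minimal, hypothetical sketch of a "one-over-others" check of this kind. It proposes candidate directions from the null space of the cross-covariance (only a necessary condition for independence, and a deliberate simplification of the paper's covariance/cumulant- and ISA-based estimators) and verifies them with a distance-correlation test; the function names and the 0.05 acceptance threshold are illustrative assumptions.

```python
# Hypothetical, simplified sketch of a "one-over-others" TIN-style check: for a
# candidate variable z and the remaining variables Y, look for directions omega with
# omega^T Y statistically independent of z. Second-order statistics propose candidates;
# a distance-correlation test verifies them. Illustration only, not the paper's estimator.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_correlation(a, b):
    """Sample distance correlation between two 1-D arrays (near 0 under independence)."""
    def centered(x):
        d = squareform(pdist(x[:, None]))
        return d - d.mean(0) - d.mean(1)[:, None] + d.mean()
    A, B = centered(a), centered(b)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

def tin_one_over_others(X, i, tol=0.05):
    """X: (n_samples, n_vars). Returns directions omega with omega^T Y ~ independent of X[:, i]."""
    z, Y = X[:, i], np.delete(X, i, axis=1)
    # Candidate omegas lie in the null space of cov(Y, z) -- a necessary condition
    # for independence -- and are then checked with the nonlinear test.
    c = (Y - Y.mean(0)).T @ (z - z.mean()) / len(z)        # cross-covariance vector
    _, _, Vt = np.linalg.svd(c[None, :])
    null_basis = Vt[1:]                                     # directions orthogonal to c
    return [w for w in null_basis if distance_correlation(Y @ w, z) < tol]
```

In this simplified form, the number of accepted directions plays the role of the dimension of the independence subspace that the TIN condition reasons about; the paper's estimators obtain it more carefully via higher-order statistics rather than this covariance-only shortcut.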
train
[ "wbR9vm2xf9t", "ghe14-Hi10_", "QQR9gdMsS6g", "zgCT06fduYl", "BYVH4gFJDI1", "nRsnyraVvXw", "KYTwdET-oL4", "9DWllf1-7Oe", "iM-Jn9WiqYK", "KTiX5jVnXTd", "LT6sP6DIKag", "p-R24Ww0rMt", "tY2DqqBEXr", "uGazqMkq2Tr", "Buk3zAOY4Iy", "yfcp09bj2sA", "Bqd-ckTb1Nu", "uqNkHfH43R4" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer Pi4t,\n\nOnce again, thanks a lot for reviewing our submission and for the further iterations! We hope we have properly addressed your major concerns. If that is the case, could you please consider increasing your score? (As you kindly mentioned in your comments, you would be happy to increase your score after those clarifications.)\n\nIf you have any other concerns, could you let us know at your earliest convenience? (The discussion involving authors will end in 4.5 hours.) We will immediately respond to them.\n\nWith best wishes,\nAuthors of submission 202", " Dear program committee members,\n\nMany thanks for your insightful feedback! We have tried to address your questions in the response and updated manuscript.\n\nIn addition, we have also made available an online demo for the Transformed Independent Noise (TIN) condition, a key contribution of this paper. By playing with TIN on specific structure examples, users may gain a better intuition about TIN. You are welcome to try it out at [http://tincondition.xyz](http://tincondition.xyz) .\n\nPlease kindly let us know if you have any further comments. We are looking forward to the opportunity to respond to them. Thanks again for your time spent on our submission.\n\nBest wishes,\nAuthors of Paper 202", " Dear Reviewer ZM5r,\n\nThanks for providing the Author Rebuttal Acknowledgement. Given that the discussion involving authors will end in 5.5 hours, could you please let us know whether your main concerns were addressed by our response and updated submission? If there are any other concerns, please let us know, and we will immediately respond to them. \n\nIf your main concerns are addressed, could you please update your recommendation to reflect it? We understand you are very busy and appreciate your time. Your feedback is valuable to us--we will be waiting for it. Thank you.\n\nBest wishes,\nAuthors of submission 202", " Yes, I think making the theorem statements self-contained by explicitly cross-referencing the assumptions they rely on will be very helpful. Thank you for your submission!", " We are working on improving the presentation of the paper accordingly (it involved iterations because the paper is a bit dense). Thanks for your timely feedback!", " Many thanks for sharing your insight!  \n\nWe completely agree with the statement that there can be corner cases without assumptions to avoid them and the intuition that the measurement should preserve information. In our study, the assumed \"random measurement error model\" (Eq. 1) and linear, non-Gaussian, acrylic model for the measurement-error-free variables (line 49) make this theorem hold true.  As you mentioned, in cases where our assumptions are violated, Theorem 3 might be false. We will add two sentences to emphasize this point.", " Thank you for the explanations. I think it would be very helpful to add these/move some of these from appendix to the main paper for clarity. ", " Thank you for the detailed explanations and for clarifying!\n\nRegarding the last point. I think there can be corner cases without assumptions to avoid them: For example if the measurement model always outputs a constant; or if it introduces some form of unfaithfulness the d-separation statements among the observed variables would not imply d-separation among the latent variables. That's what I had meant by the measurement should preserve information. ", " Dear Reviewer ZM5r,\n\nThank you very much for your time spent on our submission. 
We have tried to address your concerns in the response and updated submission--any feedback from you would be appreciated. If you have further comments, please kindly let us know--we hope for the opportunity to respond to them.\n\nBest wishes,\nAuthors of paper 202", " **(Q1s)** The reviewer wonders for more intuition about the theoretical part. Specifically,\n\n---\n\n> \"'Necessarily being one, all results in this paper still hold' - this might not always be a valid assumption?\"\n\n**R:** For all results in this paper, it is indeed a valid statement, since we do not care about the scales of the latent variables $\\tilde{X}_i$ - we may just always view $c_i\\tilde{X}_i$ as the latent variables. You may view $\\tilde{\\mathbf{X}}$ and $\\mathbf{X}$ altogether as variables generated by a single graph (where $\\tilde{X}_i \\rightarrow X_i$ are normal linear causal influences), and this statement follows from the graphical criteria (Theorem 2).\n\nMoreover, in general, yes, we agree with you that \"it depends on the downstream objective\". For instance, if someone just cares about the scales of $\\tilde{X}_i$, then this statement is no longer valid. In this case, regarding the identification of $\\tilde{G}$, the estimated graph structure of $\\tilde{G}$ will be the same, and what changes is the estimated edge coefficients in $\\tilde{G}$ (what we do not care about in this paper).\n\n> \"The method requires a precise measurement model, which is a disadvantage, correct? If an observed node is caused by two latent variables, this will not work. Please verify.\"\n\n**R:** This is a great point, and we agree. Here we consider the measurement error model, where each measurement is caused by one latent variable. For GIN, it can generally handle the cases where measurements are caused by multiple latent variables, as long as each latent variable has enough pure indicators (Definition 1 of [5]). Interestingly, however, we found that this may also be relaxed for our case (where there are not enough pure indicators), and our TIN-based method may still work. See Appendix F.6 (may also F.5) for a detailed discussion.\n\n> \"Could you please explain the proof of Theorem 2? How is max-flow min-cut formulated on an edge-weighted graph?\"\n\n**R:** We are afraid that this might be a misunderstanding. The weighted $\\mathbf{B}$ matrix is not directly mapped to the minimum vertex cut by the max-flow min-cut theorem. Instead, the proof sketch is formed in two steps: the first step is an algebraic combinatorics one, from the weighted $\\mathbf{B}$ matrix to the non-intersecting paths (already unweighted), by the Lindström-Gessel-Viennot theorem; and the second step is a pure graph theory one, from the non-intersecting paths to the minimum vertex cut, by the max-flow min-cut theorem (vertex version, known as Menger's theorem). Please refer to Appendix A.3 and Theorem 6 for details.\n\n> \"Theorem 3 should require some sort of guarantee that the measurement does not lose information?\"\n\n**R:** Interestingly, the answer is no. We did not require additional guarantees about the information loss. \n\nIt is worth mentioning that TIN is just one particular aspect of the statistical information of the variables. Clearly, some information is lost because of the contamination of measurement noise (e.g., the aspects of conditional independence, IN, and SEM are lost, as we show in lines 57, 92, and 105). However, the aspect of TIN remains the same among true and contaminated variables (exactly the fun point of TIN). 
See Appendix A.4 for the proof (either graphically or mathematically).", " **(Q2s)** The reviewer wonders for more insights about the methodology part. Specifically,\n\n---\n\n> \"'If each latent variable has two measurements, GIN can fully identify the structure of $\\tilde{G}$, which is already a breakthrough' - Can you elaborate on this? This was not identified by the previous authors?\"\n\n**R:** Yes, previous methods only learned a partial graph (see e.g., BPC (Section 5 of [1]), extended-t-separation (Page 2 left of [2]), and FOFC (Section 4 of [3])). This is generally the case because these methods are based on the Tetrad constraint [4], and Tetrad makes use of only second-order statistics. However, GIN further exploited higher-order information (independence in the non-Gaussian case) and thus identifies more. We have incorporated an illustrating example to show how GIN differs from earlier methods. Please refer to Appendix B in the revision for details.\n\n> \"Elaborate on the recursion idea for GIN and here as well? How can one recurse when the source cannot be conditioned on?\"\n\n**R:** First, let us mention that the recursion is not used in our proposed TIN-based method. Instead, we just estimate _one-over-others-TIN_ over every single variable to identify the ordered group decomposition (Lemma 1). Nevertheless, as future work, it may be exploited to further recover the structure of $\\tilde{G}$ (see Appendix F.2).\n\nAs for the recursion idea in GIN (either in Section 4 of [5] or the intro Examples 1-3 of this paper), we agree that one cannot directly condition on or regress on the source node, since we only have access to its descendant. Instead, one can _\"condition\"_ on the source node by incorporating the corresponding measured variables into $\\mathbf{Z}$ sets (termed as _\"reference variables\"_) to realize independent _\"pseudo-residuals\"_ (line 117) - just _\"as if\"_ the effect of the source node is removed for recursion. \n\n> \"How does '...a simpler one...' statement follows from Theorem 4? One still needs to search for the existence of an $\\omega$ in these subsets.\"\n\n**R:** First, we would like to mention that the two methods are theoretically equivalent. The only difference lies on the empirical level. Under finite samples, subsets tackling down allows a smaller combination set, a more flexible thresholding (in judging all/any), and probably less error by more times of independence tests.\n\nConsider this example: to identify the root $\\tilde{X}_1$ from an $n$-nodes chain structure, i.e., $\\operatorname{TIN}(X_1, \\mathbf{X}\\backslash X_1)=1$.\n\n+ By directly estimating the dimension of $\\Omega_{\\mathbf{Z;Y}}$, we need to linearly combine all the rest $n-1$ variables - which is a large set and may already contain lots of random errors, and checks for independence for $n-2$ times (i.e., $n-2$ bases of $\\Omega_{\\mathbf{Z;Y}}$) - which is a small times of tests, and yields a high degree of freedom for each $\\omega$. The TIN result may get error-prone, e.g., 0, or 2, 3, ...\n+ Equivalently, by tackling down to subsets of $\\mathbf{Y}$, we need to try over all the 2-sized subsets of the rest $n-1$ variables - i) they are relatively small sets, and ii) we only need to check the _existence_ of the independent linear transformations of 2 variables on a singleton variable - both yielding easier (and more accurate) independence results. Moreover, such tests are needed for ${n-1 \\choose 2}$ times over _all_ the 2-sized subsets. 
Though slower, this large number of tests reduces random error and enables a more flexible thresholding to judge quantifiers all/any (see Appendix G.1).\n\n> \"How can we efficiently check all solutions if there are uncountably many?\"\n\n**R:** We only need to check up to $|\\mathbf{Y}|$ solutions. Here is a justification:\n\nThough the search space for $\\omega$ is uncountably infinite, by the property of independence we have that 1) if $\\omega^\\intercal \\mathbf{Y} \\perp \\mathbf{Z}$, then $(c\\omega)^\\intercal \\mathbf{Y} \\perp \\mathbf{Z}$ for any $c\\in\\mathbb{R}$ (closed under scalar multiplication), and 2) if $\\omega_1^\\intercal \\mathbf{Y} \\perp \\mathbf{Z}$ and $\\omega_2^\\intercal \\mathbf{Y} \\perp \\mathbf{Z}$, then $(\\omega_1+\\omega_2)^\\intercal \\mathbf{Y} \\perp \\mathbf{Z}$ (closed under addition). Indeed, as we show in Definition 3, $\\Omega_{\\mathbf{Z;Y}}$ is a subspace. Thus we only need to find the $k$ orthogonal bases of $\\Omega_{\\mathbf{Z;Y}}$, where $k$ is the subspace dimension.\n\nThen, how to find such $k$ bases? We are not searching directly from the whole $\\mathbb{R}^{|\\mathbf{Y}|}$. Instead, we greatly reduce the searching space to e.g., 1) $|\\mathbf{Y}|$ orthogonal row vectors (TIN-ISA, see Section 5.2 and Appendix G.1 for explanation), and 2) the bases of the nullspace induced by zero-covariances/cumulants equations (TIN-2steps and TIN-rank, see Appendix E.1 and specifically equation (E.10) for explanation).", " **(Q3s)** The reviewer wonders for more clarifications to examples/details. Specifically,\n\n---\n\n> \"An example/insight for Prop. 1 would be useful.\"\n\n**R:** Thanks for the suggestion! Consider the underlying graph $\\tilde{X}\\rightarrow \\tilde{A} \\rightarrow \\tilde{B}\\leftarrow \\tilde{Y}$:\n+ $\\tilde{X}\\perp\\tilde{Y}|\\{\\tilde{A} ,\\tilde{B}\\}$, but $X\\not\\perp Y | \\{A, B\\}$: generally, d-separation on $\\tilde{G}$ is lost on $G$, since we only have descendants.\n+ $\\tilde{X}\\not\\perp\\tilde{Y}|\\{\\tilde{B}\\}$, and also $X\\not\\perp Y | \\{B\\}$: if d-connected on $\\tilde{G}$, then must also d-connected on $G$.\n+ $\\tilde{X}\\perp\\tilde{Y}|\\{\\tilde{A}\\}$, and also $X\\perp Y | \\{A\\}$ (and $\\tilde{X}\\perp\\tilde{Y}$, $X\\perp Y$): _rare_ d-separation. The only way to preserve d-separation on $\\tilde{G}$ is by marginally $\\perp$. See Appendix A.1 for the proof.\n\n> \"L172: how to verify 'this is impossible on $\\tilde{G}$'?\"\n\n**R:** Please refer to Example 5 (line 239) and the orange block in the left matrix, where we tried to provide an explanation. In short, Theorem 1 characterizes the independence subspace $\\Omega_{\\mathbf{Z;Y}}$ as the nullspace of some $\\mathbf{B}$ block. Then, to verify such impossibility claims (i.e., $\\Omega_{\\mathbf{Z;Y}}=\\mathbf{0}$), it suffices to show the (algebraic) full row rank of the respective $\\mathbf{B}$ block.\n\n> \"Ancestors in Theorem 1 probably means exogenous ancestors.\"\n\n**R:** No. It is defined in line 194. We do not additionally index exogenous noises for each variable. 
In Theorem 1, it just denotes the (indices to) ancestral variables, which equals the nonzero column indices of the $\\mathbf{B}$ rows block.\n\n> \"Assumption 1 seems reasonable as an alternative formulation of faithfulness.\"\n\n**R:** Yes, in the sense of \"no parameters coupling\".\n\n> \"How to find $\\mathbf{W}_{\\mathbf{YY}}$ in (11)?\"\n\n**R:** As mentioned in Section 5.2, conduct the Independent Subspace Analysis where the de-mixing matrix is masked to only update the lower-right $\\mathbf{Y}\\times\\mathbf{Y}$ block $\\mathbf{W}_{\\mathbf{YY}}$, with the upper-left $\\mathbf{Z}\\times\\mathbf{Z}$ fixed as the identity and elsewhere fixed as zero. For the implementation details of ISA, we directly follow the original paper [6] (see Appendix G.1 and [our code](https://anonymous.4open.science/r/TIN/utils/ISA.py)). This has been included in the revision (Appendix E.2).\n\n---\n\n[1] Silva, Ricardo, et al. \"Learning the Structure of Linear Latent Variable Models.\" _Journal of Machine Learning Research_ 7.2 (2006).\n\n[2] Spirtes, Peter L. \"Calculation of entailed rank constraints in partially non-linear and cyclic models.\" _arXiv preprint arXiv:1309.7004_ (2013).\n\n[3] Kummerfeld, Erich, and Joseph Ramsey. \"Causal clustering for 1-factor measurement models.\" _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_. 2016.\n\n[4] Shafer, Glenn, Alexander Kogan, and Peter Spirtes. \"Generalization of the tetrad representation theorem.\" Rutgers University. Rutgers Center for Operations Research [RUTCOR], 1993.\n\n[5] Xie, Feng, et al. \"Generalized independent noise condition for estimating latent variable causal graphs.\" _Advances in Neural Information Processing Systems_ 33 (2020): 14891-14902.\n\n[6] Theis, Fabian. \"Towards a general independent subspace analysis.\" _Advances in Neural Information Processing Systems_ 19 (2006).", " We appreciate the reviewer's feedback. Thanks for your careful reading!\n\nWe roughly divide your questions into three categories. Due to the limit on the number of characters, we will put three separate comments in this thread. Please see below for our response.", " We appreciate the reviewer's encouragement and helpful feedback. Please see below for our response.\n\n---\n\n**(Q1)** The reviewer suggests experiments on more real-world datasets with measurement error.\n\n**R:** Thanks for the suggestion! We have conducted experiments on another real-world dataset, Teacher Burnout data [1]. In short, the result produced by our method is similar to the domain knowledge, and achieves the lowest (best) Kendall-tau distance to the ground-truth. Please see Appendix H.5 in the revision for a detailed analysis.\n\n---\n\n**(Q2)** The reviewer suggests a comparison with other methods such as NOTEARS [2] and SCORE [3].\n\n**R:** In light of your suggestion, we have also compared with NOTEARS and SCORE. The performance of NOEARS is rather poor in this case. Interestingly, we found that SCORE is among the two strongest competitors (the other is ICA-LiNGAM). Please see Section 6 and Figure 5 in the revision for details.\n\n---\n\n**(Q3)** The reviewer wonders whether it is possible to apply the method directly to causal structure recovery.\n\n**R:** In this paper, the answer is no. 
By directly applying the _one-over-others-TIN_ method, generally speaking, we cannot recover the causal structure, but only the ordered group decomposition - although it is already very informative.\n\nHowever, if we apply the proposed TIN condition over more general pairs of variables (not only one-over-others), the result may be more informative than just the ordered group decomposition. For example, though with the same ordered group decomposition, the chain structure and the fully connected DAG can actually be distinguished by TIN. Please see Section 7 and Appendix F.2 for details.\n\nThen, is the answer \"yes\" achievable? In the general case, no. The exact structure of $\\tilde{G}$ cannot be recovered, since 1) as mentioned in line 310 and Definition 6, some variables are naturally unidentifiable under measurement error, and 2) different DAGs may produce completely the same TIN results, and thus TIN can identify them up to an \"equivalence class\" (please see Appendix F.2 for examples). As mentioned in line 1177, an alternative is to further apply O-ICA over the search space that is already greatly reduced by TIN.\n\nThanks for this exciting question!\n\n---\n\n[1] Byrne, Barbara M. \"Structural equation modeling with Mplus: Basic concepts, applications, and programming.\" _routledge_, 2013.\n\n[2] Zheng, Xun, et al. \"Dags with no tears: Continuous optimization for structure learning.\" _Advances in Neural Information Processing Systems_ 31 (2018).\n\n[3] Rolland, Paul, et al. \"Score matching enables causal discovery of nonlinear additive noise models.\" _International Conference on Machine Learning_. PMLR, 2022.", " We are grateful for the reviewer's insightful comments and constructive suggestions. Please see below for our response.\n\n---\n\n**(Q1)** The reviewer has questions regarding the causal sufficiency assumption. Specifically,\n\n> \"The assumption of causal sufficiency is not even mentioned in the paper.\"\n\n**R:** Many thanks for your concern. In our presentation, we considered causal sufficiency as implied by the formal definition of LiNGAM (please refer to the original paper [1], p.2005, paragraph 3). In light of your concern, for clarity, we have explicitly made the causal sufficiency assumption (in addition to LiNGAM) in the revision (line 48: \"assume causal sufficiency relative to $\\tilde{\\mathbf{X}}$\"). \n\n> Further, \"what would happen if the causal assumption is violated?\"\n\n**R:** This is indeed an important practical issue. In short, if directly using the _one-over-others-TIN_ method in this paper, the output causal ordering may be incorrect. Interestingly, however, we found that this may depend on the specific structural patterns - we provide two illustrating examples: one where the group ordering is still (partially) identifiable, and one in the contrast. Please see Appendix F.5 in the revision for our detailed discussion.\n\nOverall, we really appreciate the reviewer for this insightful question. Though as mentioned by the reviewer, \"it is okay to make this assumption in this context\" (which, to the best of our knowledge, is indeed a common assumption in the current literature on handling measurement error), the assumption itself, after all, is strong and not testable. It would be useful (and fun!) to investigate the case where causal sufficiency is violated (in a sense of \"latent of latent\") systematically. 
With the TIN condition, if we can characterize the \"specific patterns\" mentioned above, we may further construct correction rules or algorithm relaxations so that the identifiability is still (partially) preserved. We leave the systematic investigation as a line of our future research.\n\n> \"The authors say that Assumption 1 is the only one made besides LiNGAM.\"\n\n**R:** Because of your comment and concern, we have updated the paper to further explicitly include the causal sufficiency assumption (as discussed above) and the random measurement error model (which is a standard model to deal with measurement error in the current literature). Please see lines 49 and 249 of the updated manuscript.\n\nIt might be helpful to mention the difference between our assumptions and those required by other methods: existing methods for causal discovery with latent variables usually either make\n - additional structural assumptions (e.g., \"require at least two measurements for each latent variable\"[2, 3]; \"require the DAG to be a polytree or some other specific families\"[4]), or\n- additional parametric assumptions (e.g., \"require that the conditional probabilities $P(X_i | \\tilde{X}_i)$ can be estimated from data, by assuming e.g., mutually irreducible distributions\"[5]).\n\nOur problem setting and the assumptions are identical to that in [6], which is quite general compared to others. While comparing to [6], we achieve the same identifiability results by escaping from O-ICA but only conducting independence tests. \n\n---\n\n**(Q2)** \"The proofs and intuitions are moved to the appendix.\"\n\n**R:** Although we had to move the proofs to the appendix due to the page limit, we have tried to give interpretations of Theorems 1 and 2's proofs in lines 220 and 260 of the main paper. In addition, we included illustrative examples 1, 2, 3, and 4 also for that purpose; we tried to connect them to our basic idea: \"_create independence_, by leveraging the parametric assumption and benefit from non-Gaussianity\" (line 95), and \"asymmetry actually exists beyond GIN, by $\\omega$ characterized from higher-order statistics\" (line 173).\n\nThanks again for your interest in our proofs and intuitions!\n\n---\n\n[1] Shimizu, Shohei, et al. \"A linear non-Gaussian acyclic model for causal discovery.\" _Journal of Machine Learning Research_ 7.10 (2006).\n\n[2] Kummerfeld, Erich, and Joseph Ramsey. \"Causal clustering for 1-factor measurement models.\" _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_. 2016.\n\n[3] Salehkaleybar, Saber, et al. \"Learning Linear Non-Gaussian Causal Models in the Presence of Latent Variables.\" _Journal of Machine Learning Research_ 21 (2020): 39-1.\n\n[4] Anandkumar, Animashree, et al. \"Learning linear bayesian networks with latent variables.\" _International Conference on Machine Learning_. PMLR, 2013.\n\n[5] Halpern, Yoni, Steven Horng, and David Sontag. \"Anchored discrete factor analysis.\" _arXiv preprint arXiv:1511.03299_(2015).\n\n[6] Zhang, Kun, et al. \"Causal Discovery with Linear Non-Gaussian Models under Measurement Error: Structural Identifiability Results.\" _UAI_. 2018.", " This paper proposes an algorithm to learn an ordered group decomposition of the causal graph over variables $\\mathbf{X}$ which are not measured directly. In other words, each measured variable $\\tilde{X}$ is a proxy of a target variables $X_i$ such that $X_i = \\tilde{X}_i + E_i$, where $E_i$ is the measurement error. 
\n\nThe approach assumes the LiNGAM setting, i.e.:\n1) acyclic model\n2) causal sufficiency (no unmeasured confounders between the measured and also between the target variables), \n3) linear functions, and \n4) non-Gaussian error terms.\n\nFurther, it assumes a random measurement error model, where $X_i = \\tilde{X}_i + E_i$ and $E_i$ are additive errors assumed to be mutually independent and independent of $X_i$. \n\nSpecifically, the authors propose the Transformed Independent Noise (TIN) condition, which checks independence between a particular linear combination of some variables and others. This condition generalizes the current approaches IN and GIN and provides information about the graphical structure over the measured variables, even when there are measurement errors. The authors show that the ordered group decomposition of the causal model is identifiable in this setting.\n\nThe proposed techniques were evaluated both through simulations and an application to real data. \n\n The paper tackles an important and relevant problem. The considered assumptions are a bit restrictive. The parametric assumptions were well justified and motivated in the text and the theoretical contributions were illustrated through examples and seem solid. However, most of the proofs and intuitions were moved to the appendix, making the main part of the paper hard to read. \n\nA critical issue of the paper is that the assumption of causal sufficiency is not even mentioned in the paper. When describing LiNGAM in line 49, the authors say that\n\n“the generating process for $\\mathbf{\\tilde{X}}$ is linear, non-Gaussian, acyclic model (LiNGAM)”.\n\nThen, in line 249, the authors say: \n\n“ Assumption 1 is the only one we make besides LiNGAM throughout  the paper, where violation of Assumption 1 is of Lebesgue measure 0, and LiNGAM is testable.” \n\nThe causal sufficiency assumption is mentioned neither for LiNGAM nor for the proposed method. This is the strongest assumption and is NOT testable. Although it may be okay to make this assumption in this context, an appropriate justification must be provided. Further, it would be appreciated if some simple examples were provided illustrating what would happen if the causal sufficiency assumption is violated. \n\nDisclaimer: Apart from the discussion on causal sufficiency, the text is well-written and the problem is well-motivated. Examples also helped to understand the contributions. However, I didn't check the proofs in the appendix and I am not familiar with the previous methods (IN and GIN), so I cannot judge the novelty and soundness of the methods.\n No further questions. See sections above.\n\n", " In this paper, the authors study causal graph recovery among unobserved target variables from observations made with measurement error. The authors propose the Transformed Independent Noise (TIN) condition, which checks for independence between a specific linear transformation of some measured variables and certain other measured variables. By utilizing TIN, the ordered group decomposition of the causal model is identifiable. Experimental results on both synthetic and real-world data demonstrate the effectiveness and reliability of the method. Strengths: \n\n 1. The paper is well written with a good structure. Detailed illustrations and annotations make it easy to read. \n\n 2. The methodology proposed by this paper and the technical sections look solid and sound with technical details. \n\nWeaknesses:\n\n1. 
Experiments on more real-world datasets with measurement errors could make the paper stronger. \n\n2. Experimental comparison with other methods such as [1] and [2] on ordered group decomposition should be added.\n\n[1] Zheng, Xun, et al. \"Dags with no tears: Continuous optimization for structure learning.\" Advances in Neural Information Processing Systems 31 (2018). \n\n[2] Rolland, Paul, et al. \"Score matching enables causal discovery of nonlinear additive noise models.\" arXiv preprint arXiv:2203.04413 (2022).\n \n1. Is it possible to apply the method directly to causal structure recovery? The paper can be strengthened with additional real-world examples and experimental results.\n", " The authors propose a generalization of the existing causal discovery approaches for learning the structure between latent variables in a parametric linear additive non-Gaussian structural causal model - under the assumption that each latent node has exactly one measurement node that is a child. + The exposition is very clear. The authors closely follow several examples and do it well. \n\n+ An important problem with some recent results. This work seems to generalize these for another special case of measurement graphs. \n\n- Presentation gets overly succinct in Section 5.\n\n- Some concerns with the theory. \n\n- Some parts of the algorithmic approach are unclear. \n\n- Emoji-based writing in the intro makes it hard to parse the text/the story. Thank you for your submission. I will be happy to increase my score after clarifications of the points below. \n\n\"necessarily being one, all results in this paper still hold\"\nI guess this depends on the downstream objective. This might not always be a valid assumption. \n\nAn example/insight for Prop. 1 would be useful.\n\n\"the GIN condition can be readily used to fully identify the structure of G, which is already a breakthrough over existing methods\"\nCan you elaborate on this? This was not identified by the previous authors? They only learned the partial graph? This statement is missing some context.\n\nCould you elaborate a bit on the recursion idea that is being hinted at for the GIN paper and that I believe will be used here as well? If the source node is discovered, one can get rid of its effect by conditioning on it. But here we only have access to its descendant. So we cannot condition. How can one recurse after the source is found?\n\nline172: \"while this is impossible on \\tilde{G}\"\nThis and other impossibility claims seem hard to check. Could you give some intuition on which mathematical tool can be used to verify this impossibility claim?\n\nAncestors in Theorem 1 probably mean exogenous ancestors.\n\nAssumption 1 seems reasonable as an alternative formulation of faithfulness. \n\nThe method requires a precise measurement model, which is a disadvantage, correct? If an observed node is caused by two latent variables, this will not work. Please verify.\n\nCould you please explain the proof of Theorem 2? I am a little concerned with the mapping to the max-flow min-cut formulation. The proof connects through the B matrix, which is weighted - each edge has a weight potentially different from 1. The max-flow min-cut connection is valid for weighted or non-weighted graphs. But using the B matrix tells me that you are using the weighted version. But then we are making judgements about the rank of matrices, which is typically not about the amount of max-flow determined by weighted edges. 
\n\nSpecifically the statement:\n\"the maximum amount of non-intersecting paths from source to sink is equal to the size of the minimum vertex cut from source to sink.\"\nMaybe I am not familiar with this version because if flow is on an edge weighted graph, I thought cut should also be counted as edge weighted.\n\nTheorem 3 should require some sort of guarantee that the measurement does not lose information. Can you formalize this?\n\nMy main concern is about finding the correct value \\omega for conducting the transformed independence tests. I also don't see how this statement follows from Theorem 4:\n\"This transforms the task of estimating the dimension of \\Omega(Z;Y) to a simpler one: counting size of the subsets Y0.\"\nOne still needs to search for existence of a w in these subsets.\n\nSection 5 in general is too short and is very hard to follow without any explanation. How do we find w_YY in (11)?\n\nline 367:\" then check whether all solution\"\nHow can we efficiently check all solutions if there are uncountably many? Please explain.\n More like future work than the limitations and critique. Could be improved." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "uqNkHfH43R4", "nips_2022_wOUH1VQ9Rcj", "Buk3zAOY4Iy", "nRsnyraVvXw", "KYTwdET-oL4", "9DWllf1-7Oe", "p-R24Ww0rMt", "KTiX5jVnXTd", "Buk3zAOY4Iy", "tY2DqqBEXr", "tY2DqqBEXr", "tY2DqqBEXr", "uqNkHfH43R4", "Bqd-ckTb1Nu", "yfcp09bj2sA", "nips_2022_wOUH1VQ9Rcj", "nips_2022_wOUH1VQ9Rcj", "nips_2022_wOUH1VQ9Rcj" ]
nips_2022_AUz5Oig77OS
Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models
During image editing, existing deep generative models tend to re-synthesize the entire output from scratch, including the unedited regions. This leads to a significant waste of computation, especially for minor editing operations. In this work, we present Spatially Sparse Inference (SSI), a general-purpose technique that selectively performs computation for edited regions and accelerates various generative models, including both conditional GANs and diffusion models. Our key observation is that users tend to make gradual changes to the input image. This motivates us to cache and reuse the feature maps of the original image. Given an edited image, we sparsely apply the convolutional filters to the edited regions while reusing the cached features for the unedited regions. Based on our algorithm, we further propose Sparse Incremental Generative Engine (SIGE) to convert the computation reduction to latency reduction on off-the-shelf hardware. With 1.2%-area edited regions, our method reduces the computation of DDIM by $7.5\times$ and GauGAN by $18\times$ while preserving the visual fidelity. With SIGE, we accelerate the inference time of DDIM by $3.0\times$ on RTX 3090 and $6.6\times$ on Apple M1 Pro CPU, and GauGAN by $4.2\times$ on RTX 3090 and $14\times$ on Apple M1 Pro CPU.
Accept
This paper was a close call. One reviewer was of the opinion that the paper lacked significant innovations other than fairly obvious sparse processing tricks to make local edits faster. This reviewer did not change their opinion (Borderline Reject) post-rebuttal. Of the other two reviewers, one was at Borderline Accept and the other at Weak Accept. After reading the paper, I agree with some of the first reviewer's comments on a number of technically-obvious contributions. However, I also believe these contributions are valuable from a practical perspective. Furthermore, the code release (as promised by the authors) will be valuable to the community. Therefore, I recommend acceptance.
train
[ "rWTWydGgaw", "e4QLEjG9WQ", "b7B-8CQrkzi", "fOdYYZh-eI", "vRSdHWuv0tG", "QRWdS8ZiWk", "lGy0nX-u4gp", "0QxGREJx-jf", "GvMCGqQAEQW", "mbcEGReMDZ7", "cNv5wyJI6Be", "4SWB61qGiU_", "fefQ6Z09uJE", "3k3NZqlNbo" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your additional comments. In our general response, we highlight our novelty and differences between our work and SBNet. Please do not hesitate to contact us if you have additional questions. Thank you for your time again!", " Thanks for your additional comments. In our general response, we highlight our novelty and differences between our work and SBNet. We also include more results to demonstrate that our method works well for sequential editing. Please do not hesitate to contact us if you have additional questions. Thank you for your time again!", " ## Novelty\n\nThanks for your additional comments. We highlight our novelty and differences between our work and SBNet as follows:\n\n* High-level ideas are different: Our novelty is **NOT** to propose a new tiling-based sparse convolution but provide a new technique for accelerating deep generative image editing. Our key idea is to reuse the original image features to update the edited images as they are quite similar. To the best of our knowledge, previous work on generative model acceleration mainly focuses on reducing model sizes (e.g., channel pruning and quantization), but few of them explore the sparsity inside the activations. We are the first to uncover the spatial sparsity between the original and edited features and leverage it for further speedup. \n* Performances are different. Directly using SBNet yields poor performance (only $1.6\\times$ faster than the baseline) as it mainly targets recognition networks. Our engine is co-designed with the generative networks and our algorithm and is $1.8\\times$ faster than SBNet and $2.9\\times$ faster than the baseline.\n* Applications are different: Our work tackles interactive editing with generative models, while original SBNet was used in 3D object detection. \n\n## Our method works well for sequential editing\n\nThanks for your additional comments. In this [figure](https://ibb.co/b5f3Cdg) (or Figure 12 in our revised supplementary materials), we show the results of sequential editing with the following methods:\n\n* Full Model: the results with the full model. \n* One-time pre-computation: we only pre-compute the original image features for all the editing steps.\n* Incremental pre-computation: we incrementally update the pre-computed features with SIGE before the next editing step.\n\nSpecifically, One-time pre-computation performs as well as the full model, demonstrating that our method can be applied to multiple sequential editing with only one-time pre-computation in most cases. Moreover, for extremely large edited regions, we could use SIGE to incrementally update the pre-computed features (Incremental Pre-computation) and condition the later editing on the recomputed one. Its results are also as good as the full model. Therefore, our method could well address the sequential editing. We have included the results in our revision (see Section D and Figure 12 in the supplementary materials).", " Thank the authors for their reply. My concerns on dilation are resolved but I am still not fully convinced by the originality and sequential editing part of the paper. I hope to see other reviewers' opinions in the discussion period before making the final decision.", " I appreciate that the authors took the time to carry out the analysis I suggested. I think the authors answered my question well. But I read other reviewers' questions, and I find that the method is similar to the SBNet. 
I will participate in the discussion and give the final decision.", " Thanks again for your insightful comments. \nIn our previous response, we have added additional experiments and analyses accordingly to your suggestions. Please do not hesitate to contact us if you have additional questions. Thank you for your time again!\n\nBest, Authors", " Dear AC and all reviewers:\n\nThanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of the paper. Please don’t hesitate to let us know if there are any additional clarifications or experiments that we can provide.\n\nBest, Authors", " ### Originality\nWe respectfully disagree with the review’s opinions that our paper is incremental and engineering. In our general response, we clarify and highlight the novelty of our algorithm and engine.\n* Our key insight is NOT the spatial correspondence between the features of convolutional layers and generated images. Our main insight is to selectively update the local edited regions instead of the whole image to accelerate generative image editing. To achieve this, we pre-compute the feature maps and reuse them for updating the edited regions sparsely. To the best of our knowledge, this is a new contribution to efficient deep image editing, which can be applied to a wide range of generative models (e.g., GANs and diffusion models) and could be of great interest to a broad audience.\n* Our engine optimizations are non-trivial and not just engineering. They are co-designed with our algorithm to better adapt the SBNet for generative image editing. Directly using SBNet as our engine yields poor performance, as it was originally designed for recognition tasks. As shown in Table 3 of the paper, it only has a $1.6\\times$ speedup even though the computation reduction is $9.1\\times$. Our normalization parameter pre-computation also reuses the original image results from our algorithm and enables us for further kernel fusion. With these optimizations, we achieve $2.9\\times$ latency reduction, which is $1.8\\times$ faster than the original SBNet. This shows that our engine optimizations contribute more significantly to our final performance.\n\n### Sequential editing does not dilute our contribution\nThanks for the discussion. Our method can be applied to multiple sequential editings with only **one-time** pre-computation. In many cases, the editing will not change the global context (e.g., adding two trees with overlaps). Therefore, our method could speed up updating all the edited regions with the context of original images while not losing visual fidelity using just a one-time feature pre-computation. \n\nMoreover, we can further perform full feature recomputation during the idle time (happens a lot in a real editing setting (e.g., AnyCost GAN [59])) to reset the error accumulations and condition the later editing on the current feature maps, which can lead to a better speed up and quality for future editings.\n\n\n### Dilation hyper-parameters\nAs suggested by the reviewer, we show the results of different dilation sizes on GauGAN in this [figure](https://anonymous.4open.science/r/sige-neurips22-rebuttal-937C/dilation.png) (or Figure 10 in our revised supplementary materials). Large dilation could slightly improve the image quality by smoothly blending the edited and unedited regions at the cost of extra computation. Specifically, the shadow boundary of the added car fades when the dilation is 20. 
We choose dilation 1 in our experiments since the image quality is almost the same as 20 while delivering the best speed. We have included the results in our revision (see Section D and Figure 10 in the supplementary materials).\n\n### Reference\n\nThanks for pointing out the references. We have cited these works in our revision (see Section 2 Generative Models).\n\n### Minor & typos\n\nThanks for pointing them out. We have fixed them in our revisions.\n\nWe hope our response has resolved all of your concerns. Please let us know what other experiments or clarifications we can offer to convince you to increase the rating.", " ### Novelty\nIn our general response, we highlight the contribution and novelty of our algorithm and engine. Our key insight is that the original and edited images are quite similar. Therefore, we can reuse the original image results to accelerate deep generative image editing. We believe this high-level idea is a new contribution to the field of generative models. To implement this idea, we leverage the spatial sparsity inside the edited image when reusing the original image feature maps. Directly using existing tiling-based sparsity methods such as SBNet only yields small gains, as it was primarily designed for recognition tasks. On the contrary, our algorithm-engine co-design is tailored for generative models and outperforms SBNet by $1.8\\times$.\n\n### Why $\\Delta A_l$ is divided into blocks:\nThis is because a $3 \\times 3$ convolution needs to operate on at least a $3 \\times 3$ region, so we could not divide $\\Delta A_l$ into pixels. For $3\\times3$convolution, we include additional overlaps for the divided blocks to ensure that the output blocks of the adjacent input blocks can be seamlessly stitched together.\n\n### Optimizing baselines\nThanks for your suggestion. The \"patch\" and \"crop\" baselines did not use pre-computing normalization and kernel fusion in the original paper. We did not optimize these baselines, as they didn’t work well or generalize to arbitrary editing regions. The image quality of \"Patch\" baseline already degrades for small editing regions. The \"Crop\" baseline incurs redundant computations for irregular regions (e.g., the third cloud example in Figure 5). \nFollowing reviewers’ suggestions, we further optimized the \"Crop\" baseline by adopting pre-computed normalization (see the following table or Table 2 in the revised paper). We didn’t use kernel fusion as the crop baseline is based on highly-optimized PyTorch implementation. Our method still consistently outperforms this optimized baseline, especially when the editing is irregular (e.g., the third cloud example in Figure 5). 
We have clarified and updated the results in our revision (see Table 5 and Line 257-259).\n\n\n| Editing Size | Method | MACs | 3090 | 2080Ti | Intel Core i9-10920X | Apple M1 Pro |\n| :----------- | -------- | ------------------- | -------------------- | -------------------- | ---------------------------- | -------------------- |\n| -- | Original | 248G | 37.5ms | 54.6ms | 609ms | 12.9s |\n| 1.20% | Crop | 32.6G (7.6$\\times$) | 15.5ms (2.4$\\times$) | 29.3ms (1.9$\\times$) | 185ms (3.3$\\times$) | 1.85s (6.9$\\times$) |\n| | Ours | 33.4G (7.5$\\times$) | 12.6ms (3.0$\\times$) | 19.1ms (2.9$\\times$) | 147ms (4.1$\\times$) | 1.96s (6.6$\\times$) |\n| 7.19% | Crop | 54.4G (4.6$\\times$) | 17.3ms (2.2$\\times$) | 26.5ms (2.1$\\times$) | 220ms (2.8$\\times$) | 2.98s (4.3$\\times$) |\n| | Ours | 51.8G (4.8$\\times$) | 15.5ms (2.4$\\times$) | 22.1ms (2.5$\\times$) | 223ms (2.7$\\times$) | 3.23s (4.0$\\times$) |\n| 15.5% | Crop | 155G (1.6$\\times$) | 30.5ms (1.2$\\times$) | 44.5ms (1.2$\\times$) | 441ms (1.4$\\times$) | 8.09s (1.6$\\times$) |\n| | Ours | 78.9G (3.2$\\times$) | 19.4ms (1.9$\\times$) | 29.8ms (1.8$\\times$) | 304ms (2.0$\\times$) | 5.04s (2.6$\\times$) |\n\n### Reference\nThanks for pointing out the reference. We have cited this work and discussed it in our revision (see Section 2 Sparse Computation).\n\n### Typos\nThanks for pointing them out. We have revised them accordingly. \n\nWe hope our response has resolved all of your concerns. Please let us know what other experiments or clarifications we can offer to convince you to increase the rating.", " ### Effectiveness for large editing\nAs suggested by the reviewer, we show the results of large editing (around 35%) in the following table and this [figure](https://anonymous.4open.science/r/sige-neurips22-rebuttal-937C/large-editing.png) (or Table 4 and Figure 11 in our revised supplementary materials). Specifically, we could achieve up to $1.7 \\times$ speedup on DDIM, $1.5 \\times$ speedup on PD256, and $1.7 \\times$ speedup on GauGAN without losing visual fidelity. Furthermore, in many practical cases, users can decompose a large edit into several small ones. Our method could incrementally update the results when the edit is being created. We have included the results in our revision (see Section D, Table 4, and Figure 11 in the supplementary materials).\n\n| Model | Editing Size | Method | MACs | 3090 | 2080Ti | Intel Core i9 | Apple M1 Pro |\n| ------ | ------------ | -------------- | ------------------- | -------------------- | -------------------- | -------------------- | ------------------- |\n| DDIM | -- | Original | 249G | 37.5ms | 54.6ms | 609ms | 12.9s |\n| | 32.9% | Ours | 115G (2.2$\\times$) | 26.0ms (1.4$\\times$) | 36.9ms (1.5$\\times$) | 449ms (1.4$\\times$) | 7.53s (1.7$\\times$) |\n| PD256 | -- | Original | 119G | 35.1ms | 51.2ms | 388ms | 6.18s |\n| | 32.9% | Ours | 64.3G (1.9$\\times$) | 25.3ms (1.4$\\times$) | 35.1ms (1.5$\\times$) | 334ms (1.2$\\times$) | 4.47s (1.4$\\times$) |\n| GauGAN | -- | Original | 281G | 45.4ms | 49.5ms | 682ms | 14.1s |\n| | -- | GAN Comp. 
| 31.2G (9.0$\\times$) | 17.0ms (2.7$\\times$) | 25.0ms (2.0$\\times$) | 333ms (2.1$\\times$) | 2.11s (6.7$\\times$) |\n| | 38.7% | Ours | 148G (1.9$\\times$) | 27.9ms (1.6$\\times$) | 41.7ms (1.2$\\times$) | 512ms (1.3$\\times$) | 8.37s (1.7$\\times$) |\n| | 38.7% | GAN Comp.+Ours | 18.3G (15$\\times$) | 15.3ms (3.0$\\times$) | 22.2ms (2.2$\\times$) | 169ms (4.0$\\times$) | 1.25s (11$\\times$) |\n\n### Edited region indexing\nThanks for your question. We downsample the difference mask to different resolutions and dilate the downsampled mask with extra pixels (1 for diffusion models and 2 for GauGAN). For each convolution inside the network, we use the difference mask at the corresponding resolution to index the active blocks. For $3 \\times 3$ convolution, we include extra overlaps (2) for the indexed active blocks to ensure that the output blocks of the adjacent input blocks can be seamlessly stitched together. We don’t use overlaps for $1\\times1$ convolutions. We have clarified this in our revision (see Section 3.2).\n\nWe hope our response has resolved all of your concerns. Please let us know what other experiments or clarifications we can offer to convince you to increase the rating.", " We sincerely appreciate all reviewers' efforts for the insightful and thoughtful comments. We are glad that the reviewer recognized the following strengths.\n\n* Motivation & Contribution: The motivation of reusing the original image activations to update the sparse edited regions is reasonable and intuitive (reviewer 2N5G and ckKG). Our work is of high interest to the community (reviewer ckKG).\n* Experiments: The experimental studies are thorough across different models, datasets, and devices to show the method’s effectiveness (reviewer 2N5G, ckKG, and oeJz).\n* Presentations: The paper writing is generally clear and easy to follow (reviewer ckKG and oeJz). \n\nIn addition to the pointwise responses below, we first clarify and emphasize our contribution and novelty and then summarize the major changes in our revision:\n\n1. Contribution & novelty\n * Our high-level idea is to selectively update the local edited regions instead of the whole image to accelerate generative image editing. To achieve this, we pre-compute the feature maps and reuse them for updating the edited regions sparsely. To the best of our knowledge, this is a new contribution to efficient deep image editing, which can be applied to a wide range of generative models (e.g., GANs and diffusion models) and could be of great interest to a broad audience. \n * To achieve our high-level idea, we propose engine optimizations that are tailored for generative models and co-designed with our algorithm. Directly applying existing techniques (e.g., SBNet) fails to yield significant speedup. In contrast, our engine optimization significantly outperforms SBNet by $1.8\\times$. \n\n2. Revision summary. \n We made the following revisions to our manuscript to address the reviewers’ comments:\n * As suggested by reviewer oeJz, we include the ablation of different dilation sizes in Section D of the supplementary materials. Large dilation could slightly improve the image quality by smoothly blending the edited and unedited regions at the cost of extra computation.\n * As suggested by reviewer 2N5G, we include the results of large editing in Section D of the supplementary materials. 
Our method could achieve up to $1.7\\times$ speedup without losing visual fidelity for the $\\sim35$% editing.\n * As suggested by reviewer ckKG, we clarify the experimental setting and update the results of the “Crop” baseline in Section 4.1. Our method could still beat the optimized baseline. \n * As suggested by reviewer ckKG and oeJz, we include and discuss the additional related works in Section 2.", " This paper produces a speedup technique, spatially sparse inference, for the image manipulation method. The SIGE significantly reduces inference time and does not harm the performance of the original network. Strengths:\n1. The motivation is reasonable. Finding the edit regions and making the inference process focus on the edit regions, this strategy can reduce the time cost intuitively.\n2. The experiments are sufficient. Table 2 can prove the ability of SSI well.\n\nWeaknesses:\nSee the questions.\n 1. How about performance when editing regions are large(over 30%)?\n2. How to get the correct index of editing regions. In my opinion, since the convolution process, the difference mask in Fig.3 can not represent the editing regions in the deep level of network. See the questions.", " This paper proposes spatially sparse inference (SSI) to accelerate interactive generative image editing. The key idea is to keep a backup of the original features and only perform computation for edited regions, which is usually sparse in the interactive setting. The paper also introduces some implementation improvements to further reduce the computational overhead, including pre-computing normalization parameters and kernel fusion. Experiments on several pipelines, datasets, and devices show that the proposed method can drastically reduce the computation and running speed, and outperforms other pruning or patch-based baselines. Ablation studies are conducted to verify the effectiveness of each proposed module. Strengths:\n\n1. This paper takes the first attempt to apply region-specific computation to generative image editing. It makes use of the sparse nature of interactive image editing and avoids redundant computation in the unedited regions, which is very reasonable. As interactive image editing is a broadly studied application scenario and the running speed is important, I believe this work is of high interest to the community.\n\n2. To improve practical computing efficiency, the paper proposes pre-computing normalization parameters and kernel fusion. These implementation improvements effectively reduce computational overhead and accelerate inference in practice.\n\n3. Experimental studies are thorough. The proposed method is evaluated on several different pipelines, datasets, and devices, and its advantages over baselines are clearly demonstrated.\n\n4. The writing is generally clear and easy to follow.\n\nWeaknesses:\n\n1. The main weakness is the novelty, as the key idea of spatially sparse inference is not new and has been widely used before, e.g., [67, 72, 73, 74]. There is also a missing related work that performs region-specific computation: \"Not All Pixels Are Equal: Difficulty-Aware Semantic Segmentation via Deep Layer Cascade\". In particular, the proposed method is mainly built based on SBNet [67]'s sparse kernel implementation. The differences in terms of how mask is derived and other implementation improvements are not very significant. \n\nThe following weaknesses are minor:\n\n2. At line 130, it is not explained why $\\Delta A_l$ is divided into blocks instead of pixels.\n\n3. 
Do the \"patch\" and \"crop\" baselines use pre-computing normalization and kernel fusion? Please clarify this. \n\nTypos: \nLine 251: could saves -> could save \nLine 262: the activations -> activations 1. Considering that the key idea of spatially sparse inference is not new and that the key spatially sparse computing module is based on SBNet [67], the novelty of the proposed method is not significant to me. Authors may highlight their novelty.\n\n2. Please explain why $\\Delta A_l$ is divided into blocks instead of pixels (line 130).\n\n3. Do the \"patch\" and \"crop\" baselines use pre-computing normalization and kernel fusion? Please clarify this. Limitations and potential negative societal impact have been properly discussed.", " This paper proposed the Sparse Incremental Generative Engine (SIGE), which consists of several tricks (tiling-based sparse convolution, the use of pre-computing normalization parameters, and kernel fusion) to accelerate local image edits of generative models with little sacrifice in image quality. Strength:\n- (Clarification) This paper is well-written and easy to follow.\n- (Quality) The evaluation of latency reduction on various hardware is interesting and useful. \n\nWeakness:\n- (Originality) This paper is a bit incremental and engineering: i) the key insight is straightforward and not surprising as the spatial correspondences between features of convolutional layers and generated images are well-known. ii) the proposed three tricks in SIGE are mostly engineering ones and a bit thin. Although they are given good names, they are a bit trivial without significant technical contributions, e.g. the core idea of tiling-based sparse convolution is mostly borrowed from SBNet with an incremental improvement trick, the use of pre-computing normalization parameters is a simple approximation that can be explained in a single sentence, the kernel fusion is mostly engineering tricks that only perform computation in the edited region and the copying overhead.\n- (Significance) The significance of the proposed method could be less than expected as it has an important limitation: its benefits may not hold for overlapping editing regions in a sequence of edits, which is a common case in the real world. In this case, the pre-computation stage is no longer one-time (i.e. the generated image may need to be pre-computed again before the next edit) and can make the proposed method less useful.\n- (Quality) The dilation hyper-parameters could be important to the final results and should be discussed in more depth.\n\nMissing References:\n\nFor \"Generative models\" in section 2, there are missing references for image-to-image translation and real image editing:\n\n[1] Abdal, R., Qin, Y. and Wonka, P., 2019. Image2stylegan: How to embed images into the stylegan latent space?. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 4432-4441).\n\n[2] Abdal, R., Qin, Y. and Wonka, P., 2020. Image2stylegan++: How to edit the embedded images?. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8296-8305).\n\n[3] Zhu, P., Abdal, R., Qin, Y. and Wonka, P., 2020. Sean: Image synthesis with semantic region-adaptive normalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5104-5113).\n\nMinor/Typo:\n- Line 131: missing \".\"\n- Table 1 caption: \"LIPS\" -> LPIPS\n- In Table 1, the bold fonts are not always the best results, e. g. GAN Comp. LPIPS with G.T. 
is better than the proposed method. Please address the comments in the Weakness section. Please see the Weakness section." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "vRSdHWuv0tG", "fOdYYZh-eI", "lGy0nX-u4gp", "QRWdS8ZiWk", "mbcEGReMDZ7", "0QxGREJx-jf", "cNv5wyJI6Be", "3k3NZqlNbo", "fefQ6Z09uJE", "4SWB61qGiU_", "nips_2022_AUz5Oig77OS", "nips_2022_AUz5Oig77OS", "nips_2022_AUz5Oig77OS", "nips_2022_AUz5Oig77OS" ]
nips_2022_5kThooa07pf
Subsidiary Prototype Alignment for Universal Domain Adaptation
Universal Domain Adaptation (UniDA) deals with the problem of knowledge transfer between two datasets with domain-shift as well as category-shift. The goal is to categorize unlabeled target samples, either into one of the "known" categories or into a single "unknown" category. A major problem in UniDA is negative transfer, i.e. misalignment of "known" and "unknown" classes. To this end, we first uncover an intriguing tradeoff between negative-transfer-risk and domain-invariance exhibited at different layers of a deep network. It turns out we can strike a balance between these two metrics at a mid-level layer. Towards designing an effective framework based on this insight, we draw motivation from Bag-of-visual-Words (BoW). Word-prototypes in a BoW-like representation of a mid-level layer would represent lower-level visual primitives that are likely to be unaffected by the category-shift in the high-level features. We develop modifications that encourage learning of word-prototypes followed by word-histogram based classification. Following this, subsidiary prototype-space alignment (SPA) can be seen as a closed-set alignment problem, thereby avoiding negative transfer. We realize this with a novel word-histogram-related pretext task to enable closed-set SPA, operating in conjunction with goal task UniDA. We demonstrate the efficacy of our approach on top of existing UniDA techniques, yielding state-of-the-art performance across three standard UniDA and Open-Set DA object recognition benchmarks.
Accept
This submission deals with universal domain adaptation for object recognition. The authors propose to extend existing strategies with an original and effective complementary strategy, thus achieving SOTA performance in this context. Their first proposal aims to align domains while avoiding the risk of negative-transfer, working in the Bag-of-visual-words space. Their second proposal is a new pretext task which seeks to predict the number of crops in images stitched from a varying number of random image crops. This should favor prototype-alignment. This submission received diverging ratings. Reviewers have raised several concerns, to which the authors have provided detailed answers. The reviewers appreciated the answers and the additional experiments provided. Following the discussions, the final scores of the reviewers have increased and are clearly positive on this submission, on the express condition that all the improvements discussed are integrated in a very careful way. The AC agrees that the strengths in this case outweigh the weaknesses, but strongly recommends that all the improvements are fully reflected in the final version.
train
[ "oFUkIf2C8Mz", "xUJ9-ffo2Lf", "AzH0LYJA4pp", "EOnkqI0Q2Xx", "RwR6AwK7LvI", "XiJKydjMyyT", "KTuY23ziFE", "Rka9DO0a28F4", "f1vBAiKrTl", "fJ5hHKyQi4", "TH2NZ5AtnNq", "xiJvVJFws9o", "8jxNnJDPad" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. This clarifies my question on NTR and Fig 2. It would be good to improve the current presentation of NTR/Fig.2 by incorporating this clarification discussion. \n\nI appreciate the efforts of adding extra experiments on additional ablation and backbone. However, in my opinion, the rebuttal is for clarifying the misunderstanding from reviewers, not extensively improving the experiments. To this end, I'm comfortable only raising my rating to borderline reject.", " I thank the authors for the clear answers. My concerns have been addressed.\nS37P", " Thanks for the clarification. Most of my concerns are resolved by the additional evidence. I am increasing my rating.", " Thanks for your constructive feedback, we will update Fig. 2 as per your suggestions to make it more clear. We address the remaining concerns below.\n\n* **[R-W1, Q2] Have the authors validated that aligning res3 feature does not cluster all the private class samples into one cluster?** \n * To further support our argument that private class samples are better clustered with our approach, we apply linear evaluation protocol for the target-private classes on the word-prototype features $\\phi(x)$. Here, we use the labels of target-private classes (only for this analysis) to train a linear classifier on the frozen features from $\\phi(x)$ and compute the accuracy for target-private classes.\n * Due to time constraints, we only show two settings of UniDA on Office-Home in the table below. However, we observe a significant gain (+5.62%) in the target-private accuracy with linear evaluation, which indicates that the target-private classes are better clustered with our proposed method.\n\n| Target-Private Acc. w/ Linear Evaluation | Ar$\\to$Cl | Cl$\\to$Pr | Avg. Gain |\n|---------------------|:---------:|:---------:|:---------:|\n| OVANet + arch-mod | 70.72 | 80.76 | |\n| OVANet + Ours (all) | 74.91 | 87.81 | **+5.62** |\n\n* **[R-W2] Evidence that alignment of res3 and word-prototype layer are more significant than that of deeper layers**\n * We apologize for the confusion caused by our response. Compared to a baseline without our architecture modification and EM and pretext losses, the res4-like conv layers (same architecture as res4 in the baseline) would process the word-histogram features instead of the usual image features.\n * We apologize for not giving concrete numbers. We provide the same corresponding to an updated Fig. 2 in the two tables below. Even when deeper layers are not updated with our proposed pretext and EM losses, the post-adapt NTR and DIS are only marginally worse (relative to the pre-adapt NTR and DIS). \n * Note that we require lower DIS and lower NTR layer for adaptation in order to improve the DIS while further reducing the NTR.\n\n| Layer | NTR (pre-adapt) | NTR (post-adapt) | NTR (post-adapt w/o updating deeper layers)|\n|--|:--:|:--:|:--:|\n| Res1 | 44.2 | 34.8 | 35.5 |\n| Res2 | 51.3 | 38.2 | 39.1 |\n| Res3 | 66.7 | 54.4 | 55.6 |\n| Res4 | 70.1 | 59.4 | 60.0 |\n| FC | 70.3 | 61.9 | 62.8 |\n\n\n| Layer | DIS (pre-adapt) | DIS (post-adapt) | DIS (post-adapt w/o updating deeper layers) | \n|--|:--:|:--:|:--:|\n| Res1 | 48.0 | 56.7 | 55.9 |\n| Res2 | 40.3 | 45.9 | 44.5 |\n| Res3 | 23.8 | 30.6 | 29.9 |\n| Res4 | 14.5 | 29.1 | 27.7 |\n| FC | 13.2 | 29.0 | 27.4 |\n\n* **[R-W3] Confusions with rebuttal ablation table with rotation pretext task**\n * We apologize for the confusion caused by our response. 
However, we had mentioned just above the table in response-[W3], \"Note that both pretext task experiments have the mid-level SPA layer as well as the EM loss (the same as in Table 6)\".\n * We repeat the updated rebuttal table below for clarity. Overall, we observe an average 1% improvement over rotation, even on the large-scale DomainNet benchmark.\n * We also give the ablation where architecture modification and EM loss are not used and observe a drop in performance compared to when they are used. Even here, we observe an average 0.9% gain over rotation. DomainNet results are not reported in this case due to time constraints.\n\n| | OSDA (Office-Home) | UniDA (Office-Home) | UniDA (DomainNet) |\n|------|:-----:|:-----:|:-----:|\n| OVANet | 63.6 | 71.8 | 50.7 |\n| OVANet + arch-mod + EM loss + rotation | 64.3 (+0.7) | 72.4 (+0.6) | 51.1 (+0.4) |\n| OVANet + arch-mod + EM loss + our pretext task | **65.5 (+1.2)** | **73.2 (+0.8)** | **52.2 (+1.1)** |\n\n| | OSDA (Office-Home) | UniDA (Office-Home) |\n|------|:-----:|:-----:|\n| OVANet | 63.6 | 71.8 |\n| OVANet + rotation (w/o arch-mod & EM loss) | 63.9 (+0.3) | 72.0 (+0.2) |\n| OVANet + our pretext task (w/o arch-mod & EM loss) | **64.9 (+1.0)** | **72.8 (+0.8)** |\n\n* Thanks for your invaluable questions and comments, which have strengthened our submission. Please let us know if any further clarifications are required, as today is the last day for author-reviewer discussions.", " [W1,Q2]\nThanks for the clarification. The main confusion on Fig. 2 was because it requires quite a mental load to interpret. In my understanding, we need to align layers with lower domain invariance and lower negative transfer risk. This is different from typical trade-off figures. It would be much easier for readers if Fig. 2 is revised. E.g., NTR ($\\downarrow$), DIS ($\\downarrow$). However, the bigger concern still remains. Have the authors validated that aligning res3 feature does not cluster all the private class samples into one cluster? The evidence of xx percent improvement is weak and indirect.\n\n[W2]\nI do not see any evidence here. I do not understand what “res4-like conv-layers” are. I cannot judge the claim “only slightly worse than the post-adapt curves in Fig. 2, indicating that alignment of res3 and word-prototype layer are more significant than that of deeper layers” as no numbers or figures are provided.\n\n[W3]\nIn Table 3 of the main paper, “all (SPA)”, not “OVANet + our pretext”, shows 65.5% OSDA, and 73.2% UniDA performance on the OfficeHome. However, in the rebuttal, the authors show 65.5% and 73.2% are OVAnet + our pretext task. It is really confusing. Which numbers should we trust? If we trust the rebuttal numbers, L_{em} does not contribute at all on top of the proposed pretext task. If we trust the main paper numbers, the proposed pretext task does not improve much compared to the rotation (64.9% vs. 64.3% and 72.8% vs. 72.4%) which confirms my original comment (W3. the proposed pretext task improves only 0.4 ~ 0.6 points compared to the rotation task).\n\n[Q1, Q3, Q5, Limitations and social impact]\nThanks for the clarification. My concerns are resolved.", " We thank all the reviewers for their interesting questions and constructive feedback. We have carefully addressed each point raised in the reviews and hope that our response is satisfactory. 
We humbly request you to go through our responses and please let us know if any further clarifications are required.", " We thank the reviewer for the constructive, detailed and insightful feedback. We appreciate that the reviewer finds our work intuitively reasonable with a challenging and interesting problem setting and favorable results. We address the reviewer's concerns below.\n\n* **[W1, Q2] Why not align res1 and res2? Should we align the layers with low DIS more?**\n * While the DIS of res1 and res2 is higher than that of res3, we prefer adaptation of layers with lower DIS because we intend to improve the DIS of those layers. Further, aligning res1 and res2 may be trivial as they are already fairly aligned (as indicated by their higher domain-invariance-score DIS) and the deeper layers would remain unaligned.\n * Based on the above argument, it may seem better to align the deepest layer res4 due to its lowest DIS. However, we need to consider the negative-transfer-risk for the category shift as aligning the highly class-specific (high NTR) deeper layers may lead to misalignment. Thus, we choose a mid-level layer res3 as a compromise between DIS and NTR (considering domain-shift and category-shift respectively).\n\n* **[W2] Pretext task aligns deeper layers?**\n * We thank the reviewer for pointing this out. While the deeper layers are contained within $\\psi$, they are not the same as the baseline deeper layers but rather are processing the word-histogram features. We will update it in the revised draft as \"res4-like conv-layers\" instead of res4.\n * To further support our hypothesis, we repeated the Fig. 2 post-adapt analysis where only res3 and word-prototype layer $V$ is updated with the pretext task losses and EM loss (while other losses are as in Eq. 8). We observed an almost similar trend as before, only slightly worse than the post-adapt curves in Fig. 2, indicating that alignment of res3 and word-prototype layer are more significant than that of deeper layers. We will update Fig. 2 and corresponding text in the revised draft.\n\n* **[W3] Is BoW-style alignment optimal?**\n * As mentioned in L216-220, only using the EM loss may be susceptible to word-level misalignment which leads to lesser gains. Our proposed pretext task helps avoid this misalignment.\n * The last two rows of Table 5 show that adding the EM loss and BoW-style mid-level features along with the pretext task is better (1.4% average gains for DCC+SPA) than simply using the pretext task.\n\n* **[W3] Proposed pretext task only improves 0.4-0.6 points over rotation**\n * We respectfully disagree as the gains of our pretext task over rotation are 1% (average) on Office-Home as shown in Table 6 and repeated in below table. Note that `(+x.x)` indicates gains over the previous row.\n * Further, we show an ablation of rotation pretext and our pretext task on DomainNet below. As DomainNet is a large-scale and challenging benchmark, the gains of our pretext task over rotation signify its efficacy. 
Note that both pretext task experiments have the mid-level SPA layer as well as the EM loss (the same as in Table 6).\n\n| | OSDA (Office-Home) | UniDA (Office-Home) | UniDA (DomainNet) |\n|:------:|:-----:|:-----:|:-----:|\n| OVANet | 63.6 | 71.8 | 50.7 |\n| OVANet+rotation | 64.3 (+0.7) | 72.4 (+0.6) | 51.1 (+0.4) |\n| OVANet+our pretext task | **65.5 (+1.2)** | **73.2 (+0.8)** | **52.2 (+1.1)** |\n\n* **[Q1] Would high NTR reduce class-level misalignment?**\n * See our common response to reviewers.\n\n* **[Q3] How to train a linear classifier to measure DIS?**\n * We use the standard $\\mathcal{A}$-distance to measure DIS. We briefly describe the process here and will add the same in the revised supplementary. We train a binary classifier with a linear layer where input is the feature vector (after global average pooling) for which DIS is to be measured, similar to prior works. Source and target training samples, including private class samples, are passed through the frozen network being evaluated to obtain the feature vector. And only the domain-label (0 for source, 1 for target) is required for training the linear classifier with conventional CE loss.\n\n* **[Q4] Do other baselines in Table 6 have SPA-architecture modification and EM loss?**\n * Yes, we had included those for a fair comparison with our proposed pretext task. Note that performance is slightly lower for all baselines if SPA-modification and EM loss are not used.\n\n* **[Q5] Why are word-prototypes learnt only via pretext task?**\n * As per Eq. 8 and L266-269, the word-prototypes in $\\psi$ are updated only through the pretext task losses and EM loss (the parameters are mentioned under the $\\min$ term) while the goal task $\\mathtt{UniDA-Algo}$ objective updates only the backbone $h$ and goal classifier $f_g$.\n\n* **Limitations and negative societal impact**\n * We would like to clarify that, as per the checklist (main paper, L467-468) which the reviewer may have missed, we provide limitations and negative societal impact in Suppl. Sec. 2 and 3.", " We thank the reviewer for their constructive feedback. We appreciate that the reviewer finds our work generally well-written with a much practical problem setting and competitive results. We address the reviewer's concerns below.\n\n1. **Confusion with NTR**\n * We understand that some confusion arose about NTR (please also see our common response) and we clarify that NTR is a risk, implying that lower NTR is desirable. We tried to directly or indirectly highlight it several times in the paper.\n * L128-130: \"In the context of UniDA, the feature space of the deeper layers for source and target would be more difficult to align due to the disjointness of the source and target label sets. We empirically observe the same, i.e. NTR increases as we go deeper in the model.\"\n * Eq. 3 and L157: \"optimal tradeoff requires NTR to be less than the threshold $\\zeta_n$\"\n * L172: \"... assists UniDA with minimal negative-transfer-risk (NTR)\"\n * We will highlight the important phrases in the revised draft for clarity.\n\n2. **Does adaptation at mid-level layer contradicts multi-level adaptation works?**\n * Multi-level adaptation works (including [ref-1]) operate only on the closed-set DA setting where there is no category shift. In that scenario, adaptation at multiple deeper layers may be useful as DIS (domain-invariance-score) is higher. \n * However, in presence of category shift, we need to consider not only DIS but also NTR (negative-transfer-risk). 
As NTR (risk) is higher for deeper layers, we perform adaptation at mid-level layers with lower NTR.\n * Thanks for mentioning this, as our work is able to explain why multi-level adaptation methods cannot directly work for UniDA (further supported by the multi-layer ablations in the next answer).\n\n3. **Ablation of SPA at different and all layers**\n * We performed analysis experiments of DIS and NTR at different layers presented in Fig. 2 but did not add the already performed ablations at different layers. We thank the reviewer for pointing this out and provide these important ablations below which verify our analysis experiments.\n * We infer that \"all layers\" implies applying SPA simultaneously after every res-block. As supported by our analysis, we find that all layers and even combination of layers is suboptimal w.r.t. the mid-level res3 which is a *sweet-spot*. Our paper provides concrete evidence on the existence of this *sweet-spot* and how to better utilize it for UniDA.\n\n| SPA at | UniDA (Office-Home) |\n|:-------------:|:----------:|\n| none (OVANet) | 71.8 |\n| res1 | 72.0 |\n| res2 | 72.4 |\n| res3 | **73.2** |\n| res4 | 71.9 |\n| res2+res3 | 72.6 |\n| res3+res4 | 72.2 |\n| all layers | 71.9 |\n\n4. **SPA with Zhu et al. [49]**\n * As the code was not available for [49], we re-implemented their method and report the results on Office-31 in the table below. Note that * indicates results from the re-implementation. We observe gains of 1.5% over [49] indicating the generalizability of our proposed SPA.\n\n| | UniDA (Office-31) |\n|------------------|:-----------------:|\n| Zhu et al. [49] | 86.7 |\n| Zhu et al. [49]\\* | 86.0 |\n| [49]\\* + SPA | **87.5 (+1.5)** |\n\n5. **Applicability of SPA to Transformers**\n * While the SPA layer contains a 1x1 convolution layer, it can be interpreted as a Linear or fully-connected layer as well, which are also used in transformer architectures. Further, the proposed pretext task is independent of the model architecture.\n * Hence, our proposed SPA can be easily extended for transformers and we report results on Open-Set DA by using a Vision Transformer (ViT) backbone. Note that `(+x.x)` in table indicates the gains over the previous row. Thanks for pointing this out as it further validates the utility of our technique across different architectures.\n\n| | OSDA (Office-Home) |\n|--------------------|:------------------:|\n| OVANet (ResNet backbone) | 63.3 |\n| OVANet (ViT backbone) | 64.5 (+1.2) |\n| OVANet (ViT) + SPA | **65.6 (+1.1)** |", " We thank the reviewer for their valuable feedback. We appreciate that the reviewer finds our work novel and sound with adequate empirical support, well-written, and easy to follow. We address the reviewer's concerns below.\n\n1. **Sparsity assumption of word-histogram space**\n * Consider Fig. 1 from our main paper. The shallower layers would extract generic shapes like rectangles, circles, lines, etc. while deeper layers would extract semantic shapes like windows, arms, chassis, etc. Intuitively, the word-histogram space at generic-shape-level cannot be sparse for the object recogition task while sparsity is desirable at deeper layers. \n * Our entropy and pretext objectives ensure word-histogram sparsity at a sufficiently high semantic level, catering to the UniDA problem. With this, the pre-classifier features better capture the class-level intrinsic structure (L209-212) that improves UniDA performance. 
Note that a good intrinsic structure refers to a scenario where individual classes, including private classes, are well clustered in the feature space.\n\n2. **Real examples of word-level misalignment; How does pretext task mitigate word-level misalignment?**\n * Here, misalignment refers to cases where different classes are aligned with the same word-prototype or different samples of a class are aligned to distinct word-prototypes (L216-220).\n * Consider Fig. 4A with the worst-case of misalignment where all classes (class-1, class-2, class-3) are represented by the same word-histogram. Then, in Fig. 4B, the image-level word-histograms would be identical for any no. of instances used for patch-shuffling and the pretext task of identifying no. of instances would fail.\n * The above example shows, similar to a proof by contradiction, that the pretext task objectives cannot allow word-level misalignment since misalignment would hurt the pretext task performance. This is how the pretext task helps mitigate word-level misalignment.\n\n* Thanks for your insightful questions. We will incorporate the responses in the revised draft.", " * **[6Jjx-Q1, 5kcu-Q1] Confusion with NTR: Would high NTR reduce class-level misalignment?**\n * The reviewers are correct that high NTR implies we know what are known and what are unknown samples i.e. known and unknown samples are well-separated. However, unsupervised adaptation at a higher NTR layer is more susceptible to misalignment between shared and private (unknown) classes because target-private classes get grouped into a single unknown cluster.\n * In contrast, lower NTR feature space can better represent all the different classes without grouping the target-private classes into a single unknown cluster (i.e. better intrinsic structure). Hence, alignment in this space would better respect the separations of private classes than at a higher NTR feature space, which is necessary to avoid misalignment in UniDA.\n * For example, consider \"hatchback\" (compact car) and \"SUV\" (large-sized car) as a shared and target-private class, respectively, in an object recognition task. Before adaptation, at a high NTR layer, hatchback and SUV features would be well-separated as SUV is yet unseen to the source-trained model. However, during *unsupervised* adaptation, the similarities between hatchbacks and SUVs may align the single target-private cluster (containing SUV features) with the hatchback cluster. Due to this, other target-private classes also become closer to this hatchback class which increases the misalignment.\n * In contrast, at a lower NTR layer, the target-private classes (including SUV) would not be grouped together. Hence, misalignment of hatchback and SUV clusters would not disturb other clusters unlike the higher NTR scenario. Our proposed sparsity and pretext objectives would help mitigate misalignment even between similar classes.\n * This is supported by our layer-wise ablations (refer to table in response-3 to Reviewer 5kcu) where adaptation at layers with higher NTR is worse than at res3.\n* We thank both reviewers for this insightful question and we will update the revised draft for better clarity on NTR.", " This paper addresses universal domain adaptation (UniDA) for object recognition: target sample is classified as either one of the \"known\" classes or \"unknown\". 
This work proposes an add-on strategy that helps improve results of existing approaches, reaching SOTA performance in UniDA.\n\n** Analysis **\nFor UniDA, one wants to increase the domain-invariant score (DIS) between source/target while trying to alleviate the potential negative-transfer risk (NTR) between \"known\" and \"unknown\" classes. However, the two measures are often at odds with each other. Quantifying NTR as known-unknown classification accuracy using entropy threshold (like [17]) and DIS as the $\\mathcal{A}-$distance (like [1]), the paper provides an analysis on the NTR/DIS trade-off of resnet features at different depths. Empirically, feature of the res3-block balances NTR and DIS. Based on this finding, res3-block feature is used throughout experiments.\n\n** Proposal 1 **\nTo make domain alignment possible while avoiding the risk of negative-transfer, the paper proposes to align in the Bag-of-visual-words space. The intuition is that BoW contains visual primitives that are universally shared among known and unknown classes. To this end, res3-block features $h\\in\\mathbb{R}^{H\\times W\\times N_d}$ are passed through a soft-assignment module including a vocabulary matrix $V\\in\\mathbb{R}^{N_d \\times K}$ ($K$ is the vocabulary size) and a soft-quantization operation (Eqn. 4). For both source and target samples, one minimizes the BoW-wise entropy of the corresponding soft-assignment tensor $\\in\\mathbb{R}^{H\\times W \\times K}$. That is to implement the sparsity assumption of the word-histogram space.\n\n** Proposal 2**\nA novel pretext task that help encourage prototype-alignment: a pretext dataset is composed of images stitched by a varying number of random image crops from different images. Pretext task is to predict the number of crops, based on the global word-histogram of the stitched image.\n\n** Experiment **\nOn several benchmarks (Office-31, DomainNet, OfficeHome), the two proposed strategies complement with existing approaches and improve performance, achieving SOTAs in UniDA. - The paper is well-written, easy to follow. Most arguments are sound with adequate empirical supports; there are only a few concerns (see below).\n\n- Originality: the two proposed strategies are novel in the context of UniDA. \n\n- Extensive ablation studies validate the proposed strategies and experimental choices.\n\n- Complementary to existing approaches. The combinations with either OVANet [32] or OCC [20] obtain SOTA results. - What's unclear to me is the sparsity assumption on the word-histogram space. Could the authors elaborate more the intuitions?\n\n- As discussed in L216-220, there are risks of word-level misalignment when minimizing the BoW-entropy. Could the authors provide some real examples when the misalignment happen? How does the proposed pretext task help mitigate the misalignment risks? Limitations and potential negative societal impact are given in the supplementary material. I find those adequate.", " This paper studies the universal domain adaptation problem. Unlike partial DA or open-set DA, UniDA does not require knowing relation between source and target domain label sets. The authors start by analyzing the behaviors of source-trained classifier on target domain data. Specifically, the authors propose a new metric called negative-transfer-risk (NTR). It measures the class-specificity of the features in a specified layer via a shared-vs-unknown binary classifier. The other metric is the inverse of A-distance. 
Motivated by the analysis, the authors propose the do the adaptation in the mid-level layer of the network. Inspired by the BOW, the authors design a word-prototypes in the selected mid-layer representation, and regularize such a BOW-like representation via a subsidiary prototype-space alignment. To realize SPA, the authors design a novel word-histogram based pretext task with UniDA objectives. The experiments are conducted on three DA benchmarks, and competitive results are reported. Strength\n* The studied problem Universal DA is a much practical setting, where much weaker assumption is posed compared with standard, partial or open-set DA.\n\n* The technical detail part are generally well written, such as the design of word-prototype.\n\n* The reported results seem competitive. It shows the benefit by applying SPA to both OVAnet and DCC. \n\nWeakness\n* The introduction and motivation of NTR are unclear to me and not easy to follow. For detailed questions, please see the below question section.\n\n* Experiments. To support the key statement of mid-level adaptation, the authors miss an ablation study of applying SPA on different layers and all layers.\n\n* From Office-31, [49] seems the best existing method. I’d enough the authors to apply SPA to [49] as [32,20] to show the generalizability of SPA.\n\n* The proposed SPA is based on convolution operator. As vision transformer is becoming more popular and powerful, the applicability of the proposed method may be limited to ConvNet only.\n \n* As defined in Eq 1, NTR measures the class-specificity of the features in a source-trained specified layer via a shared-vs-unknown binary classifier. So Fig 2 blue solid curve only suggests that in deeper layers (e.g. Res4/FC), it is easier to tell apart the known/unknown given the feature of the source-domain trained network, compared with the feature extracted from the shallow layers. I am not clear why this observation suggests “(line 131: adaptation should be performed at a shallower layer.)”, since the final DA classification task will be performed in the final deep layer.\n\n Also, I’m not clear if a higher NTR score is better or the other way around. Intuitively, I think higher NTR the better. While when I see the dashed blue curve (post adapt) in Fig 2, it looks that the lower the better. Please clarify.\n\n* Similarly, DIS score only shows the expressive capability of each layer in terms of domain-invariance. Then the conclusion is drawn that the adaptation should be performed in the mid-level layer, which seems contradictory to the existing multi-level adaptation works such as [ref-1]. \n[ref-1] Xie et al., Multi-Level Domain Adaptive Learning for Cross-Domain Detection, 2019\n\n* Please conduct an ablation study of applying SPA on different layers and all layers.\n No potential negative societal impact identified.", " In this paper, the authors tackle the problem of universal domain adaptation where the source domain consists of fully-labeled images and unknown category images while the target domain consists of unlabeled images of known and potentially unknown categories. To tackle this problem, they first study the trade-off relation between the negative transfer risk (NTR), i.e., class-level misalignment between the shared and private classes, and domain invariance. Motivated by the trade-off between negative transfer risk and domain invariance, they propose to align mid-level features of deep neural networks. 
To this end, they propose to align bag-of-word style features with self-entropy regularization to encourage the sparser BoW features. They further regularize the model training by adding a pretext task of entropy-bin classification on the grid-shuffled images from multiple instances. Plugged into the existing UniDA methods, the proposed method shows favorable performance on universal and open-set domain adaptation settings on three public benchmarks. The strengths of this paper are as follows.\nS1: The problem tackled in this work, universal domain adaptation is challenging and interesting. \nS2: Aligning mid-level features for the task of UniDA seems reasonable. Intuitively, high-level features are often specific to classes, therefore prone to category-shift. In contrast, low-level features are robust to category shift. However, they are prone to domain-shift. Therefore, intuitively mid-level features such as BoW could be a good balance between category-shift and domain-shift.\nS3: The proposed method shows favorable performance on three public benchmarks. The proposed components, the BoW-style architecture, entropy regularization, and pretext task, contribute to the performance.\n\nHowever, this paper has several weaknesses as well.\nW1: My biggest concern is the study on the relation between NTR and DIS (domain invariance score). Fig. 2 shows Res1 block has the lowest NTR and highest DIS. Compared to Res1 block, Res3 block shows higher NTR and lower DIS. If NTR and DIS are the only criteria, why do we need to use Res3 block feature for alignment? Why we cannot align Res2 and Res1 as well? \n\nW2: The proposed architecture modification does not seem to align the mid-level feature only. By minimizing L_{em} we align the mid-level feature. However, minimizing L_{s,n} and L_{t,n} also align the high-level feature as f_n is attached on top of Res4 and GAP. This is a self-contraction with the claim made from Fig. 2.\n\nW3: The improvement from each component is not very surprising. BoW-style mid-level feature + L_{em} only improves 0.4 ~ 0.5 points on top of OVANet. The results make me doubt that the proposed BoW-style feature alignment is really optimal. In addition, the proposed pretext task improves only 0.4 ~ 0.6 points compared to the rotation task.\n\nW4: Presentation quality should be improved. In general, the paper is not straightforward to read. For example, it is hard to understand why NTR in eq. (1) measures negative transfer risk. Another confusion is that do we need to more align the layers with higher DIS or layers with lower DIS.\n\n\n Please address all of my concerns in the weaknesses part above. Also, I do have a few more questions for the authors here.\n\nQ1: Why the NTR in Eq (1) does measure negative transfer risk? Basically, it is the accuracy of the pseudo labeling of the shared-private label using entropy thresholding. If we have a high NTR value, we know what are known samples and what are unknown samples. Intuitively it would reduce class-level misalignment.\n\nQ2: In L142-144, the authors claim the domain adaptation should be performed at a deeper layer to encourage domain invariance. However, Fig 2. Shows deeper layers show lower DIS. What is a reasonable conclusion here? Should we more align the deeper layers with lower DIS as they are less domain invariant? Or should we more align the shallower layers with higher DIS as they are domain more promising?\n\nQ3: How do we train a linear domain classifier to measure DIS? 
\n\nQ4: In Table 6, are the other pretext baselines equipped with the proposed BoW-style architecture modification and L_{em} as well? If not, we need to compare with “our pretext task” baseline in Table 5 (64.9% and 72.8%) instead of full SPA (65.5% and 73.2%).\n\nQ5: In L271-272, why the word prototypes are learnt only through the pretext task? The main task classifier is also attached on top of the word-histogram feature as shown in Fig. 3.\n\n The authors did not really address the limitations of their work. Please describe what the failure modes of the proposed method are and when the failures happen. They have not discussed the potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "Rka9DO0a28F4", "f1vBAiKrTl", "EOnkqI0Q2Xx", "RwR6AwK7LvI", "KTuY23ziFE", "nips_2022_5kThooa07pf", "8jxNnJDPad", "xiJvVJFws9o", "TH2NZ5AtnNq", "nips_2022_5kThooa07pf", "nips_2022_5kThooa07pf", "nips_2022_5kThooa07pf", "nips_2022_5kThooa07pf" ]
nips_2022_C2o5DeL_8L1
Generative Status Estimation and Information Decoupling for Image Rain Removal
Image rain removal requires accurately separating the pixels of rain streaks from those of object textures. But the confusing appearances of rain and objects lead to the misclassification of pixels, leaving rain streaks or losing object details in the result. In this paper, we propose SEIDNet, equipped with generative Status Estimation and Information Decoupling, for rain removal. In the status estimation, we embed the pixel-wise statuses into a status space, where each status indicates whether a pixel belongs to rain or an object. The status space allows sampling multiple statuses for a pixel, thus capturing confusing rain or object appearances. In the information decoupling, we respect the pixel-wise statuses, decoupling the appearance information of rain and object from the pixel. Based on the decoupled information, we construct a kernel space, where multiple kernels are sampled for the pixel to remove the rain and recover the object appearance. We evaluate SEIDNet on public datasets, achieving state-of-the-art performance in image rain removal. The experimental results also demonstrate the generalization of SEIDNet, which can be easily extended to achieve state-of-the-art performance on other image restoration tasks (e.g., snow, haze, and shadow removal).
Accept
The paper has received positive reviews. There was substantial discussion, and the authors are strongly encouraged to include the clarifications they made in the final copy, as well as a more extensive discussion of the limitations and the possible danger of overfitting.
train
[ "4QRMLZoF8Nf", "BgvC5ClVoo", "bM9uyohn3_O", "2a1gdcyadCg", "zerIWTSmoQ2", "VQ7-6K-JGd", "UDgBwNfA_x", "L2iiAa4qSpE", "GTIh28WpWYC", "Ag-w-XT0tbr", "H1r7ZgrZrJv", "8nE2lBL3dMrW", "RvlUfkOPTHo", "AloCWlACimr", "qbGMjmMj4gSy", "jI_9aZuTOj", "gJyq0rWr8Gb", "gsBSsL0cktU", "Jz3qr1PsWiA", "_D8lV6ZZcZy", "u33v0wrZ5s6", "nFOWanRn1Tf", "WrZIzoaM5jI", "yHV12xyKbBf" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your new comment again!\n\n**Is there any intuitive reason for the proposed linear blending formulation of K in EQ7? How does the K_o component (with/without K_o) impact the performance? Have you ever tried other formulations for K?**\n\nThough Eq. 7 is a linear blending form of kernels, the learning of kernels is done by using a set of convolutional layers and non-linear activation functions (ReLu). They form a non-linear function for adjusting the weights of the kernels, which allows the overall result of Eq. 7 to be computed by the non-linear function as well. Please also note that this manner has been widely used in the current form of CNN, where the convolutional block generally consists of consecutive linear (convolutional layer, fully-connected layer) and non-linear functions (ReLu, Sigmoid) for fitting more universal functions.\n\nDue to the limited time for making our paper more focused, we have not experimented with other formulations of blending kernels. Yet, it should be investigated that the specific blending may be finally equivalent to using the trivial linear and non-linear functions for fitting the blending form, while the specific blending may requires extra computation and supervision.\n\n ", " Thanks for your new comments!\n\n**1. The proposed method significantly increased the number of CNN parameters in order to estimate the latent components R and K (to achieve the SOTA performance).**\n\nDuring the network training, SEIDNet consists of 132 convolutional layers (26 layers for extracting the visual feature map + 27 layers for computing the reference kernels + 32 layers for constructing 2 CVAEs with the complete encoder, condition, and decoder branches + 47 layers for others) with 3.97M learnable parameters. This network complexity is reasonable and comparable to recent methods like PReNet, MPR, SPDNet, and RCDNet, which require 3~30M parameters.\n\nDuring the network testing, the network complexity can be further reduced, because we can remove the convolutional layers for computing the reference kernels and constructing the encoders of CVAEs. In this case, the network consists of 90 layers (26 layers for extracting the visual feature map + 20 layers for constructing 2 CVAEs with the condition and decoder branches + 44 layers for others) with 2.5M learnable parameters.\n\n**2. According to the formulations, the reconstruction loss L_de is the only strong constraint used to supervise the learning process. There is no ground truth of K and R in L_se (Eq5) and L_id(EQ9), or any constraints on K and R. Actually, there are infinite decoupling solutions of K, and R. Therefore, it requires more layers to increase the learning capacity of the net, and more data to learn how to correctly decouple K, and R. As a result, the proposed method is highly dependent on its training data, and much easier to get overfitted. Constraints on K, and R may be required to make the learning process more efficient and robust.**\n\nPlease note that overfitting means a method trained on a data set produces ugly results on different test sets. In Table 4 of the supplementary file, we have experimented with training our model on the unified training set of Rain13K and tested on the separate test sets (i.e., the test sets of Test100, Test1200 [10], Test2800 [11], Rain100H, and Rain100L). We remark that this is a conventional way to justify the generalization (overfitting) of the deraining model. In this case, our model still outperforms other methods. 
Thus, we find no evidence of overfitting of our model. \n\nWe agree with your opinion on using stronger constraints on the status and kernel to make the learning process more efficient and robust. Actually, in Eq. 3, we subtract the object layer from the input image to achieve a relatively strong constraint on the status. Yet, how to achieve strong supervision for the kernel is difficult, because a kernel is generally regarded as a latent variable. Thus, there is no strong supervision for learning the kernel for deraining or other related tasks in the existing methods.\n", " Is there any intutive reason for the proposed linear blending forumlation of K in EQ7? How does the K_o component (with/without K_o) impact the performance? Have you ever tried other formulations for K? ", " The proposed method significantly increased the number of CNN parameters in order to estimate the latent components R and K (to achieve the SOTA performance). According to the formulations, the reconstruction loss L_de is the only strong constraint used to supervise the learning process. There is no ground truth of K and R in L_se (Eq5) and L_id(EQ9), or any constraints on K and R. Actually, there are infinite decoupling solutions of K, and R. Therefore, it requires more layers to increase the learning capacity of the net, and more data to learn how to correctly decouple K, and R. As a result, the proposed method is highly dependent on its training data, and much easier to get overfitted. Constraints on K, and R may be required to make the learning process more efficient and robust. ", " Thanks for the new questions.\n\n**1. Is this CVAE-based kernel sampling completely new or there is previous related work?**\n\nIn areas of rain/snow/shadow/haze removal and the related image restoration tasks, we find that the previous methods based on the generative networks (e.g., GAN and VAE) generally construct the latent space for sampling the image patches. Thus, they lose the chance to use kernels, which provide more explicit and accurate control of the pixel intensities. Along another line, the discriminative networks compute the shared or dynamic kernels for processing the pixel intensities. Yet, the kernels are deterministic. They yield unsatisfactory results on the confusing rain/object pixels.\n\nBased on the discussion above and to the best of our survey, we believe that our work is novel in the rain/snow/shadow/haze removal and the related tasks, in terms of using the generative power for computing the accurate kernels. However, due to the limited time and effort, we are not sure if there are works in other vision tasks sharing a similar spirit. Thus. we are cautious in using terms like “completely new” in the paper. We are pleased if more references are recommended for discussion.\n\n**2. Is it necessary to use networks with many layers here? What is the performance of the proposed method using the same/equivalent backbone as the previous methods? I just want to make sure the performance gain here is not simply because we are using a better backbone.**\n\nDuring the network training, SEIDNet consists of 132 convolutional layers (26 layers for extracting the visual feature map + 27 layers for computing the reference kernels + 32 layers for constructing 2 CVAEs with the complete encoder, condition, and decoder branches + 47 layers for others) with 3.97M learnable parameters. 
This network complexity is reasonable and comparable to recent methods like PReNet, MPR, SPDNet, and RCDNet, which require 3~30M parameters.\n\nDuring the network testing, the network complexity can be further reduced, because we can remove the convolutional layers for computing the reference kernels and constructing the encoders of CVAEs. In this case, the network consists of 90 layers (26 layers for extracting the visual feature map + 20 layers for constructing 2 CVAEs with the condition and decoder branches + 44 layers for others) with 2.5M optimized parameters.\n\n**3.The result of SP+M-Net [TPAMI 22 Le et al].**\n\nThanks. We have added it to the tables below.\n\n| ||&emsp; &nbsp; ISTD+||\n|:-:|:-:|:-:|:-:|\n|Method|Shadow|Non-shadow|All|\n|PMDNet|9.7|3.0|4.0|\n|AEFNet|6.5|3.8|4.2|\n|CRFormer|5.9|2.9|3.4|\n|SP+M-Net|9.7|3.0|4.0|\n|SEIDNet|6.5|3.4|3.9|\n\n| ||&emsp; &ensp; ISTD||\n|:-:|:-:|:-:|:-:|\n|Method|Shadow|Non-shadow|All|\n|AEFNet|7.77|5.56|5.92|\n|CRFormer|7.32|5.82|6.07|\n|SP+M-Net|6.0|3.1|3.6|\n|SEIDNet|7.47|5.08|5.47|\n", " Thank you for the clarification! I will check the paper again for the details. \n\nSeveral following up questions:\n\n1) Is this CVAE-based kernel sampling completely new or there is previous related work? \n\n2) Is it necessary to use networks with many layers here? What is the performance of the proposed method using the same/equivalent backbone as of the previous methods? I just want to make sure the performance gain here is not simply because we are using a better backbone.\n\nRef that you requested (sorry for not being clear in the beginning):\nTPAMI-22 - Le & Samaras: Physics-based Shadow Image Decomposition for Shadow Removal\nDOI Bookmark: 10.1109/TPAMI.2021.3124934", " We are sorry for the late reply. It takes time to prepare the extensive analysis for better solving the questions.\n\n**1.Your motivation here perfectly justifies the need for having various potential statuses per pixel among which we hopefully have one correct status.**\n\nActually, by sampling multiple statuses, we hopefully have more correct statuses, which help to suppress the negative impact of the incorrect statuses.\n\n**2.However, what puzzles me the most is that eventually the proposed method samples several potential statuses and takes the average of them. The average of all sampled statuses would never be a good approximation of the correct status. The logic here is unclear to me and at least should be listed as a limitation.**\n\nWe respectfully clarify that the sampled statuses are not averaged. We agree that the average of the sampled statuses is not a good approximation of the correct status, because the average status is very sensitive to incorrect sampled statuses. This strategy involves problematic status information at the very beginning. It heavily degrades the reliability of the kernel that is estimated later. In Section 5.2 “Analysis of Network Components” of the paper, we have experimented with averaging the sampled statuses to estimate the kernel, where the performance is worse than our full model. Please see the results in the second row of Table 1. The network architecture is illustrated in Figure 1(b) of the supplementary file.\n\n**3.Should it be more logical to first, sample various statuses, and then have a mechanism to identify the best one among these sampled kernels?**\n\nWe agree with your opinion. 
Instead of averaging the sampled statuses, we input the statuses, individually, into the second CVAE (see Figure 3), for sampling multiple kernels separately. Thus, the incorrect kernels have little impact on the correct ones. Please also note that the sampled kernels are computed based on the corresponding latent vectors (see $\\mu_c^m + \\sigma_c^m * Z^m$ at the bottom of Eq. 12). Each latent vector is processed by the convolutional weights of the decoder in the second CVAE. The latent vectors associated with the correct/incorrect kernels can be enhanced/suppressed by the convolutional weights. Though we finally average the sampled kernels, the incorrect kernels, which are computed based on the suppressed latent vectors, have little impact on the average kernel. In Section 5.2 “Sensitivity to the Number of Kernels”, we have evaluated the effectiveness of sampling more kernels for deraining.\n\nTo further justify the effectiveness of the average kernel, we have added an extensive analysis to Section 9 “Analysis of the Average Kernels” of the supplementary file. We manually select 100 pairs of image patches from Rain100L and Rain1400 (see Figure 16(a) of supplementary file). Each pair of image patches contain the rain streaks and the object textures, respectively, where the rain streaks and the object textures are visually similar. The typical discriminative networks (i.e., EfDeRain, SPDNet, and MPR) compute similar kernels for the confusing rain and object, thus yielding unsatisfactory results. In contrast, we sample more kernels for each image patch, where the sampled kernels are averaged. We compute the difference (L1 distance) between the average kernels, which belong to each pair of confusing rain and object patches. We accumulate and average the differences, which are reported in Figure 16(b) of supplementary file. With more sampled kernels, we achieve more differentiable average kernels for processing the confusing rain and object patches.\n\n**4.Furthermore, I think the method, even with the VAE, only looks at a small local patch to estimate the status. Without more contexts, how does it conceptually resolve the status uncertainty?**\n\nWe agree that the multi-scale context is critical to estimating the correct status and kernel. This can be partially evidenced in our discussion on the limitation of our method (see Section 5 of the supplementary file). Fewer network layers require less computation. However, they achieve less global context for computing the accurate kernels, eventually degrading the performance. Thus, it is future work to incorporate our framework with the cost-effective module for learning the multi-scale context.\n\n**5.Please add recent shadow removal methods to the table (TPAMI 22 Le et al., CRFormer Wan et al.).**\n\nThanks. We have added the result of CRFormer [Wan et al.] to the tables below. The results are reported on ISTD and ISTD+. We also respectfully request the detailed reference information of [TPAMI 22 Le et al.] for adding their results.\n| ||&emsp; &nbsp; ISTD+||\n|:-:|:-:|:-:|:-:|\n|Method|Shadow|Non-shadow|All|\n|PMDNet|9.7|3.0|4.0|\n|AEFNet|6.5|3.8|4.2|\n|CRFormer|5.9|2.9|3.4|\n|SEIDNet|6.5|3.4|3.9|\n\n| ||&emsp; &ensp; ISTD||\n|:-:|:-:|:-:|:-:|\n|Method|Shadow|Non-shadow|All|\n|AEFNet|7.77|5.56|5.92|\n|CRFormer|7.32|5.82|6.07|\n|SEIDNet|7.47|5.08|5.47|\n", " Dear Reviewer 4SNd,\n\nThanks for your review again. As the deadline for the authors' response is approaching, we sincerely request your comment on our primary response. 
This will definitely give us a valuable chance to address the questions unsolved.\n\nBest,\n\nAuthors of Paper ID 172", " Dear Reviewer YErN,\n\nThanks for your review again. As the deadline for the authors' response is approaching, we sincerely request your comment on our primary response. This will definitely give us a valuable chance to address the questions unsolved.\n\nBest,\n\nAuthors of Paper ID 172", " Dear Reviewer aYKW,\n\nThanks for your review again. As the deadline for the authors' response is approaching, we sincerely request your comment on our primary response. This will definitely give us a valuable chance to address the questions unsolved.\n\nBest,\n\nAuthors of Paper ID 172", " Thank you a lot for your response. \n\n>But the status is not trivially a binary indicator of rain or object.\n\nI agree with this.\n\n>Different pixel-wise statuses should be respected by different kernels, which have different weights to suppress/enhance the rain/object intensities of the corresponding pixels.\n\nI agree with this.\n\nYour motivation here perfectly justifies the need for having various potential statuses per pixel among which we hopefully have one correct status (please correct me if I am wrong here). However, what puzzles me the most is that eventually the proposed method samples several potential statuses and takes the average of them. I don't get the logic here. To me, the average of all sampled statuses would never be a good approximation of the correct status. It surely is better than the worst but can't be the best. The logic here is unclear to me and at least should be listed as a limitation. Should it be more logical to first, sample various statuses, and then have a mechanism to identify the best one among these sampled kernels? Furthermore, I think the method, even with the VAE, only looks at a small local patch to estimate the status. Without more contexts, how does it conceptually resolve the status uncertainty? \n\nThis is the only significant issue I have with this paper and I think it is quite important. Without proper explanation here, I don't understand why the method works. \n\n>Experiment on the Adjusted ISTD dataset.\n\nThanks! I see that the method achieves quite competitive results here but not SOTA (which is fine in my opinion since the method isn't designed for this task specifically). Please add recent shadow removal methods to the table (TPAMI 22 Le et al., CRFormer Wan et al.) ", " Dear Reviewer 4SNd,\n\nWe sincerely thank you for your constructive comments for making our paper much better! We are looking forward to your comments on our response. Should you have any other questions, we will do our best to response to you.\n\nBest,\n\nAuthors of Paper ID 172", " Dear Reviewer CNK5,\n\nWe sincerely thank you for your constructive comments for making our paper much better! We are looking forward to your comments on our response. Should you have any other questions, we will do our best to response to you.\n\nBest,\n\nAuthors of Paper ID 172", " Dear Reviewer YErN,\n\nWe sincerely thank you for your constructive comments for making our paper much better! We are looking forward to your comments on our response. Should you have any other questions, we will do our best to response to you.\n\nBest,\n\nAuthors of Paper ID 172", " Dear Reviewer aYKW,\n\nWe sincerely thank you for your constructive comments for making our paper much better! We are looking forward to your comments on our response. 
Should you have any other questions, we will do our best to response to you.\n\nBest,\n\nAuthors of Paper ID 172", " **1. Explanation of “conv” in Eq. 3, Eq. 6, and Figure 2.**\n\nIn Eq. 3, “conv” is a convolutional layer. In Eq. 6 and Figure 2(d), “conv” means a set of convolutional layers. The kernel size is 3x3. We eliminate the notations of the convolutional kernels to simplify the variables in the paper.\n\n**2. Explanation of encoder, condition, and decoder.**\n\nThe encoder, condition, and decoder branches are convolutional blocks. We have introduced the details of the encoder, condition, and decoder branches in Section 1 of the supplementary file. Each branch has 16 convolutional layers with 3x3 kernels. There is a pair of batch normalization and ReLU layers between the adjacent convolutional layers.\n\n**3. Explanation of $condition(F)$ and $condition(F, R)$ for computing the mean values and standard deviations.**\n\n$condition$ in Eqs. 4 and 12(top) is the condition branch of the CVAE $\\mathcal{V}_{se}$. $condition(F)$ means that we pass the feature map F to the convolutional layers of the condition branch. The last convolutional layer outputs the mean value and standard deviation maps, which helps to sample multiple status maps from the latent space.\n\n$condition$ in Eqs. 8 and 12(bottom) is the condition branch of the CVAE $\\mathcal{V}_{id}$. $condition(F, R)$ means that we concatenate the feature map F and the status map R, then feed them into the convolutional layers of the condition branch. The last convolutional layer outputs the mean value and standard deviation maps, which are used to sample multiple kernel maps from the latent space.\n\n**4. Explain the equation of the information decoupling.**\n\nThe information decoupling is formulated as Eqs. 6 and 7. First, we achieve the element-wise multiplication between the feature map F and the status map R, yielding the feature map $F_r$ of the rain streaks in the image. We also compute the element-wise multiplication between the feature map F and the reverse status map (1-R), achieving the feature map $F_o$ of the object textures. Next, $F_r$ and $F_o$ are fed into different convolutional blocks (denoted as “conv”), which yield the kernel maps $K_r$ and $K_o$. The kernel maps $K_r$ and $K_o$ are weighted by the status map R (see Eq. 7). The weighted kernel map K is used to remove/enhance the rain/object intensities of the image.\n\n**5. How to get the correct kernel map K?**\n\nBefore training the CVAE $\\mathcal{V}_{id}$, we pre-train the convolutional layers in Eq. 6. In this manner, the computation of the kernel map is directly supervised by the deraining loss. Thus, the kernel map can play as an accurate reference for training this CVAE.\n\n**6. Explanation of the discriminative networks.**\n\nNote that the architectures of the discriminative networks in the ablation study have been illustrated in Figure 1 (a-c) and Figure 2 (a,c) of the supplementary file. We will release the implementation of these discriminative networks along with the code package.\n\n**7. Limitation of the proposed method.**\n\nWe have discussed the limitation in Section 5 of the supplementary file. We change the number of convolutional layers (see “conv” in Eq. 6) to compute the kernel map K (see Eq. 7), which plays as a reference for training the CVAE $\\mathcal{V}_{id}$. 
Though the pre-training of the convolutional layers for achieving the kernel map K is supervised by the deraining loss, more layers show a strong power for estimating a more accurate kernel map K, however, at the cost of more computation (see Figure 11(a-b)). Compared to the extremely fast methods (e.g., EfDeRain), SEIDNet still has much room to achieve a better trade-off between performance and computational efficiency.\n\n**8. Comparison of the proposed CVAE with GAN or other discriminative methods.**\n\nThanks. In Table 4, we have compared SEIDNet with the generative CVID and other discriminative methods on the deraining task. An array of GAN-based methods have been compared on the desnow and deshadow tasks. Among these methods, Composition GAN, DS-GAN, and RFMPRaLSGAN are used for desnow; Mask-GAN, ARGAN, and RIS-GAN are used for deshadow. We have added the results of the recent GAN-based methods (i.e., DCD-GAN and DerainCycleGAN) on the deraining task, and the results of DW-GAN and FD-GAN on the dehaze task in the following tables.\n\nPlease note that our major contribution is to model the distributions of the pixel-wise status and kernel. CVAE is an implementation to learn these distributions, whose effectiveness has been evidenced in the ablation study and state-of-the-art comparison. In the future, we will study the implementation with stronger generative networks (i.e,CGAN, CVAE-GAN, Transformer-based CVAE), which hopefully improve the performance.\n\n| | &emsp; &ensp;SPA |\n|:-:|:-:|\n|Method|PSNR &nbsp; SSIM|\n|DerainCycleGAN|35.20 &nbsp; 0.9500|\n|DCD-GAN |35.30 &nbsp; 0.9430|\n|SEIDNet|44.96 &nbsp; 0.9911|\n\n| |&nbsp; ITS Subset|\n|:-:|:-:|\n|Method|PSNR SSIM|\n|FD-GAN|23.15 0.9207|\n|DW-GAN|35.94 0.9860|\n|SEIDNet|40.62 0.9968|", " We sincerely thank the reviewers for their constructive comments. We are pleased to see that our contribution is unanimously found novel the reviewers. Below, we address the concerns raised by the reviewers. ", " **1. Errors on some bright objects.**\n\nThanks for your valuable comment, which helps us to better clarify the analysis in the supplementary file.\n\nIt should be noted that bright objects are extremely similar to the appearances of the rain streaks. The examples of the confusing bright objects and rain streaks can be found in Figure 6: (1) the top-right water region in the first row; (2) the second-left person in white cloth in the fourth row; (3) the bright regions of the wheels in the last row. There are pixels of these bright objects misunderstood as rain streaks, as illustrated in the corresponding status maps. Yet, we use multiple status maps, which provide more differentiable information for separating the bright objects from the rain streaks. Thus, the final deraining results are reasonable. In future work, we plan to further improve the deraining results of the bright objects, while relying on fewer status maps for saving computation.\n\n\n**2. Discussion on size of training and testing dataset.**\n\nWe sincerely thank you for this constructive comment, which helps to make our experimental analysis more comprehensive.\n\nPlease note that public datasets are used for a fair comparison between SEIDNet and other methods. Some of the datasets contain relatively balanced training and testing splits, e.g., Snow100K (50K/50K images for training/testing), Rain100L (200/100 images for training/testing), and ISTD dataset (1330/540 images for training/testing). 
In these datasets, SEIDNet outperforms other methods.\n\nWe agree that conducting experiments on the unbalanced training and testing splits may reduce the confidence of SEIDNet. Thus, we follow your suggestion and reduce the training data in the extremely unbalanced dataset (i.e., SPA with 638K/1K images for training and testing). This is done by randomly sampling 1K training images from SPA dataset. The random sample is three-fold, thus forming three different subsets, each of which contains 1K images for training different models. The trained models are evaluated on the 1K images in the original test set. With different subsets for training, SEIDNet yields better results than other methods (see the table below).\n\n\n ||&emsp; Subset A|&emsp; Subset B|&emsp; Subset C|\n|:-:|:-:|:-:|:-:|\n|Method|PSNR &ensp; SSIM|PSNR &ensp; SSIM|PSNR &ensp; SSIM|\n|EfeDeRain|&nbsp; 34.37 &nbsp; 0.9556|&nbsp; 36.18 &nbsp; 0.9629|&nbsp; 35.44 &nbsp; 0.9576|\n|MPR|&nbsp; 35.57 &nbsp; 0.9519|&nbsp; 36.83 &nbsp; 0.9579|&nbsp; 37.65 &nbsp; 0.9630|\n|SPDNet|&nbsp; 31.84 &nbsp; 0.9094|&nbsp; 33.57 &nbsp; 0.9254|&nbsp; 32.93 &nbsp; 0.9162|\n|SEIDNet|&nbsp; 39.07 &nbsp; 0.9813|&nbsp; 39.90 &nbsp; 0.9821|&nbsp; 39.97 &nbsp; 0.9799|\n\n\n**3. No societal impact.**\n\nMany thanks for your reminder. Our approach can help to recover the image information, which may be broadly used in many scenarios (e.g., autonomous vehicles and video surveillance). One should be cautious of using the results, which may contain problematic information. This may give rise to the infringement of privacy or economic interest. We have added this societal impact to Section 6 of the supplementary file.", " **1. How to connect the motivation to the proposed method? What does it mean to randomly sample multiple status maps given a single input image? Why are multiple statuses (kernels) better than a single status (kernel), in terms of resolving the confusing information of the pixels?**\n\nWe sincerely thank you for this valuable comment, which significantly helps us to clarify the motivation for proposing SEIDNet and how multiple statuses and kernels work. We will also follow your suggestion to polish the presentation of our paper.\n\n$[Motivation$ $of$ $Proposing$ $SEIDNet]$\n\nIn the input image, the observed intensity of each pixel can be regarded as a mixture of the pixel intensities of rain and object. We agree that each pixel is associated with an underlying status. But the status is not trivially a binary indicator of rain or object. It should be a factor that determines the rain and object intensities. Different pixel-wise statuses should be respected by different kernels, which have different weights to suppress/enhance the rain/object intensities of the corresponding pixels.\n\nTypically, the discriminative network outputs a single status for each pixel. As introduced in lines 24-28 of the paper, the rain streaks and object textures may be similar (see the example in Figure 1). They mislead the discriminative network (e.g., EfDeRain), which outputs similar statuses for the similar rain and object pixels with confusing appearances. In this case, the similar rain and object pixels cannot be differentiated. The similar statuses let the corresponding pixels be processed by similar kernels, yielding erroneous deraining results. 
This problem motivates us to propose SEIDNet for using multiple statuses and kernels to better differentiate and process the confusing rain and object pixels.\n\n$[The$ $concept$ $of$ $sampling$ $multiple$ $statuses$ $and$ $why$ $it$ $work]$\n\nWe use the first CVAE of SEIDNet to sample multiple statuses for each pixel. Given the rain and object pixels in different training images, this CVAE learns the distribution of the pixel-wise statuses. Conceptually, the distribution can be regarded as a latent space, where we embed the statuses of the rain and object pixels of the training images. To infer the status of an unknown pixel, we sample from the latent space to find multiple possible statuses, which appear in similar scenarios of the training images. Rather than using an indifferentiable status alone, we resort to multiple statuses together to borrow more differentiable information from a broad range of training images, thus providing a better chance to separate the confusing pixel from the similar pixels.\n\n$[The$ $concept$ $of$ $sampling$ $multiple$ $kernels$ $and$ $why$ $it$ $work]$\n\nThe second CVAE of SEIDNet constructs the second latent space. In this space, we embed the kernels for processing the rain and object pixels in the training images. By respecting multiple possible statuses of the pixel, we again borrow the information widely from the training images, sampling multiple kernels from the second latent space. We average these kernels to process the corresponding pixel. Given a pair of confusing rain and object pixels, their average kernels likely have different weights to adjust their rain and object intensities, compared to the single and similar kernels produced by the discriminative network. The effectiveness of using multiple kernels has been evidenced by the experiment in Section 5.2 “Sensitivity to the Number of Kernels” of the paper.\n\n**2. Experiment on the Adjusted ISTD dataset.**\n\nThanks for your suggestion. In the table below, we compare SEIDNet with the recent methods PMDNet [ECCV 2020] and AEFNet [CVPR 2021] on the Adjusted ISTD dataset (a.k.a., ISTD+). SEIDNet outperforms the compared methods.\n\n| ||&emsp; ISTD+||\n|:-:|:-:|:-:|:-:|\n|Method|Shadow|Non-shadow|All|\n|PMDNet|9.7|3.0|4.0|\n|AEFNet|6.5|3.8|4.2|\n|SEIDNet|6.4|3.4|3.9|\n\n**3. Limitation of when the method doesn't work well in comparison to other approaches and why.**\n\nThanks. We have discussed the limitation of when SEIDNet works unsatisfactorily (see Section 5 of the supplementary file). Here, we further clarify this limitation by comparing SEIDNet with other methods.\n\nThe major limitation of SEIDNet stems from the need for a deeper network with 27 convolutional layers to estimate the kernels, which play as the reference for training the second CVAE satisfactorily. Currently, using these extra 27 layers significantly lowers the inference speed of SEIDNet, which requires 0.083s to process an image. Compared to the extremely fast models like EfeDerain (0.0059s/image), SEIDNet stills has much room to achieve a better trade-off between performance and computational efficiency.", " **1. Experiments on multiple degradations.**\n\nThank you for this valuable comment. According to your suggestion, we compare our SEIDNet with the advanced methods (EfeDeRain [AAAI 2021],HRGAN [CVPR 2019] MPR [CVPR 2021], TransWeather[CVPR 2022]) on the Outdoor-Rain Test1 dataset [CVPR 2019], where the images are degraded by rain and fog. 
We report the results of different methods in the following table, where SEIDNet outperforms other methods in terms of PSNR and SSIM.\n\n| |&emsp;&ensp; Test 1&emsp; |\n|:-:|:-:|\n|Method|PSNR &emsp; SSIM|\n|MPR|21.90 &ensp; 0.8456|\n|HRGAN|21.56 &ensp; 0.8550|\n|EfeDerain|22.96 &ensp; 0.8842|\n|TransWeather|31.05 &ensp; 0.9509|\n|SEIDNet|31.36 &ensp; 0.9593|\n\n**2. Comparisons with recent methods (Maxim [CVPR 2022] and Transweather [CVPR 2022]).**\n\nMany thanks for your suggestion. In comparison with the contemporary methods (Maxim [CVPR 2022] and Transweather [CVPR 2022]), SEIDNet still shows its effectiveness by achieving better results on the challenging deraining (Rain13K), dehazing (ITS\\&OTS), desnow (Snow 100K), and multi-degradation (Outdoor-Rain Test1) datasets. Please see the tables below.\n\n| |Snow100K Overall|\n|:-:|:-:|\n|Method|&ensp; PSNR &emsp; SSIM&ensp; &nbsp; |\n|TransWeather|31.55 &ensp; 0.9152|\n|SEIDNet|32.77 &ensp; 0.9643|\n\n| |&nbsp; &nbsp; ITS Subset &nbsp;|&nbsp; &nbsp;OTS Subset|\n|:-:|:-:|:-:|\n|Method|PSNR &ensp; SSIM|PSNR &ensp; SSIM|\n|MAXIM|&nbsp; 38.11 &nbsp; 0.9910|&nbsp; 34.19 &nbsp; 0.9850|\n|SEIDNet|&nbsp; 40.62 &nbsp; 0.9968|&nbsp; 35.72 &nbsp; 0.9951|\n\n| |Rain13K Overall|\n|:-:|:-:|\n|Method|PSNR &emsp; SSIM|\n|TransWeather|&nbsp; 31.48 &ensp;&nbsp; 0.9104|\n|MAXIM|&nbsp; 33.37 &ensp;&nbsp; 0.9365|\n|SEIDNet|&nbsp; 33.62 &ensp;&nbsp; 0.9539|\n\n**3. Make code and pseudo-code public.**\n\nWe will definitely follow your suggestion and release the code package and pseudo-code publicly. For your reference, we have provided some of the core code segments of SEIDNet in the supplementary file. We have also attached the below pseudo-code for training and testing to Section 7 of the supplementary file. \n\n$Algorithm$ $A$: Training pseudo-code of SEIDNet\n\n1: $epoch$ = 1;\n\n2: while $epoch$≤ max_epoch do \n\n3: &emsp; Input rainy image $I$, object layer $O$ for estimating status map $R$ via Eq. (3);\n \n4: &emsp; Extract feature map $F$ from rainy image $I$;\n\n5: &emsp; Pass feature map $F$ and status map $R$ to CVAE $\\mathcal{V}_{se}$;\n \n6: &emsp; Estimate mean values and standard deviation maps: $[μ_r,σ_r] \\leftarrow encoder([F, R]), [μ_f,σ_f]←condition(F)$ in Eq. (4);\n\n7: &emsp; Input $F, Z$ and $(μ_r , σ_r )$ to the decoder of $\\mathcal{V}_{se}$ to generate $R^′$ via Eq. (4);\n\n8: &emsp; Calculate status estimation loss $L_{se}$ via Eq. (5);\n\n9: &emsp; Estimate kernel maps $K_r$ and $K_o$ from $F$, $R$ via Eq. (6);\n\n10:&emsp; Estimate kernel map:$K←R⊙K_r +(1−R)⊙K_o$ in Eq.(7)\n\n11:&emsp; Pass $K$, $F$ and $R$ to CVAE $\\mathcal{V}_{id}$;\n\n12:&emsp; Calculate mean value maps and standard deviation maps$[μ_k,σ_k]←encoder([K, F, R]), [μ_c,σ_c]←condition([F,R])$ in Eq. (8);\n\n13:&emsp; Generate kernel map from kernel space: $K^′←decoder([F,R,μ_k+σ_k⊙Z])$ in Eq. (8);\n\n14:&emsp; Calculate information decoupling loss $L_{id}$ via Eq. (9);\n\n15:&emsp; Employ $K^′$ and $I$ to estimate object layer $O^′$ via Eq. (10);\n\n16:&emsp; Calculate deraining loss $L_{de}$ via Eq. (10);\n\n17:&emsp; Calculate overall loss $L$ via Eq. 
(11);\n\n18:&emsp; Update network weights;\n\n19:&emsp; If $epoch$ ≥ lr_decrease_epoch then\n\n20:&emsp; &emsp; Adjust learning rate;\n\n21:&emsp; end if\n\n22:&emsp; $epoch \\leftarrow epoch + 1$;\n\n23: end while\n\n$Algorithm$ $B$: Testing pseudo-code of SEIDNet\n\n1: $index$ = 1\n\n2: while $index$ ≤ len(test_dataset) do\n\n3: &emsp; Extract feature map $F$ from rainy image $I$;\n\n4: &emsp; Pass feature map $F$ to CVAE $\\mathcal{V}_{se}$;\n\n5: &emsp; Calculate mean value map and standard deviation map: $[μ_f,σ_f ]←condition(F)$ in Eq. (12);\n\n6: &emsp; while $m$ ≤ $N$ do\n\n7: &emsp; &emsp; Generate status map from constructed status space: $R^m←decoder([F,μ_f+σ_f⊙Z^m])$ in Eq. (12);\n\n8: &emsp; &emsp; Pass $F$, $R^m$ and $Z^m$ to CVAE $\\mathcal{V}_{id}$;\n\n9: &emsp; &emsp; Calculate mean value map and standard deviation map: $μ^m_c,σ_c^m←condition([F,R^m])$ in Eq. (12);\n\n10: &emsp;&emsp;Generate kernel map from constructed kernel space:$K^m←decoder([F,R^m,μ^m_c+σ_c^m⊙Z^m])$ in Eq. (12);\n\n11: &emsp;&emsp;$m←m+1$\n\n12: &emsp;end while\n\n13: &emsp;Estimate $K^u$ via Eq. (13)\n\n14: &emsp; $O$ = $K^u \\circledast I$\n\n15: &emsp; $index←index+1$\n\n16: end while", " The paper presents a CVAE(conditional variational auto-encoder) based method of single image rain removal. The key idea is to use the generative network VAE to learn the probability distribution of the pixel-wise rain status P(R|F,Z) and the derainning kernels P(K|F, R, Z), where R, F is the status map and the kernels, respectively. F are the extracted features of the input image, and Z is the latent variables generated by the normal distribution P(Z). For the inference stage, they use the rainy image I to compute F, and randonly generate m Zs. By using F, m Zs, they compute m Rs and consequently m Ks. The final derain kenerl map K is the mean value map of m Ks. The experimental results demostrate the effictiveness of the proposed method on the deraining benchmarks of Rain100H, Rain100L, Rain1400, SPA, the snow removal task dataset of Snow100k, and the haze removal task dataset of ITS&OTS. \n The proposed method is an extension of [Du et.al. TIP 2020]. Instead of using CVAE to directly estimate P(Y|F,Z) as Du et. al., they formulate the problem as P(R, K| F, Z) and factorize the distribution P(R, K| F, Z) = P(K|R,F,Z) P(R|F, Z), so that they can decouple the latent factors to learn P(R|F,Z) and P(K|R, F, Z), respectively. They experimental results show the effictiveness of the proposed formulation. \n\nHowever, the paper presentation is lack of clarity. There are many places in the method(both formulation and architecture) which are not clear. Especially about information decoupling, the authors should give a more explaination about why they use the corrent formulation of K (EQ6~9), and how it can get correct K as ground truth. Please provide the clear defination of the function \"conv\" in EQ3, EQ6. If it is a convolution, it should have two varibles but not one. if not, please give the clear explaination. \nPlease explain the \"encoder\", \"condition\", and \"decoder\" operations in the paper. If they are CNN blocks, please decribe them clearly, especially where the mean and standard deviation are computed from by the \"condition\" block ( condition(F) in EQ4, condition(F, R) in EQ12). \nPlease explain the equations of the information decoupling. Please explain the conv function in Eq6 and fig. 2. Is the conv a fixed numarical fountion? How do you promise the current formulation get correct K of the input pair? 
\n\nPlease provide more explaination about the discriminative networks they used in the comparison experiments. No information is provided in the current draft. The authors have not provide discussion about the limitations or failure cases of their work. The authors should include such discussions. In ablation study, more discussions and analysis are needed about the comparison of the proposed CVAE with GAN or other discriminative methods. ", " Paper proposes an image deraining method where they compute pixel status in status space that indicates whether the pixel is belonging to rain or an object. this information is used in the decoupling layer to construct kernel space for the pixel to derain and recover a clean background. Weaknesses:\n- Can the proposed method handle multiple degradations like images with rain and fog or images with fog and snow.\n- comparison with a few recent papers is missing\n[a] Tu et al., Maxim: Multi-axis mlp for image processing, cvpy 2022\n[b] Valanarasu et al., Transweather: Transformer-based restoration of images degraded by adverse weather conditions, cvpr 2022.\n\n- can authors elaborate or provide pseudo-code for training and testing\n- Proposed methods seem complex to implement, will the code be made public\n Please refer weaknesses Please refer weaknesses", " The paper discusses a method for image rain removal based on two CVAE models. The first CVAE learns the distribution of rain condition (status), conditioned on the image features. The second CVAE learns the distribution of kernels that are used to remove rain effects, conditioned on the rainy status and the image features. During testing, multiple status maps and kernels are repeatedly sampled from the distribution to remove rainy effects from the input image. Strengths:\nThe technical details are clear. The idea of using CVAE to learn the distribution of status estimation and the corresponding restoration kernels is novel. The results are strong and the framework seems to work well for not just rain removal but also other image restoration tasks including snow, haze, and shadow removal. \n\nWeaknesses:\n\nMy main concern is that it is unclear to me how the method works. Conceptually, I think we aim to correctly estimate the status maps and the kernels, given the image feature? With the correct status map and kernels then we can remove rainy effects properly. However, the two CVAEs sample several status maps as well as multiple kernels, conditioned on the image features. I don’t understand how these randomly sampled status maps approximate rainy status in the input image. Should an input rainy image be associated with exactly one status map? What does it mean (conceptually) to randomly sample multiple status maps given a single input image? Moreover, I also don’t understand why taking the average of randomly sampled kernels is better than one kernel. While I am aware that the authors have provided quantitative evidence in the s.m. that using these CVAEs is better than using regression models directly, I don’t think the authors have provided proper explanations for the effectiveness of their method.\n\nThe motivation also doesn’t make much sense to me since I can’t connect the motivation to the proposed method. Why the “confusing information of the pixels” can be resolved by sampling multiple pseudo rainy status maps? 
\n\nThe shadow removal experiment should be conducted on the Adjusted ISTD dataset (shadow image decomposition for shadow removal - ICCV19) since the original ISTD dataset exhibit a major color shift between input and GT images.\n Please see the weaknesses. Some limitations have been discussed in the s.m. It's unclear to me when the method doesn't work well in comparison to other approaches and why.", " A CNN structure based on the combination of two Conditionally Variational Auto-Encoders (CVAE) is proposed for image rain removal. Two CVAE are used, one for generating hypothesis on rain presence at a pixel and on for the scale of the rain steak. The proposed structure is tested on several database and compared to several others methods. It is also applied separately to snow, haze and shadow removal. The proposed structure is interestingly introduced and seems original. The evaluation seems substantial and show improvements over 5 others methods. It is interesting that the proposed structure can work on other applications such as snow, haze and shadow removal. In the supplementary material, shown maps of rainy pixels have numerous errors on bright objects. May it be possible de comment on this point ? \nOne may notice that the size of the test database is small or very small compared to the size of the learning databases in several cases in the experiments. This may induce a lack of confidence on the proposed results that should be interesting to discuss ?\n No societal impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 3 ]
[ "bM9uyohn3_O", "2a1gdcyadCg", "jI_9aZuTOj", "qbGMjmMj4gSy", "VQ7-6K-JGd", "UDgBwNfA_x", "H1r7ZgrZrJv", "yHV12xyKbBf", "nFOWanRn1Tf", "u33v0wrZ5s6", "Jz3qr1PsWiA", "yHV12xyKbBf", "WrZIzoaM5jI", "nFOWanRn1Tf", "u33v0wrZ5s6", "u33v0wrZ5s6", "nips_2022_C2o5DeL_8L1", "yHV12xyKbBf", "WrZIzoaM5jI", "nFOWanRn1Tf", "nips_2022_C2o5DeL_8L1", "nips_2022_C2o5DeL_8L1", "nips_2022_C2o5DeL_8L1", "nips_2022_C2o5DeL_8L1" ]
nips_2022_xl39QEYiB-j
Embodied Scene-aware Human Pose Estimation
We propose embodied scene-aware human pose estimation where we estimate 3D poses based on a simulated agent's proprioception and scene awareness, along with external third-person observations. Unlike prior methods that often resort to multistage optimization, non-causal inference, and complex contact modeling to estimate human pose and human scene interactions, our method is one-stage, causal, and recovers global 3D human poses in a simulated environment. Since 2D third-person observations are coupled with the camera pose, we propose to disentangle the camera pose and use a multi-step projection gradient defined in the global coordinate frame as the movement cue for our embodied agent. Leveraging a physics simulation and prescanned scenes (e.g., 3D mesh), we simulate our agent in everyday environments (library, office, bedroom, etc.) and equip our agent with environmental sensors to intelligently navigate and interact with the geometries of the scene. Our method also relies only on 2D keypoints and can be trained on synthetic datasets derived from popular human motion databases. To evaluate, we use the popular H36M and PROX datasets and achieve high quality pose estimation on the challenging PROX dataset without ever using PROX motion sequences for training. Code and videos are available on the project page.
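The "multi-step projection gradient" mentioned in this abstract can be pictured with a short sketch: starting from the current global pose, the 2D reprojection error of the model joints is differentiated for a few steps, and the accumulated update, expressed in the global frame, serves as the movement cue for the embodied agent. This is only an illustrative reconstruction under stated assumptions (a generic differentiable forward-kinematics function `fk_fn` and a pinhole camera), not the paper's actual implementation.

```python
import torch

def multi_step_projection_gradient(q, kp_2d, fk_fn, K, T_cam, n_steps=5, lr=1e-2):
    """Toy multi-step 2D-reprojection gradient used as a movement cue.

    q      : (D,) current global pose parameters (root translation/orientation + joints).
    kp_2d  : (J, 2) detected 2D keypoints for the current frame.
    fk_fn  : differentiable forward kinematics, q -> (J, 3) joint positions in world frame.
    K      : (3, 3) camera intrinsics;  T_cam : (4, 4) world-to-camera extrinsics.
    """
    dq = torch.zeros_like(q)
    for _ in range(n_steps):
        q_step = (q + dq).detach().requires_grad_(True)
        joints_w = fk_fn(q_step)                               # (J, 3) world coordinates
        ones = torch.ones(joints_w.shape[0], 1)
        joints_c = (T_cam @ torch.cat([joints_w, ones], dim=1).T).T[:, :3]
        uv = (K @ joints_c.T).T
        uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)            # perspective projection
        loss = ((uv - kp_2d) ** 2).sum()
        (grad,) = torch.autograd.grad(loss, q_step)
        dq = dq - lr * grad                                    # accumulate the update
    return dq.detach()                                         # movement cue in the global frame
```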
Accept
The submission initially received mixed reviews. After rebuttal, all reviewers felt their concerns reasonably addressed and recommended acceptance (though one didn't update the score). The AC agrees. The authors are encouraged to revise the paper accordingly.
train
[ "wlQx5Hv8vO", "EWxdfi3HkRb", "3H8Ew3i5aW", "fTJ1P47rN9", "7SsnyynOubE", "HiaR2QimTh0", "5qpmtk4eKqN", "857o7k4RBv", "56siKCpoiLm", "v9GjVoG0eO", "gxDgjW42ROD", "0yNMNk22zsq0", "CuJQUSKM0rv", "YJBxFeWZnUS", "gCyMjZFnoI7", "cVRMx7QTUy", "b9WKbfEp1CF", "fTq_pmTWJLA" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you so much for the updated review and response! We are super glad that our reply has answered some of your concerns and improved your impression of our work. Feel free to ask us anything if there are any additional questions!\n\nAlso, if we may ask, could you kindly update the ratings in the official review to \"6: Weak Accept\" and reflect the score change? \n\nThanks again! \n\nAuthors ", " Thank you for the author's responses, especially around the questions I had; I am significantly happier with the work and therefore will recommend a weak accept", " Dear Reviewer wM8A, \n\n\nThank you so much for all of your constructive feedback and suggestions. They will greatly help improve our work. \n\nWe have provided comments on your concerns and revised our paper based on the suggestions. Since our method is causal, runs in sub-real time, and can be applied with and without a real-world scene, it is not restricted to the use cases demonstrated in the paper (we have included a teaser in-the-wild result in the anonymous [webiste](https://embodiedscene.github.io/embodiedpose/), which uses the same model reported in the paper. We sincerely hope that our response could have addressed some of your concerns and improved your impression of our work. \n\nAs the author's discussion period draws near an end, we would greatly appreciate it if you could acknowledge/update your initial reviews, and please do not hesitate if there are any more questions/concerns you would like to be addressed. \n\n\nThanks again! \n\nAuthors", " Dear Reviewer SrSt, \n\n\nThank you so much for all of your constructive feedback and suggestions. They will greatly help improve our work. \n\nWe have provided comments on your concerns and revised our paper based on the suggestions. As the first work to simulate full-body humanoids in natural daily scenes, we provide a viable and expandable approach (namely, modified scenes for better simulation) for this task. As simulation technology improves, we will be able to leverage better scene scans and simulators accordingly. We sincerely hope that our response could have addressed some of your concerns and improved your impression of our work. \n\nAs the author's discussion period draws near an end, we would greatly appreciate it if you could acknowledge/update your initial reviews, and please do not hesitate if there are any more questions/concerns you would like to be addressed. \n\n\nThanks again! \n\nAuthors. ", " Dear Reviewer fkcV, \n\n\nThank you so much for all of your constructive feedback and suggestions. They will greatly help improve our work. \n\nWe have provided comments on your concerns and revised our paper based on the suggestions. As the first work to simulate full-body humanoids in natural daily scenes, we not only provide improved performance on PROX (as can be shown in the revised paper and [website](https://embodiedscene.github.io/embodiedpose/)) but also much better run-time. We sincerely hope that our response could have addressed some of your concerns and improved your impression of our work. \n\nAs the author's discussion period draws near an end, we would greatly appreciate it if you could acknowledge/update your initial reviews, and please do not hesitate if there are any more questions/concerns you would like to be addressed. \n\n\nThanks again! \n\nAuthors. ", " Dear Reviewer aaAC, \n\n\nThank you so much for all of your constructive feedback and suggestions. They will greatly help improve our work. 
\n\nWe have provided comments on your concerns and revised our paper based on the suggestions. We would especially point to run-time and performance comparison with the SOTA methods both in the revised paper and [website](https://embodiedscene.github.io/embodiedpose/). We sincerely hope that our response could have addressed some of your concerns and improved your impression of our work. \n\nAs the author's discussion period draws near an end, we would greatly appreciate it if you could acknowledge/update your initial reviews, and please do not hesitate if there are any more questions/concerns you would like to be addressed. \n\n\nThanks again! \n\nAuthors. ", " We thank all reviewers for their time and constructive feedback and hope that our responses provide clarification on our approach and results. Here we summarize some common clarification points and provide a list of revisions we have made based on the constructive feedback. \n\n**Performance**\n\nIn this work, we explore pose estimation using \"embodiment\" as our guiding principle, where we incorporate egocentric features and scene awareness in our pose estimation pipeline. We train a simulated agent to follow 2D keypoint observations and navigate in recreated real-life environments. Our method is trained on **motion sequences** from the AMASS, kin_poly, and H36M (train split) datasets and does not utilize any image-level features from the video sequences. At inference time, we used an off-the-shelf 2D keypoint detector to extract 12 2D keypoints compatible with the SMPL humanoid body (more details in Appendix B.2) and use them as input in a streaming fashion. In this constrained setting, our model is able to achieve the state-of-the-art result on the challenging PROX dataset without ever seeing the videos or 2D keypoint observations from PROX during training. Compared to SOTA batch optimization-based methods, our approach can run in a fraction of the runtime while remaining competitive in performance. As motion is best seen in videos, we provide visualization of all the sequences from the PROX dataset on our [anonymous website](https://embodiedscene.github.io/embodiedpose/). \n\n\n**Modified Scene Geoms**\n\nWe utilize modified scene geometries to better approximate the real world, as early in our development we realize that the convex decomposition of the scanned scenes is a poor substitute for the real scenes. As can be shown in Appendix C.2, the scanned scenes are often bulkier than their real-world counterparts and contain cracks and crevices. This is a result of both inaccuracies in the scene scanning process and the convex decomposition necessary to port those scenes into a physics simulation. Thus, we modify the scenes and use simple geometries to better approximate their real-world counterparts, and offer smoother and stabler shapes for physics simulation. We believe that as simulation and 3D scanning technologies progress, as a community, we will be able to create better digital copies of real scenes. As one of the first works to simulate humanoids together with real-life-sized scenes, we modify the scenes so that it offers the best trade-off between simulation speed and realism. Notice that our core contribution, our MPG formulation, and humanoid control methods, do not depend on this operation. The only input it requires is the signed distance function (sdf) of the scene to query occupancy, and the sdf can come from a number of sources including point clouds and meshes. 
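The "simple geometries" described here can be illustrated with a minimal sketch that fits an axis-aligned box to one segmented object's scanned points; the real pipeline (detailed further below in this response thread) is semi-automatic and also uses cylinders and manual segment selection, so this is only a toy stand-in.

```python
import numpy as np

def fit_box(points):
    """Fit an axis-aligned box primitive to one segmented object (e.g. a table top).

    points : (N, 3) scanned points belonging to the segment.
    Returns (center, size) that can be handed to a physics simulator as a box geom.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    return (lo + hi) / 2.0, hi - lo

# Example: a noisy, roughly slab-shaped point cloud.
rng = np.random.default_rng(0)
pts = rng.uniform([-0.6, -0.4, 0.70], [0.6, 0.4, 0.75], size=(2000, 3))
center, size = fit_box(pts)
print("box center:", center, "box size:", size)
```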
As simulation technology progresses, our method can easily be adapted to leverage better-approximated scenes. \n\n\n**List of Revisions**\n\n1. Fixes in Table 1 and Table 2, where the checkmarks for HybrIK and PROX were erroneous. HybrIK does utilize the image features on the H36M dataset. PROX-RGB also utilizes the scene features during their optimization procedure. \n2. Added the runtime in seconds to each of the methods shown in Table 2. We hope this puts the performance of our method in better perspective, especially compared to multi-stage batch optimization-based methods. \n3. Added an entry on not using the modified scenes to our ablation study. \n4. Added the discussion of the experimental result of the PROX dataset as well as Fig. 4 for qualitative evaluation. \n5. Changed the notation of scene features from $\boldsymbol s_t$ to $\boldsymbol o_t$, since $\boldsymbol s_t$ is already used to refer to simulation states. \n6. Appended discussion of social impact in the Appendix.\n7. Typo fixes.\n\nWe appreciate all suggestions for our paper and believe that they have made our manuscript stronger. Please let us know if there are any remaining questions or clarifications we could make. Thanks!", " We thank the reviewer for the positive, constructive, and encouraging review. We are super glad that you find our work \"simple and robust\", \"significant\", our result on PROX \"state-of-the-art\", and our method \"only needs 2D observations and can be trained on synthetic dataset for real-world pose estimation\". To address your questions and concerns: \n\n\n**Failure cases in the poses of sitting or lying**\n\nWe agree that the performance for some of the lying and occluded sitting sequences still has room for improvement. As discussed in the failure cases (Sec.5 and Appendix A.1), the humanoid and scene geometries do not perfectly reflect the real world and its physical properties. Beds and sofas are extra challenging due to their soft cover and pliable surfaces. Although physics simulators such as MuJoCo support simulating soft bodies (through simulating hundreds of small spheres and capsules), it is extremely computationally expensive (in both CPU and memory consumption) to simulate a large number of soft bodies. Thus, in this work, we choose to use modified geometries to better replicate the scenes. We wish to be transparent about the failure cases and shortcomings of our method and have included all of the available sequences from the PROX dataset on our anonymous website: [https://embodiedscene.github.io/embodiedpose/](https://embodiedscene.github.io/embodiedpose/). \n\n\n\n**Contact modeling**\n\nOur learning framework models contact through the input pose $\mathbf{q_t}$ and environmental sensors. Unlike previous simulation-based MoCap methods such as SimPoE [2], our pose estimation network takes the current simulated pose $\mathbf{q_t}$ into account while producing its pose estimation $\tilde{\mathbf{q_t}}$. This closes the loop between physics simulation and pose estimation, where the network is aware of the current simulation states while producing the next time-step estimates. Prior methods such as SimPoE often use a two-stage approach where the kinematic pose is estimated independently from the physics simulation process. 
As a result, our network receives real-time feedback of contact and collision from the physics simulation through $\\mathbf{q_t}$, as the reaction in $\\mathbf{q_t}$ indicates that the humanoid cannot walk through certain obstacles. Combined with environmental occupancy information, our network can learn the association between scene occupancy and collision through a large number of simulation experiences. \n\n\n[1] Todorov, Emanuel et al. “MuJoCo: A physics engine for model-based control.” 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (2012): 5026-5033.\n\n[2] Yuan, Ye et al. “SimPoE: Simulated Character Control for 3D Human Pose Estimation.” 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021): 7155-7165.\n\n\n**New agents for different motion sequences**\n\nWe do not require new agents trained for different motion sequences. Our evaluation on the PROX and H36M datasets is carried out without ever using any motion sequences or fine-tuning our model, and a single policy is used to evaluate all sequences from the PROX and H36M datasets. Once learned, our agent can estimate poses of everyday activities without additional information or training. It is conceivable and possible that specialized agents can be trained for more dynamic motion sequences such as competitive sports for better performance. \n", " \nWe thank the reviewer for the positive, constructive, and helpful feedback. We are grateful that you find our work to have \"strong performance\" and \"impressive given the limited amount of input\". We also appreciate that you find our ablation to be comprehensive. To address your questions and concerns:\n\n---\n\n**Modified Scene Geometries**\n\nWe agree that modifying the geometries is an important part of our proposed method and will provide additional motivation in our revisions. In Appendix C.2, we provide visualizations that explain the motivations for this simplification step: due to inaccuracies of the mesh creation process, the scanned meshes provided by the PROX dataset often contain geometries bulkier than the original objects in the scene. As can be seen in Figure 3 of the Appendix, the chairs and tables are much thicker in the scanned mesh than in the real world. This makes it impossible for the humanoid to pass through. Since this issue does not fall under our pose estimation method, we resort to using modified geometries to better reflect the real world. Another reason is that convex geometries are required for physics simulators [1]. Creating the convex hull of scanned meshes is a lossy process, and the resulting mesh often has unrealistic cracks and crevices. As can be seen in Figure 3, a chair mesh must be decomposed into multiple segments of convex hulls to be faithfully simulated. \n\n\nNotice that our pose estimation networks itself **does not** require the modified geometries; it uses the signed distance function of the scene to query occupancy while the humanoid navigates the environments. The signed distance function can be acquired from a number of sources, such as scanned meshes or point clouds. Given better-formed geometries for physics simulation (e.g. convex hulls that are faithful to the original scenes), our method would require no additional modifications for utilizing the data. Advancement in physics simulation that alleviates the requirement of convex geometries for contact modeling may also help remove this geom modification process. 
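To make the closed loop described above more concrete, one causal time step could be organized roughly as in the sketch below: the policy sees the simulated pose, a local scene-occupancy query, and a keypoint-derived movement cue, and the physics simulator then enforces contact. The `sim`/`policy` interfaces and the grid-based SDF query are illustrative assumptions, not the actual API of the paper's code.

```python
import numpy as np

def occupancy_features(sdf, root_pos, radius=1.0, res=8):
    """Query a signed distance function around the humanoid root.

    sdf      : callable mapping (N, 3) world points to (N,) signed distances
               (it can be built from a scanned mesh or a point cloud).
    root_pos : (3,) current root position of the simulated humanoid.
    Returns a flattened local occupancy grid used as the scene-awareness input.
    """
    lin = np.linspace(-radius, radius, res)
    gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
    pts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3) + root_pos
    return (sdf(pts) < 0.0).astype(np.float32)          # 1 = inside scene geometry

def estimation_step(sim, policy, sdf, movement_cue):
    """One causal step: read the simulated state, build the observation, act, step physics."""
    q_t = sim.get_pose()                                 # proprioception fed back from the simulator
    obs = np.concatenate([q_t,
                          occupancy_features(sdf, q_t[:3]),
                          movement_cue])                 # projection-gradient cue from 2D keypoints
    target_q = policy(obs)                               # kinematic target from the learned policy
    sim.apply_pd_control(target_q)                       # low-level controller tracks the target
    sim.step()                                           # contact/collision handled by the simulator
    return sim.get_pose()
```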
\n\nFor the semi-automatic scene modification process, given the scanned point cloud of the scenes, we first identify the objects that we wish to simplify. For each object, we manually select the geometries that we want to use to approximate it (e.g. cylinders and cuboids for tables). Then, for each segment of the objects (such as armrests and legs of chairs), we use a script to find the bounding box of the point clouds and calculate the parameters of the geometries (width and heights, diameters, etc.). Each scene is then represented by the compositions of these modified geometries. \n\nTo better demonstrate the performance of our method without the scene simplification process, we rerun our method with the original scene scans and added an entry to our ablation table:\n\n| w/ Modified Scene | w/Scene awareness | MGP w/o multi-step | MGP w/o Geometric | MGP w/o TCN | SR $\\uparrow$ | ACD $\\downarrow$ | Accel $\\downarrow$ |\n| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |\n| ⤬ | ✔ | ✔ | ✔ | ✔ | 80% | 413.0 | 36.5 |\n| ✔ | ✔ | ✔ | ✔ | ✔ | 96.7% | 148.2 | 9.2 |\n\nAs can been seen in the table, without the scene modification process, the success rate drops while ACD increases. Notice that a few sequences (namely MPH11_00034_01 , MPH1Library_00145_01 , N0Sofa_00034_01) causes the simulation to become unstable and throws the following error: \n\n> Exception in do_simulation Got MuJoCo Warning: Nan, Inf or huge value in QACC at DOF 0. The simulation is unstable. Time = 54.2467. 1627\n\nUpon visual inspection, this is due to the humanoid getting stuck to the original mesh's unrealistic gaps and cracks and causes the simulation to become unstable. This does not happen in the modified scene, as it better reflects the real scene geometry. \n\n\n\n**Limitations**\n\nWe thank the reviewer for feedback on the limitations and social impact. We apologize for this oversight and have added the following section in our appendix in revision: \n\n\"This research focuses on estimating physically valid human poses from monocular videos. Such a method can be positively used for animation, telepresence, at home robotics, etc. where a natural-looking human pose can serve as the basis for higher-level methods. It can also lead to malicious use cases, such as illegal surveillance and video synthesis. As our method focuses on fast and causal inference, it is possible to significantly improve its speed and deploy it to real-time scenarios. Thus, it is essential to deploy these algorithms with care and make sure that the extracted human poses are with consent and not misused.\"\n", " **Soft geometry in PROX**\n\nWe thank the reviewer for the suggestion and question. Indeed, it would be beneficial to simulate soft bodies and make sofas and chairs more realistic. Our current simulation configuration is used to optimize run time and computational cost. While the Mujoco physics simulation does support soft body simulation, it is accomplished as a large link of spheres and capsules, which is computationally expensive to simulate. Memory consumption for contact modeling increases significantly with the number of geometries simulated, and it becomes impossible to simulate large scenes together with moving humanoids. As a result, soft body simulation is only used in small-scale tests such as robotic folding. We also do not have an annotation on the hardness and compliance of the sofas in the PROX scenes. 
As far as we know, we are the first work to simulate humanoids with large real-world scenes, and we feel it is a reasonable trade-off between realism and computational cost to start with rigid bodies. It is also an active research area to simulate interaction with deformable objects such as sofas and chairs. \n\n\n**Typos and suggestions**\n\nWe thank the reviewers for the careful review and reading. We have fixed the typos in the manuscript. Thanks!\n\n[7] M. Hassan, V. Choutas, D. Tzionas, and M. J. Black. Resolving 3D human pose ambiguities with 3D scene constraints. In International Conference on Computer Vision (ICCV), Oct. 2019.\n\n[31] D. Rempe, T. Birdal, A. Hertzmann, J. Yang, S. Sridhar, and L. J. Guibas. HuMoR: 3D human motion model for robust pose estimation. In International Conference on Computer Vision (ICCV), 2021.\n\n[40] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012, pp. 5026-5033.\n\n[46] S. Zhang, Y. Zhang, F. Bogo, M. Pollefeys, and S. Tang. Learning motion priors for 4D human body capture in 3D scenes. In International Conference on Computer Vision (ICCV), Oct. 2021.\n", " The authors thank the reviewer for constructive and helpful feedback. We are glad you find our work \"novel in utilizing simulation\". To address your questions and concerns:\n\n---\n\n**Comparison with Neural MoCon and Novelty**\n\nThere are several differences between our method and Neural MoCon in methodology and approach. \n\n- Neural MoCon employs a **batch optimization based** reference motion initialization step to first obtain the kinematic pose, while ours directly estimates physically valid 3D body pose in a causal and sequential manner. This two-stage approach renders Neural MoCon hard to employ in a streaming fashion. The two-stage formulation also creates a disconnect between the pose estimation stage and the later motion tracking stage: when the kinematic pose estimate is of lesser quality, the simulated motion tracker may find it hard to imitate the reference motion. \n\n- Neural MoCon employs a sampling-based control strategy, where the distribution prior to sample from is learned from the **training split** of the datasets. Since the GPA and H3.6M datasets both have similar actions between training and testing splits, it is natural that such a distribution prior is helpful in estimating motions from the test set. In the GPA dataset, only three sequences (0, 34, 52) out of 60 unique recordings were used for testing purposes, while the remaining 57 are used for training. Ours, on the other hand, employs a deep-RL-based humanoid controller and does not utilize any additional distribution information from the testing datasets. As such, our method can be applied to the PROX dataset **without** ever being trained on it (the PROX data also do not provide any ground-truth 3D annotation). \n\nIn all, compared to Neural MoCon, our method is causal, runs in near real-time, and does not need any training set to learn the prior data distribution. We also differ in philosophy, as our guiding principle is the idea of \"embodiment\", where we incorporate more egocentric features and scene awareness in body pose estimation. We use body perception, scene features, and movement goals to control a humanoid to estimate pose in a streaming fashion, while Neural MoCon utilizes multiple stages of pose optimization and motion tracking. \n\nWe can indeed work on the datasets they use (namely, GPA and GTA-IM). 
We chose PROX because, compared to the GPA dataset, it is more challenging and contains videos of humans interacting in a real-world indoor environment. The GPA dataset only contains videos recorded in a motion capture studio with subjects interacting with simple geometries such as stairs and cuboids. The background is also always a green screen. The PROX dataset has more heavy occlusion induced by real-world human-scene interactions (such as sitting on chairs and sofas), and the subjects do not wear any MoCap markers. Our method is able to perform well on the PROX dataset without ever training on the PROX videos. No additional modifications will be needed to run inference on the GPA dataset (as GPA also contains camera pose and scene geometries), though we choose the PROX dataset as it is more challenging and contains more realistic human-scene interactions. \n\n**Performance**\n\nWe apologize for any confusion that our table might have caused. In Table 2, we compare with multistage batch-optimization-based methods such as HuMoR [31], PROX [7], and LEMO [46], and some of them use RGB-D sequences. The only regression-based method is HybrIk w/ RootNet, and it alone offers a fair comparison. Batch-optimization-based methods often directly optimize for the metrics shown in the table (such as ACD and scene penetration), which a regression-based method can only learn from data. We have added relative run-time for all of the methods in table 2 in our revision. We achieve SOTA results on ACD, ground, and scene penetration using RGB and about 100 times speedup while staying competitive with RGB-D results. We would also want to point out that the difficulty of the PROX dataset is the main reason why most of the methods that tackle this dataset is batch optimization based: factoring in the scene geometry, occlusion, and depth ambiguity in a regression-based method is exceedingly challenging. The lack of training data also exacerbates this issue. Here we include a subset of our main Table 2 where we only include the RGB-based methods: \n\n| Method (RGB) | Phys. | Scene | SR $\\uparrow$ | ACD $\\downarrow$ | Time (seconds) $\\downarrow$ |\n| ----------- | ----------- | ----------- | ----------- | ----------- | ----------- |\n| PROX | ⤬ | ✔ | 100% | 297.81 | 47.64 | \n| HuMoR | ⤬ | ⤬ | 98.3% | 435.7 | 11.73 |\n| HybrIK w/ RootNet | ⤬ | ⤬ | - | 23512.7 | 0.08 |\n| Ours | ✔ | ✔ | 96.7% | 148.8 | 0.12 |\n\nFrom this table, where all the methods use RGB input, we can see that our method outperforms all baseline methods in terms of ACD (distance to point cloud) while having a per-frame runtime close to regression-based methods.\n", " **Further dataset demonstrate the performance better than H36M**\n\nSince we focus on pose estimation and human scene interaction modeling, we pick the PROX dataset as our main source of evaluation. We utilize the popular H36M dataset to establish a baseline and showcase that our method can estimate good-quality 3D human pose in a MoCap environment without ever being trained on real-world 2D keypoint detection (we train only on synthetic datasets). The same model is then used to evaluate on the PROX dataset, where we demonstrate the ability to estimate human pose and human scene interactions in a real-world environment from detected 2D keypoints. Our regression-based pose estimator has not been fine-tuned on the PROX dataset and has achieved SOTA results compared to batch-optimization-based methods [7, 31, 46]. \n\n\n[7] M. Hassan, V. Choutas, D. Tzionas, and M. J. Black. 
Resolving 3D human pose ambiguities with 3D scene constraints. In International Conference on Computer Vision, Oct. 2019.\n\n[31] D. Rempe, T. Birdal, A. Hertzmann, J. Yang, S. Sridhar, and L. J. Guibas. Humor: 3d human motion model for robust pose estimation. In International Conference on Computer Vision441 (ICCV), 2021\n\n[40] Todorov, Emanuel et al. “MuJoCo: A physics engine for model-based control.” 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (2012): 5026-5033.\n\n[46] S. Zhang, Y. Zhang, F. Bogo, P. Marc, and S. Tang. Learning motion priors for 4d human body capture in 3d scenes. In International Conference on Computer Vision (ICCV), Oct. 2021 \n", " \n**Discussion around the PROX Dataset**\n\nWe are sorry for the lack of discussion around the PROX dataset and have added the following paragraph to the manuscript:\n\n\"The PROX dataset contains RGB-D video sequences of humans interacting with indoor environments that are pre-scanned using depth cameras. We use the qualitative split of the PROX dataset, which contains 1,000K frames of videos of 20 subjects in 12 scenes. No ground-truth 2D keypoints or 3D poses are provided for this dataset. Due to the lack of training split, heavy occlusion, and human scene interactions, the PROX dataset is highly challenging for 3D human pose estimation methods, and popular methods often utilize multistage optimization to recover poses from videos.\"\n\nWe also added a plot (Fig.4 in our revision manuscript) and the following paragraph for discussion of our result:\n\n\"From Fig.4 we can see that our method can estimate accurate 3D absolute poses that align with the 3D scene. Compared to batch multistage optimization methods that utilize RGB-D input, PROX [7], and LEMO [46], our method can achieve comparable results while running in near real-time. Compared with the regression-based method, HybrIK [16] with RootNet [24], our method can estimate much more accurate root translation and body pose. In terms of physical plausibility, optimization-based methods often perform well but can still exhibit floating and penetration. While the optimization methods can take over a minute per frame to estimate poses, catastrophic failure can still occur (as shown in the fourth row). Since motion is best seen in videos, please refer to our demo videos [https://embodiedscene.github.io/embodiedpose/](https://embodiedscene.github.io/embodiedpose/) for quantitative comparison. \"\n\n**Disconnect between 2D keypoint detection and 3D/depth \"perfect\" generation of the scene...**\n\nWe are unsure of what exact disconnect is referred to here and would appreciate your understanding if we misinterpreted your meanings. Since the PROX dataset does not provide ground-truth 2D keypoints or 3D body pose annotation, we resort to off-the-shelf 2D keypoint detection methods, as done in prior arts [7, 31, 46]. The 3D scenes are also far from perfect (as demonstrated in Appendix C.2), with chair and sofa dimensions drastically different from their real-world counterparts. This creates a disconnect between the simulated environment and the real world that causes the agent to be stuck between chairs often. Thus, we resort to using modified scene geometries.\n\n\n**Temporal prediction neural networks such as LSTMs or transformers**\n\nWe indeed use a temporal prediction neural network in our network (please refer to Appendix D2). Our kinematic policy consists of Gated Recurrent Units (GRU) and processes the input autoregressively. 
We can not use a temporal prediction network alone (without MPG) since 2D keypoint observation is coupled with the camera pose. Given the same global body pose $\\boldsymbol q_t$, different camera poses will produce different 2D keypoints. Since our network directly predicts the human pose in the global coordinate space, we cannot use 2D keypoints directly as input. The benefit of MPG is that it provides motion cues in the global coordinate space and can be directly used as network input. For more information and motivation of MPG, please refer to Sec.3.2. \n\n\n**Limitations**\n\nSorry for not making the limitations section more apparent. We discussed the failure cases of our methods in the \"Sec.5 Discussion\" and added additional failure cases in Appendix A.1. Here we can also provide a summary of the failure cases:\n\nDue to the discrepancy between physics simulation and the real world, our simulated humanoid can get stuck in narrow gaps between chairs, tables, and beds. Heavily erroneous 2D keypoint detection may also cause our humanoid to lose track and lose balance. Motions such as lying on the bed facing down and pushing up are also challenging since they are few examples in motion capture datasets. Although our method can often recover quickly by jolting out of the current state and trying to follow the target 2D keypoints, we can still observe failure cases where the humanoid cannot stay on track. For visualization of the failure cases (we include results on **all** sequences from the PROX dataset for evaluation), please refer to our anonymous website: [https://embodiedscene.github.io/embodiedpose/](https://embodiedscene.github.io/embodiedpose/).\n\n\n", " Thank you so much for your feedback and suggestions. We are glad you found that our method \"performs well on the PROX dataset\". To address your comments and concerns:\n\n---\n\n**Usage of H36M Dataset**\n\nWe agree that the H36M dataset is not the best dataset for scene modeling: it only contains the ground and occasionally the chairs as scene elements. Thus, we mainly focus on demonstrating pose estimation results on the PROX dataset, which contains videos of humans interacting with a rich environment. We include the H36M dataset since it is a common benchmark for 3D human pose estimation, and we would like to establish a baseline from which we can compare with popular methods. It also shows that our method can be applied to global 3D human pose estimation, in both scene-rich and ground-only scenarios. We agree that more dataset that contains human scene interactions and ground-truth 3D pose annotation is needed to further research on this topic. \n\nAs mentioned in Sec.4.1 and the caption for Table 1, our method only uses a sparse set of detected 2D keypoints as input and does not directly use image features. We also train only on synthetic 2D keypoints generated from motion datasets. As such, the comparison between ours and the SOTA methods is not a completely fair one. Methods such as VIBE, MeTRAbs, and HyberIK are all kinematics-only methods and have severe artifacts such as foot sliding, penetration, etc., as can be shown in the demo videos. SimPoE, though a physics-based method, uses VIBE as the first stage and can be viewed as a refinement of VIBE's result. Another important distinction is that we directly perform pose estimation in the **global** space (as compared to two-stage methods such as MeTRAbs and SimPoE, where the root position and body pose are separately regressed). 
\n\nFor a better evaluation of the motion quality, we provide rendered videos of full sequences from the H36M dataset on our anonymous submission website [https://embodiedscene.github.io/embodiedpose/](https://embodiedscene.github.io/embodiedpose/). The qualitative result demonstrates that while our method may not achieve SOTA in joint positional error, it does produce a high-quality pose estimation. \n\n\n**MPG based on PoseTriplet [3]**\n\nCompared to PoseTriplet [3], the only similarity is that we both make use of the network architecture from VideoPose [24], and our methodology and procedure are quite different. PoseTriplet uses the VideoPose network as the **main pose estimation backbone** and uses it to provide 3D joint positions for their physics-based imitator. Our MPG calculates the pose projection gradients through an iterative gradient descent process augmented with learning-based and geometric transform $\\mathcal{G}$. The learning-based transform utilizes the VideoPose network to calculate the root orientation in camera space, while the geometric transform solves a rigid body transformation step using least-squares minimization. As such, we only utilize the VideoPose network in a **substep** of our MPG. The output is also different: MPG directly computes the pose gradient (in joint angles) in the global space instead of 3D joint positions in the camera space. The pose gradients serve as the movement cues for our velocity-predicting network. In Appendix B, we go into detail about how MPG is constructed using both learned and geometric transforms. \n\n\n", " In this paper, the authors propose a scene-aware pose estimation framework based on the concept of embodiment. Different from previous work with multistage optimization, non-causal inference, and complex contact modeling for the scene-aware pose, this method is simple(only one stage) and recovers robust scene-aware pose in the simulated environment. Besides, to simulate good poses, they disentangle the camera pose and use a multi-step projection gradient defined in the global coordinate frame as the movement cue for our embodied agent.\nThis method only needs 2D observation and can be trained on synthetic datasets for real-world pose estimation. They achieve the state-of-the-art on PROX dataset without using any training data in this dataset. Strengths:\n+ The main contribution--simulating scene-aware humanoid to mimic visual observation and recover 3D poses in the real-world environment is significant. I believe it will be a good baseline for the scene-aware pose estimation.\n+ This method achieves promising results on the challenging poses of PROX dataset.\n+ The MGP is reasonable on 3D pose recovery and achieves significant improvements shown in ablation studies.\n+ This paper is well written and easy to follow.\nWeakness:\n-The simplified scene representation causes some failure cases in the poses of sitting or laying.\n- The contact modeling between motion and scene is a little bit unclear. 1) It would be better to discuss how the contact information models in this paper. I think it is one of the most important differences between this work and other simulation-based mocap methods. \n2) Does this method need to train new agents for different motion sequences? Please follow the weakness.", " The paper proposes a method for estimating 3D human pose given a monocular RGB video observing a human moving through a scene tha has been previously reconstructed (i.e. a scene for which a 3D mesh reconstruction is available). 
The method is based on physical simulation of the human moving through a geometrically simplified version of the scene. The simulation is driven by a multi-step projection gradient connecting 2D pose keypoints to the controller that drives the humanoid pose in simulation.\n\nThe experiments evaluate the proposed method as well as ablations against a breadth of methods from prior work on two datasets: PROX and H36M. Performance is evaluated in terms of pose accuracy (mainly joint error metrics where ground truth pose is available) and pose plausibility (mainly distance to scene geometry, interpentretation frequency and distance etc.). The results show that the approach is competitive with prior work that uses image features on the H36M dataset, and mostly outperforms prior work on the PROX dataset. Strengths\n+ The physics-based formulation is novel and offers a complemetary approach to the 3D human pose estimation problem statement relative to most prior work. The fact that it can provide strong performance is quite impressive given the limited amount of input information that is given compared to other methods\n+ The paper is well-written and presents a breadth of ablations that analyze the impact of different components of the method on performance\n\nWeaknesses\n- The approach relies on a \"semi-automatic\" simplification of the 3D scene geometry to address failure modes due to reconstruction noise. I would have liked to see a more detailed discussion of this issue and also of the actual process for creating these simplified scenes. This is an important detail as it constitutes a significant modification to the input available to the presented method relative to prior work. Ideally, there would be an \"ablation\" that would report results based on the original (un-simplified) scene geometry to concretely measure the impact and value of this semi-automatic step I would like the authors to respond to the point regarding scene simplification. Limitations are described in the last section of the paper, mainly focusing on the simplification of human bodies and scene geometry to rigid bodies. As far as I can tell, there is no discussion of potential negative societal impacts (despite a statement pointing to the supplement in the checklist).", " The paper presents a 3d human pose framework based on both simulators and 2d/3d human estimators. It recovers both global 3D human poses and local human poses. The methods can be applied to daily activities such as PROX dataset. Strengths:\nIt sounds novel to me as I am not a simulator guy but a learning and 3d person guy. Using these simulators naturally will help generate lots of physics priors and encode the physics into the learning process.\nI may not provide enough evidence for the novelty of the simulation part.\n\nWeaknesses:\nI did not see the comparison with \"Neural MoCon: Neural Motion Control for Physically Plausible Human Motion Capture\", if so, what is the difference, can you work on the dataset they are working on?\n\nI think the performance is somewhat unsatisfying, we can see several metrics in Table 2 is not SOTA.\n\nSeveral typos: \nline 287: scene penetration frequency Freq. and distance Pen..\nline 204: keytpoints\nline 202: rotationrR As I understand, the geometry in PROX is soft, is that reasonable to fully treat the sofas as hard/always flat? should we consider the deformation during the interaction of the geometries? Yes", " This paper aims to perform single-camera pose estimation. 
It uses 2D keypoints and a 3D simulated scene that the person is within to enable an improvement in the pose estimation. The approach is tested on the H36M and PROX datasets. Strengths\nThe work doesn't require complex contact modelling or multiple stages of optimization; it is a single inference step.\nThe work proposes a temporal gradient projection to smooth the estimation over time.\nThe work is able to perform well on the PROX dataset\n\n\n\nWeaknesses\nThe H36M dataset is ill-suited for scene modelling as there are few objects within the foreground; the method seems to perform poorly on it, and it is distracting for the paper\nThe MPG is heavily based on the work of [3]\nThere is very limited experimental discussion around the work on the PROX dataset, making it hard for a reader to fully appreciate the performance of the approach\nThere is a disconnect between the 2D keypoint detection and the 3D/depth \"perfect\" generation of the scene\nFor the MPG, what about if temporal prediction neural networks such as LSTMs or transformers were used instead?\nCould a further dataset demonstrate the performance better than H36M? None stated" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "EWxdfi3HkRb", "0yNMNk22zsq0", "gCyMjZFnoI7", "cVRMx7QTUy", "b9WKbfEp1CF", "fTq_pmTWJLA", "nips_2022_xl39QEYiB-j", "gCyMjZFnoI7", "cVRMx7QTUy", "gxDgjW42ROD", "b9WKbfEp1CF", "CuJQUSKM0rv", "YJBxFeWZnUS", "fTq_pmTWJLA", "nips_2022_xl39QEYiB-j", "nips_2022_xl39QEYiB-j", "nips_2022_xl39QEYiB-j", "nips_2022_xl39QEYiB-j" ]
nips_2022_NaW6T93F34m
"Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach
Modern deep neural networks (DNNs) are extremely powerful; however, this comes at the price of increased depth and having more parameters per layer, making their training and inference more computationally challenging. In an attempt to address this key limitation, efforts have been devoted to the compression (e.g., sparsification and/or quantization) of these large-scale machine learning models, so that they can be deployed on low-power IoT devices. In this paper, building upon recent research advances in the neural tangent kernel (NTK) and random matrix theory, we provide a novel compression approach to wide and fully-connected \emph{deep} neural nets. Specifically, we demonstrate that in the high-dimensional regime where the number of data points $n$ and their dimension $p$ are both large, and under a Gaussian mixture model for the data, there exists \emph{asymptotic spectral equivalence} between the NTK matrices for a large family of DNN models. This theoretical result enables ''lossless'' compression of a given DNN to be performed, in the sense that the compressed network yields asymptotically the same NTK as the original (dense and unquantized) network, with its weights and activations taking values \emph{only} in $\{ 0, \pm 1 \}$ up to scaling. Experiments on both synthetic and real-world data are conducted to support the advantages of the proposed compression scheme, with code available at https://github.com/Model-Compression/Lossless_Compression.
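As a rough illustration of the compression idea summarized in this abstract, the snippet below ternarizes a dense Gaussian weight matrix to values in {0, ±1} (up to a common rescaling) and compares the eigenvalue spectra of the resulting single-layer conjugate-kernel matrices. The sparsity level, the single hidden layer, and the plain Gaussian data are simplifying assumptions for illustration only, not the paper's exact algorithm or calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, d = 512, 256, 1024                               # samples, input dim, layer width
X = rng.standard_normal((p, n)) / np.sqrt(p)           # toy data (single Gaussian class)

W = rng.standard_normal((d, p))                        # dense Gaussian weights
eps = 0.8                                              # fraction of entries to zero out
thresh = np.quantile(np.abs(W), eps)
W_tern = np.sign(W) * (np.abs(W) > thresh)             # entries in {0, -1, +1}
W_tern = W_tern / W_tern.std()                         # rescale to match the dense second moment

def ck_eigs(weights):
    """Eigenvalues of the first-layer conjugate kernel sigma(WX)^T sigma(WX) / d with ReLU."""
    feats = np.maximum(weights @ X, 0.0)
    return np.linalg.eigvalsh(feats.T @ feats / d)

lam_dense, lam_tern = ck_eigs(W), ck_eigs(W_tern)

# Compare the two spectra: top eigenvalue and cosine similarity of binned histograms.
bins = np.linspace(0.0, max(lam_dense[-1], lam_tern[-1]), 50)
h1, _ = np.histogram(lam_dense, bins=bins)
h2, _ = np.histogram(lam_tern, bins=bins)
cos = h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12)
print("top eigenvalues:", lam_dense[-1], lam_tern[-1])
print("cosine similarity of eigenvalue histograms:", cos)
```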
Accept
In the paper, the authors provide theorems that establish that for GMM input data, the NTK matrices of dense and quantized DNNs have the same eigenspectra in the asymptotic limit of high input data dimension and sample size. These results motivate network compression algorithms which demonstrate good empirical performance even outside the regime for which the proofs are established. The theorems provide a novel extension that contains previous studies as special cases. The baseline comparisons included in the paper are somewhat limited in nature, and the authors should re-evaluate their choice to use the word "lossless" with quotes, and instead use a more accurate term that does not require quotes.
train
[ "IXLVzmIH6g8", "yy91Fv9CVxP", "0N4xOCe-_7", "ZCTp-IAWGSJ", "2kdD1kHCEu", "F62y0GUFAy", "STPC6544o5W", "zkrDwDS-cBd" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'd like to thank the authors for their response. It addressed all my questions. Especially, the performance-complexity tradeoff in Figure 3 is very helpful. I decided to keep my rating (7: accept).", " We thank the reviewer for his/her positive support and for the valuable insights shared in the constructive comments. \nIn the following, we provide point-by-point answers (**A**) to the comments raised by the reviewer.\n\n* \"The authors ran experiments on real not-so-wide DNNs that also break the assumptions, e.g. taking real image data that is not GMM and taking convolutional nets that are not fully-connected. And they reported, in their own words \"unexpected close match\" to the theory. This, in its own right, is an important observation that future theoretical relaxations might address. But the gap is rather inadequately characterized by experiment here: e.g. any empirical characterization of the notion of \"unexpectedness\"?\"\n\n**A**: This is an interesting and important question. We would like to clarify that (i) in Figure 2, we compare the eigenspectra of CK matrices of fully-connected (FC) nets for GMM and real MNIST data of reasonable size ($n,p$ only in hundreds), for which we observe an \"unexpected close match\" between the theory and practice, since we clearly violate the asymptotic ($n,p\\to\\infty$) and GMM assumptions, but the theoretical results remain approximately valid: This may be due to a fast convergence rate as $n,p$ grow large or an underlying \"universality\" as discussed after Remark 2 and in Remark 4 of the Appendix of the paper; and (ii) \nin Figure 3, we compare the performance of uncompressed DNNs versus compressed nets using the proposed NTK-LC (which is based on the limiting NTK and does **not** hold, a priori, for the network of finite width under study) and other compression approaches, where we violate the infinite network size assumption of NTK and also the GMM data assumption (but not the fully-connected assumption, since we *only* compressed the FC layers, see the discussion above Figure 3) and obtain a sparse and quantized network with up to a factor of $10^3$ less memory and limited performance degradation.\nAnd we will conduct additional experiments to empirically characterize, in a quantitative manner, how the different types of violations above affect the final performance of the network (which is not available for the moment due to the short time slot).\n\n* \"Another thing I do not appreciate is the trickiness of the scalar calibration due to nonlinearities. This seems to stem from the requirement of unity Gaussianity. Is there any empirical data supporting the near-Gaussianity under real data? Does there exist any badly behaved nonlinearities that significantly deviated from theory? Does this mean NNs trained with normalization layers are more conducive to such compression, or could the procedure be simpler in those cases?\"\n\n**A**: We thank the reviewer for this interesting question. The (exact) Gaussian distribution assumption may not be necessary and is demanded here mainly for the simplicity of mathematical derivation, as has been discussed in the paragraph after Remark 2 and in Remark 4 in the Appendix. We agree with the reviewer's intuition and also conjecture that \"NNs trained with normalization layers are more conducive to the proposed compression,\" but establishing such a proof in a rigorous manner seems out of the scope of this paper.\n\n* \"I find the naive sparse/quantized baselines presented in experiments inadequate. 
Of course this is a post-training compression technique, but since it is derived from NTK, it is natural and important to ask how it generalizes compared to much stronger, non-post-training baselines, such as, say, a winning lottery ticket--does there exist any other compressed nets that generalizes better regardless of how expensive the compression procedure was?\"\n\n**A**: We thank the reviewer for this constructive comment. We have added, in Figure 3 of the revised version of the manuscript, the comparison between the proposed NTK-LC approach and the magnitude-based pruning method (as demanded by Reviewer 7Ta5, among the most widely used NN compression schemes), which shows the advantageous performance of the proposed NTK-LC approach. More empirical results (such as those related to winning lottery ticket) will be made available in an updated version of the paper. The theoretical analysis, though, requires more effort and is out of the scope of this paper.\n\n\n* \"'Lossless' (used not in quotation marks) might be a false advertisement.\" and \"Figure 2, missing axes labels\".\n\n**A**: We thank the reviewer for pointing these out and they are all fixed in the revised version.\n\n", " We thank the reviewer for his/her positive support and constructive comments. \nIn the following, we provide point-by-point answers (**A**) to the comments raised by the reviewer.\n\n* \"The claim in line 49 seems to be a central theme of the paper but has no follow-up discussion on its meaning and implications.\" and \"Suggestion: As mentioned in Weakness 1, the paper would benefit from a discussion on how the convergence and generalization properties of ultra-wide DNNs can depend only on the eigenspectra.\"\n\n**A**: We thank the reviewer for this helpful suggestion. It is known that the time evolutions (when trained with gradient descent using a sufficiently small step size) of both the residual error and the in-sample prediction of (sufficiently wide) neural networks can be expressed as *explicit* functions of the NTK eigenvalues and eigenvectors. More specifically, with the notations in the paper and consider $K_{\\rm NTK} = \\sum_{i=1}^n \\lambda_i v_i v_i^T$ the spectral decomposition of the final-layer NTK matrix $K_{\\rm NTK}$ with eigenvalue-eigenvector pair $(\\lambda_i, v_i)$, one has, for square loss $L(W_1, \\ldots, W_L, w) = \\frac12 \\parallel y - f(X) \\parallel^2$ on the training set $(X, y) \\in \\mathbb{R}^{ p \\times n} \\times \\mathbb{R}^n$ that\n$$ v_i^T \\frac{d}{dt} ( y - f_t(X) ) = - \\lambda_i r_i(t), \\quad \\frac{d}{dt} f_t(X) = \\sum_{i=1}^n \\lambda_i r_i(t) v_i,$$\nfor $f_t(X)$ the network output at time step $t$ and residual error component $r_i(t) = v_i^T ( y - f_t(X) )$, see more details in, e.g., [1-3].\nIn a sense, the NTK eigenvalue distribution entails the \"train-ability\" of the NN model and the eigenvectors of the largest eigenvalues indicate the direction in which the loss decays the most rapidly.\nIn the revised version of the paper, we have added a sentence in lines 51-53 to make this clearer.\n\n\n* \"Theoretical claims are presented in the asymptotic regime of infinite n and p (Assumption 1).\"\n\n**A**: Despite the asymptotic nature of the claims in the paper, empirical results show a close match between theory and practice even for $n,p$ in hundreds, see Figure 2. We conjecture the asymptotic results can be extended to a non-asymptotic setting with some additional efforts and under the additional assumption that weights $W$ are sub-gaussian. 
That is, however, out of the scope of this paper.\n\n* \"A particular GMM distribution is chosen for the input data of the studied model without justification for why it is the relevant distribution to be analyzing.\"\n\n**A**: GMM is among the most widely known and used distributions in the machine learning literature, see for example [Chapter 9, 4]. In the study of large (and deep) neural network models using high dimensional probability and random matrix theory, GMM is particularly appealing in that (i) it allows one to apply many convenient and/or advanced techniques such as the Stein's lemma and Hermite polynomials, and (ii) in the large $n,p$ regime in Assumption 1, many machine learning methods tend to treat real data *as if they were* mere simple GMM, and the eigenspectra of some core random matrices of interest in these methods only depend on the first- and second-order statistics of the data, so GMM is a first simple yet representative and effective model to study, see [5] and [Chapter 8, 6] for more discussions on this point.\nWe have discussed more on this point in lines 242-256 as well as in Remark 4 in Appendix A of the revised version.\n\n\n[1]Fan Z, Wang Z. Spectra of the conjugate kernel and neural tangent kernel for linear-width neural networks[J]. Advances in neural information processing systems, 2020, 33: 7710-7721.\n\n[2] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural Tangent Kernel: Convergence and Generalization in Neural Networks. Advances in Neural Information Processing Systems, 2018, 33: 8571–8580.\n\n[3] Ben Adlam and Jeffrey Pennington. The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 74–84. PMLR, 2020.\n\n[4] C. M. Bishop, Pattern Recognition and Machine Learning, 1st ed. Springer-Verlag New York, 2006.\n\n[5] M. E. A. Seddik, C. Louart, M. Tamaazousti, and R. Couillet, \"Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures,\" in Proceedings of the 37th International Conference on Machine Learning, 2020, pp. 8573–8582.\n\n[6] R. Couillet and Z. Liao, Random Matrix Methods for Machine Learning. Cambridge University Press, 2022.", " We thank the reviewer for his/her positive support and constructive comments. \nIn the following, we provide point-by-point answers (**A**) to the comments raised by the reviewer.\n\n* \"The results in Figure 2 rely on a qualitative measure of \"closeness\" to evaluate the method instead of a metric that can be quantified and compared. Many of the markings in the top histograms are barely noticeable and require magnification to be seen.\" and \"Suggestion: As mentioned in Weakness 4, the paper would benefit from a quantitative metric of eigenvalue closeness.\"\n\n**A**: To provide a quantitative measure for the NTK eigenvalue \"closeness,\" we have added, in the revised version, the spectral norm errors between $K_{CK}$ and $\\tilde K_{CK}$ (as has been established in Theorem 1 in the $n,p \\to \\infty$ limit). 
Specifically, in Figure 3 **top**, we have $\\parallel K_{\\rm CK} - \\tilde K_{\\rm CK} \\parallel = 0.15$ (**left** for GMM data) and $\\parallel K_{\\rm CK} - \\tilde K_{\\rm CK} \\parallel = 6.86$ (**right** for MNIST data).\nBesides, we have measured the similarity between the eigenvalues of $K_{\\rm CK}$ and $\\tilde K_{\\rm CK}$ using three different (histogram similarity) metrics: the cosine similarity [7], the correlation and the intersection [8]. The similarity estimates based on these three approaches are all close to one (in fact all greater than 0.99), indicating an extremely close match between the two histograms.\nWe have also redrawn Figure 3 to ensure the top histograms are easily readable. \n\n* \"Question: What is the novelty of the NTK-LC approach and how does it compare with state-of-the-art methods?\" and \"The results in Figure 3 are compared with \"naive\" baselines instead of competitive state-of-the-art methods.\"\n\n**A**: The proposed NTK-LC approach is novel in that it has a novel and sound theoretical foundation that depends on the *precise* CK ad NTK eigenspectra of fully-connected DNN models, which, to the best of our knowledge, is derived for the first time under generic GMM data. In Figure 3 of the revised version, we compare the proposed NTK-LC approach to the magnitude-based pruning method (as also proposed by Reviewer 7Ta5, among the most widely used NN compression schemes), showing the advantageous performance of the proposed NTK-LC approach. \n\n[7] Alfirna Rizqi Lahitani, Adhistya Erna Permanasari, and Noor Akhmad Setiawan. Cosine similarity to determine similarity measure: Study case in online essay assessment. In 2016 4th International Conference on Cyber and IT Service Management, pages 1–6. IEEE, 2016\n\n[8] Lee S M, Xin J H, Westland S. Evaluation of image similarity by histogram intersection[J]. Color Research & Application: Endorsed by Inter‐Society Color Council, The Colour Group (Great Britain), Canadian Society for Color, Color Science Association of Japan, Dutch Society for the Study of Color, The Swedish Colour Centre Foundation, Colour Society of Australia, Centre Français de la Couleur, 2005, 30(4): 265-274.", " We thank the reviewer for his/her positive support and constructive comments. \nIn the following, we provide point-by-point answers (**A**) to the comments raised by the reviewer.\n\n\n* \"Quantized neural networks in the current theoretical and empirical results focus on ternary-valued weights, i.e., NNs with weights that take values in {-1, 0, 1}. Would it be possible to extend the results to binary networks, that is, {-1, 1}-valued networks? It seems that such binary-valued weights (up to a proper scaling) can still satisfy assumptions for Theorem 1 and 2.\"\n\n**A**: The binarized weights can be achieved by taking the sparsity level $\\varepsilon = 0$ in the proposed NTK-LC approach, in which case the weights are distributed according to a symmetric Bernoulli, see also Equation (16) in the paper.\n\n* \"In Figure 3 in Section 4, the proposed lossless compression approach is numerically compared with two heuristic sparsification and quantization approaches. The heuristic sparsification approaches uniformly zero out 80% of the weights. This approach is very brutal. Would it be possible to add a more realistic baseline of magnitude-based pruning -- remove a fraction of weights that have the lowest magnitude (absolute value) and keep other weights. 
This baseline is used more often as a baseline for model pruning and is sometimes surprisingly strong (see, e.g., https://arxiv.org/pdf/1902.09574.pdf)\"\n\n**A**: We thank the reviewer for this constructive suggestion. We have added, in Figure 3 of the revised version of the paper, the comparison between the proposed NTK-LC approach and the widely used magnitude-based pruning method suggested by the reviewer. We observe a better \"performance-complexity tradeoff\" achieved by the proposed NTK-LC method. \nMore specifically, we have added two groups of experiments using the NTK-LC ternary weight approach (that has ternarized weights but *unquantized* activations) and the popular magnitude-based pruning method. \nNTK-LC (with both weights and activation quantized) yields slightly inferior performance than its ternary-weights variant, but achieves higher accuracy under the same memory budget compared with the popular magnitude-based pruning.\n\n* \"If I understand correctly, the current lossless compression approach uses layerwise pruning and quantization. That is, all layers share the same sparsity level in Algorithm 1. Would it be possible to use the same theoretical insights to perform global compression; that is, compress weights in MLP layers collectively without regard for the specific layer. This may lead to better compression results since it gives more freedom to the weight selection.\"\n\n**A**: We thank the reviewer for this interesting and insightful question. The proposed NTK-LC compression method can also be used with different sparsity for each layer, as long as Assumption 2 is satisfied. As for the trade-off between the sparsity of each layer, some previous efforts (e.g., [1, 2]) have already provided insightful results and discussions on this point, e.g., less pruning in the first layers and more in the last layers of the network. It could be of future interest to provide more theoretical insights on such \"optimal sparsity schedule\" (if there exists) within the proposed analysis framework. We leave that for future work.\n\n[1] Su J, Chen Y, Cai T, et al. Sanity-checking pruning methods: Random tickets can win the jackpot[J]. Advances in Neural Information Processing Systems, 2020, 33: 20390-20401.\n\n[2] Han S, Pool J, Tran J, et al. Learning both weights and connections for efficient neural network[J]. Advances in neural information processing systems, 2015, 28.", " This paper studies the problem of neural network compression using analysis in the high-dimensional NTK regime. Their main results show that under this regime, the spectral properties of both NTK and CK matrices are independent of the distribution of the weights up to normalization and centering. Instead they depend on a number of parameters that define the activation functions at each layer. This finding informs a new compression technique where a new (compressed) network can match the activation parameters at each layer to enjoy the same spectral properties as the original net. This NTK-LC technique is evaluated on synthetic data by qualitatively comparing the distribution of eigenvalues and on real data by comparing test accuracy with naive baselines. Strengths:\n\nThe paper has a clear motivation in the field of neural network compression, a relevant problem that is lacking theory. It is clearly written with thorough theoretical results and experiments on both synthetic and real-world data.\n\nWeaknesses:\n1. 
The claim in line 49 seems to be a central theme of the paper but has no follow-up discussion on its meaning and implications.\n2. Theoretical claims are presented in the asymptotic regime of infinite n and p (Assumption 1).\n3. A particular GMM distribution is chosen for the input data of the studied model without justification for why it is the relevant distribution to be analyzing.\n4. The results in Figure 2 rely on a qualitative measure of \"closeness\" to evaluate the method instead of a metric that can be quantified and compared. Many of the markings in the top histograms are barely noticeable and require magnification to be seen.\n5. The results in Figure 3 are compared with \"naive\" baselines instead of competitive state of the art methods.\n Question: What is the novelty of the NTK-LC approach and how does it compare with state of the art methods?\n\nSuggestion: As mentioned in Weakness 1, the paper would benefit from a discussion on how the convergence and generalization properties of ultra-wide DNNs can depend only on the eigenspectra.\n\nSuggestion: As mentioned in Weakness 3, the paper would benefit from a quantitative metric of eigenvalue closeness. The authors have addressed the limitations of using NTK theory to explain the behavior of modern neural networks.", " The authors showed asymptotic eigen-spectral equivalence conditions for fully-connected NTK given GMM data and certain assumptions, based thereon they proposed a net compression scheme with sparse and low-precision random weights, and demonstrated with examples. \n [+] Results linking NTK and random matrix theory with DNN compression is of timely interest to the field.\n\n[+] Though I cannot say I followed all proofs, the main ideas and motivations are well presented.\n\n[-] A lack of comprehensive experimental comparison with baseline approaches is limiting the practical significance of the findings.\n - The authors ran experiments on real not-so-wide DNNs that also break the assumptions, e.g. taking real image data that is not GMM and taking convolutional nets that are not fully-connected. And they reported, in their own words \"unexpected close match\" to the theory. This, in its own right, is an important observation that future theoretical relaxations might address. But the gap is rather inadequately characterized by experiment here: e.g. any empirical characterization of the notion of \"unexpectedness\"? \n- Another thing I do not appreciate is the trickiness of the scalar calibration due to nonlinearities. This seems to stem from the requirement of unity Gaussianity. Is there any empirical data supporting the near-Gaussianity under real data? Does there exist any badly behaved nonlinearities that significantly deviated from theory? Does this mean NNs trained with normalization layers are more conducive to such compression, or could the procedure be simpler in those cases?\n- I find the naive sparse/quantized baselines presented in experiments inadequate. Of course this is a post-training compression technique, but since it is derived from NTK, it is natural and important to ask how it generalizes compared to much stronger, non-post-training baselines, such as, say, a winning lottery ticket--does there exist any other compressed nets that generalizes better regardless of how expensive the compression procedure was? 
\n- \"Lossless\" (used not in quotation marks) might be a false advertisement.\n- Figure 2, missing axes labels.\n See above.", " This paper characterizes the asymptotic spectral equivalence between NTKs of dense and quantized networks. It shows that under certain assumptions of data (high-dim, Gaussian mixture data) and network architectures (wide MLPs), quantized networks have the same NTK eigenspectra of unquantized ones. This finding allows the authors to perform model quantization with little performance degradation. The paper is very well written -- the authors crafted their paper with immense care and taste for mathematical detail. The main results of the paper (Theorem 1 and Theorem 2) are novel and subsume previous studies [2, 32] as special cases. Overall, I think this is a high-quality paper. \n\nOne weakness of this paper is in its numerical evaluation. As I detailed below, the baselines used for the model pruning (randomly removing weights) seem to be too brutal and too weak. It is beneficial to incorporate more realistic baselines such as magnitude-based pruning.\n\n\n\n * Quantized neural networks in the current theoretical and empirical results focus on ternary-valued weights, i.e., NNs with weights that take values in {-1, 0, 1}. Would it be possible to extend the results to binary networks, that is, {-1, 1}-valued networks? It seems that such binary-valued weights (up to a proper scaling) can still satisfy assumptions for Theorem 1 and 2. \n* In Figure 3 in Section 4, the proposed lossless compression approach is numerically compared with two heuristic sparsification and quantization approaches. The heuristic sparsification approaches uniformly zero out 80% of the weights. This approach is very brutal. Would it be possible to add a more realistic baseline of magnitude-based pruning -- remove a fraction of weights that have the lowest magnitude (absolute value) and keep other weights. This baseline is used more often as a baseline for model pruning and is sometimes surprisingly strong (see, e.g., https://arxiv.org/pdf/1902.09574.pdf)\n* If I understand correctly, the current lossless compression approach uses layerwise pruning and quantization. That is, all layers share the same sparsity level in Algorithm 1. Would it be possible to use the same theoretical insights to perform global compression; that is, compress weights in MLP layers collectively without regard for the specific layer. This may lead to better compression results since it gives more freedom to the weight selection. One limitation of the paper, as mentioned in my questions above, is its lack of natural baselines for model pruning in the experiment sections. I encourage the authors to consider incorporating them. " ]
[ -1, -1, -1, -1, -1, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "2kdD1kHCEu", "STPC6544o5W", "F62y0GUFAy", "F62y0GUFAy", "zkrDwDS-cBd", "nips_2022_NaW6T93F34m", "nips_2022_NaW6T93F34m", "nips_2022_NaW6T93F34m" ]
nips_2022_6RoAxmwj0L2
DaDA: Distortion-aware Domain Adaptation for Unsupervised Semantic Segmentation
Distributional shifts in photometry and texture have been extensively studied for unsupervised domain adaptation, but their counterparts in optical distortion have been largely neglected. In this work, we tackle the task of unsupervised domain adaptation for semantic image segmentation where unknown optical distortion exists between source and target images. To this end, we propose a distortion-aware domain adaptation (DaDA) framework that boosts the unsupervised segmentation performance. We first present a relative distortion learning (RDL) approach that is capable of modeling domain shifts in fine-grained geometric deformation based on diffeomorphic transformation. Then, we demonstrate that applying additional global affine transformations to the diffeomorphically transformed source images can further improve the segmentation adaptation. Besides, we find that our distortion-aware adaptation method helps to enhance self-supervised learning by providing higher-quality initial models and pseudo labels. To evaluate, we propose new distortion adaptation benchmarks, where rectilinear source images and fisheye target images are used for unsupervised domain adaptation. Extensive experimental results highlight the effectiveness of our approach over state-of-the-art methods under unknown relative distortion across domains. Datasets and more information are available at https://sait-fdd.github.io/.
Accept
This paper proposes a new segmentation method with geometric insight to deal with distortions. As pointed out by our reviewers, this paper features important practical value, a clear problem definition, and interesting mathematical insight. During the rebuttal phase, most of the reviewers confirmed their support for a (weak) acceptance, and I believe this paper should be accepted as a poster paper.
test
[ "2usz1Iy0GZ", "kLfAjZv08pD", "g6-jD-Y8SY3", "iVsVTBe-u-8", "ZkXp99NEVa", "YSOJdSaN4t2", "mPmdStNPsz-", "IFIbszCvCJN", "eN-yfFh25M0", "uUQG0prGnPi", "ew14s2TDnI", "KHblGrI1xoq", "RmBGanDRVtz", "AKd3LobaIMp", "RuxFDBQMHBh" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your favorable reviews. According to the reviewer's suggestions, we will add a detailed explanation in the Discussion and Conclusion (currently Conclusion section in the manuscript) section discussing the possibility of extending our methods, to the general domain adaptation methodology.\nWe will definitely consider releasing the code for reproducibility. We are under an internal review process to complete this.", " We thank the reviewer ‘QLQF’ for the informative replies. We hope our following answers address your concerns properly.\n\nRegarding suggested optical flow (OF) methods, we found Semantic-Flow [R5] very interesting as it exploits localized layer information from semantic segmentation to further improve optical flow tasks. We also noted that SIFT-Flow [R6] considers *unpaired* scenes for motion synthesis via object transfer.\nHowever, prior works on optical flow (e.g., [R5],[R6]) primarily aim to solve *motion* prediction between images. Even though some of optical flow methods allow *unpaired* sets of images, it remains unknown how the non-learnable image descriptor (e.g., SIFT) for *explicit* local correspondence matching can be directly applied to *implicit* relative distortion learning for distortion style transfer. \nMoreover, we cannot say that our method ignores the correspondence. For example, Fig.2 and Fig.4 in the manuscript show that buildings and vehicles are distorted by replicating counterparts in target images by implicit learning of distortion correspondence (RDL) (Line 301-308 in the manuscript).\nLastly, throughout our experiments, we have shown the importance of our relative distortion learning (RDL) for distortion-aware segmentation adaptation, where +RDL (*learnable*) always improves adaptation performance while +RA (randomized affine augmentation) leads to degraded performance in some cases (e.g., +RDL vs. +RDL+RA in *Cityscapes $\\rightarrow$ CityscapesFisheye* and *Cityscapes $\\rightarrow$ FDD*).\nAbove all, we thank the reviewer 'QLQF' for constructive suggestions, and we would like to investigate the possibilities of optical flow as spatial augmentation methods in the follow-up research.\n\n\nRegarding the second question about camera calibration, in Line 40-43 of the manuscript, we referred to Kumar et al. [20], where they experimentally evaluated the disadvantage of calibrations including reduced field-of-view, resampling error, and calibration errors in practice (*\"Practical Problems encountered\"* section in [20]). Hope this appears to be reasonable to the reviewer 'QLQF'.\n\n\n\nReferences:\n\n[R5] Sevilla-Lara et al. \"Optical flow with semantic segmentation and localized layers.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. \n\n[R6] Liu et al. \"Sift flow: Dense correspondence across scenes and its applications.\" IEEE transactions on pattern analysis and machine intelligence 2010\n\n[20] Kumar et al. \"Unrectdepthnet: Self-supervised monocular depth estimation using a generic framework for handling common camera distortion models.\" 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2020.", " Thanks for the authors' response. However, important questions are still not resolved. 
Please check the following.\nTo be explicit, I am not worried that the proposed method cannot do better than other adaptation methods that mainly concern appearance gaps.\nHowever, my concern is two-fold.\nFirst, how significant is the role of the learnable spatial distortion network compared with spatial distortion augmentation that does not need to train networks? Because in any case, you do not care about exact correspondence, and for OF we have SIFT-flow or semantic flow, which do not require paired images.\nSecond, I am not convinced that doing camera calibration would be such a disadvantage, e.g., occlusions can be resolved using an indicator mask. Is there any data showing that calibration would introduce errors even when handled correctly?\nGiven the current information, I would like to keep my rating.", " * I would like to express my appreciation to the authors for taking the time to address my questions. Most of my concerns were resolved in the rebuttal phase, mainly those related to the discriminator, the cost of the proposed network, the description of the dataset, and the network design (related to batch size). Therefore, I increase my rating from six to seven through the discussion phase. In addition, I agree that the authors have not constrained the type or direction of the domain adaptation. Despite the theoretically unconstrained conditions, however, I would like the authors to exhibit improved performance (in terms of accuracy) through the experiments. Detailed explanations in the Discussion section, or a small dedicated section, could exhibit the possibility of extending the novel methods the authors proposed to the general domain adaptation methodology. \n\n* Furthermore, for reproducibility and the benefit of the deep learning community, I would strongly suggest that the code be published publicly. In hopeful expectation, I will adjust my rating in good faith.", " We thank the reviewer for the positive support and for the time spent reviewing our work.", " Thanks for the reply, my issues have basically been settled. I have decided to keep my rating at 6.", " We thank the reviewer ‘ihqk’ for the informative and carefully written comments, and hope you find our answers helpful in resolving your questions.\n\n* **A1.** Thanks for your suggestion; we will try to shorten the distance between figures and their corresponding paragraphs in the final manuscript.\n\n* **A2.** Such irregular edges of the reconstructed images (Fig.2-(d) in the manuscript) can be seen as artifacts of the diffeomorphic transformation.\nWe may consider adding a constraint to keep the boundary of the reconstructed image aligned with the edge.\nHowever, it is unknown whether such a constraint affects the performance of the segmentation adaptation, and we would like to investigate this in future work.\n\n* **A3.**\nAs 'ihqk' pointed out, we found that the randomized affine augmentation via RA is not always beneficial to the segmentation adaptation when it is combined with our relative distortion learning (RDL).\nAs we discussed in Line 282-286 of the manuscript, the effectiveness of RA may depend on the geometric distributional shifts between the source and target domains.\nFor example, +RDL+RA shows degraded segmentation adaptation performance compared to +RDL in *Cityscapes $\\rightarrow$ FDD* in Tab.1 of the manuscript, and in *Cityscapes $\\rightarrow$ CityscapesFisheye* in the above comment for the reviewer 'QLQF' ([link]( https://openreview.net/forum?id=6RoAxmwj0L2&noteId=ew14s2TDnI \"Title\")). 
\\\nIn contrast, RDL always leads to improvements in segmentation adaptation, regardless of pairs of datasets, throughout our experiments (e.g., based methods vs. +RDL, and +RA vs. +RA+RDL) as presented in Tab.1 of the manuscript and above result table ([link]( https://openreview.net/forum?id=6RoAxmwj0L2&noteId=ew14s2TDnI \"Title\")).\n\n* **A4-A5.** Yes. [17] and [18] are duplicated references. Thanks for spotting this and we will correct this properly and streamline the reference format.\n\n* **A6.** To our best knowledge, we have referred necessary publications from 2021 (e.g., [2],[13],[26],[42].[43]) and also presented a comparison with one of SOTA approaches (ProDA [42]) from 2021.\nProbably, we could not find more relevant publications after 2021 as we proposed a pioneering work in distortion-aware domain adaptation.\nHope this also appears to be reasonable to the reviewer 'ihqk'.\n", " Above all, we thank the reviewer `83ri' for providing thoughtful and constructive suggestions.\nWe hope our below answers address most of your concerns.\n\n**Further clarification of motivation** \\\nThe reviewer ‘83ri’ expressed curiosity about the various set-ups of domain adaptation tasks among distorted and rectilinear images (e.g., distorted images as source and rectilinear images as a target, or both source and target domains include distorted images).\nAlthough various adaptation tasks could have been considered, we first tackle adapting existing semantic segmentation models trained on rectilinear images to *unlabeled* fisheye images.\nOne of the motivations for our work is the scarcity and the difficulty of constructing *annotations* for distorted fisheye images (e.g., Woodscape [40]), while we already have larger amounts of *annotated* rectilinear images (e.g., Cityscapes [8], GTAV [31]).\nThus, we first define the scope of our tasks in line with the necessity in real-world scenarios.\nIn addition, as the reviewers pointed out, we are not aware of prior works on geometric distortion in domain adaptation.\nAs pioneering work in this direction, we believe that we have brought already interesting and practically valuable domain adaptation tasks with extensive experimental results.\nThe reviewer 'QLQF' also commented that we introduced new benchmarks that \"*could help develop more sophisticated methods that deal with both geometric distortion and appearance change*\".\nSimilarly, the reviewer '83ri' already suggested an interesting direction where distortion-aware adaptation can be extended to various directions in the machine learning society.\nWe are happy to see such constructive suggestions that could have been possible as we brought a new perspective (i.e., geometric distortion shifts) to the domain adaptation field.\n\n**Application to extended domain adaptation tasks** \\\nWe found the aforementioned extensions of the proposed adaptation tasks plausible to carry out.\nBasically, in our relative distortion learning (RDL), the deformation field generator ($G$, see Fig.1 in the manuscript) produces not only the forward deformation field ($\\Phi_{S \\rightarrow T}$), but also its inverse ($\\Phi_{T \\rightarrow S}$); and both fields are utilized in the distortion-aware loss functions (Eq.(2) and Eq.(3)).\nHence RDL is able to rectify distorted images ($I_{S \\rightarrow T}$) to normal images ($I_S'$) via the inverse field ($\\Phi_{T \\rightarrow S}$), E.g., transform Fig.2-(e) to Fig.2-(d) in the manuscript.\nAlso, we do not impose any assumption on the distortion style of source and target 
images.\nIn particular, both target and source images are randomly cropped and resized (see Fig.2 in the manuscript) in our experiments.\nIn the meanwhile, RDL is able to address various relative distortions between input images (e.g., both source and target images can have some degree of distortions or not).\nTechnically speaking, our distortion-aware adaptation method can be applied to other experimental environments as the reviewer '83ri' suggested and we find this would be one of interesting directions for future work.\n\nWe will further clarify the motivation for defining the scope of our work in Introduction; and discuss the possible extensions of adaptation tasks in Conclusion.\n", " **Effectiveness of Discriminator ($D_G$)** \\\nIn training the deformation field generator ($G$), we observed that the output deformation fields $\\Phi_{S \\rightarrow T}$ and $\\Phi_{T \\rightarrow S}$ get easily converged to a trivial solution (i.e., identity deformation field $\\Phi_{I}$) without introducing the distortion-aware discriminator ($D_G$).\nSuch a trivial solution satisfies Eq.(2) and Eq.(3) and ultimately limits the role of the relative deformation field generator $G$ as an identity field generator.\nThen diffeomorphic transformation only reproduces the identical images (e.g., $I_S \\cdot \\Phi_{S \\rightarrow T} = I_S$ when $\\Phi_{S \\rightarrow T}=\\Phi_{I}$).\nTo prevent such undesirable local minima, we introduced the discriminator $D_G$ which discriminates the distortion style between $I_T$ and $I_{S \\rightarrow T}$.\nWe found that the adversarial loss using $D_G$ is effective in learning relative distortion as shown in the ablation results from Tab.3 (${L}_{adv_G}$) in the manuscript.\n\nPlease note that we designed a discriminator which primarily targets to minimize the distortion gap between $I_T$ and $I_{S \\rightarrow T}$ for the proposed adaptation tasks (from rectilinear source to fisheye target).\nFor this, we also tested whether discriminating the distortion style between $I_S$ and $I_{T \\rightarrow S}$ helps to improve the segmentation adaptation performance in our early design experiments.\nHowever, we did not observe further improvements hence we decided to use a single discriminator for $I_T$ and $I_{S \\rightarrow T}$.\n\n**Cost of the proposed framework** \\\nThe optimization goal of our framework is to find the optimal segmentation adaptation model $M^*$ (see Algorithm 1 in the supplementary material).\nThe optimal model $M^*$ does not require any additional computational modules at test time and thus the inference time and memory usage remain the same compared to the baseline methods (69.385 ms on a 1280 $\\times$ 966 image using about 805.15 MB of an NVIDIA A100 GPU).\nRegarding the training computational costs, as the reviewer '83ri' pointed out, our framework involves additional training of the deformation field generator ($G$), the distortion-aware discriminator ($D_G$), and its adversarial learning (${L}_{adv\\_G}$) as well as the distortion-aware losses.\nFor example, the based method (AdaptSeg) takes about 9.6 GPU hours of training with 17.98 GB GPU memory, while AdaptSeg+RDL takes about 16.5 GPU hours of training with 20.25 GB GPU memory.\n", " **Answers to other questions**\n\n* **Relatively Small Batch Size:**\nWe do not use any batch-normalization layers, following the based methods (AdaptSeg [35] and AdvEnt [36]), since we use a small batch size due to joint training of the deformation field generator and the discriminators along with the segmentation 
adaptation model.\nIn particular, we used the batch size of four so that the training of the models fits an NVIDIA A100 GPU while considering experimental consistency across adaptation tasks for fair comparisons.\n\n* **Pre-Processing and Dataset Split:**\nWe are not sure whether we completely understand what *”nonsense”* meant by the reviewer '83ri'.\nAs indicated in Line 32 of the supplementary material and above comment on the extension of the tasks ([link](https://openreview.net/forum?id=6RoAxmwj0L2&noteId=IFIbszCvCJN \"Title\")), we randomly cropped and resized both source and target images.\nIn the meanwhile, RDL is able to model relative distortion between any source and target images including randomly cropped and resized images (see Fig.2 and Fig.4 in the manuscript).\nOur extensive experiments also validate the effectiveness of RDL in improving segmentation adaptation using such source and target images.\nMoreover, such data processing is known to be effective for improved semantic segmentation performance [6] and also we followed the pre-processing of the baseline methods for fair comparisons.\nLastly, we implemented a simple hold-out method where we randomly split the target datasets into the training and the validation set.\n\n* **Hyperparameter $\\gamma$:**\nThis is a pre-defined hyperparameter to select the baseline adaptation methods where $\\gamma=1.0$ implements AdvEnt [36] and $\\gamma=0.0$ performs AdaptSeg [35] (see Line 208 in the manuscript).\nSo we did not perform an ablation study on $\\gamma$.\n\n* **(if possible) Additional Evaluation Metrics:**\nThanks for suggesting other evaluation metrics that seem to evaluate the boundary-oriented segmentation accuracy.\nConsidering the short period of time given for rebuttal, we were not able to implement the metrics and would like to consider them in future work.\nHowever, please note that the class-wise activation visualizations (Fig.1-(a) in the manuscript and Fig.4 in the supplementary material) show the competence of our method in generating stronger and finer boundary segmentation of objects under severe radial distortion.\nWe also demonstrated the competence of our adaptation method in predicting distorted regions by using the distortion-aware mIoU (Fig.5 in the manuscript and Tab.4 in the supplementary material).\n\n* **Public Access to Dataset and Code:**\nWe are internally processing the release of the FDD dataset and will release the code upon acceptance.\n\n* We will also re-check typos and grammar errors to improve the quality of the final manuscript.", " We thank the reviewer `QLQF' for a thorough and constructive review.\nBelow, we have answered your questions and concerns regarding the effect of the geometric distortion and other aspects of the submission. 
\nHope they appear to be reasonable to you too.\n\n**Effect of Geometric Distortion on Unsupervised Domain Adaptation (UDA)** \\\nTo clarify the effect of geometric distortion on the adaptation tasks, the reviewer ‘QLQF’ suggested performing an experiment where the geometric distortion is isolated from other factors in distributional shifts (e.g., visual domain gaps).\nFor example, ‘QLQF’ presented an experiment where we synthetically generate distorted target images from rectilinear source images, and then evaluate the proposed method with other approaches.\n\nActually, in our early stage of development, we executed a similar adaptation experiment to preliminary validate our distortion adaptation approach under only geometric distortion in domain shift.\nThere we took the Cityscapes dataset as source and its distorted counterpart as target (*CityscapesFishEye* includes fisheye-like images similar to $I_T$ in Fig.1 in the manuscript).\nThe distorted images are generated based on the equidistance fisheye camera projection model [R1].\nTo address 'QLQF's concerns, we performed the *Cityscapes $\\rightarrow$ CityscapesFishEye* adaptation task again with the exact same experimental set-up for *Cityscapes $\\rightarrow$ Woodscape* task.\nWe present the results in the following table.\n\\\n\\\n**Results from the *Cityscapes $\\rightarrow$ CityscapesFisheye* adaptation task**\n| **Method** | **mIoU(%)** | **gain** |\n|---------------------------------|----------------|---------------|\n| Oracle (trained on target) | 69.82 | - |\n| Source Only (trained on source) | 35.69 | - |\n||||\n| AdaptSeg [35] | 47.16 | - |\n| AdaptSeg+**RDL** | 57.89 | +10.73 |\n||||\n| AdaptSeg+RA | 54.02 | - |\n| AdaptSeg+RA+**RDL** | 55.58 | +1.56 |\n||||\n| AdvEnt [36] | 46.67 | - |\n| AdvEnt+**RDL** | 57.04 | +10.37 |\n||||\n| AdvEnt+RA | 54.55 | - |\n| AdvEnt+RA+**RDL** | 55.82 | +1.27 |\n\nResults clearly show that our relative distortion learning (RDL) contributes to significant improvements in the adaptation performance up to +10.73% when only geometric distortion is presented in the distributional shifts.\nThis is somewhat obvious to observe since the baseline methods (i.e., AdaptSeg [35], AdvEnt [36]) do not consider the geometric distortion in domain shifts while our approach features distortion-aware adaptation based on relative distortion learning (RDL).\n\nRemarkably, +RDL achieves the largest gain over the based method and such results are echoed in the *Cityscapes $\\rightarrow$ FDD* task in Tab.1 of the manuscript.\nPlease note that RDL always leads to improvements in segmentation adaptation, regardless of domain shift, throughout our experiments (e.g., based methods vs. +RDL, and +RA vs. +RA+RDL) as presented in Tab.1 of the manuscript and above result table.\nIn contrast, the randomized affine augmentation (RA) leads to degraded segmentation adaptation results upon the geometric distributional shifts between source and target domains (e.g., +RDL vs. 
+RDL+RA in *Cityscapes $\\rightarrow$ CityscapesFisheye* and *Cityscapes $\\rightarrow$ FDD*).\nThus, we may state that our *learnable diffeomorphic transformation* (RDL) plays an important role in aligning the domain gap of geometric deformation.\n\nWe will add the results from *Cityscapes $\\rightarrow$ CityscapesFisheye* to Additional Experimental Results (Sec.B) in the supplementary material and clarify the aforementioned discussion on the effect of +RDL in Comparisons with State-of-the-Art Methods (Sec.4) in the manuscript.\n\n**Reference** \\\n[R1] Kannala et al., “A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses” IEEE transactions on pattern analysis and machine intelligence 2006.\n", " **Relationship to Optical Flow** \\\nOptical flow (OF) approaches (e.g.,[R2-R4]) commonly require *paired* input images sharing similar contents (e.g., temporally aligned images) to generate a flow field defining the displacement of pixels.\nHowever, we proposed a relative distortion generator ($G$), which takes a set of *unpaired* source- and target-domain images, to transform the source image into a new image sharing a similar distortion style to the target image.\nWe are not aware of any existing OF networks that can be directly employed to produce a flow field that defines relative distortion between unpaired input images without introducing fundamental modifications or inventing ideas as we have proposed.\n\n**Relationship to Camera Calibration** \\\nCamera calibration appears to be a simple remedy to address geometric distortion, but it has fundamental disadvantages, including reduced field-of-view (over 30% of pixel losses), resampling distortion artifacts at the periphery, and cumulative calibration errors in practice (see Line 40-44 in the manuscript).\nThese are against the purpose of using larger field-of-view cameras and urge us to use native fisheye images instead of considering naive calibration.\nSuch disadvantageous in camera calibration has been the core motivation for our work and led to interesting and important domain shift problems as all reviewers commented.\nIn addition, our domain adaptation tasks involve not only geometric deformation (e.g., radial distortion) but also visual domain shifts (e.g., texture, lighting, contrast) between source and target images.\nWe are also delighted to have a comment from the reviewer `QLQF' saying that such problems are *“practically meaningful”* and *“could help develop more sophisticated methods”* in the follow-up research.\n\n**Other Comments and Questions**\n\n* **Title of the paper**\\\nBasically, we try to solve an unsupervised domain adaptation problem for semantic segmentation, where we assume the target images do not have corresponding annotations. 
Thus, we used *”unsupervised”* term in the title.\nHowever, if the reviewer still has a concern about this term, we will consider moving *”unsupervised”* to before *”Domain Adaptation”* which makes the title \"DaDA: Distortion-aware Unsupervised Domain Adaptation for Semantic Segmentation\".\n\n* **Clarifications of terminologies**\n * **Line 58**: we evaluate *“semantic quality”* of deformation fields by how much they can reduce the distribution shifts across domains since it is very challenging to quantitatively measure.\nAlso, we are not aware of any existing methods directly applicable for distortion similarity measures under *unpaired* sets of images.\nInstead we try to enforce the semantic quality at both *”image level”* by aligning the distortion style of transformed source images $I_{S \\rightarrow T}$ with $I_T$ in Eq.(4) and Eq.(5) and *”prediction level”* by minimizing consistency between the predictions, $M(I_T)$, $M(I_{S \\rightarrow T})$, $M(I_S)$, $M(I_{T \\rightarrow S})$, in Eq.(3).\n * **Line 142**: $\\Phi_{S \\rightarrow T}$ maps pixel coordinates of a source image to those of a new image $I_{S \\rightarrow T}$ which shares a similar distortion style of $I_T$ (see Line 139-141).\n\n**References** \\\n[R2] Xu et al., \"GMFlow: Learning Optical Flow via Global Matching.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. \\\n[R3] Luo et al., \"Upflow: Upsampling pyramid for unsupervised optical flow learning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. \\\n[R4] Wang et al., \"Displacement-invariant matching cost learning for accurate optical flow estimation.\" Advances in Neural Information Processing Systems. 2020.\n", " This paper concerns the geometric distortion that causes domain gaps during unsupervised domain adaptation for semantic segmentation, which is motivated by the differences between rectilinear images and fisheye images. It is practically well-grounded.\nThe authors propose to train a deformation generator that consumes two images chosen from the source and target domains.\nThe output of the deformation generator is used to map the source image to the target style in terms of geometric distortion.\nThe training or adaptation is then happened by learning from the distorted source image and semantic segmentations.\nThe authors also propose a few benchmarks to evaluate the performance of the method and show good domain adaptation gain compared to baselines that concern global appearance change of the images.\nAn ablation study on each term in the proposed training loss is also performed. Strength:\n- the motivation is practically meaningful and also the introduced setting is interesting.\n- the proposed method is simple yet shows good domain adaptation gain. Also, the proposed benchmarks could help develop more sophisticated methods that deal with both geometric distortion and appearance change.\n- the constraints to impose semantic consistency of the generated geometric distortion are interesting.\n- the paper is well-written and the idea is clearly conveyed.\n\nWeakness:\n- even though the motivation is well-grounded, the study of the validity of the problem, or how the motivating factor affects the adaptation performance is not crystal clear. For example, the authors should isolate geometric distortion from other factors like appearance and output space discrepancies. 
A simple experiment to do is to create a distorted target dataset from rectilinear images, and check how the other methods and the proposed method perform. The current message in this paper does not support a judgment on this aspect.\n- In theory, an ideal method should close the gap given only geometric distortion. But there is concern on whether the proposed method con do it under only geometric distortion or not, since the proposed randomly picks two images even though they are not in correspondence. This may be okay, since we can think of the proposed as a kind of spatial augmentation method, but we need to check the results.\n- The above concerns are critical for us to analyze the proposed method and the underlying problems that the proposed is trying to solve. For example, affine aug already works well on some of the directions in Table 1, does this mean that spatial augmentation/transformation is the key here? If so, would the proposed method a more sophisticated such augmentation method?\n- Also, how does the proposed method compare to spatial transformation generated by optical networks? The proposed do not really care about finding the correct underlying dense matching, which makes OF look like a good candidate to try, say, use OF to output some flow and then warp the images to the target domain or vice versa and then adapt? What is the difference then?\n- If camera distortion is the only factor that causes domain gap and is the focus of the current paper, then it seems that camera calibration would be a nice tool and can be performed quite well using existing tools. Then what does the paper tell us beyond that? The main concerns are listed in the above section, please provide more information since they are related to the significance of the proposed problem and method.\n\nsome comments and questions on writting\n- the title \"DaDA: Distortion-aware Domain Adaptation for Unsupervised Semantic Segmentation\" may not be appropriate as semantic segmentation is not unsupervised.\n- ln 58 \"to enforce the semantic quality of relative deformation fields at the image- and the prediction level.\" how do you define semantic quality of deformation fields? also, what do you mean by \"image- and prediction level\"?\n- ln 142, the equation is confusing, phi_s2t is mapping from the source domain to the target domain, which takes pixel coordinates from source and maps it to source. Did not observe much negative societal impact. On the limitation side, the authors indicate that other methods that deal with texture differences between domains can be combined with the proposed one to further improve performance. However, this is also an indicator that the paper with a clean motivation is evaluating on datasets with multiple causal factors, which does not convey a clean message.", " This paper proposes a novel segmentation framework that could be aware of distortion caused by optical and geometric reasons and thus improve the segmentation performance using unsupervised domain adaptation. 
The main contributions of this paper are (1) the framework that can be aware of distortion and utilize the information in the unsupervised semantic segmentation by using distortion-aware domain adaptation, (2) loss functions that effectively enforce the unsupervised domain adaptation and segmentation, and (3) new unsupervised domain adaptation benchmark including pre-existed datasets and newly proposed dataset, where the source and target images have additional domain gaps in the optical distortion.\n\nThe distortion-aware domain adaptation is realized by a novel Relative Distortion Learning (R.D.L.) using adversarial training (GAN) and Diffeomprohic transformation. The generator (G) generates deformation fields (visualization of distortion) between the source and target domain images. \n\nThe proposed method was evaluated using four datasets, especially two for the source domain, and the other two are utilized as the target domain. The images in the source domain are real-world rectilinear images, and the images in the target domains are optically and geometrically distorted images captured by fisheye images. Here, the authors propose a new dataset called the FDD captured by fisheye cameras (200-degree F.O.V.). \n\nThe experiments illustrate the outstanding performance of the proposed distortion-aware domain adaptation framework compared to other state-of-the-art deep learning models and other methods.\n 1. Strengths\n- The reviewer significantly understands the significance and importance of the task proposed in this paper. The distortion caused by the lens can significantly degrade the segmentation performance, especially the object on edge. In addition, domain adaptive semantic segmentation with lens distortion has rarely been studied to our best knowledge, such that the tasks proposed in this paper for the distortion-aware domain adaptation would be novel. Furthermore, since the dataset construction requires heavy cost, the construction of a large number of datasets for training machine learning models is noteworthy. \n\n- Additionally, the problem definition is clear. The authors defined the domain gap using not only distortion but also visual domain gap. At this point, in terms of the domain gap, the discrepancy between visual reasons and optical and geometric reasons should be discussed, and it should be decomposed. The authors proposed the “only” distortion-aware framework regardless of the visual domain gap.\n\n- The manuscript is well organized and well written.\n\n2. Weakness\n- More detailed descriptions are illustrated in the “Limitation” section. Please see below.\n * Questions & Discussions\n1. The definition of the domain adaptation should be more discussed. In this paper, the authors conducted the experiments with the environment where the rectilinear images are the source domain, whereas the images with extreme distortion are the target domain. However, in terms of “general” domain adaptation, the source and the target could be changed. Additionally, both the source domain and the target domain could contain the distortion on the edge. The reviewer is curious that the proposed framework is effective only in a limited environment. Alternatively, the reviewer is curious that other environments should not be discussed in the real-world image. If this case, the authors should clearly discuss this limitation. It is also discussed below [Limitations]-[Major issues].\n\n2. Effectiveness of the discriminator. The reviewer is curious about the effectiveness of the discriminator. 
Since the generator is well organized in the RDL and the loss function, the adversarial training can cause additional cost rather than a significant improvement in the accuracy. Otherwise, what if the discriminator categorizes the input images into domains (source and target). For instance, what if the discriminator aims to discriminate distortion style between I_T, I_(S->T), I_S, and I_(T->S). Since the generator generates the deformation field (S->T) and the inverse deformation field (T->S), training using four images can lead to the globally optimized discriminator, and it could be more efficient for the framework and the adversarial training. In addition, the ablation study for the discriminator could improve the quality of the manuscript.\n\n3. Cost of the proposed framework. The reviewer is curious about the training time, prediction time, and memory resources. In the manuscript, the authors addressed that the proposed framework is effective compared to the algorithm that removes the distortion from the image. Quantitative analysis or discussion can significantly improve the quality of the manuscript. For the reviewer, since the proposed network includes GAN-based architecture, segmentation-based architecture, and three or more loss functions, the training and prediction cost can be extremely burdened compared to the conventional algorithms.\n 1. Major issues\n- The field of interest suggested in this paper is narrow. The reviewers agree that the distortion can significantly degrade the segmentation performance when the image includes extreme distortion like the image captured using a fisheye camera. Even though real-world images contain the distortion, the reviewers could not be convinced that the minor distortion significantly degrades the segmentation performance. It could be justified through more experiments. The authors should significantly address the necessity of distortion-aware domain adaptation in the machine learning society instead of computer vision society. Additionally, the following concepts and experiments should be discussed to justify that the proposed framework is effective in the \"more general\" domain adaptive segmentation. (1) Utilization of the dataset containing distortion as the source domain and rectilinear images as the target domain. (2) Utilization of the dataset containing distortion as the source domain and another dataset with distortion as the target domain. Note that, this paper discussed the environment where the rectilinear images are utilized as the source domain and the images with distortion as the target domain. Or the authors should justify the experimental environment in this paper.\n\n- In addition, the above questions and the discussion are mainly concerned with a better quality of this paper for Neurips2022 (See Questions).\n\n2. Minor issues\n- Motivation for unsupervised learning needs to be more detailed. \nAs the authors illustrated in Line 45-47 on page 2, the motivation for unsupervised learning is the lack of a real-world public dataset with segmented annotations. Clearly, the lack of annotations for the dataset, including many images, could lead to unsupervised learning. However, that sentence author mentioned in the manuscript implies that the lack of the images (or dataset) leads the unsupervised learning in this paper. As the authors already knew, unsupervised learning with a few images could induce problems with biased training (e.g., overfitting). 
The reviewer expects that the precise motivation for unsupervised learning should be addressed strongly and clearly and that the authors can improve the quality of the manuscript with few modifications.\n\n- Small batch size should be discussed. As illustrated in Line 30-34, on page 1, in the supplementary file, the batch size for training deep learning models is four until convergence, and the batch size is significantly small. The importance of the normalization methods (e.g., batch normalization or group normalization) depends on the batch size. Therefore, discussing the batch size and the normalization methods would be better.\n\n- The vague description of the dataset and preparations should be improved. (1) the authors mentioned, \"Both source and target images are randomly cropped\" in the supplementary file in Line 32, on page 1. As the reviewer understood, the images in the target domain include the distortion on edge. At this moment, the random cropping method for the images in the target domain is nonsense. (2) In the experiments, the authors split images into training and validation sets with the random selection method. Is it for the cross-validation or hold-out methods? More explanations and statistical analysis (if authors conducted experiments with many folds) should be discussed even in the supplementary files.\n\n- Justification to search hyper-parameters should be discussed. Common hyper-parameters to train deep learning models are well illustrated in the manuscript and the supplementary files (especially the selection of beta 1-3). However, the reviewer is curious about the selection process of gamma. \n\n- (if possible) Experiments with other evaluation metrics could improve the quality of the manuscript. For instance, the boundary is a significantly important target in the segmentation tasks, and the distortion could degrade the boundary-oriented segmentation accuracy. Therefore, the discussion using evaluation metrics [1-3] for the boundary of the target objects can significantly improve the quality of the manuscript.\n\n[1] Fernandez-Moral, Eduardo, et al. \"A new metric for evaluating semantic segmentation: leveraging global and contour accuracy.\" 2018 IEEE intelligent vehicles symposium (iv). IEEE, 2018.\n\n[2] Lee, Kyungsu, et al. \"Boundary-oriented binary building segmentation model with two scheme learning for aerial images.\" IEEE Transactions on Geoscience and Remote Sensing 60 (2021): 1-17.\n\n[3] Cheng, Bowen, et al. \"Boundary IoU: Improving object-centric image segmentation evaluation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n3. Simple recommendations\n- Please re-check the typo and grammar errors to improve the quality of the manuscript.\n- Complicated quantitative analysis (Tables 5 and 6 in the supplementary material) should be clearly illustrated. \n- The reason why the authors illustrated the visualization of the class-activation map in the manuscript should be described in the main manuscript.\n- Public access to the code and the dataset should be preceded. ", " The author proposed a distortion-aware domain adaptation (DaDA) framework that is capable of modeling domain shifts in geometric deformation based on a relative distortion learning (RDL) method. 
\n\nThe proposed method tackles the task of unsupervised domain adaptation for semantic image segmentation where unknown optical distortion exists between the source and target images.\n\nAdequate experimental results also prove the validity of the proposed method。 1.Strengths\nThis paper proposes a novel domain adaptation method for unsupervised semantic segmentation. And it combines geometric and optical distortion in domain shift. And the extensive experimental results highlight the effectiveness of our approach over state-of-the-art methods under unknown relative distortion across domains.\n2.Weaknesses\nSee the “Questions” part. 1.The figures in the text are far apart from where the figures are mentioned in the body text. It is suggested that the author shorten the corresponding distance to facilitate the reader’s comparative reading.\n\n2.Why are the edges of the reconstructed source domain image not neat? Such as the “d” part in figure2.\n\n3.In the \"Experimental Details\" section, the performance of both the “AdaptSeg+RA+RDL” and “AdvEnt+RA+RDL” methods decreases compared to the RA removal method when cityscapes is the source domain dataset and FDD is the target domain dataset. The authors need an explanation for this experimental result.\n\n4.Are the references in [17] and [18] duplicated?\n\n5.The format of references is not uniform. For example, the format of the reference in [28] differs from the rest of the literature.\n\n6.Most of the references are earlier than 2020. It is recommended to cite more papers in related fields after 2021. The authors adequately address the limitations of their work and the potential negative social impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "iVsVTBe-u-8", "g6-jD-Y8SY3", "KHblGrI1xoq", "uUQG0prGnPi", "YSOJdSaN4t2", "mPmdStNPsz-", "RuxFDBQMHBh", "AKd3LobaIMp", "AKd3LobaIMp", "AKd3LobaIMp", "RmBGanDRVtz", "RmBGanDRVtz", "nips_2022_6RoAxmwj0L2", "nips_2022_6RoAxmwj0L2", "nips_2022_6RoAxmwj0L2" ]
nips_2022_AlgbeSuE1lx
Coded Residual Transform for Generalizable Deep Metric Learning
A fundamental challenge in deep metric learning is the generalization capability of the feature embedding network model, since the embedding network learned on training classes needs to be evaluated on new test classes. To address this challenge, in this paper, we introduce a new method called coded residual transform (CRT) for deep metric learning to significantly improve its generalization capability. Specifically, we learn a set of diversified prototype features, project the feature map onto each prototype, and then encode its features using their projection residuals weighted by their correlation coefficients with each prototype. The proposed CRT method has the following two unique characteristics. First, it represents and encodes the feature map from a set of complementary perspectives based on projections onto diversified prototypes. Second, unlike existing transformer-based feature representation approaches which encode the original values of features based on global correlation analysis, the proposed coded residual transform encodes the relative differences between the original features and their projected prototypes. Embedding space density and spectral decay analysis show that this multi-perspective projection onto diversified prototypes and coded residual representation are able to achieve significantly improved generalization capability in metric learning. Finally, to further enhance the generalization performance, we propose to enforce consistency between the feature similarity matrices of coded residual transforms with different sizes of projection prototypes and embedding dimensions. Our extensive experimental results and ablation studies demonstrate that the proposed CRT method outperforms the state-of-the-art deep metric learning methods by large margins, improving upon the current best method by up to 4.28% on the CUB dataset.
Accept
This paper proposes a coded residual transform for deep metric learning, aiming to improve the generalization ability of metric learning to unseen classes. Four expert reviewers assessed this paper, with preliminary reviews at odds. After the author rebuttal, some reviewers acknowledged the rebuttal by increasing their scores, but one reviewer still held major concerns, including the vague motivation and unconvincing evaluation. AC read the paper itself as a neutral referee and considered all reviewing material. AC's take on the paper is as follows. - The motivation of using the coded residual to improve generalization to unseen classes lacks sufficient elaboration, either theoretical or technological -- showing the motivation with a toy example is not strong. Also, there is no clear connection between the coded residual and the generalization property in Roth et al. [22] -- again, only a partial result in Table 1 is not enough. - The algorithmic contribution is slightly below the bar of NeurIPS. Teasing apart each component, only Eq. (1) regarding the coded residual is somewhat novel to me. Eq. (2) is a common feature combination method, similar to concat, element-sum, or the aggregation used in GNNs and PointNet. Eq. (3) is a common criterion in linear discriminant analysis. Eq. (4) is something like the kernel matrix, which is of O(N^2) complexity and very time-consuming to compute for deep learning. All in all, the technical novelty is relatively slim, while the practical value in terms of efficiency is not high. - The writing is problematic in some respects. The authors try to explain the motivation of the coded residual, but in fact the idea was first proposed in the classic method VLAD [49]. Perhaps the new element is that the authors also try to explain that the coded residual is more generalizable to unseen classes, but this is less elaborated. The authors should clearly credit VLAD and limit their own contribution to "generalizing to unseen classes". Nonetheless, AC feels that the idea of the coded residual is interesting in the metric learning context, and the authors are encouraged to work toward a stronger approach that generalizes learned metrics to unseen classes, which is important for practical open-world applications. After discussion between AC and SAC, their opinions on this paper were somewhat in disagreement. SAC suggested that, taking the reviews as well as the scores (8/7/6/4) into consideration, the paper should be accepted, and the negative points above can be addressed by the authors in the next version. AC revised the metareview from (Borderline) Reject to (Borderline) Accept accordingly.
train
[ "nnWBwDzavyp", "5iq1rMqBaPp", "X5MBPsKA4yd", "sdZCGbSh8Tf", "D5TrfcIVTY9", "SC70x8mq18e", "7M_LY0cjZZ", "id7VFp3f6Y0", "s0ATuTBWe3T", "t6L395q_4WT", "K747PMs2fm", "mzAUlJFng8U", "pAa6aGXQvFz", "yFX5wB0tId_", "NKgm4-zf9uc" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the thoughtful response to my review and others. I will keep my current rating and hope to see this paper accepted. Nice work authors!", " Dear Reviewer, \n\nWe really appreciate your kind reply!\n\nSorry for the confusion about the motivation part. In the original paper, we used the **toy example** of daytime and night time faces to motivate the following thinking: an effective face detection with sufficient generalization power will not focus on the absolute pixel values of the face image. Instead, it detects the face based on the relative change patterns between neighboring regions inside the face image. From the generalization point of view, we find that it is more effective to learn the embedding based on relative difference between features since the absolute value of features may vary significantly from the training to new test classes, but the relative change patterns between features may have less variations. \n\nDuring our experiments, we find that, although the training and testing classes share the same project anchors, the projection anchors of the training classes and test classes have different distributions. During our coded residual transform, we assign different weights for different project residuals based on the correlation between the feature and the corresponding anchor. Therefore, for training classes, the subset of anchors which are close to the training images will have larger weights. Similarly, for the testing classes, the subset of anchors which are close to the test images will have larger weights. This correlation-based weighting for the projection residual is the main idea of our method and contribute significantly to the overall performance gain. \n\nWe have revised the paper in the Introduction section. We hope that this has addressed your concern.\n\nFor your original comment 3 on performance comparison with the feature dimension of 512, we have followed the MS paper and conducted experiments using the BN-Inception backbone network on the CUB dataset. The top-1 recall rate is 66.5% for the proposed CRT method, which is 0.8% higher than the MS method (65.7%). We have added this result to the Supplemental Materials in the revised paper. \n\nThank you so much!\nAuthors\n\n", " Dear Reviewer, \n\nWe really appreciate your time and efforts in reviewing our paper, and your valuable comments for improve our paper! Thank you very much for deciding to improving the final rating!\n\n", " Thank authors for the response. The rebuttal addresses my second question but that for the others are not satisfied. First, the motivation is inappropriate as confirmed by authors. Second, it is important to apply state-of-the-art configuration for a fair comparison. A better performance with 512 features is more convincing than a comparable performance with 128 features. Therefore, I would like to keep my rating since the work can be further polished. However, it is OK to accept the paper.", " After reading other reviews and rebuttals, I decide to improve the final rating. ", " Thanks for your valuable comment and feedback to improve the paper, as well as your strong recommendation of our paper! \n- Authors", " After reviewing the author feedback, I maintain the score to accept the paper", " We really appreciate your thorough and insightful review of our paper! We also thank your positive and encouraging comments about our paper. In the following, we provide detailed response to your comments. \n\n$\\textbf{1. 
Response to the comment on Figures 1 and 2: }$\n\nThanks for this comment! In the revised paper, we have followed your comment and moved Figure 2 to the supplemental materials in the revised paper. \n\n$\\textbf{2. Response to the comment on the terminology of anchor:}$\n\nThanks for this valuable comment! At the end of this discussion with more feedback from you, we will follow your comment to replace the anchor with another word, such as prototype.\n\n$\\textbf{3. Response to comment on Figure 3 (now Figure 2):}$\n\nThanks for pointing out this! In Figure 2 (Figure 3 in the original paper), the red dots on the top right corner indicates the background anchor. These examples show the diversity of the learned anchors. We have emphasized this in the revised paper (Figure 2). \n\n$\\textbf{4. Response to the comment on correlation/similarity matrix for the anchor pool:}$\n\nThanks for this helpful comment! In the revised paper, following your comment, we have added a new figure, Figure 11 in the Supplemental Materials to visualize the correlation matrix. We can see that most of the similarities between anchors are very low, only few similarities computed between anchors are relatively large. This result shows that most of the anchors learned by our method are independent.\n\n$\\textbf{5. Response to the comment on the sampler detail, batch size, and classes in a batch:}$\n\nThanks for this comment! We have added the description of the batch samplers for training in the revised paper (Section 4.1). During training, we randomly sample a set of images in each iteration to train the network. In each iteration, we first randomly choose the image classes, and them randomly sample 5 images from each class to form a batch. For the CUB and Cars datasets, we sampled 16 classes in a batch. For the SOP and In-Shop datasets, we sampled 36 classes in a batch. Thus, the batch size is 80 for the CUB and Cars datasets, and 180 for the SOP and In-Shop datasets.\n\n", " Thank you very much for this detailed and insightful review of our paper! We really appreciate your positive comments and recommendation of our paper! In the following, we provide detailed response to your comments. \n\n$\\textbf{1. Response to the comment on Hyp-DeIT results in Table 4:}$\n\nThanks for your comment! We cite the results from the Hyp-DeiT method under the same experimental settings (Table 2 in their original paper) where the embedding size is 128 with image size of 227$\\times$227, and a pretrained model on the ImageNet-1K. We choose this setting so that we can compare our method with many recent papers on metric learning. In the original paper of the Hyp-DeiT, improved results of Hyp-DINO and Hyp-ViT§ were reported based on a different pre-trained model, which is much different from our experimental conditions. Thus, we only include the results of Hyp-DeiT with the embedding size of 128 in our paper. In the revised paper, we have emphasized this in Section 4.2.\n\n$\\textbf{2. Response to the comment on Tables 8 and 9:}$\n\nThanks for pointing out this! Sorry that we did not emphasize this. In Table 8, the number of projection anchors in the second embedding branch is fixed at 64. This table reports the results of embedding computed from the first embedding branch when the number of projection anchors changed from 1 to 64. The weights of the backbone networks are shared and the weights of the CRT heads are not shared for this experiment. 
In Table 9, the number of project anchors are equal for the first and the second CRT embedding branches, which are changed from 1 to 100, and the weights of both the backbone network and the CRT branches are shared in this experiment. \n\nWe can see that when the number of anchors is set to be 64, the embedding performance reaches to the best. And the performances are approximately the same for models trained with and without shared weights. In other experiments, to decrease the memory consumption, we shared weights for all the experiments.\n\nIn the revised paper, we have further clarified this in Section 4.4.\n\n$\\textbf{3. Response to the comment on backbone networks:}$\n\nThe two embedding CRT branches are shared the same backbone network. The backbone network is pretrained on the ImageNet-1K dataset and fine-tuned during training. The embedding dimensions of the ViT based methods are changed by adding a linear layer. Here, we set the embedding size to be 128 is because recent methods are developed based on a dimension of 128 and it is important for large-scale datasets and real-time applications. \n\nIn Table 5 of supplemental material, we have provided ablation studies to evaluate the impact of different embedding sizes.\n\n$\\textbf{4. Response to the comment on limitations and non-sharing configurations:}$\n\nThanks for this valuable comment! The backbone network is shared for these two embedding branches. Following your comment, in the revised paper, we have further emphasized this in Sections 4.4 and 5. For non-sharing configuration, only the weights of CRT branches are not shared, the weights of the backbone network are shared for all the experiments.\n", " We sincerely thank you for this thorough and insightful review of our paper! We also appreciate your positive and encouraging comments about our paper! In the following, we provide detailed response to your comments. \n\n$\\textbf{1. Response to comments on margin-based softmax and Eqs. (3) and (4):}$\n\nThanks for this detailed comment! In the revised paper, we have followed your comment and discussed the margin-based softmax methods in related work. We agree that the parameters W in softmax can be treated as the anchor features. We actually have tried this during our experiments and the performance degradation is very large. The major reason is that the softmax only provides a high-level global description of the input. However, in our paper, the set anchors learned from the training set sever as the building blocks of the image scenes. They correspond to different scene objects at different spatial locations in the input image. The anchor distributions at the training set and the test set could be different. \n\nWe followed your suggestion and test the performance of arcface on the CUB dataset based on the ResNet-50 backbone, the top-1 retrieval accuracy is 61.39%, which much lower the proposed CRT method that has obtained 64.20% top-1 retrieval accuracy under the same experimental conditions.\n\nSorry for the confusion! We agree that Eq. (3) and (4) are related to softmax. However, they are different. The softmax performs normalization on one vector. Eqs. (3) and (4) defines the correlation between two feature vectors and the similarity matrix for a set of features. We use the cosine similarity. The value of the cosine similarity computed on the L2 normalized features is between 0 and 1, which is similar with the softmax normalization. 
It should be noted that before computing the correlation, we did perform the softmax-like normalization on the feature.\n\n$\\textbf{2. Response to the comment on batch samplers for training in the experiment settings:}$\n\nThanks for this suggestion! We have added the description of the batch samplers for training in the revised paper (Section 4.1). During training, we randomly sample a set of images in each iteration to train the network. In each iteration, we first randomly choose the image classes, and then randomly sample 5 images from each class to form a batch. For the CUB and Cars datasets, we sampled 16 classes in a batch. For the SOP and In-Shop datasets, we sampled 36 classes in a batch.\n\n$\\textbf{3. Response to the comment on the total loss function and other metric learning loss such as Siamese loss, }$\n$\\textbf{triplet loss and n-pair loss for comparison: }$\n\nThanks for this valuable comment! The MS loss function is necessary for the total loss function. First, from our experiments, we observe that the MS loss is required since it is a very effective loss for metric learning. It provides a very important starting point for our proposed method. Without this, the metric learning performance will be degraded. \n\nAnother reason is that the MS loss function considers all the similarity relationship between samples in a batch, which is important for the training of a well generalizable model. It is also consistent with other two loss function terms. However, the Siamese loss, triplet loss and n-pair loss functions only consider the similarity between a set of samples with the anchor, which is a small portion of the similarity relationship between samples in the whole batch. For example, the N-pair loss did not consider the similarity relationship in N negative examples. \n\nTo verify this phenomenon, we conducted new experiments using the N-pair loss to instead of the MS loss under the same experimental conditions on the CUB dataset. The top-1 retrieval accuracies are 56.1% and 75.9% for the N-pair loss and MS loss, respectively. This result demonstrated the need and advantage of using the MS loss.\n", " We really appreciate your time and efforts on thorough review of our paper and your valuable comments! Also thank you for your positive and encouraging comments about our paper: $\\textit{“Adopting residuals for generating features is novel for deep metric learning…}$$\\textit{The experiments is sufficient on benchmark data sets.”}$ \n\nIn the following, we provide detailed response to your comments. We hope these responses are able to address your concerns.\n\n$\\textbf{1. Response to the comment on “anchors are also learned from the training data, which cannot work as reference points}$\n$\\textbf{for unseen classes. For example… daytime faces… night-time faces”.}$\n\nThanks for this very insightful comment! Yes, in our current design, the anchors are learned from the training side and also used for the test unseen classes. According to our analysis, the anchors can be considered as a dictionary of prototypes which construct the image scenes, both in the training and test datasets. Although they share the same anchor set, we observe that the anchor distributions at the training side and the test side are different. This fact is aligned to the existing research on distribution shift. 
\n\nIn our future work, motivated by your comment, we will investigate how the dictionary of anchors can be updated for the unseen test classes to further improve the generalization capability and performance of our method.\n\n$\\textbf{2. Response to the comment on additional branch and the training cost:}$\n\nThanks for pointing out this! Following your comment, we have added a discussion on the extra training cost caused by the additional branch in the revised paper. The parameters of the backbone network are shared for these two CRT branches. According to our estimation, the increase of training cost is only about 5%. The second branch is only used during training to provide guidance, which will not increase the test cost. Therefore, it will not affect the complexity at the test time. \n\n$\\textbf{3. Response to the comment on MS loss and BN-Inception backbone: }$\n\nThanks for this valuable comment! For the MS method, the results reported in the original paper is based on an embedding size of 512. In our paper, for fair comparison with other methods, we set the embedding size to be 128 as reported in Table 5, since most recent methods reported in the literature are using the dimension of 128. The choice of 128 is important for large-scale datasets and real-time applications. The results of BN-Inception have already been reported in Table 5 in the original paper. Sorry that we did not mention this clearly.\n\n\n", " This work proposes to apply residuals to the anchors as representations for inputs. Compared with the features directly extracted from inputs, the residuals provide the relative information with a learnable codebook, which can alleviate the overfitting problem. Besides, an additional branch is included for the consistency between representations from different codebook. Strong \n1.\tAdopting residuals for generating features is novel for deep metric learning. \n2.\tThe ablation study shows that a consistency constraint is helpful for learning representations. \n3.\tThe experiments is sufficient on benchmark data sets.\n\n\nWeak \n1.\tThe motivation of introducing residuals for better generalization on unseen classes is not convicting. Note that the anchors are also learned from the training data, which cannot work as reference points for unseen classes. For example, according to the example in Line 57, the anchor points will only contain daytime faces from training while the residuals of night-time faces to those anchors cannot capture the appropriate information. \n2.\tThe additional branch increases the training cost, which should be discussed. \n3.\tThe performance of MS in Table 5 is degenerated compared to the original paper, which reports the similar performance without the proposed method. Since many previous works have BN-Inception as the backbone, it is better to include the results with the same backbone and same dimension of features for a fair comparison. \n My major concerns are about motivation and the fair comparison issue as listed in weakness. Compared baseline methods, the additional branch for consistency constraint may require more computational resources/running time.", " In this paper, the authors propose a coded residual transform (CRT) to improve generalization of deep metric learning method. They use a set of learned anchors to encode the image embeddings. The proposed CRT obtains state-of-the-art performance on four datasets including CUB-200-2011, Cars-196, Standard Online Products and In-shop Clothes Retrieval. Strengths:\n\n1. 
The authors proposed a coded residual transform (CRT) to improve generalization for deep metric learning. \n2. The authors utilize a set of earned anchor features to encode the image embeddings. The anchor diversity loss, CRT consistency loss and MS-loss [11] is used for optimization. \n3. The experiments and ablation studies are solid. \n\nWeaknesses:\n\nThe authors should discuss and compare margin-based softmax [a][b][c] in the sections of related work and experiments. Although the CRT is different from margin-based softmax, the insight is similar. The parameters W in softmax can be treated as the anchor features. \n\n[a] Liu, Weiyang, Yandong Wen, Zhiding Yu, and Meng Yang. \"Large-Margin Softmax Loss for Convolutional Neural Networks.\" In ICML. 2016.\n\n[b] Wang, Hao, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. \"Cosface: Large margin cosine loss for deep face recognition.\" In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5265-5274. 2018.\n\n[c] Deng, Jiankang, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. \"Arcface: Additive angular margin loss for deep face recognition.\" In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4690-4699. 2019. 1. As stated in weakness, the insight of margin-based softmax is similar to CRT. For Eq. (3) and Eq. (4), why not to use softmax formula such as n-pair loss [a] ?\n2. The details of batch samplers for training should be clarified in the experiment settings. \n3. For the total loss function (Line 205), is MS-loss necessary? Other metric learning loss such as Siamese loss, triplet loss and n-pair loss should be compared. \n\n[a] Sohn, Kihyuk. \"Improved deep metric learning with multi-class n-pair loss objective.\" Advances in neural information processing systems 29 (2016). See weaknesses and Questions. ", " The manuscript proposes a new framework for generalisable deep metric learning. The main idea is to learn a collection of anchors and get the final representation by transforming the original embedding vectors into their distances to the anchors, weighted by their correlation with said anchors. These distances are then aggregated by anchor-wise projection and averaging. The model is trained with Anchor Diversity Loss (minimizes anchors’ cross-correlations) and Multi-Similarity loss, which is a recent metric learning loss [11].\n The proposed framework is sound, intuitive and the paper is easy to follow. The presented performance results are also very promising, as is the empirical support to the authors’ claims of the generalisation capability of the model (Lines 206-219). I also believe similar or modified versions of this framework could be used in the future, which makes this a good contribution. \n\nI’ve found Table 4 to be confusing to parse given that the numbers presented here do not match with their published versions (I’ve only checked Hyp-DeIT); the authors acknowledged the discrepancy and attributed it to different settings and results that are hard to reproduce under compute constraints, which I’ve found to be fair compromises given that they have also designed the experiments in Table 5, where comparison is made fair by comparing against baseline versions of a large set of backbones. \n\nThe ablation study is complete enough and it is good to know that performance is not too sensitive to weight sharing and the number of anchors in the first embedding branch. 
I’ve just found it hard to parse the permutations described in Lines 297-308 and presented in tables 8-9; I don’t know what’s the number of anchors in the first branch for Table 9, or why the weights are shared for this experiment (are they not shared for Table 8?) and it raises the question of whether the weights are shared or not for the other experiments as well since the performance is comparable and the gain in compute use is likely considerable.\n Does the backbone get trained as well or are the weights frozen?\nWas the choice of dimensionality made out of compute restrictions? I ask since other papers have used higher dimensionalities such as 384 (for ViT based models) or 512.\nRelated to the previous question, how did you change the output dimensionality for the ViT based methods such as Hyp-DeiT? Did you train those from scratch with an added bottleneck layer?\nIt is not clear to me if the second embedding branch also uses a second backbone. Does it use another backbone and is it trained separately or do they share backbones? Since this is an image retrieval paper that makes use of common benchmarks, I do not believe there is a negative social impact to it beyond the environmental cost of training deep neural networks. \n\nThe authors only present one limitation in their conclusion: that the anchors are learned from the training set and therefore may not be the best possible anchors. I believe another limitation to be the need for two embedding branches, since in the non-sharing configuration it incurs in increased memory and compute usage (this is a lesser issue if the backbone is not duplicated)", " This paper revives a classic image retrieval technique, VLAD, with modern Deep Network and show promising results. In short for the main idea of the paper, the proposed code residual transform (CRT) method maps the feature map into anchor pools and output residue features to represent an input image. In table 5, CRT outperforms the baseline method by a clear gap across multiple datasets. Instead only providing the number to beat the SOTA, the author also provide additional evidence such as feature visualization and embedding space density metric to support the proposed method. Strengths\nOriginality: The proposed method revives a classic image retrieval technique, VLAD, with modern Deep Network is novel\nQuality: The submission is technically sound. The anchor pool is well visualized in Figure 3(But still some flaw, see later). Additional metric such as table 1 embedding space density which helps understanding where is gain coming from with the proposed method\nClarity: The submission is clearly written and well organized.\nSignificance: Reviving a classic image retrieval technique with modern techniques is significant.\n\nWeaknesses\nClarity: Figure 1 and Figure 2 should be merged together. Figure 1 is clear enough for presenting the idea, but figure 2 is not. \nClarity: Please use another terminology for the \"anchor\" in the main paper. The term anchor is already used in the triplet loss. It will confuse readers who are familiar with DML. 1. In the figure3 visualization, can you explain the red dots on the top right corner of image in the end of first row? \n2. Would you like provide the correlation/similarity matrix for the anchor pool? It is supposed to be independence as much as possible as show in Eq 3. But as shown in figure 3, different attentions are kind of overlap. \n3. can you provide the sampler detail? what is the batch size? how many classes in a batch?\n N/A" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 5 ]
[ "s0ATuTBWe3T", "sdZCGbSh8Tf", "D5TrfcIVTY9", "K747PMs2fm", "t6L395q_4WT", "7M_LY0cjZZ", "id7VFp3f6Y0", "NKgm4-zf9uc", "yFX5wB0tId_", "pAa6aGXQvFz", "mzAUlJFng8U", "nips_2022_AlgbeSuE1lx", "nips_2022_AlgbeSuE1lx", "nips_2022_AlgbeSuE1lx", "nips_2022_AlgbeSuE1lx" ]
nips_2022_ZVe_WeMold
S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning
State-of-the-art deep neural networks still struggle to address the catastrophic forgetting problem in continual learning. In this paper, we propose a simple paradigm (named S-Prompting) and two concrete approaches to substantially reduce forgetting in one of the most typical continual learning scenarios, i.e., domain incremental learning (DIL). The key idea of the paradigm is to learn prompts independently across domains with pre-trained transformers, avoiding the use of exemplars that commonly appear in conventional methods. This results in a win-win game where the prompting can achieve the best performance for each domain. The independent prompting across domains requires only a single cross-entropy loss for training and one simple K-NN operation as a domain identifier for inference. The learning paradigm derives an image prompt learning approach and a novel language-image prompt learning approach. With excellent scalability (0.03% parameter increase per domain), the best of our approaches achieves a remarkable relative improvement (an average of about 30%) over the best of the state-of-the-art exemplar-free methods on three standard DIL tasks, and even surpasses the best of them relatively by about 6% on average when they use exemplars. Source code is available at https://github.com/iamwangyabin/S-Prompts.
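The abstract mentions a simple K-NN operation that serves as the domain identifier at inference time. The sketch below shows one way such an identifier could be implemented — summarizing each domain's training features by a few K-means centroids and assigning a test feature to the domain of its nearest centroids. The function names, the number of centroids, and the majority-vote step are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def build_domain_centroids(feature_bank, num_centroids=5, iters=10):
    """Naive K-means over one domain's training features (M, D), kept as a
    lightweight summary for later domain identification."""
    idx = torch.randperm(feature_bank.size(0))[:num_centroids]
    centroids = feature_bank[idx].clone()
    for _ in range(iters):
        assign = torch.cdist(feature_bank, centroids).argmin(dim=1)  # (M,)
        for k in range(num_centroids):
            members = feature_bank[assign == k]
            if members.size(0) > 0:
                centroids[k] = members.mean(dim=0)
    return centroids  # (num_centroids, D)

def identify_domain(query_feature, per_domain_centroids, k=1):
    """Pick the domain whose stored centroids are closest to the query feature
    via a K-NN majority vote; the chosen domain's prompt is then used."""
    all_centroids = torch.cat(per_domain_centroids, dim=0)            # (S*C, D)
    domain_ids = torch.cat([torch.full((c.size(0),), d)
                            for d, c in enumerate(per_domain_centroids)])
    dists = torch.cdist(query_feature.unsqueeze(0), all_centroids)    # (1, S*C)
    nearest = dists.squeeze(0).topk(k, largest=False).indices         # (k,)
    return int(domain_ids[nearest].mode().values)
```

At inference, the selected domain index would index into the pool of per-domain prompts (and, for a language-image variant, the corresponding domain-based classifier).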
Accept
This paper adapts the prompt learning idea to continual learning to tackle the problem of domain incremental learning. The proposed approach is clear and the writing is easy to follow. The experiments are convincing.
train
[ "aksVSJo4sNX", "xXnmBu9kF6y", "fQGVbluGqzp", "q7l6hc8e6bL", "I7dxnysR8lN", "tJ-1SoaJAuY", "jPrTj8aczV", "mG2WgYO8JIYP", "W4bwHiTNmX", "jfvoJJ2rlp4-", "XCiMWx72JdE", "IUXGTrCY-82", "5a8MvLkaIJw", "mT0j2wB1cJ9", "sPqkMChnQS8", "1vElwhZa_de" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nThanks for making the further comment. The reason why we did not provide the performance of the base-ViT for DyTox on CDDB in the main paper is three-fold. Firstly, the original DyTox paper [14] suggests using a more advanced ViT (i.e., base-ConViT [15]), and thus we directly evaluate its officially implemented model that uses base-ConViT as the backbone to show its full potential for the three standard domain incremental learning tasks in the paper. Secondly, when using exemplars, we find that the performance (86.21%) of the DyTox+base-ConViT [15] is superior to that (84.20%) of the DyTox+base-ViT [13] on CDDB, as presented in Footnote 3 of the original supp. material paper. The similar observation on the other two datasets can be obtained from Table R11. This justifies that ConViT is a better choice for the exemplar-based DyTox. Thirdly, the paper mainly focuses on the comparison to the exemplar-free methods. It is worth noting that the exemplar-free DyTox+base-ViT fails to work and almost collapses (with the average accuracy being 48.89% that is even worse than that of the random guess) on CDDB. By comparison, DyTox+base-ConViT works more normally on CDDB, while its performance (51.27%) is slightly better than that of the random guess for deepfake detection. The inferior performance of the exemplar-free DyTox is mainly due to its strong demand on the use of exemplars for a distillation, which is designed to balance different classes from new/old domains in the context of the DyTox framework. Considering the relatively better performance of DyTox+base-ConViT, we choose to report its results on CDDB for both the exemplar-based and exemplar-free cases. Although DyTox uses a more advanced backbone (i.e., base-ConViT), it is still outperformed by our proposed methods. \n\nFollowing your good comment, we will further add this clarification in the revised paper.\n", " \nThank you for raising the further suggestion. Following this suggestion, we additionally evaluate the mentioned ablation on CORe50 and DomainNet. Table R14 shows the results. The observation on these two datasets is consistent with that on CDDB. In particular, for DomainNet, the considerable improvement of the final S-liPrompts model demonstrates its clear superiority over the ablated model. In comparison, the improvement on CORe50 is clear but relatively smaller for the CLIP case, while the increase is remarkable when comparing our ViT-based method (S-iPrompts) against ViT-based L2P on CORe50. This might be because CORe50’s domain gaps are not as large as those of CDDB and DomainNet (the CORe50 domains are captured by the same Kinect sensor under different backgrounds and lighting [38]). On CORe50, we find most of the evaluated methods have much less forgetting compared to those on CDDB and DomainNet due to the smaller domain gaps of CORe50. In this case, the more powerful language-image prompting scheme might share beneficial knowledge from those similar domains, which does not harm the performance a lot. In contrast, the relative improvements are very remarkable on CDDB (Table R13) and DomainNet (Table R14), showing the significant superiority of the proposed prompting method in the cases with large domain gaps. We will further include such additional experiments and the discussion into the revised supp. material. 
\n\n---\nTable R14: Accuracies (%) of ablating the components of the proposed S-liPrompts on the CORe50 and DomainNet datasets.\n| Method | Prompting Scheme | Corresponding Method | CORe50 | DomainNet |\n|----------------------------------------------------|---------------------------------------------------------------------|----------------------|-----------|-----------|\n| S-liPrompts (fixed Image & Language Prompt length) | dependent image prompts tuning + dependent language prompts tuning | CLIP + L2P | 85.51 | 56.79 |\n| **S-liPrompts (final, K=5)** | independent image prompting + independent language prompting | CLIP+ S-liPrompts | **89.06** | **67.78** |\n---", " Thank you for your rebuttal on more ablation studies.\n\nTable R13 provides the effectiveness of S-liPrompts.\n\nI think CLIP + L2P on CORe50 will be more effective in showing the novelty of the proposed methods if possible.", " Thank you for your rebuttal. Now, I understand the status of the experiments.\n\nRegarding the backbone issues on CDDB, the authors used ConViT[15] (a more advanced one). \n\nI would like to know why the authors didn't report the performance of the based-ViT on CDDB for fair comparisons.", " We thank Reviewer CnBo for the constructive and valuable feedback, which encourages us to further improve this paper.\n\n### **Q1. The same architecture for fair comparison**\n\nAs presented in Line 236-241 of the main paper, to compare fairly, we used the same backbone (i.e., base-ViT [13]) for all the real competitors (except for DyTox that used a more advanced ViT model, i.e., ConViT [15]) as well as the proposed methods (S-iPrompts, S-liPrompts) for all the experiments (Table 1, 2, 3). In Footnote 3 of the supp. material, we additionally state that we reported the results of DyTox with base-ConViT [15] rather than base-ViT, since the performance (86.21%) of the former model is better than that (84.20%) of the latter model on CDDB. Hence, we follow the suggestion of the original DyTox paper to use a more advanced ViT model for DyTox in the main paper. Following your suggestion, we further evaluate DyTox with base-ViT on the other two datasets (CORe50 and DomainNet), and we find that the ViT-based exemplar-free DyTox still fails on these two datasets as discovered on the ConViT-based DyTox in the main paper. This is mainly because DyTox requests a distillation on exemplars, which are more demanded to balance the multiple classes from new/old domains on CORe50 and DomainNet, for a promising prompt learning [14] (Footnote 4 in the main paper). Therefore, we here report the results of the exemplar-based DyTox. The results of Table R11 show that DyTox with base-ConViT generally outperforms (or at least comparable with) DyTox using base-ViT on the three used datasets. Besides, CaSSLe is a self-supervised incremental learning method, and replacing supervised trained weights trained on ImageNet is not suitable. Hence, we use its officially released backbone. We added such clarification in the revised main paper and the supp. material.\n\n---\nTable R11. Accuracies (%) of the exemplar-based DyTox using two different backbones (i.e., base-ViT [13] and base-ConViT [15]) on the three used benchmark datasets. 
\n| | DyTox using Pre-trained base-ViT | DyTox using Pre-trained base-ConViT (reported in the main paper) |\n|-----------|----------------------------------|------------------------------------------------------------------|\n| CDDB | 84.20 | 86.21 |\n| CORe50 | 80.11 | 79.21 |\n| DomainNet | 60.83 | 62.94 |\n---\n\n\n\n\n---\nTable R12. Accuracies (%) of the proposed S-iPrompts and the main competing exemplar-free methods (L2P and DyTox) on the three used datasets. Note that CDDB is for continual binary-class classification, while CORe50 and DomainNet are both for continual multi-class classification. DyTox generally fails for the two multi-class classification tasks (CORe50 and DomainNet), as it requests a distillation on examplars for the more challenging balance problem among multiple classes from new/old domains.\n| | L2P | DyTox | Proposed S-iPrompts | Relative Improvement |\n|-----------|-------|-------|---------------------|----------------------|\n| CDDB | 61.28 | 51.27 | 74.51 | 13 |\n| CORe50 | 78.33 | Fails | 83.13 | 5 |\n| DomainNet | 40.15 | Fails | 50.62 | 10 |\n---\n", " ### **Q2. S-prompts (S-liPrompts) seems strongly dependent on CLIP**\n\nIn the paper, we make two main technical contributions: (1) independent image-end prompt learning paradigm, and (2) language-image prompting scheme with the pre-trained CLIP model. While the proposed independent prompting paradigm can be applied to any pre-trained transformer based models, we exploited two concrete methods (S-iPrompts, S-liPrompts) based on ViT and CLIP respectively in this paper (Line 74-76 of the main paper). \n\nTable R12 reports the results that are from Table 1, 2, 3 of the main paper. The considerable relative improvement (13% on CDDB, 5% on CORe50, 10% on DomainNet) of the proposed S-iPrompts (independent prompting) over the two main competitors L2P and DyTox (dependent prompting) justifies the significant superiority of the contribution (1). We conduct this comparison fairly in the scenario of exemplar-free domain incremental learning (DIL), which is the main aim of our paper, i.e., better data security, privacy and less memory consumption (Line 34-36 of the main paper, as acknowledged by Reviewer GPZj). In the context of exemplar-free DIL, DyTox collapses on CORe50 and DomainNet, because it requires a distillation on selected exemplars for a better balance among multiple classes from new/old domains on these two datasets, which is essential for a promising dependent prompting [14] (Footnote 4 in the main paper). In addition, we also compared the proposed S-iPrompts against those state-of-the-art exemplar-based methods for a reference. We are delighted to find that S-iPrompts (exemplar-free) outperforms the exemplar-based DyTox clearly (about 4% increase) and the exemplar-based L2P (about 2% improvement) on CORe50, although it is outperformed by the exemplar-based DyTox on CDDB and DomainNet using a large number of exemplars (e.g., 17,250 exemplars totally on DomainNet). The use of large exemplar data is generally not favorable in the real-world scenarios. \n\n\n---\nTable R13: (results are from Table 4 in the main paper). Accuracies (%) of ablating components of the proposed S-liPrompts on the CDDB datasets. Note that we use the official implements [1] of CLIP to do Zero-shot classification on three benchmarks without any change. Here we use the text templates of ImageNet as their implementation [1] for CLIP (Zero-shot). 
\n[1] https://github.com/openai/CLIP/blob/main/notebooks/Prompt_Engineering_for_ImageNet.ipynb\n\n| Method | Prompting Scheme | Corresponding Method | CDDB |\n|----------------------------------------------------|---------------------------------------------------------------------|--------------------|--------|\n| CLIP (Zero-shot) | Handcrafted language prompts | Vanilla CLIP | 49.52 |\n| S-liPrompts (fixed Image & Language Prompt length) | dependent image prompts tuning + dependent language prompts tuning | CLIP + L2P | 64.90 |\n| S-liPrompts (fixed Language Prompt weights) | independent image prompting + dependent language prompts tuning | CLIP + S-iPrompts | 72.29 |\n| S-liPrompts (final, K=5) | independent image prompting + independent language prompting | CLIP+ S-liPrompts | 88.65 |\n---\n\nTable R13 presents the results that are from Table 4 in the main paper, with additional notes in the second and third columns for further clarification. As shown in Table R13, performing CLIP or CLIP+S-liPrompts directly cannot achieve improved performance over S-liPrompts using ViT as a backbone. In contrast, applying the proposed language-image prompting scheme on the CLIP model, i.e, the contribution (2), makes remarkable improvement. From the comparison between S-liPrompts (final, K=5) and S-liPrompts (fixed Language Prompt weights), we can find the contribution (2) brings an significant improvement of about 16% over the proposed S-iPrompts that is based on the contribution (1). By comparing S-liPrompts (fixed Image & Language Prompt length) against S-liPrompts (fixed Language Prompt weights), we can see the considerable superiority (about 8% relative improvement) of S-iPrompts (independent image-end prompting) over L2P-like learning paradigm (dependent image-end prompting), which verifies the significance of the contribution (1) again. Additionally, the results in Table 1, 2, 3 of the main paper show that our best model S-liPrompts, i.e., contribution (1) + contribution (2), supasses the best of the state-of-the-art exemplar-free methods significantly (30% relative improvement on average) for the three standard DIL benchmark datasets, and even outperforms them clearly (an average increase of 6%) when they use exemplars (Line 89-91). \n \nIn conclusion, the proposed S-prompts is only partly dependent on CLIP. We added such clarification in the revised supp. material.\n\n", " We thank Reviewer WPXn for the thorough and constructive feedback. The feedback enables us to further strengthen the paper.\n\n### **Q1: The contributions from (1) the proposed learning independent image-end prompts for each domain, and (2) using CLIP as backbone with the suggested independent language-image prompting scheme?**\n\n\n---\nTable R7. Accuracies (%) of the proposed S-iPrompts and the main competing exemplar-free methods (L2P and DyTox) on the three used datasets. Note that CDDB is for continual binary-class classification, while CORe50 and DomainNet are both for continual multi-class classification. 
DyTox generally fails for the two multi-class classification tasks (CORe50 and DomainNet), as it requests a distillation on examplars for the more challenging balance problem among multiple classes from new/old domains.\n| | L2P | DyTox | Proposed S-iPrompts | Relative Improvement |\n|-----------|-------|-------|---------------------|----------------------|\n| CDDB | 61.28 | 51.27 | 74.51 | 13 |\n| CORe50 | 78.33 | Fails | 83.13 | 5 |\n| DomainNet | 40.15 | Fails | 50.62 | 10 |\n---\n\nAs shown in Table R7 where the results are mainly from Table 1, 2, 3 of the main paper, the remarkable relative improvement (**13%** on CDDB, **5%** on CORe50, **10%** on DomainNet) of the proposed S-iPrompts (independent prompting) over the two main competitors L2P and DyTox (dependent prompting) demonstrates the significant superiority of the contribution (1). This comparison is conducted fairly in the context of exemplar-free domain incremental learning, which is the focus of our paper with the aim being for better data security, privacy and less memory consumption (Line 34-36 of the main paper, as acknowledged by Reviewer GPZj). Note that in the exemplar-free setup, DyTox collapses on CORe50 and DomainNet, since it requests a distillation on exemplars to balance the multiple classes from new/old domains for promising dependent prompting [14] (Footnote 4 in the main paper). For a reference, we also compared the proposed S-iPrompts against those exemplar-based methods. We are delighted to find that S-iPrompts (exemplar-free) surpasses the exemplar-based DyTox clearly (about 4% increase) and the exemplar-based L2P (about 2% improvement) on CORe50, while it is outperformed by the exemplar-based DyTox on CDDB and DomainNet using a large number of exemplars (e.g., 17,250 exemplars totally on DomainNet), which is generally not favorable in the real-world scenarios. \n\nTable R8 presents the results that are from Table 4 in the main paper, with additional notes in the second and third columns for further clarification. The results in Table R8 show that applying CLIP or CLIP+S-liPrompts directly cannot obtain improved performance over S-liPrompts that uses ViT as a backbone. Instead, performing the proposed language-image prompting scheme on the CLIP model, i.e, the contribution (2), brings significant improvement. In particular, by comparing S-liPrompts (final, K=5) against S-liPrompts (fixed Language Prompt weights), we can see the contribution (2) makes a considerable improvement of about **16%** over the S-iPrompts (with the contribution (1)). By comparing S-liPrompts (fixed Image & Language Prompt length) with S-liPrompts (fixed Language Prompt weights), we can discover the clear superiority (about **8%** increase) of S-iPrompts (independent image-end prompting) over the L2P-like learning paradigm (dependent image-end prompting), justifying the significance of the contribution (1) again. In addition, from Table 1, 2, 3 of the main paper, we can see that our best model S-liPrompts, i.e., contribution (1) + contribution (2), outperforms the best of the state-of-the-art exemplar-free methods significantly (an average of **30%** relative improvement) for the three standard DIL benchmark datasets, and even suparsses them relatively by an average of **6%** when they use exemplars (Line 89-91). \n\nIn summary, the contribution (1) and contribution (2) are both significant. For instance, they contribute relative improvement of about 13% and 16% respectively over the competing methods on the CDDB dataset. 
We added such clarification in the revised supp. material.\n", " \n\n---\nTable R8: (results are from Table 4 in the major paper). Accuracies (%) of ablating components of the proposed S-liPrompts on the CDDB dataset. Note that we use the official implements [1] of CLIP to do zero-shot classification on three benchmarks without any change. Here we use the text templates of ImageNet as their implementation [1] for CLIP (Zero-shot). \n[1] https://github.com/openai/CLIP/blob/main/notebooks/Prompt_Engineering_for_ImageNet.ipynb\n\n| Method | Prompting Scheme | Corresponding Method | CDDB |\n|----------------------------------------------------|---------------------------------------------------------------------|---------------------|--------|\n| CLIP (Zero-shot) | handcrafted language prompts | Vanilla CLIP | 49.52 |\n| S-liPrompts (fixed Image & Language Prompt length) | dependent image prompts tuning + dependent language prompts tuning | CLIP + L2P | 64.90 |\n| S-liPrompts (fixed Language Prompt weights) | independent image prompting + dependent language prompts tuning | CLIP + S-iPrompts | 72.29 |\n| S-liPrompts (final, K=5) | independent image prompting + independent language prompting | CLIP + S-liPrompts | 88.65 |\n\n---\n\n\n\n\n### **Q2: Good performance of random domain selection seems to imply that much of the performance is due to a simple prompt + backbone transformer**\n\n\n---\nTable R9. Average accuracies (%) of using random domain selection and our proposed domain selection on CDDB and DomainNet for domain incremental learning. The two domain selection cases are both based on the proposed S-liPrompts. \n| | Random Domain Selection | Proposed Domain Selection | Relative Improvement |\n|-----------|-------------------------|-----------------------------|----------------------|\n| CDDB | 80.12 | 88.65 | 8 |\n| DomainNet | 49.94 | 67.78 | 18 |\n---\n---\nTable R10: Out-of-distribution (OOD) experiments (supp. material paper Table2). Accuracies of the application of the trained S-liPrompts on S1-S5 to out-of-domains OOD1-OOD3 that are 3 unseen domains used in CDDB. S1-S5: GauGAN, BigGAN, WildDeepfake, WhichFaceReal, SAN, which are used to train S-liPrompts on the CDDB dataset. OOD1-OOD3: FaceForensic++, Glow, StarGAN, which are not used to train S-liPrompts.\n| | OOD1 | OOD2 | OOD3 | Avg |\n|------|-------|-------|-------|-------|\n| CDDB | 75.69 | 64.44 | 92.46 | 77.53 |\n---\n\nApart from the study on CDDB (Table 4 in the main paper), we further compare the case of using random domain selection against that of performing our proposed domain selection on DomainNet. As shown in Table R9, the performance of the random selection is actually not good enough. In particular, it is outperformed by the proposed domain selection significantly, with relative degradation being more than **8%** on CDDB and about **18%** on DomainNet. This considerable improvement demonstrates the necessity of the proposed domain selection, which attributes to the significance of the proposed independent prompting paradigm. In Table R10, the out-of-distribution (OOD) experiment (results from the supp. material Table 2) justifies that the random domain selection (in Table R9) has promising results, which is mainly due to the strong generalization of the proposed language-image prompt learning scheme over the CLIP model. Nevertheless, the results can be further improved remarkably using the proposed domain selection (Table R9). This shows the clear contribution of the proposed independent prompting paradigm. 
We included such a study on the necessity of performing domain identification/selection in the revised supp. material.\n\n### **Q3: Typos and grammatical typos**\n\nThank you for your suggestion. We carefully corrected the typos and grammatical errors throughout the paper.\n\n", " We thank Reviewer GPZj for the thorough and constructive feedback that helps us to further improve the paper. We are glad that Reviewer GPZj finds our paper is clear and elegant. We also thank Reviewer GPZj for stating that the paper has no serious weaknesses.\n\n### **Q1: Memory consumption comparison**\n\n\n---\nTable R6. Memory overheads of the proposed S-iPrompts (ViT-based), S-liPrompts (CLIP-based), the ViT-based prompting methods (DyTox and L2P), and ViT-based non-prompting ones (Others without expansion) on the CDDB dataset. In the setup, there are 5 sessions, each of which includes 2 classes (real and deepfake classes). The average increase corresponds to the parameter increase per session.\n\n| | DyTox | L2P | S-iPrompts | S-liPrompts | Others |\n|-----------------------------------|-------|--------|------------|-------------|--------|\n| Base Model | 86M | 86M | 86M | 201M | 86M |\n| Total Increase | 7.09M | 92.16K | 0.26M | 0.40M | 0 |\n| Average Increase | 1.42M | 18.43K | 52.22K | 80.89K | 0 |\n| Relative Increase (Increase/Base) | 1.65% | 0.02% | 0.05% | 0.03% | NA |\n---\n\nWe follow your suggestion to further report the memory overheads of the proposed S-iPrompts/S-liPrompts methods (ViT/CLIP-based), the ViT-based prompting methods (DyTox and L2P) as well as those ViT-based non-prompting methods (without expansion). As the non-prompting methods do not expand their architectures for the increasing tasks, their model parameters keep the same for the learning process. Therefore, the main comparison is on the prompting based methods including ours. As shown in Table R6 and Line 36-44 (the supp. material), the proposed S-iPrompts increases the model parameters for the domain-specific prompts that are of 52.22K (0.05% relative increase) for each domain (session), and the proposed S-liPrompts’s increase is of 80.89K (0.03% relative increase) per session. By comparison, DyTox needs to add one task-attention block over the ViT-based model and dynamically expands the domain-specific tokens (prompts) every session. L2P is like ours directly using ViT-based model without adding any neural network blocks, and it assigns a certain memory to save the pool of used prompts once for the initialization. From the results, we can see that DyTox’s increase is the most, and that of L2P is the least. The relative increase ratio of the proposed S-liPrompts/S-iPrompts is very close to that of L2P, while obtaining clearly better performances than the other competing methods. We added this table as well as such discussion in the revised supp. material. \n\n\n### **Q2: Meaning of 'dynamic classifiers'**\n\n\nWe originally used “dynamic classifiers” to represent the growing pool of the language model based classifiers in the context of CLIP. As shown in Line 213 and Line 246 of the revised main paper, we used “the growing pool of domain-based classifiers” instead to avoid confusion.\n", " We thank Reviewer Lxav for the very positive feedback and the constructive suggestions. 
Following the suggestions to further strengthen our paper, we discuss the questions on the potential conflict between the domain-specific prompting paradigm and the robustness to incorrect domain identification, the insignificant separation between different domains/classes as well as the improvement of using ensemble methods with more experiments.\n\n\n### **Q1: Necessity of performing domain identification/selection.**\n---\nTable R1. Average accuracies (%) of using random domain selection and our proposed domain selection on CDDB and DomainNet for domain incremental learning. The two domain selection cases are both based on the proposed S-liPrompts. \n| | Random Domain Selection | Proposed Domain Selection | Relative Improvement |\n|-----------|-------------------------|----------------------------|----------------------|\n| CDDB | 80.12 | 88.65 | 8 |\n| DomainNet | 49.94 | 67.78 | 18 |\n---\n\n---\nTable R2: Out-of-distribution (OOD) experiment (supp. material paper Table2). Accuracies (%) of the application of the trained S-liPrompts on S1-S5 to out-of-domains OOD1-OOD3 that are 3 unseen domains used in CDDB. S1-S5: GauGAN, BigGAN, WildDeepfake, WhichFaceReal, SAN, which are used to train S-liPrompts on the CDDB dataset. OOD1-OOD3: FaceForensic++, Glow, StarGAN, which are not used to train S-liPrompts.\n\n| | OOD1 | OOD2 | OOD3 | Avg |\n|------|-------|-------|-------|-------|\n| CDDB | 75.69 | 64.44 | 92.46 | 77.53 |\n---\n\nIn addition to the comparison on CDDB (Table 4 in the main paper), we further compare the case of performing random domain selection against that of using our proposed domain selection on DomainNet. Table R1 reports the results on these two datasets. It shows the consistently remarkable relative improvements (by more than **8%** on CDDB, and about **18%** on DomainNet) of the proposed domain selection over the random domain selection. This considerable improvement justifies the necessity of the proposed domain selection, which attributes to the significance of the proposed independent prompting paradigm. On the other hand, the out-of-distribution (OOD) experiment in Table R2 (from supp. material Table2) verifies that the random domain selection (in Table R1) has promising results mainly due to the satisfactory ability of the proposed language-image prompt learning scheme over the CLIP model. Nonetheless, the results can be further improved significantly using the proposed domain selection (Table R1), showing the clear effectiveness of the proposed independent prompting paradigm. In the revised supp. material, we added such a study on the necessity of performing domain identification/selection.\n\n### **Q2: Introducing ensemble methods into S-Prompts.**\n\nFollowing your good suggestion, we further study one of the most popular ensemble methods (i.e., voting) on the proposed S-liPrompts. Table R3 summarizes the comparison of the proposed S-liPrompts (with domain selection) against its voting-based version in terms of average accuracies on CDDB and DomainNet for domain incremental learning. The significant relative increases (by about **13%** on CDDB, **9%** on DomainNet) show that the proposed S-liPrompts method favors the proposed domain selection method that chooses the most expertised (i.e., the most related domain-based) prompting model for the inference on each test sample. The excellent performance of the selected domain-based prompting models verifies the significance of the proposed independent prompting scheme. 
For the voting-based inference on each test sample, we first feed all the learned domain-based prompts to the CLIP models one by one resulting in multiple predictions, and we then use the majority voting strategy on the individual results to get the final prediction. The clearly superior performance of the proposed S-liPrompts with domain selection mainly stems from the most expertised model for the given test sample. Except for this model, the rest ones are less expertised for the given test sample, and they might be dominant to corrupt the final prediction when doing the majority voting. In this case, the voting scheme could lead to performance degradation. Moreover, it requests running all the learned CLIP-based prompting models for the voting on each test sample, and thus it is much more time-consuming than the proposed inference scheme. We added such additional experiments and discussions in the revised supp. material.\n", " \n---\nTable R3. Average accuracies (%) of the originally proposed S-liPrompts (each inference uses one single domain-selected CLIP prediction) and its ensemble version (each inference utilizes the voting strategy on all domain-based CLIP predictions) on the CDDB and DomainNet datasets for domain incremental learning.\n| | Proposed S-liPrompts (Voting) | Proposed S-liPrompts (Proposed Domain Selection) | Relative Improvement |\n|-----------|-------------------------------|---------------------------------------------------|----------------------|\n| CDDB | 65.47 | 88.65 | 13 |\n| DomainNet | 58.85 | 67.78 | 9 |\n---\n\n### **Q3: Metrics to measure the degree of separation.**\n\n\nThanks for raising the insightful concern. We used t-SNE as it is a popular choice for visualization. Nonetheless, as you pointed out, though the significant superiority of our proposed S-liPrompts is shown clearly via t-SNE, it is mainly in terms of domain separation rather than class separation. This phenomenon is mainly from the fact that t-SNE visualization is limited to the 2-dim projection of the original high-dimensional data. Therefore, apart from using t-SNE visualization, it is better if we can additionally use quantitative metrics to measure the degree of class separation domain by domain.\n\nAccordingly, we (re-)collected the classification/detection accuracies in Tables 1 & 2 (main paper), which reflects the average domain-wise accuracies and thus can be a favorable metric for this purpose. Following your suggestion and the related paper [32], we additionally introduce a precision based metric to measure the domain-wise class separation degree. The results in Tables R4, R5 demonstrate the consistent superiority of the proposed S-iPrompts and S-liPrompts methods in terms of the class separation using such various metrics. We included this additional study in the revised supp. material.\n\n\n---\nTable R4. Detection accuracies (%) of the proposed S-liPrompts/S-iPrompts and the main competing methods (L2P and DyTox) on the used CDDB dataset for deepfake domain incremental learning. Task1-tas5: GauGAN, BigGAN, WildDeepfake, WhichFaceReal, SAN, which are used to train S-liPrompts on the CDDB dataset. 
**Bold**: best results, *Italic*: second best results.\n| | task1 | task2 | task3 | task4 | task5 | Average | Min | Max |\n|-------------|:-----:|:-----:|:-----:|:-----:|:-----:|:---------:|:---------:|:---------:|\n| S-liPrompts | 99.30 | 96.75 | 82.06 | 96.25 | 68.89 | **88.65** | **68.89** | **99.30** |\n| S-iPrompts | 90.30 | 81.88 | 72.76 | 84.25 | 43.30 | _74.50_ | _43.30_ | _90.30_ |\n| L2P | 80.73 | 62.60 | 58.98 | 57.48 | 46.59 | 61.28 | 46.59 | 80.73 |\n| DyTox | 48.91 | 50.00 | 59.19 | 50.00 | 50.00 | 51.62 | 48.91 | 59.19 |\n---\n---\nTable R5. Precisions (%) of the proposed S-liPrompts/S-iPrompts and the main competing methods (L2P and DyTox) on the used CDDB dataset for deepfake domain incremental learning. Here precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp the number of false positives. Task1-tas5: GauGAN, BigGAN, WildDeepfake, WhichFaceReal, SAN, which are used to train S-liPrompts on the CDDB dataset. **Bold**: best results, *Italic*: second best results.\n| | task1 | task2 | task3 | task4 | task5 | Average | Min | Max |\n|-------------|:-----:|:-----:|:-----:|:-----:|:-----:|:---------:|:---------:|:---------:|\n| S-liPrompts | 99.80 | 98.45 | 81.97 | 96.54 | 75.76 | **90.50** | **75.76** | **99.80** |\n| S-iPrompts | 90.95 | 83.64 | 73.21 | 84.08 | 43.75 | _75.13_ | 43.75 | _90.95_ |\n| L2P | 83.10 | 77.29 | 65.39 | 57.79 | 66.67 | 70.05 | _57.79_ | 83.10 |\n| DyTox | 52.99 | 50.00 | 85.86 | 50.00 | 50.00 | 57.77 | 50.00 | 85.86 |\n---\n\n\n\n\n", " We are glad that the reviewers found the idea in the paper novel/simple (Reviewers Lxav and WPXn). The reviewers also find the proposed language-image prompting brand-new (Reviewers Lxav and CnBo), and the proposed method clear and elegant (Reviewer GPZj). We are also delighted that the reviewers found the experiments and analysis to be extensive (Reviewers Lxav and GPZj), and that the experimental results demonstrate the impressive/persuasive superiority (Reviewers Lxav, GPZj and CnBo) of the proposed methods compared to the state-of-the-art. The reviewers also point out that the paper is well-written and easy to follow (Reviewers Lxav, GPZj and WPXn). Below, we address all reviewers' questions and concerns.", " This paper proposes to tackle the problem of domain incremental learning (in which its setting is distinct from the problem of class-incremental learning. In domain incremental learning, different domain data arrive in a sequence, one domain per incremental phase, while these domains share the same group of object/image classes), in which the domain indexes are not given during the inference (therefore dissimilar to the setting of task-incremental learning). As other incremental learning scenarios, the catastrophic forgetting is one of the biggest challenges in domain incremental learning, where the domain gap is main reason behind to cause the forgetting (i.e. while learning to classify new domain data, the model's ability on classifying old domain data could deteriorate). 
In order to tackle such catastrophic getting issue, instead of additionally maintaining portion of old domain data as what replay-based approaches do (under the consideration for better data security and privacy), the proposed method learns a set of prompts over (pretrained and fixed) transformers where the domain-specific knowledge is stored by a prompt pool, in which during the inference the classification is done by firstly identifying the domain ID (via K-NN to search for the nearest domain centroids obtained by K-means on training data to the test image feature) then feeding the corresponding domain-specific prompt and the image tokens to the transformer to perform the classification via the corresponding domain-specific classifier. With the fact that the proposed method is able to not only achieve image prompt learning (via ViT) but also a brand-new language-image prompt learning (via CLIP), the extensive experiments are conducted to show the superior performance of the proposed method against various baselines (including exemplar-free and replay-based incremental-learning methods, self-supervised learning method, and the recently published prompting methods) on several datasets. Pros:\n+ The proposed method is built upon pretrained (image or even language-image) transformers (without any fine-tuning needed) with advancing the existing prompting methods of continual learning on a novel idea: although existing methods also aim to learn the domain-specific prompts, their learning is dependent across domains thus potentially leading to less separation for the classes from different domains, while the proposed method particularly drives the prompt learning independently across domains to achieve the best for each domain. Experimental results as well as ablation study will verify the contribution of such novel idea. \n+ The proposed prompt learning (with only little overhead needed to store the domain-specific prompt) surpasses not only other prompting methods but even the replay-based incremental learning approaches (where the exemplars of old domain data are explicitly stored in the replay buffer and used during the learning for new domain data to alleviate the catastrophic forgetting) and the expansion-based incremental learning approaches (e.g. DER++, where different feature extractors are adopted per domain) , which again demonstrates the contribution of the novel idea behind the proposed method.\n\nCons:\n- Though the authors do provide additional experiments to demonstrate that the random assigning domain ID or having errors on domain identification (i.e. to firstly recognize which domain that the test data belongs to) would not have huge impact on the final classification performance, such argument seems to be contradictive to the idea of having domain-specific prompts and learning them in a domain-independent way. In other words, if using the wrong domain-specific prompt can still provide the slightly worse but comparable performance, then perhaps it could be unnecessary to perform domain identification. Instead, directly having the ensemble over the results produced by using each of the domain-specific would potentially contribute the best performance? 
The authors are highly encouraged to provide more discussion on this potential contradiction and experiment the variant of ensemble in the rebuttal.\n- Though the t-SNE visualizations show that the proposed method is able to produce more separation between the classes from different domains, but actually the degree of separation is not that significant (if we consider the high-level idea illustration provided in Figure.1, where the domain identification ideally would produce different subspaces across domains). The authors are highly encouraged to provide further analysis (e.g. are we able to define a metric to measure the degree of separation thus having more objective comparison between different methods) and discussion on such aspect. Overall I pretty enjoy reading the whole paper and like the novel idea of learning domain-specific prompts in a domain-independent way (which strike a much better balance between alleviating the forgetting and learning to recognize new domain data) as well as its applicability on pretrained and fixed image/language-image transformers. The questions that I would encourage the authors to address in the rebuttal, for further strengthening the paper, are the ones listed in cons, including the potential contradiction between the domain-specific prompts and the robustness to erroneous domain identification, and the insignificant separation between different domains. there is no potential negative societal impact", " The submission focus on the catastrophic forgetting problem in domain-incremental learning, in which a model needs to learn from different domains sequentially. To achieve the goal in an exemplar-free way, the authors design an S-Prompting mechanism. A new trainable prompt is introduced every time a new domain appears. The trained prompt is then added to a prompt repository. At inference time, when new data comes, the prompt of the corresponding domain is retrieved and prepend to the input of ViT or CLIP. Extensive experiments are conducted and the best S-Prompts surpasses not only exemplar-free methods but also some exemplar-based method. Strengths:\n\n1. The proposed approach is clear and elegant. It uses prompt to store the domain-related feature while keeping the backbone model unchanged though it is not the first time that prompt method is used for continual learning. Besides, no additional space is required to store the exemplar.\n\n2. The experiment is extensive and persuasive. Previous methods are properly considered. The performance is impressive, especially when compared with exemplar-based methods.\n\n3. The writing is clear and easy to follow.\n\nWeaknesses:\n\nNo serious weaknesses found. 1. I could that see there is an analysis in Section 4 and supplementary materials about the overheads of the proposed methods compared to its backbone, but there is no information about the memory overheads of the baselines. I would suggest listing the parameter scale of the baseline methods as well.\n\n2. What do you mean by ``dynamic classifers'' in line 246 and line 272? The term never appears in the approach section. The limitations are adequately discussed in the submission.", " This paper proposes to address DIL by (1) learning a pool of prompts and classifier heads (one for each domain) for ViT and CLIP (2) inferring domain label based on K-means and KNN. Strengths\n1. This paper is written clearly and generally easy to follow.\n2. The proposed idea is simple.\n\nWeaknesses\n1. 
I’m not sure whether the performance improvement over baselines (especially L2P and DyTox) are due to the proposed method or the usage of CLIP. The comparisons based on the same backbone, ViT (corresponding to S-iPrompts), don’t show a clear improvement over the two prompt-based baselines. And the significant improvement comes from the CLIP-based models. \n2. Related to the previous point, the good performance of random domain selection (Table 4) seems to imply that much of the performance is due to a simple prompt + backbone transformer.\n3. Minors: there’re typos and grammatical errors and they should be corrected. e.g., cheep prompts -> cheap prompts? Can the authors clarify or disentangle the contributions from (1) CLIP as a backbone and (2) learning independent prompts for each domain? See weaknesses and questions above. ", " This paper proposed S-Prompting and two concrete approaches to highly reduce the forgetting degree in continual learning scenarios, i.e., domain incremental learning (DIL).\n\nThe main idea is to learn independent prompts across domains with pre-trained transformers. \n\nThe independent prompt can achieve the best for each domain.\n\nThe learning method derives an image prompt learning approach and a brand-new language-image prompt learning approach. \n\nThe methods outperformed all three standard DIL tasks.\n (+) Compared with prior works: L2P and DyTox, the proposed S-Prompts learn the tasks independently.\n\n(+) The S-prompts are technically sound and seem to be well supported by experimental results on the three standard Domain-IL benchmark datasets, and outperformed others. \n\n(-) In Tables 1 and 2, for fair comparisons with others, the S-prompts compared with others under a little bit different architecture.\n\n(-) S-prompts (S-liPrompts) seems strongly dependent on CLIP (a pre-trained Image-text mapping function), which decreases the main novelty.\n\n In Tables 1 and 2, for fair comparisons with others, the architecture should be the same as the base-ViT model. \nIn the appendix, as the authors state, the number of parameters -ConViT(78M) and base-ViT(86M) are almost similar. However, I think if the structures are one another, the representations also differ.\nI would recommend the authors compare the performances under the same architecture such as the base-ViT(86M).\n The authors discuss the limitation of the S-prompts properly in the script.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 5 ]
[ "q7l6hc8e6bL", "fQGVbluGqzp", "tJ-1SoaJAuY", "I7dxnysR8lN", "1vElwhZa_de", "1vElwhZa_de", "sPqkMChnQS8", "sPqkMChnQS8", "mT0j2wB1cJ9", "5a8MvLkaIJw", "5a8MvLkaIJw", "nips_2022_ZVe_WeMold", "nips_2022_ZVe_WeMold", "nips_2022_ZVe_WeMold", "nips_2022_ZVe_WeMold", "nips_2022_ZVe_WeMold" ]
nips_2022_Nay_rOB-dZv
Fairness Reprogramming
Despite a surge of recent advances in promoting machine learning (ML) fairness, the existing mainstream approaches mostly require training or finetuning the entire weights of the neural network to meet the fairness criteria. However, this is often infeasible in practice for large-scale trained models due to large computational and storage costs, low data efficiency, and model privacy issues. In this paper, we propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique. Specifically, FairReprogram considers the case where models cannot be changed and appends to the input a set of perturbations, called the fairness trigger, which is tuned towards the fairness criteria under a min-max formulation. We further introduce an information-theoretic framework that explains why and under what conditions fairness goals can be achieved using the fairness trigger. We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output prediction of fixed ML models by providing false demographic information that hinders the model from utilizing the correct demographic information to make the prediction. Extensive experiments on both NLP and CV datasets demonstrate that our method can achieve better fairness improvements than retraining-based methods with far less data dependency under two widely-used fairness criteria. Code is available at https://github.com/UCSB-NLP-Chang/Fairness-Reprogramming.git.
Accept
Overall, the reviews are more or less positive, leaning toward weak accept. While some concerns remain (e.g., the wording in the abstract), many of the raised concerns have been addressed properly and some of them have been verified. Hence, I believe this paper is worth publishing.
train
[ "LtynAqCKHMy", "5x8f7gUiSmQ", "Q1HQ6XAbyCP", "BTXJ_OzD4-s", "jMOORhBbK6V", "BayFPPow41_", "oRN30sbMUz4", "vmSYix3tA_j", "DCIvhAxBZhX", "0z68iCryE3W", "Poxf-QqS3r3", "h9g2DSFxE4", "DTF8LyaMiyu", "KBgjXE1o_dJ", "uGAFzuzabti", "1cFuszm2Ot2", "U1hwY0v36Ep", "1yrLfnhXXf8" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I see what you mean. In my mind, the obtaining/applying distinction isn't especially meaningful in terms of what level of access is assumed for the proposed method. For example you said \"consider a black-box NLP model\" not \"consider a black-box NLP model where the fairness trigger has previously been obtained\". This trigger needs to be obtained somehow, and it seems that we all agree that the embedding function is required for this, so I would say that fairness reprogramming (as proposed) is not a black-box method.\n\nI agree that the idea of training the fairness trigger using a separate architecture/dataset, then transferring it to a target model, would be a more compelling story. Recall that in my original review I expressed concern that describing the investigating robustness-to-random-seed as \"enjoying great transferability\" may be an overstatement. However, what you are not describing in the responses sounds more interesting in terms of a transfer problem and could be set up in a more authentically black-box way. I would also agree that the connection to adversarial attacks becomes more important if you were to take this angle. Data poisoning, including transferability thereof [Zhu et al 2019], is another related field to look into.\n\nI appreciate the authors staying engaged throughout the discussion period. I see the author-reviewer discussion as an opportunity to clarify the reviewer's understanding of what the paper claims and how those claims are substantiated. Answering the reviewer questions can certainly help clarify any misunderstandings, but these clarifications don't necessarily merit an increase to the score, which is ultimately reflects my opinion of the paper's quality, potential impact, and relevance to the conference. In my original review I expressed a concern that this paper oversells the approach. Even after the author-reviewer discussion, this is still a concern for me.\n\nReferences:\n[Zhu et al 2019] https://arxiv.org/abs/1905.05897", " Thank you for your rapid responses! We are sorry for the possible confusion and we make the further clarification below.\n\nOur method consists of two phases, namely to **obtain** the trigger first and then **apply** it. We agree that we need to use the embeddings for **obtaining** the trigger $\\delta$ with $\\delta_i=Ev_i$ in our current training method. On the other hand, our method does not require access to embeddings for **applying** the reprogramming compared to linear projections, supposing the trigger has already been obtained. As indicated by lines 264-267, the token selection vector $v_i$ is one-hot in *FairReprogram (HARD)*, so each $v_i$ corresponds to a specific word in the vocabulary. That being said, for applying the *FairReprogram (HARD)* during the inference phase, we could simply add the text suffix indicated by $v$ into the inputs, which is equivalent to appending $\\delta=Ev$ in the embedding space. It is also worth mentioning that *FairReprogram (HARD)* has been shown to enjoy great transferability in Figure 5 of the paper, so one potential application is to obtain the triggers with a substitute model where embeddings are known and apply it to the target model of interest. We apologize for the confusion and we would make it clearer in our revised version!\n\nBesides, we would like to mention that it is also possible to optimize the fairness triggers with query-based methods so that the embeddings are not necessary for **obtaining** triggers either. 
Such methods have shown great success in generating adversarial attacks for black-box NLP models [1, 2, 3], which is similar to our setting. We thank you for your insightful question and we leave this for our future research.\n\nAgain, thank you for your response and if you have any questions, we are more than happy to address them.\n\n> [1] Li, Linyang et al. “BERT-ATTACK: Adversarial Attack against BERT Using BERT.” ArXiv abs/2004.09984 (2020): n. Pag.\n>\n> [2]Jin, Di et al. “Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment.” AAAI (2020).\n>\n> [3] Garg, Siddhant and Goutham Ramakrishnan. “BAE: BERT-based Adversarial Examples for Text Classification.” ArXiv abs/2004.01970 (2020): n. pag.", " For the NLP setting, you say that linear projections would require access to text embeddings, whereas \"fairness reprogramming still works as it only appends the trigger into the input sentences\". When I look at the paper, in lines 258--260 I see that the perturbation is defined as a $\\delta_i = E v_i$ where $E$ is the BERT embedding. From this I conclude that perturbations are optimized in the embedding space and require the embedding function. Am I missing something...?", " Dear Reviewer 4uyl,\n\nWe are very grateful for your valuable suggestions and insightful questions! We have tried our best to address your concerns. As there is only one day left for the author-reviewer discussions, we sincerely hope that you can provide us feedback before the discussion phase ends. We will be happy to know if there are any other concerns that we could address to better help you consider raising the score. Once again, thank you for your time and efforts in our work.\n\nBest regards,\n\nAuthors", " Dear Reviewer cTsj,\n\nWe are very grateful for your valuable suggestions and insightful questions! We have tried our best to address your concerns. As there is only one day left for the author-reviewer discussions, we sincerely hope that you can provide us feedback before the discussion phase ends. We will be happy to know if there are any other concerns that we could address to better help you consider raising the score. Once again, thank you for your time and efforts in our work.\n\nBest regards,\n\nAuthors", " Dear Reviewer 4SAR,\n\nWe are very grateful for your valuable suggestions and insightful questions! We have tried our best to address your concerns. As there is only one day left for the author-reviewer discussions, we sincerely hope that you can provide us feedback before the discussion phase ends. We will be happy to know if there are any other concerns that we could address to better help you consider raising the score. Once again, thank you for your time and efforts in our work.\n\nBest regards,\n\nAuthors", " Dear Reviewer,\n\nWe appreciate your efforts in reviewing our paper and your valuable comments. We have tried our best to address all your questions in detail. Could you please check our response, and let us know if you have further questions? Once again, thank you very much for your time, help and consideration.\n\nBest regards,\n\nAuthors", " Dear Reviewer,\n\nWe appreciate your efforts in reviewing our paper and your valuable comments. We have tried our best to address all your questions in detail. Could you please check our response, and let us know if you have further questions? 
Once again, thank you very much for your time, help and consideration.\n\nBest regards,\n\nAuthors", " Dear Reviewer,\n\nWe appreciate your efforts in reviewing our paper and your valuable comments. We have tried our best to address all your questions in detail. Could you please check our response, and let us know if you have further questions? Once again, thank you very much for your time, help and consideration.\n\nBest regards,\n\nAuthors", " Thank you very much for your response and the follow-up questions. Please see our point-to-point answer to your questions below. \n\n**Q1: I would suggest the language in the abstract be changed to make any claims about the proposed method as specific as possible. Correct me if I misunderstand finetuning as a type of retraining method (not retraining from scratch, but none-the-less updating model parameters).**\n\nA1: Thanks very much for your kind suggestions. Yes, you are correct about the interpretation of finetuning used in our paper and we will revise the description in the revised version to make our motivation clearer. \n\n**Q2: I'm not sure I agree about linear probes (applied on top of a fixed embedding) assume more access than reprogramming in the embedding space (e.g. in the NLP domain). It's possible I'm missing something basic, but could you expand on this point?**\n\nA2: We are sorry that we may have a misunderstanding on your question. We are not sure whether “the fixed embedding which the linear probes are applied on top of” refers to the input embeddings or the last model hidden layer output. For both cases, however, the accessibility of the embeddings are always necessary when **applying the linear probes**, which could be infeasible in practice. For example, let’s consider a black-box NLP model whose parameters and architecture are transparent to users and only the output can be provided for a given input. The linear projection could not be applied due to the lack of access to embeddings. By contrast, fairness reprogramming still works as it only appends the trigger into the input sentences to re-purpose the model.\n\nBesides, back to the original question, we agree that our reprogramming method is equivalent to adding a linear transformation directly to the inputs in some simple cases like tabular data. We conducted additional experiments on the UCI Adult dataset with a two-layer MLP. An additive trigger is added to the original inputs with the input dimension unchanged, *i.e.*, $\\tilde{x}=m \\circ x+\\delta$, where $m$ is a multi-dimensional binary mask and $\\delta$ is the trigger. The **[results](https://ibb.co/ssNyK7v)** show that our method is comparable with the post-processing adversarial training baseline, which empirically demonstrates the equivalence. We believe such a discussion may provide a very valuable insight on how our method works beyond our conceptual proof in Section 3.4. \n\nWe will add all the discussions above to our revised version and we truly appreciate your illuminating question!\n\n**Q3: Thank you for providing the new baseline. Are you sure these results are for Celeb-A? The shape of the pareto curves and legend seem to indicate the results come from civil comments. It would be helpful to either (a) see all baselines with the proposed method on the same plot or (b) fix the axes of the new MMD plot to match those of the original plot.**\n\nA3: Thanks for pointing this out! We are sorry there was a typo on the dataset name and we have corrected it to Civil Comment. 
According to your suggestion by putting all the baseline methods on the same plot, we have updated the results in this **[Figure](https://ibb.co/3vXzcn5)**. To improve the readability, we also provide another version in this **[Figure](https://ibb.co/Xyd4GWT)**, where all the data points are removed and only the curves remain. We hope the new plots can alleviate your concern that fairness reprogramming has a better performance simply because of the instability of adversarial training of the baselines. We will also update this figure in the revision.\n\nWe hope our responses could address your questions. If you have additional comments, please feel free to let us know. We will try our best to resolve them.\n", " Thank you for the considered response.\n\nI think I better understand the motivation now. The response indicates that the authors are not claiming a substantial speedup w.r.t. finetuning, but the current abstract still states that the proposed method enjoys \"far less training cost\" than retraining-based methods. I would suggest the language in the abstract be changed to make any claims about the proposed method as specific as possible. Correct me if I misunderstand finetuning as a type of retraining method (not retraining from scratch, but none-the-less updating model parameters). \n\nI'm not sure I agree about linear probes (applied on top of a fixed embedding) assume more access than reprogramming in the embedding space (e.g. in the NLP domain). It's possible I'm missing something basic, but could you expand on this point?\n\nThank you for providing the new baseline. Are you sure these results are for Celeb-A? The shape of the pareto curves and legend seem to indicate the results come from civil comments. \n\nIt would be helpful to either (a) see all baselines with the proposed method on the same plot or (b) fix the axes of the new MMD plot to match those of the original plot.", " Thank you very much for providing us with very constructive comments. In what follows, please see our responses.\n\n**Q1: How are correlations among features handled in FairReprogram?**\n\n\n**A1**: In the theoretical analysis, we made a simplifying assumption that the features are uncorrelated. However, this is just an assumption for the ease and brevity of our proof. In fact, if features do have correlations, our theoretical analysis will still hold – it can still be shown that the FairReprogram can provide false demographic info to overshadow the true one. The only difference from the case without correlations is that in the case with correlations among features, the trigger needs to provide even stronger false demographic cues to overshadow the additional demographic information reflected in the correlations among features. Moreover, our empirical results also verify that FairReprogram handles the correlations among features well, as can be shown by its superior performance on various datasets (Table 3), where correlations among features are ubiquitous. We will add this discussion to the paper.\n\n\n**Q2: What is the intuition behind adding noise as a fairness trigger, such as in patch trigger and border trigger? Does this mean demographic information is confined either in the border of the image or in a specific area of an image covered by the patch?**\n\n**A2**: When an image is appended with the fairness trigger, there will be two types of demographic cues. 
First, the original, true demographic cues that reside in the original image; second, the false demographic cues that reside in the trigger in the border/patch. The two cues can coexist and the false cues do not need to overlie the true cues. The key is that the false cues need to be strong enough so that the neural model, when presented with the two potentially conflicting cues, will go for the false one. This is entirely possible because the neural model has not seen the fairness trigger before so it cannot learn to ignore it. This intuition is also supported by our empirical analysis in Table 3, where the trigger is found to contain strong demographic cues. We will move Table 3 to the main paper and improve the clarity of the theoretical analysis sections.\n\n**Q3: In Figure 1, perhaps the captions are wrong as 1(a) should be about border trigger and vice-versa. A careful recheck is encouraged.**\n\n**A3**: Thank you very much for pointing out this typo and we will fix it in the revised version. \n\n**Q4: Does the method extend to tabular data with a fixed set of features in matrix form?**\n\n**A4**: Yes, fairness reprogramming can be applied to tabular data. There are many ways to design triggers. As the tabular data have a fixed input size, we can directly apply the **additive trigger** to the input data to keep the input dimension unchanged (i.e., adding a perturbation on the original input), just as we adopted in image domains (Figure 1). Thanks for pointing this out, we will include more discussion on trigger designs for different modalities of data in the revised version. To verify our argument, we applied our method to the tabular data and conducted additional experiments on the UCI Adult dataset with a two-layer MLP model, and the results are shown in this **[Figure](https://ibb.co/ssNyK7v)**. The results suggest that our method could effectively improve model fairness for tabular data. Our method achieves comparable debiasing performance with the post-processing adversarial training method without modifying any model parameters.\n\n**Q5: A comparison with existing fairness improvement techniques such as pre-processing, in-processing, post-processing fairness algorithms should be discussed. In which family of fairness algorithm does this approach belong to?**\n\n**A5**: Our work belongs to the post-processing category. The key difference between our method and pre/in-processing approaches lies in that our approach does not change the training data or interfere with the model training process. In contrast, pre-processing methods need to alter the training data and therefore, need full access to the training data, model training process, and model parameters, which is a quite demanding requirement in real-world applications. Our method focuses on the case, where we have no access to the training process at all but only the model. Our method is also applicable to black-box settings (empirical results are shown in Appendix B), where we could correct a biased model without accessing the model parameters/gradients, which provides us a significant advantage over other in-processing approaches. In addition, more empirical comparisons to other post-processing baselines can be found in Appendix B.\n", " We thank the reviewer for the constructive feedback. Please find our detailed responses below.\n\n**Q1: The paper is not self-contained with several key information in Appendix for a good understanding of the claimed contributions. 
With the above, it is not clear whether fairness can be really achieved by the proposed method.**\n\n**A1**: Thanks for your suggestion. Due to the page limit, we put the theoretical proof in the appendix and we will add more intuitive explanation as well as move the results shown in Table 3 (Appendix B) back to the main manuscript in the revised version to ensure better readability. As a brief summary of why our method works, essentially fairness triggers provide false and constant demographic info that tricks the biased model into believing all the input is from the same demographic group, which hinders the model from using the true demographic info to produce biased prediction. As verified by the results in Table 3, the triggers all contain strong demographic cues. We will include all these discussions in the main paper. We also provide an intuitive interpretation of our theoretical proof below in A6.\n\n**Q2: Missing related works.**\n\n**A2**: Thanks very much for pointing out the relevant works. We will cite and discuss them in the related work in the revised version.\n\n**Q3: The size ratio for civil comments is 1/5 which does not align with the motivation that is infeasible to retrain or finetune well-trained large-scale models.**\n\n**A3**: First, we would like to bring to your attention that we have performed experiments with various data ratios as shown in Figure 4, where our method achieves significant improvement over baselines even in the extreme case with only 0.1% of the available data. This experiment can verify the ability of our algorithm to debias under extreme data scarcity. Moreover, we would like to clarify that the motivation of the proposed algorithm is to tackle the challenges in scenarios where access to the model parameters is restricted due to security, privacy or proprietary concerns (such as commercial API), rather than due to data limits. Therefore, we would need to test our algorithms under all different data ratios. That’s also the reason why we show the effectiveness of our proposed method in both the white-box and the black-box scenarios (Figure 10 in Appendix B), where even the model gradient is not accessible. The black-box setting is the most realistic application in the real-world scenario, while the white-box setting helps analyze and verify the efficacy of our approach. Given this motivation, we believe the experiments on civil comments constitute a relevant and valid test of our algorithm. We will modify our paper to clarify our motivation.\n\n**Q4: The captions of Figure 1(a) and 1(b) are misplaced.**\n\n**A4**: Thank you for pointing this out. We will fix this typo in the revised version.\n\n**Q5: Many biased datasets are tabularly represented. How does your work apply on tabular data? Appending additional dimension of vector directly?**\n\n**A5**: Fairness reprogramming can be applied to tabular data. For reprogramming, there are many ways to design triggers according to different tasks and requirements. Unlike NLP, where we append the trigger to the input or embeddings, the model for tabular data is sensitive to input size. As the tabular data have a fixed input size, we can directly apply the **additive trigger** to the input data to keep the input dimension unchanged (i.e., adding a perturbation on the original input), just as we adopted in image domains (Figure 1). Thanks for pointing out this possible application scenario for our method. 
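For concreteness, the additive-trigger formulation for tabular inputs can be sketched as below. This is only an illustrative snippet under stated assumptions (the feature dimension, the mask choice, and the `frozen_classifier` name are placeholders introduced here), following the formulation $\tilde{x}=m \circ x+\delta$ used in our responses rather than our exact implementation.

```python
import torch

feat_dim = 14                                       # e.g., UCI Adult features (assumption)
mask = torch.ones(feat_dim)                         # binary mask m; set entries to 0 to keep features untouched
delta = torch.zeros(feat_dim, requires_grad=True)   # trainable additive fairness trigger

def reprogram(x):
    # x_tilde = m * x + delta; the input dimension is unchanged.
    return mask * x + delta

# Only `delta` is updated (e.g., under the min-max fairness objective);
# the pretrained classifier stays frozen:
#   logits = frozen_classifier(reprogram(x))
```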
To verify our argument, we applied our method to the tabular data and conducted additional experiments on the UCI Adult dataset with a two-layer MLP model, and the results are shown in this **[Figure](https://ibb.co/ssNyK7v)**. The results suggest that our method could effectively improve model fairness for tabular data. Our method achieves comparable debiasing performance with the post-processing adversarial training method without modifying any model parameters. ", " **Q6: Can you summarize why the appended information can always cut off the biased information path?**\n\n**A6**: The trigger learned by the reprogram contains very strong demographic information and blocks the model from relying on the real demographic information from the input. This argument is both empirically verified by experiments (shown in Table 3) as well as theoretically proven in Sec. 3.4. Since the same trigger is attached to all the input, the uniformal demographic information contained in the trigger will weaken the dependence of the model on the true demographic information contained in the data, and thus improve the fairness of the pretrained model. Please kindly refer to our response to Q1 for a brief summary of how our algorithm works. We will move the relevant content to the main paper to improve the readability of the paper.\n\n**Q7: How do you distinguish this work with pre-processing approaches?**\n\n**A7**: Our work belongs to the post-processing category. The key difference between our method and the pre-processing approaches lies in that our approach does not need to change the training data or interfere with the model training process. In contrast, pre-processing methods need to alter the training data and therefore, need full access to the training data, model training process, and model parameters, which is a quite demanding requirement in real-world applications. Our method focuses on the case, where we have no access to the training process at all but only the model.", " We greatly appreciate your thoughtful comments! Please see our response to your questions and concerns as below.\n\n**Q1: Don’t we still need to backpropagate through the entire network to learn the fairness trigger? If yes, there could be a memory savings as we don’t tune as many parameters, but it’s not clear to me that the method would train substantially faster than a fine-tuning baseline.**\n\n**A1**: We would like to clarify that the main motivation for the fairness reprogramming algorithm is not to improve computational efficiency, but to resolve the challenges in many real-world applications where access to the model parameters is restricted, and therefore it is impossible to directly modify the model towards the fairness goals. That being said, we totally agree that the proposed method would not train substantially faster than the fine-tuning baseline and we do not intend to claim it does. It may still train slightly faster because of the reduced tuning parameters but that is a bit outside the scope and our claimed contributions of this paper. We will modify our paper to make this clearer.\n\n**Q2: If the fairness trigger is learning a constant perturbation along some subspace with lots of demographic information, is it possible that a similar solution could be found by simply projecting away that subspace using a linear probe?**\n\n**A2**: Firstly, we agree that for certain simple models, the reprogramming method is equivalent to adding a linear probe. 
Specifically, if the model is a simple MLP, a trigger added to the input can be easily regarded as appending a bias term to the first layer. Nevertheless, similar conclusions can not be extended to transformers or convolutional layers as used in the NLP and CV domain in our paper, since their functions are more complex and cannot be represented by simple linear transformations. The reprogramming method still has a stronger representation power in this case. Moreover, please kindly be reminded that the motivation of fairness reprogramming is to resolve fairness tuning without having access to the model parameters. Under this scenario, linear probe insertion is less applicable, whereas our method remains a feasible solution with decent representation power. Nevertheless, we greatly appreciate this inspiring question and we will regard it as an interesting topic for future research.\n\n**Q3: I am slightly concerned that the in/post-processing methods could be doing worse simply because of the instability of adversarial training, or due to overfitting given the limited number of data available during fine tuning for the post-processing method. I would suggest adding an MMD baseline as an additional baseline.**\n\n\n**A3**: Thank you for your suggestion! Following your suggestion, we conducted some additional experiments and introduced the suggested MMD baseline as used in https://arxiv.org/abs/1511.00830. Please refer to the results shown in this **[Figure](https://ibb.co/CsKQJ5B)**. As we can see, our proposed method still outperforms the MMD baselines, which can alleviate the concern that fairness reprogramming has a better performance simply because of the instability of adversarial training of the baselines. We will add the new baseline to our pool of standard baselines in the paper.\n\n**Q4: I would suggest adding a basic description of the task to Section 1 wherever the results are summarized.**\n\n\n**A4**: Thanks for your suggestion and we will add the task description in the introduction section as well in the revised version.\n\n\n**Q5: Could the authors clarify the citation (in this context) or remove it?**\n\n**A5**: We thank you for pointing out the typo here, and we would modify it in the revision.\n\n**Q6: The claim at the end of the paper that the method “enjoys great transferability” [line 356] seems like an overstatement. I would be especially concerned that adding a single constant feature could suffer from covariate/distribution shift, which was not examined.**\n\n**A6**: Thanks for pointing this out. In our previous transfer experiments, we adopted different models with the same architecture. As the reviewer suggested, we conduct additional experiments where we transfer the trigger from ResNet18 to a different architecture (ResNet20s). We also change the training task from predicting the hair color to predicting smiling/not smiling. The results are shown in this **[Figure](https://ibb.co/QPSdfB3)**. We can see that the trigger still has good transferability with different model architectures. Meanwhile, we find that the triggers are able to boost the fairness of the model in the task-transfer setting, but the accuracy is traded off more than the original setting. 
We will add the new experiments to the paper, as well as modify the paper with a more precise claim and a more detailed discussion.\n\n**Q7: In Figure 1 it seems that the captions and images should be swapped.**\n\n**A7**: Thanks a lot for pointing this out and we will fix it in the revised version.", " This paper discusses how to learn a constant feature that can be concatenated to a frozen pre-trained embedding to promote statistical group fairness in a downstream classifier. These extra features, called the “fairness trigger”, are jointly trained with the prediction model with the aim of inducing a demographic parity or equalized odds property in the final solution. Strengths\n* Demonstrates that reprogramming techniques can be adapted to target group fairness metrics in CV and NLP contexts.\nInteresting qualitative analysis of the resulting models [Figs 6, 7] suggesting that the learned concatenated constant features cause the downstream classifier to focus less on spurious features (in this case, features that indicate demographic membership). \n* Provides a conceptual proof of concept for reprogramming in a simple bag-of-words classification setting where data are generated according to an anti-causal graphical model [Figure 2].\nWeaknesses\n* I found the proposed method interesting and I think the neurips community will as well. My main complaint is that the paper, in my opinion, oversells this approach as strictly superior to previous methods. When we apply the fairness trigger in the original input space (e.g. CV), don’t we still need to backpropagate through the entire network to learn the fairness trigger? If yes, there could be a memory savings as we don’t tune as many parameters, but it’s not clear to me that the method would train substantially faster than a finetuning baseline. On the other hand I can see why you would save on compute when applying the fairness trigger *in* the embedding space (e.g. NLP)\n* The paper mainly compares against fine tuning methods, but there are other feasible approaches such as adding a linear probe on the frozen embedding (which I don’t believe is considered the experiments). When we teach linear regression we often bring up two alternate parameterizations: (a) \\hat y = x^T W + b and (b) \\hat y = (\\tilde x)^T W. These are equivalent if we define \\tilde x as x with an extra constant feature. By analogy, I wonder if there is some equivalence between the fairness trigger (learned bias) and a linear probe/transformation applied on the embedding (learned projection). If the fairness trigger is learning a constant perturbation along some subspace with lots of demographic information, is it possible that a similar solution could be found by simply projecting away that subspace using a linear probe? It seems possible to me, and makes me wonder if the experimental gains are due to the fact that the fairness trigger optimizes a lower dimensional parameter space that would be less prone to overfitting given the limited data available for post-processing. Even if this were the case (might be possible to find out by adding linear probes as baselines in the experiments) the paper would be interesting, but might suggest that fairness reprogramming is best under a constrained data budget, not best in general. 
\n * I am slightly concerned that the in/post-processing methods could be doing worse simply because of the instability of adversarial training, or due to overfitting given the limited number of data available during fine tuning for the post-processing method [line 244]. I would suggest adding an MMD baseline as an additional baseline (e.g. as applied in https://arxiv.org/abs/1511.00830). This would hopefully be more stable, but may also suffer from overfitting.\n* In my opinion, the claim of achieving “lower bias scores over two fairness criteria in the CelebA dataset” [lines 54–55] is not meaningful without defining the prediction task, since many different tasks (related to fairness and otherwise) have been proposed that use the crowdsourced attributes of CelebA. The authors do a good job of providing this context in Section 4 [lines 219–200]; I would suggest adding a basic description of the task to Section 1 wherever the results are summarized.\n* I’m unclear on why the JTT paper [12] is cited as a paper that “[learns] task-specific embedding prompts concatenated to the inputs” [lines 36–37]. My understanding of this paper is that it uses the error cases of an ERM reference model to train a robust model via importance weighting. Could the authors clarify the citation (in this context) or remove it?\n* I was interested in the experiment showing that the method is not overly sensitive to random seed [lines 291–300]. Calling this a “transfer” is a bit generous—there is no notion of a distribution shift or new prediction task, or a change in the model architecture—but it is good to know that a learned prompt can be reused if the model is retrained on the same data. The claim at the end of the paper that the method “enjoys great transferability” [line 356] seems like an overstatement. I would be especially concerned that adding a single constant feature could suffer from covariate/distribution shift, which was not examined. \n* In Figure 1 it seems that the captions and images should be swapped. I.e. Fig 1a is the border trigger, not the patch trigger, if I understand correctly.\n Yes, the authors discuss several limitations in the appendix. I would encourage including them in the main paper.\n", " This paper introduces a model reprogramming based fairness promoting method, consisting of a fixed ML model and optimizing a set of vectors concatenated on inputs to boost model fairness. An information-theoretic framework is also introduced to explain the rationales of fairness booster. Experiments on one NLP and another CV dataset demonstrate the utility of this approach. Strengths: \n1. This paper tackles the fairness issue with reprogramming which is a relatively less explored area in fairness-aware learning.\n2. The proposed method can finetune the pretrained model with computational efficiency. \n3. Experiments show the effectiveness of proposed method.\n\nWeaknesses:\n1. The paper is not self-contained with several key information in Appendix for a good understanding of the claimed contributions. For example, without the understanding of \"It can be shown (in Appendix C) that the posterior distributions..\" and \"it can be shown (in Appendix C) that the posterior distribution..\", the claimed information-theoretic framework remains unclear. \n2. With the above, it is not clear whether fairness can be really achieved by the proposed method.\n3. 
Some related work on reprogramming fairness (for example \"Reprogramming FairGANs with Variational Auto-Encoders: A New Transfer Learning Model\") is not mentioned in the literature review. This work also related to incremental fairness but lacks of relevant discussion (for example \"FAHT: An Adaptive Fairness-aware Decision Tree Classifier\"). \n4. The size ratio for civil comments is 1/5 which does not align with the motivation that infeasible to retrain or finetune well-trained large-scale models.\n5. The caption of Figure 1(a) and 1(b) are misplaced. \n\n\n\n 1. Many biased datasets are tabular represented, how does your work apply on tabular data? Appending additional dimension of vector directly? \n2. Can you summarize why the appended information can always cut off the biased information path? \n3. How do you distinguish this work with pre processing approaches? See comments above. \n\n", " Traditional approaches for fairness in neural networks suggest training or fine-tuning the entire weights of the network to achieve desired fairness criteria. This paper, instead, proposes a reprogramming based technique, called FairReprogram. Considering the neural network fixed, FairReprogram appends a global fairness trigger to the input to achieve fairness improvement with respect to existing fairness metrics. The results are backed up with theoretical analysis and experimental evaluations in NLP and CV datasets. Strength\n\n- The paper is very well written. Statements are supported with examples. The paper is also well-motivated as reprogramming towards fairness improvement is less costly than retraining or fine-tuning with fairness objectives. \n\nWeakness:\n\n- How are correlations among features handled in FairReprogram?\n\n- What is the intuition behind adding noise as fairness trigger, such as in patch trigger and border trigger? Does this mean demographic information is confined either in the border of the image or in a specific area of an image covered by the patch?\n\n- In Figure 1, perhaps the captions are wrong as 1(a) should be about border trigger and vice-versa. A careful recheck is encouraged.\n\n- Does the method extend to tabular data with fixed set of features in matrix form? \n\n- A comparison with existing fairness improvement techniques such as pre-processing, in-processing, post-processing fairness algorithms should be discussed. In which family of fairness algorithm does this approach belong to? Please address questions in the `weakness` of the paper above. There is no negative societal impact of the work, as far as I know." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "5x8f7gUiSmQ", "Q1HQ6XAbyCP", "0z68iCryE3W", "Poxf-QqS3r3", "U1hwY0v36Ep", "1yrLfnhXXf8", "Poxf-QqS3r3", "1yrLfnhXXf8", "U1hwY0v36Ep", "Poxf-QqS3r3", "uGAFzuzabti", "1yrLfnhXXf8", "U1hwY0v36Ep", "U1hwY0v36Ep", "1cFuszm2Ot2", "nips_2022_Nay_rOB-dZv", "nips_2022_Nay_rOB-dZv", "nips_2022_Nay_rOB-dZv" ]
nips_2022_Wk-4Tp-gPpv
DeepTOP: Deep Threshold-Optimal Policy for MDPs and RMABs
We consider the problem of learning the optimal threshold policy for control problems. Threshold policies make control decisions by evaluating whether an element of the system state exceeds a certain threshold, whose value is determined by other elements of the system state. By leveraging the monotone property of threshold policies, we prove that their policy gradients have a surprisingly simple expression. We use this simple expression to build an off-policy actor-critic algorithm for learning the optimal threshold policy. Simulation results show that our policy significantly outperforms other reinforcement learning algorithms due to its ability to exploit the monotone property. In addition, we show that the Whittle index, a powerful tool for restless multi-armed bandit problems, is equivalent to the optimal threshold policy for an alternative problem. This observation leads to a simple algorithm that finds the Whittle index by learning the optimal threshold policy in the alternative problem. Simulation results show that our algorithm learns the Whittle index much faster than several recent studies that learn the Whittle index through indirect means.
Accept
The paper considers a subset of dynamic control problems in which the optimal policy is a threshold policy. The authors exploit this structure to formulate tailored off-policy actor-critic algorithms for both MDPs and RMABs; the algorithms are gradient-based and can therefore use neural networks. They empirically compare their method to state-of-the-art methods in three MDP domains and three RMAB parameterizations, and the results show that their method, DeepTOP, performs better than the compared methods in all the experiments. The paper is well written and the claims are correct. The performance of DeepTOP compared to the other methods is impressive. All four reviewers were on the positive side for acceptance.
train
[ "qAe_fhsq1X", "5oI5LiEo1Hu", "kamPV9LE01p", "WqbAQDp5lv", "eAbmRpC9JAP", "-zsIjXX3B7o", "MhxaGhZAmsr", "dasj9-QtSe2", "eLB9DSYDy1", "yWi5p3H1JJd", "rN4GoSrG_N9g", "zuKQjTy8M3s", "A1pXy6wyFil", "SteInCClNqx", "xHfCq08hMjb" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the comments. For the first question:\n\nWe agree that the following statements are true:\n1. If all three actor-critic algorithms use the same rule for updating the actor network, then they become the same and would have exactly the same performance.\n2. If we run the algorithms for a much longer time, then Neural LPQL and Neural WIBQL would eventually offer the same performance as our DeepTOP.\n\nHence, we also agree that the benefit of DeepTOP over the other two algorithms is that its gradient update for the actor network is more efficient. This allows it to converge faster by achieving a near-optimal performance with a smaller number of time steps, or, equivalently, achieving a better performance when the number of time steps is small.\n\nFor the second question:\n\nThe study of using ML for RMAB is still in its infancy. All baseline policies we found were published in the last two years. This is why there are no actor-critic algorithms for this problem in the literature.", " Thank you for providing the detailed response. I agree that the benefit of DeepTOP is not just due to the benefit of using actor-critic model but also the update rule (a faster way to compute policy gradient as the main contribution of the paper). One small confusion I have in mind ( sorry for my late response so I understand there may not be enough time to respond) is that the x-axis in Figure 3 is the number of timesteps run by each algorithm, which if the authors also use a actor-critic method (without using the faster policy gradient), it should perform almost identical to the DeepTOP algorithm. I think the benefit of DeepTOP is on the computation but not on the performance (although they are correlated). My understanding is that the faster policy gradient update rule (the contribution of this paper) makes actor-critic style algorithm possible, and thus leads to better performance.\n\nIf possible, could you also briefly explain why there was no actor-critic style algorithm in the literature? Is the computation the major bottleneck of the actor-critic style algorithm? Thank you!", " I thank the authors for their detailed response and the revised version. As I said in my review, I think it is a good paper, which has several contributions (both theoretical and empirical). I believe that each contribution as itself is minor, but all of them together are good enough. \nI still think that more complex empirical experiments could improve the overall contribution (one of the advantages of DeepTOP is the ability to tackle more complex settings than the baselines, maybe use different baselines for the complex experiment?).", " I thank the authors for their response. The response clarifies the paper better. I am satisfied with most of the authors’ responses. However, I am still not convinced about the contribution in the domain of RMAB as the analysis for MDPs and RMABs are similar with no major differences.\nI have increased my review score accordingly.\n", " We thank the reviewer for their detailed comments. 
If our responses are satisfactory, we kindly ask the reviewer to update the score.\n\n\n### *The restrictions on V seems to be quite drastic (discrete set at line 66, distinct threshold values in Theorem 1) so I think a short explanation is in order (is this just a technical restriction or a real pain?, why the restriction exists?)*\n\nThe assumption is needed for the first step of the proof, where we decompose the region $[-M, M]$ into several distinct intervals.\n\nIn practice, this assumption is not a serious restriction. For any randomly initialized neural networks, it is near impossible to have the same outputs for two different inputs during any update sequences.\n\n\n### *The main limitations are two actions policies and the policy structure. Subsequently, the significance of the paper is highly limited just by tackling this small range. In its niche, I think the paper gives a very useful insight, even if its not very sophisticated. In addition, with some thought the core idea might extend to more general scenarios. For example - other cases where the gradient can be calculated easily or other problems that can be solved to threshold policies.*\n\nWe thank the reviewer for the insight. We are indeed working on extending this work for multi-action threshold policies. In this case, the neural network outputs multiple threshold values $\\lambda_k, \\lambda_{k+1},...$. The policy deterministically picks action $a_k$ if the scalar state lies in the interval between $\\lambda_k$ and $\\lambda_{k+1}$.\n\nWe would also like to emphasize that there are many problems, especially in the queueing and networking community, where it is natural to consider threshold policies, and, in many cases, the optimal policies are indeed threshold ones. For example:\n\n[36] H. Tang, J. Wang, L. Song and J. Song, \"Minimizing Age of Information With Power Constraints: Multi-User Opportunistic Scheduling in Multi-State Time-Varying Channels,\" in IEEE Journal on Selected Areas in Communications, vol. 38, no. 5, pp. 854-868, May 2020, doi: 10.1109/JSAC.2020.2980911.\n\n[37] B. Zhou and W. Saad, \"Minimum Age of Information in the Internet of Things With Non-Uniform Status Packet Sizes,\" in IEEE Transactions on Wireless Communications, vol. 19, no. 3, pp. 1933-1947, March 2020, doi: 10.1109/TWC.2019.2959777.\n\n[38] Eitan Altman, Rachid El-Azouzi, Daniel Sadoc Menasche, and Yuedong Xu. 2019. Forever Young: Aging Control For Hybrid Networks. In Proceedings of the Twentieth ACM International Symposium on Mobile Ad Hoc Networking and Computing (Mobihoc '19). Association for Computing Machinery, New York, NY, USA, 91–100. https://doi.org/10.1145/3323679.3326507.\n\n[39] Guidan Yao, Ahmed M. Bedewy, and Ness B. Shroff. 2021. Battle between Rate and Error in Minimizing Age of Information. In Proceedings of the Twenty-second International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing (MobiHoc '21). Association for Computing Machinery, New York, NY, USA, 121–130. https://doi.org/10.1145/3466772.3467041.\n\n[40] T. Z. Ornee and Y. Sun, \"Sampling and Remote Estimation for the Ornstein-Uhlenbeck Process Through Queues: Age of Information and Beyond,\" in IEEE/ACM Transactions on Networking, vol. 29, no. 5, pp. 1962-1975, Oct. 2021, doi: 10.1109/TNET.2021.3078137.\n\n", " ### *[Stated by the authors] The algorithm is only applicable to MDPs that admit a threshold policy.* and \n### *[Minor]The proposed policy only works when threshold policy is good enough. 
Otherwise, a more expressive policy parameterization is still needed in order to achieve better performance.*\n\nYes, our contribution is limited to threshold policies. However, we would like to note that there are many problems, especially in the queueing and networking community, where it is natural to consider threshold policies, and, in many cases, the optimal policies are indeed threshold ones. For example:\n\n[36] H. Tang, J. Wang, L. Song and J. Song, \"Minimizing Age of Information With Power Constraints: Multi-User Opportunistic Scheduling in Multi-State Time-Varying Channels,\" in IEEE Journal on Selected Areas in Communications, vol. 38, no. 5, pp. 854-868, May 2020, doi: 10.1109/JSAC.2020.2980911.\n\n[37] B. Zhou and W. Saad, \"Minimum Age of Information in the Internet of Things With Non-Uniform Status Packet Sizes,\" in IEEE Transactions on Wireless Communications, vol. 19, no. 3, pp. 1933-1947, March 2020, doi: 10.1109/TWC.2019.2959777.\n\n[38] Eitan Altman, Rachid El-Azouzi, Daniel Sadoc Menasche, and Yuedong Xu. 2019. Forever Young: Aging Control For Hybrid Networks. In Proceedings of the Twentieth ACM International Symposium on Mobile Ad Hoc Networking and Computing (Mobihoc '19). Association for Computing Machinery, New York, NY, USA, 91–100. https://doi.org/10.1145/3323679.3326507.\n\n[39] Guidan Yao, Ahmed M. Bedewy, and Ness B. Shroff. 2021. Battle between Rate and Error in Minimizing Age of Information. In Proceedings of the Twenty-second International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing (MobiHoc '21). Association for Computing Machinery, New York, NY, USA, 121–130. https://doi.org/10.1145/3466772.3467041.\n\n\n[40] T. Z. Ornee and Y. Sun, \"Sampling and Remote Estimation for the Ornstein-Uhlenbeck Process Through Queues: Age of Information and Beyond,\" in IEEE/ACM Transactions on Networking, vol. 29, no. 5, pp. 1962-1975, Oct. 2021, doi: 10.1109/TNET.2021.3078137.\n\n\n### *I think there is a missing integral over $\\lambda'$ in the definition of Q function in Equation (1).*\n\nWe thank the reviewer for the correction. We have updated the expression in the uploaded rebuttal version to be $ Q_{\\mu}\\Big(\\lambda, v, 1(\\mu(v) > \\lambda)\\Big) = \\sum_{v'\\in \\mathcal{V}} \\int_{\\lambda' = -M}^{\\lambda' = +M}\\rho_{\\mu}(\\lambda',v', \\lambda, v)\\bar{r}\\big(\\lambda',v', 1(\\mu(v') > \\lambda')\\big).$\n\n### *I understand that finding the optimal policy in MDPs and RMABs are both challenging due to the PSPACE hardness. But finding the Whittle index in RMABs may be polynomial time solvable when there are only finitely many states [34, 35], where given indexability, [34] uses the definition of Whittle index and the Bellman equation to form a LP to solve in polynomial time, and [35] leverages the threshold policy to construct a faster algorithm for a specific type of RMABs problems. Is it possible to directly compute the Whittle index using similar LP method without using RL or neural networks? If not, what is the major difficulty of computing the Whittle index directly in your case?*\n\nYes, it is possible to directly calculate the Whittle index using algorithms in [34,35]. However, these algorithms require the knowledge of the transition kernel or a good estimate of it. In contrast, our algorithms learn the optimal action without the transition kernel in a model-free fashion. \n\nMoreover, [34,35] require the bandits to be indexable. 
In contrast, our Theorem 2 makes no assumptions on indexability. Even when bandits are not indexable, our DeepTOP is guaranteed to find a locally optimal threshold policy.\n\n\n### *Why did you choose to use a specific quadratic form of the reward function in the RMAB simulation in Section 6.3? Does the reward function structure affect the convergence of the actor-critic gradient descent update?*\n\nThere is no particular reason for using the quadratic form. To show that the reward function structure does not impact the performance much, we trained DeepTOP and the baselines on a linear reward function $1 - \\frac{(99 - s_{i,t})}{99}$ and a cubic reward function $1 - (\\frac{(99-s_{i,t})}{99})^3$. \n\nWe provide the results in the updated rebuttal version in figures 4 (for linear reward function) and figure 5 (for cubic reward function).\nIn both cases, DeepTOP still outperforms the baselines, and gives a superior performance for the three reward functions (quadratic, linear, cubic).\n", " We thank the reviewer for their detailed comments. If our responses are satisfactory, we kindly ask the reviewer to update their score.\n\n### *In the RMABs context, the proposed algorithm outperforms other Q-learning based algorithm without using actor-critic algorithm. It is known that actor-critic can improve the performance of the RL challenges. I believe this is the main advantage of the proposed algorithm compared to other baselines.*\n\nWe would like to emphasize that, for RMAB, we have indeed evaluated other Q-learning algorithms with actor-critic implementations. They are called \"Neural LPQL\" and \"Neural WIBQL\" in Fig. 3. Neural LPQL, Neural WIBQL, and our DeepTOP have the same implementation of the critic network. They only differ in the update rule for the actor. Fig. 3 shows that DeepTOP significantly outperforms Neural LPQL and Neural WIBQL. This shows that the superiority of DeepTOP is not only due to actor-critic networks, but also due to its better update rule for the actor. Below, we explain in details why Neural LPQL and Neural WIBQL perform worse than DeepTOP.\n\nNeural LPQL operates as follows: Given the critic networks of all bandits, LPQL use all of them together to find a single Lagrange multiplier. LPQL then updates the actor network to find the optimal index of each individual bandit under this Lagrange multiplier. The problem of this approach is that the calculation of index of bandits is not independent from each other. If the critic network of a single bandit is far off, then it will cause LPQL to obtain a wrong Lagrange multiplier and results in the wrong indexes of all bandits. In contrast, DeepTOP train each bandit completely independently from each other. This ensures that the inaccuracy of one bandit will not propagate to other bandits.\n\nNeural WIBQL operates as follows: Given the critic network Q of one bandit, WIBQL aims to update the actor so that $Q(k,0)-Q(k,1) = 0$ (Eq. (11) in [1]) for each state k independently. This is less direct and efficient than our DeepTOP.\n\nAnother important limitation of WIBQL is that it requires each bandit to be indexable. In contrast, our Theorem 2 does not require the bandit to be indexable. Even if a bandit is not indexable, DeepTOP is guaranteed to find a locally optimal threshold policy.\n\n\n### *The MDPs and RMABs domains are similar with no major differences.*\n\nWe agree that extending DeepTOP for RMABs is not difficult. However, we would like to note that RMAB is a very important field of study. 
The fact that our DeepTOP can be easily applied to RMABs should be considered as a major strength.\n\nIn addition, we would like to report that we have conducted experiments for another RMAB problem. In particular, we have evaluated the recovering bandit problem described in [19]. The results are shown in figure 7 in the uploaded rebuttal version. It can be seen that DeepTOP achieves the best performance compared to the baselines.\n\n### *The threshold policy considered in the paper can only handle one single scalar, which limits the applicability of the threshold policy. The proposed algorithm only works with threshold policy with a single scalar value.*\n\nWe thank the reviewer for the comment. We believe it is possible to extend this work to more sophisticated policies. For example, one extension we are considering is multi-action threshold policies. In this case, the neural network outputs multiple threshold values $\\lambda_k, \\lambda_{k+1},...$. The policy deterministically picks action $a_k$ if the scalar state lies in the interval between $\\lambda_k$ and $\\lambda_{k+1}$.\n\nSimilarly, we can also consider systems where the state consists of a vector of scalars. The threshold policy would output a vector of thresholds, and the action taken depends on which scalar state is above its corresponding threshold.", " We thank the reviewer for their detailed comments. If our responses are satisfactory, we kindly ask the reviewer to update the score.\n\n### *1. But I do have concerns regarding the empirical experiments, I think that the environments are rather toy problems, and since DeepTop incorporate neural-networks, its main advantage over tailored analytical methods is in complex environments.*\n\nWe would like to report that we have conducted experiments for an RMAB with a more complicated setting. In particular, we have evaluated the recovering bandit problem described in [19]. The results are shown in figure 7 in the uploaded rebuttal version. It can be seen that DeepTOP achieves the best performance compared to the baselines.\n\nWhen designing the environments, our goal was to use the same, or very similar, environments to those employed in the baseline policies. This is why we chose the EV problem from [32], the make-to-stock problem from [24] for MDP experiments, and why we extend the two-state process from [16] to the 100-state process for RMAB experiments.\n\n### *2. The last part of section 6 seems like it addressed to the reviewers (lines 220-221). Rephrase.*\n\nWe have rephrased the sentence in the updated rebuttal version. We will link the code repository after the rebuttal period.\n\n### *3. The main contribution of the paper is an empirical method, and the experiments conducted in simple domains. I think that domains that are more challenging should be considered* and *4. Would you be able to run experiments in more complex domains?*\n\nSince Theorem 2 holds for all RMABs, DeepTOP is naturally applicable to more complicated settings. The new results on the recovering bandits' setting in figure 7 show that DeepTOP outperforms the baselines. \nWe chose the original RMAB simulation setting because it is a more complicated version of the setting used in [16], and we wanted to use a setting similar to [16]. Based on the request of another reviewer, we also evaluated the case when the reward function is $1 - (\\frac{99 - s_{i,t}}{99})$ and $1 - (\\frac{99-s_{i,t}}{99})^3$. 
Results in the updated paper (figures 4 and 5) show that our DeepTOP algorithm is still better.\n\n### *5. Is it possible to give a similar analysis for any policy which is a fixed, deterministic function of some scalar $\\lambda_t$ and an output of a neural-network? might be a future direction.*\n\nWe thank the reviewer for the insight. This is indeed a promising future direction. \n\nWe are actually working on a special case of this direction. We are considering expanding the threshold policy gradient theorem to the case with multiple actions. In this case, the neural network outputs multiple threshold values $\\lambda_k, \\lambda_{k+1}, ... $. The policy deterministically picks action $a_k$ if the scalar state lies in the interval between $\\lambda_k$ and $\\lambda_{k+1}$.", " ### *6. The authors have performed extensive simulations on various problems such as electric vehicle charging problem, inventory management, and make-to-stock problem. By leveraging the monotone property, DeepTOP performs better than DDPG and TD3. However, the explanation regarding how it outperforms SALMUT is not clear.*\n\n\nSALMUT has the following two important limitations:\n\nFirst, it requires the states to be pre-sorted by their indexes. In Section 3, [24] states \"We consider the set of threshold policies where the thresholds for different events are ordered $(\\tau(i) \\geq \\tau(j) \\text{ for } i < j)$ and represent them as policies parametrized by the threshold vector $\\boldsymbol{\\tau} = [\\tau(0), \\tau(1), ..., \\tau(N)]^T$ where $\\tau(0) \\geq \\tau(1) \\geq ... \\geq \\tau(N).$\" In contrast, our DeepTOP does not require the knowledge of ordering.\n\nSecond, SALMUT does not directly consider threshold policies, which are deterministic policies whose outcomes are not continuous. Instead, SALMUT approximates threshold policies by randomized policies based on sigmoid functions. (See Eq. (7) of [24]) SALMUT needs this approximation because it can only handle continuous and differentiable functions. We believe this approximation might be the reason why SALMUT is less accurate than DeepTOP.\n\nIn contrast, DeepTOP directly considers deterministic threshold policies. In fact, the piece-wise constant behavior of threshold policies is the key part in the proof of Theorem 1. On line 115, we stated: \"In other words, for any vector state $v$, the threshold policy would take the same action under all $\\lambda \\in (\\mathbb{M}^{n+1}, \\mathbb{M}^n)$, and we use $\\pi^{n+1}(v)$ to denote this action.\"\n\n\n### *7. Although DeepTOP employs the threshold policy gradient directly, if you take the policy gradient algorithm in [b] and encode the threshold policy information in the gradient of the transition probability matrix, is that not the same as the threshold policy gradient theorem (Theorem 1 in the paper)?*\n\nAs explained in our response to question 2, this is not doable because threshold policies are deterministic policies. Also, the state space of our problem is not finite. Rather, it is uncountably infinite.", " ### *2. It is assumed that $\\lambda_t \\in [-M, M]$ for all $t$ and the states can be numbered. This essentially translates into a finite state space. In the following paper, can’t Theorem 1 be derived as a corollary of the policy gradient theorem in [b] Marbach, P., & Tsitsiklis, J. N. (2001). Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control, 46(2), 191-209.*\n\nFirst, we would like to emphasize that our state space is not finite. 
We assume that each state has two components, $\\lambda$ and $v$. The component $v$ is from a finite and discrete set, and hence can be numbered. However, $\\lambda$ can be any real number in $[-M, M]$. Hence, the state space is uncountably infinite.\n\nSecond, the policy gradient theorem in [b] is not applicable to our paper. The paper [b] considers stochastic policies and employs the policy gradient theorem to update the probability distribution of actions. On the other hand, our paper considers threshold policies, which are deterministic policies as they will choose action = 1 with probability 1 if $\\lambda$ is smaller than the threshold. Hence, [b] cannot be directly applied to threshold policies. One way to apply [b] for threshold policies is to approximate threshold policies by a stochastic policy. This is basically what SALMUT did. However, the approximation inevitably leads to inaccuracy. We will discuss the shortcomings of SALMUT in the response to comment 6.\n\nSince our Theorem 1 directly considers deterministic threshold policies, it cannot be derived from [b].\nWe will state this difference between theorem 1 and [b] in the updated paper.\n\n\n### *3. In the proof of Theorem 1, in the first step, the rationale behind swapping integration and summation is not clear. It needs to be explicitly stated in the paper.*\n\nWe thank the reviewer for highlighting this step. We integrate over a finite sum of vector states $v$.\nSince the $Q_{\\mu^\\phi}\\Big(\\lambda, v, 1(\\mu^\\phi(v) > \\lambda)\\Big)$ values are discounted, the finite sum converges for all Q-values.\n\nThe integral $ \\int_{\\lambda = -M}^{\\lambda = +M} \\sum_{v\\in\\mathcal{V}}Q_{\\mu^\\phi}\\Big(\\lambda, v, 1(\\mu^\\phi(v) > \\lambda)\\Big) d\\lambda < \n\\infty$ \nfor the range $\\lambda \\in [-M, +M]$. Since the finite sum converges, using the Fubini-Tonelli theorem, the two terms are equal\n$ \\int_{\\lambda = -M}^{\\lambda = +M} \\sum_{v\\in\\mathcal{V}}Q_{\\mu^\\phi}\\Big(\\lambda, v, 1(\\mu^\\phi(v) > \\lambda)\\Big) d\\lambda = \\sum_{v\\in\\mathcal{V}}Q_{\\mu^\\phi}\\int_{\\lambda = -M}^{\\lambda = +M} \\Big(\\lambda, v, 1(\\mu^\\phi(v) > \\lambda)\\Big) d\\lambda$. \n\nWe added this description in theorem 1 proof in the uploaded rebuttal version.\n\n### *4. In Algorithm 1, why don’t the authors consider a decreasing $\\epsilon$? Does constant guarantee convergence to the optimal solution?*\n\nWe thank the reviewer for the suggestion. To respond to this comment, we trained DeepTOP-MDP (algorithm 1) and the baselines with a decaying $\\epsilon$ at rate $1/500$ per timestep and an initial $\\epsilon = 1$.\nResults provided in figure 6 show that DeepTOP-MDP still outperforms other baselines with a decaying $\\epsilon$.\n\n### *5. The deep threshold optimal policy computation of RMAB in Section 5 appears to be a straightforward extension of the policy gradient theorem in Theorem 1 because of the equivalence between obtaining the whittle index in a Restless Multi-armed Bandit (RMAB) problem and optimal threshold policy in an MDP. However, since this idea was already introduced in [1,a], the contribution in Section 5 is limited.*\n\nWe agree that the proof of Theorem 2 (for RMAB) is similar to that of Theorem 1 (for MDP), and we have stated as such in the paper. However, we would like to emphasize that RMAB is an important field of study. 
Hence, the applicability of DeepTOP for RMAB should be considered as a significant strength.\n\nAs mentioned for question 1, there are important differences between [1,a] and our DeepTOP in multiple aspects.\n\n\n", " \nWe thank the reviewer for the detailed comments. If our responses are satisfactory, we kindly ask the reviewer to update the score.\n\n### *1. The idea of viewing the Whittle index policy for RMABs as an optimal threshold policy is already developed in [1] as stated in the paper. Another important work in this direction is [a] Robledo, F., Borkar, V., Ayesta, U., & Avrachenkov, K. (2022). QWI: Q-learning with Whittle Index. ACM SIGMETRICS Performance Evaluation Review, 49(2), 47-50. See Equation (8) in the paper above. How is the proposed algorithm in this paper different from the schemes described in these papers?*\n\nWe thank the reviewer for the comment. We would like to emphasize that there are important differences between our DeepTOP and [1,a] in design principles, actual algorithms, theoretical properties, and simulation performance, which we detail below:\n\n**[Design principle]**\nThe main design principle of [1,a] is that the Whittle index is the solution to the equation $Q(x,0) = Q(x,1)$ (Eq. (6) in [a]), instead of an optimal threshold policy. In fact, the word \"threshold\" does not appear in [a], and [1] specifically states that it does not learn the Whittle index as a threshold policy because it is hard to do so, \"The Whittle index itself, however, is not a simple threshold, but a function of the state.... At the same time, Whittle index is defined in terms of an equality. So a much simpler scheme is used here, which makes incremental changes towards forcing this equality.\"\n\nIn contrast, our DeepTOP views the Whittle index as the optimal threshold policy. We demonstrate that this view leads to a simple learning algorithm. Hence, our DeepTOP also solves the hard challenge described in [1].\n\n**[Algorithm design]**\nThe algorithms in [1,a] are based on satisfying Eq. (6) in [a] for each individual state. As a result, the algorithms in [1,a] need to update the Whittle index of each state independently. This is evident in Eq. (8) in [a], which only concerns the state x. \n\nIn contrast, DeepTOP aims to find the optimal threshold policy with respect to one single objective function, Eq. (8) in our paper. Hence, it only needs to apply Eq. (9) in our paper once.\n\nWhile the difference may seem subtle at first, it leads to significant difference in practice, as we will show below.\n\n**[Theoretical properties]**\nThe algorithms in [1,a] are only applicable to indexable bandits due to their reliance on Eq. (6) in [a]. \n\nIn contrast, as we stated in the paper, our Theorem 2 makes no assumption on the indexability of bandits. Hence, DeepTOP can still be employed for bandits that are not indexable, and is guaranteed to find a locally optimal threshold policy for non-indexable bandits. \n\n**[Simulation performance]**\nWe have implemented a neural-network extension of the algorithms in [1,a], which is called \"Neural WIBQL\" in Fig. 3 in our paper. Neural WIBQL uses the same neural network architecture and the same critic network update as our DeepTOP. The only difference is in the update of actor. In each update, Neural WIBQL updates the Whittle index of each state according to Eq. (8) in [a]. Our DeepTOP performs one single update according to Eq. (9) in our paper. As a result, Neural WIBQL is much slower than DeepTOP. For example, to run the setting in Fig. 
3(a), Neural WIBQL takes 58 minutes and DeepTOP only takes 14 minutes.\n\nIn addition, it can be seen that DeepTOP significantly outperforms Neural WIBQL. This shows that our algorithm is both more time-efficient and more sample-efficient than the algorithms in [1,a].\n\n", " In this paper, the problem of learning the optimal threshold policy for Markov Decision Processes (MDPs) is considered. Using the monotonicity property of threshold policies, the authors establish a simple policy gradient formula for the class of threshold policies. Using that, an off-policy actor-critic algorithm (DeepTOP) is proposed to learn the optimal policy in a situation where the optimal policy is known to possess a threshold structure. Moreover, the equivalence between obtaining the whittle index in a Restless Multi-armed Bandit (RMAB) problem and optimal threshold policy in an MDP is established. Following that, the DeepTOP algorithm is extended to the RMAB setting. Extensive simulation results are presented to demonstrate that the proposed algorithms outperform other algorithms in the literature. The paper is well-written, and the claims appear to be correct. Extensive simulations have been performed to demonstrate the efficacy of the proposed approaches. However, there are several concerns as stated below.\n1.\tThe idea of viewing the Whittle index policy for RMABs as an optimal threshold policy is already developed in [1] as stated in the paper. Another important work in this direction is \n[a] Robledo, F., Borkar, V., Ayesta, U., & Avrachenkov, K. (2022). QWI: Q-learning with Whittle Index. ACM SIGMETRICS Performance Evaluation Review, 49(2), 47-50.\nSee Equation (8) in the paper above. \nHow is the proposed algorithm in this paper different from the schemes described in these papers? \n2.\tIt is assumed that $\\lambda_t\\in[-M,M]$ for all $t$ and the states can be numbered. This essentially translates into a finite state space. In the following paper, can’t Theorem 1 be derived as a corollary of the policy gradient theorem in \n[b] Marbach, P., & Tsitsiklis, J. N. (2001). Simulation-based optimization of Markov reward processes. IEEE Transactions on Automatic Control, 46(2), 191-209.\n3.\tIn the proof of Theorem 1, in the first step, the rationale behind swapping integration and summation is not clear. It needs to be explicitly stated in the paper. \n4.\tIn Algorithm 1, why don’t the authors consider a decreasing $\\epsilon$? Does constant $\\epsilon$ guarantee convergence to the optimal solution?\n5.\tThe deep threshold optimal policy computation of RMAB in Section 5 appears to be a straightforward extension of the policy gradient theorem in Theorem 1 because of the equivalence between obtaining the whittle index in a Restless Multi-armed Bandit (RMAB) problem and optimal threshold policy in an MDP. However, since this idea was already introduced in [1,a], the contribution in Section 5 is limited. \n6.\tThe authors have performed extensive simulations on various problems such as electric vehicle charging problem, inventory management, and make-to-stock problem. By leveraging the monotone property, DeepTOP performs better than DDPG and TD3. However, the explanation regarding how it outperforms SALMUT is not clear. 
Although DeepTOP employs the threshold policy gradient directly, if you take the policy gradient algorithm in [b] and encode the threshold policy information in the gradient of the transition probability matrix, is that not the same as the threshold policy gradient theorem (Theorem 1 in the paper)? The authors are requested to explain this point.\n Overall, although the authors’ effort in exploiting the information regarding the existence of the threshold-based optimal policy in the learning framework is appreciable, the contribution regarding extension towards RMAB needs to be better highlighted. Moreover, how the policy gradient theorem (Theorem 1) presented in the paper is a non-trivial extension of the policy gradient theorem in [b] within the context of threshold policies, needs to be established clearly. ", " The paper consider a subset of dynamic problems, in which the optimal policy is a threshold-policy. The authors use this attribute to formulate tailored off-policy actor-critic algorithms, for both MDPs and RMABs which are gradient-based, so can utilize neural networks. They empirically compare their method to SOTA methods in three MDP domains and three RMAB parametrizations, the results show that their method, DeepTOP, performs better than the compared methods in all the experiments. Overall, I think it is a good paper, which contributes to the community. But I do have concerns regarding the empirical experiments, I think that the environments are rather toy problems, and since DeepTop incorporate neural-networks, its main advantage over tailored analytical methods is in complex environments.\n\n# Strengths:\nThe performance of DeepTOP compared to the other methods is impressive. I think that while being limited, threshold policies are indeed interesting. The theorems are important contributions as well. \n\n# Weaknesses:\n1. The main contribution of the paper is an empirical method, and the experiments conducted in simple domains. I think that domains that are more challenging should be considered.\n2. While being important, the theorems are minor, hence they do not compensate lack of experiments\n\n# Minor Comments:\nThe last part of section 6 seems like it addressed to the reviewers (lines 220-221). Rephrase. 1. Would you be able to run experiments in more complex domains? \n2. Is it possible to give a similar analysis for any policy which is a fixed, deterministic function of some scalar $\\lambda_t$ and an output of a neural-network? might be a future direction. The authors addressed the limitations of their work.", " The paper presents an algorithm to compute an optimal threshold policy in MDPs and RMABs with state information composed of a scalar state and a vector state. The authors propose to learn a mapping from the vector state to a scalar number to compare with the scalar state. This function is used to construct the threshold policy where the action (0 or 1) only depends on the comparison between the scalar state and the produced scalar value. In order to learn the mapping used for threshold policy, the authors use an actor-critic algorithm, where the scalar mapping and the associated threshold policy are used as the actor function, and a neural network is used as the action-value function (Q-function) as the critic function. The losses of the actor and critic functions are defined as the standard actor-critic work using the expected performance and the Bellman error. 
In this paper, the authors compute the derivative of the expected performance and identify a simple expression of the actor's derivative. This is used to perform actor-critic gradient updates more efficiently.\n\nIn the RMAB domain, the same algorithm and derivative simplification can be applied to the RMAB domain. RMABs is a special case of multiple MDPs with scalar and vector states. Specifically, the objective is defined as the integral of all activation cost $lambda$, assuming Whittle index exists and thus there exists a threshold policy that is optimal for all activation cost. This makes finding the optimal threshold policy equivalent to finding the Whittle index (if exist). \n\nThe proposed method is evaluated on three domains and compared with other RL-based baselines, including general RL algorithms (DDPG, TD3, SALMUT) in the MDP setting, and Q-learning based (LPQL, WIBQL) and Whittle index based (NeurWIN) in the RMABs setting. My interpretation of why the proposed algorithm can outperform the general RL-based algorithms is that the proposed method simplifies the search space to threshold policy, while in contrast the general RL algorithms may use more complex models (e.g., neural networks) to represent the actor function. This advantage makes the proposed algorithm find the optimal policy more efficient but also restricted more to threshold policy. It may not work when threshold policy is not optimal. Specifically, in the context of MDPs considered in this paper, it is possible that threshold policy is not optimal. In those cases, general RL algorithms may still be needed.\n\nin the RMABs context, the proposed algorithm outperforms other Q-learning based algorithm without using actor-critic algorithm. It is known that actor-critic can improve the performance of the RL challenges. I believe this is the main advantage of the proposed algorithm compared to other baselines. ## Strengths\n- The paper is well-presented and easy to follow. I appreciate the clarity of the presentation and idea.\n- The simplified expression of the actor derivative (expected reward derivative) is new.\n- Thorough evaluations and experiments\n\n--------\n## Weaknesses\n- The novelty is incremental. The main contribution is based on the use of threshold policy and simplification of the policy gradient.\n- The MDPs and RMABs domains are similar with no major differences.\n- The threshold policy considered in the paper can only handle one single scalar, which limits the applicability of the threshold policy.\n- [Minor]The proposed policy only works when threshold policy is good enough. Otherwise, a more expressive policy parameterization is still needed in order to achieve better performance. ## Comments\n- I think there is a missing integral over $\\lambda'$ in the definition of Q function in Equation (1).\n\n## Questions\n- I understand that finding the optimal policy in MDPs and RMABs are both challenging due to the PSPACE hardness. But finding the Whittle index in RMABs may be polynomial time solvable when there are only finitely many states [34, 35], where given indexability, [34] uses the definition of Whittle index and the Bellman equation to form a LP to solve in polynomial time, and [35] leverages the threshold policy to construct a faster algorithm for a specific type of RMABs problems. Is it possible to directly compute the Whittle index using similar LP method without using RL or neural networks? 
If not, what is the major difficulty of computing the Whittle index directly in your case?\n- Why did you choose to use a specific quadratic form of the reward function in the RMAB simulation in Section 6.3? Does the reward function structure affect the convergence of the actor-critic gradient descent update?\n\n\nReferences:\n[34] Qian, Yundi, Chao Zhang, Bhaskar Krishnamachari, and Milind Tambe. \"Restless poachers: Handling exploration-exploitation tradeoffs in security domains.\" In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, pp. 123-131. 2016.\n[35] Mate, Aditya, Jackson Killian, Haifeng Xu, Andrew Perrault, and Milind Tambe. \"Collapsing Bandits and Their Application to Public Health Intervention.\" Advances in Neural Information Processing Systems 33 (2020): 15639-15650.\n\n ## Limitations\n- [Stated by the authors] The algorithm is only applicable to MDPs that admit a threshold policy.\n- The proposed algorithm only works with threshold policy with a single scalar value.\n\n## Negative societal impact\nN/A", " The paper considers threshold policies problem. The authors show that the gradient for these problems has a simple expression. The authors also propose a rephrasing of Whittle index policies for restless multi-armed bandits in the form of threshold policy and match their algorithm to this scenario. The authors support their results with simulations. Originality\nTo the best of my knowledge the results are novel.\n\nQuality\nThe theoretical results do not seem very surprising, but I did find them interesting and useful. \n\nClarity\nThe paper is written very clearly. \nThe restrictions on V seems to be quite drastic (discrete set at line 66, distinct threshold values in Theorem 1) so I think a short explanation is in order (is this just a technical restriction or a real pain?, why the restriction exists?)\n\nSignificance\nThreshold policies seem to match a rather small range of problems. The main limitations are two actions policies and the policy structure. Subsequently, the significance of the paper is highly limited just by tackling this small range. In its niche, I think the paper gives a very useful insight, even if its not very sophisticated. In addition, with some thought the core idea might extend to more general scenarios. For example - other cases where the gradient can be calculated easily or other problems that can be solved to threshold policies. The paper is pretty straight-forward and clear. I have no questions or suggestions. The paper has no potential negative societal impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "5oI5LiEo1Hu", "-zsIjXX3B7o", "dasj9-QtSe2", "eLB9DSYDy1", "xHfCq08hMjb", "MhxaGhZAmsr", "SteInCClNqx", "A1pXy6wyFil", "yWi5p3H1JJd", "rN4GoSrG_N9g", "zuKQjTy8M3s", "nips_2022_Wk-4Tp-gPpv", "nips_2022_Wk-4Tp-gPpv", "nips_2022_Wk-4Tp-gPpv", "nips_2022_Wk-4Tp-gPpv" ]
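A note on the threshold-policy-gradient discussion in the DeepTOP record above: the rebuttals repeatedly refer to a simplified actor update (their Eq. (9)) without showing it. The sketch below is a minimal, hypothetical illustration of such an update, not the authors' implementation — the advantage-at-the-threshold form of the gradient, the `ThresholdActor` network, and the `critic(lmbda, v, a)` interface are assumptions made here for illustration only.

```python
# Hypothetical DeepTOP-style actor update (illustrative sketch, not the paper's code).
# Assumed simplified policy gradient:
#   grad_phi J ~ E[ grad_phi mu_phi(v) * (Q(lam, v, 1) - Q(lam, v, 0)) ]
# evaluated at lam = mu_phi(v), with the critic treated as fixed.
import torch
import torch.nn as nn

class ThresholdActor(nn.Module):
    """Maps the vector state v to a scalar threshold mu_phi(v)."""
    def __init__(self, v_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(v_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, v: torch.Tensor) -> torch.Tensor:   # v: (batch, v_dim)
        return self.net(v).squeeze(-1)                     # (batch,) thresholds

def actor_loss(actor, critic, v_batch):
    """critic(lmbda, v, a) -> Q-value is an assumed interface; a is 0/1."""
    lam = actor(v_batch)                                   # lam = mu_phi(v)
    q1 = critic(lam, v_batch, torch.ones_like(lam))        # Q(lam, v, 1)
    q0 = critic(lam, v_batch, torch.zeros_like(lam))       # Q(lam, v, 0)
    # With (q1 - q0) detached, the gradient of this loss w.r.t. phi is
    # -(q1 - q0) * grad_phi mu_phi(v), i.e. the assumed simplified update:
    # raise the threshold where activating is valuable, lower it otherwise.
    return -(lam * (q1 - q0).detach()).mean()

# The threshold policy's action for a scalar state lmbda would then be
# 1(mu_phi(v) > lmbda), e.g. action = (actor(v_batch) > lmbda_batch).long()
```

Under this reading, Neural LPQL/WIBQL-style baselines would share the same critic and differ only in how the actor is updated, which is consistent with the comparison described in the author responses above.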
nips_2022_DRckHIGk8qw
GAMA: Generative Adversarial Multi-Object Scene Attacks
The majority of methods for crafting adversarial attacks have focused on scenes with a single dominant object (e.g., images from ImageNet). On the other hand, natural scenes include multiple dominant objects that are semantically related. Thus, it is crucial to explore designing attack strategies that look beyond learning on single-object scenes or attack single-object victim classifiers. Due to their inherent property of strong transferability of perturbations to unknown models, this paper presents the first approach of using generative models for adversarial attacks on multi-object scenes. In order to represent the relationships between different objects in the input scene, we leverage upon the open-sourced pre-trained vision-language model CLIP (Contrastive Language-Image Pre-training), with the motivation to exploit the encoded semantics in the language space along with the visual space. We call this attack approach Generative Adversarial Multi-object Attacks (GAMA). GAMA demonstrates the utility of the CLIP model as an attacker's tool to train formidable perturbation generators for multi-object scenes. Using the joint image-text features to train the generator, we show that GAMA can craft potent transferable perturbations in order to fool victim classifiers in various attack settings. For example, GAMA triggers ~16% more misclassification than state-of-the-art generative approaches in black-box settings where both the classifier architecture and data distribution of the attacker are different from the victim. Our code is available here: https://abhishekaich27.github.io/gama.html
Accept
The authors propose the first multi-object generative attack, GAMA, which utilizes the vision-language model CLIP as an attacker's tool in the training of the generator to enhance transferability across different data distributions. All four reviewers recognize that the paper is well written and easy to follow. The presented results are also promising. Most importantly, the Generative Adversarial Multi-object scene Attack is a good direction for further study. Since all four reviewers consistently recommend acceptance with positive comments, the AC made a decision of acceptance.
train
[ "QyfrIm3BQsQ", "CxD35dDmZJ", "3mXohj1tR32", "f57evnvBH0K", "Gc3-B6yxEke", "1MSqlPHSzsD", "yYOOct9j0HC", "M7fbwTR7F7K", "2UlfMjkTtu", "Sqn9nNWBfIZ", "OfEEY6S44KP", "JFtHaGgFVKO", "SykELBQK34F", "mUBB6Yiyouj", "qYPh2wQ3Lw1" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking the time to read our response. We are happy to see your concerns are addressed.\n\nBest Wishes,\n\nAuthors", " Thanks for the response from the authors. It is great to see the method performs well on single-object datasets. My concerns are addressed.", " Thank you for taking the time to read our rebuttal. Please feel free to further raise your concerns in case you have any.\n\n----\n- \"***It would be great to include the exhaustive search of the mid-level layer of TAP in the discussion.***\"\n\n Thank you for the suggestion. We have included a discussion on exhaustive search of the mid-level layer of TAP in the Supplementary Material (L59-L76). \n----\n- \"***Ideally, having a metric to quantify the computational complexity of these methods will clarify things much better.***\"\n\n The search for the optimal mid-layer (that gives the best attack results) requires searching over each block of *K* layers (which is around an average of 5 layers [12]) for each surrogate model. Hence to find the best layer to train a perturbation generator for a particular model as per [12], the computation time cost will be *KN* GPU hours (where *N* is the total training time (in GPU hours) for one layer) for **every** training data distribution and for **every** surrogate model. Furthermore, our analysis shows that the resultant optimal layer might not be the best one when the training data distribution varies (as shown in the analysis in our paper). On the contrary, our method doesn’t depend on such exhaustive search for best attack results.", " It would be great to include the exhaustive search of the mid-level layer of TAP in the discussion. Ideally, having a metric to quantify the computational complexity of these methods will clarify things much better.", " Hello Reviewers,\n\nThank you again for your helpful and insightful comments. We would be happy to address any further concerns you have based on our rebuttal responses. \n\n\nBest wishes, \n\nAuthors", " 3. **Fairness of baselines:** Since our method is the first that considers generative approaches for crafting attacks on multi-object scenes, we need to adapt existing methods to be able to compare fairly. We have two options.\n - *Option 1*: First, we could use existing methods trained on single-object images and use them to craft attacks on multi-object images. This is obviously unfair since these methods were never aware of the characteristics of multi-object images.\n - *Option 2*: Second, we can make suitable adjustments to the prior attack algorithms (see L233-236) to train with multi-object scenes and ask them to create perturbations on multi-object scenes. This is a fairer approach than *Option 1* as the attacks now use multi-object images during training their perturbation generator. Results of this approach are shown in Tables 2-7 in the main paper.\n\n Hence, it is fair to compare prior generative attacks under *Option 2* as done in our paper.", " 1. **Response to Question 1:** We believe a possible explanation for this phenomenon is: Existing COCO image annotations have been shown to be extremely noisy (Section 3.4 of [A], also see [B]), and the multi-object classifiers are trained on these original (noisy) annotations. As a result, the image features computed by these classifiers have a mismatch with the image features computed by CLIP (that do not use these image annotations in any manner). Hence, this is probably creating a performance bottleneck. 
\n\n [A] “TIDE: A General Toolbox for Identifying Object Detection Errors”, ECCV 2020\n\n [B] “How I found nearly 300,000 errors in MS COCO” https://medium.com/@jamie_34747/how-i-found-nearly-300-000-errors-in-ms-coco-79d382edf22b\n----\n2. **Response to Question 2:** Based on the Reviewer’s suggestion, we evaluated the CLIP (as a “zero-shot prediction” model) on the perturbed images and computed the top two associated labels as suggested by CLIP in Figure 2 in Supplemental Material. We can observe that the perturbations change the labels associated with the clean image.", " 1. **Response to W1/Q1:** \n - We attribute this performance gap to the optimal mid-level layer (from the surrogate model, chosen to compute the learning loss) that is being searched manually for TAP as pointed out by the authors of TAP (see Limitations in their paper). This manual search (*a*) is extremely time-consuming and not scalable as it has to be done for every combination of surrogate model and data distribution and (*b*) is data distribution specific as manual layers (chosen for ImageNet in [12]) do not yield the same level of attack performance when trained with different data distributions (see L265-L270). Our mid-layer is chosen based on the embedding size of CLIP: e.g. if the embedding size of the CLIP encoder is 512, we select the layer from the surrogate model that outputs 512 dimension features. \n - The best attack method that doesn’t need such a manual layer search is CDA [11]. But we convincingly outperform them in all settings as shown in Table 2-7. \n----\n2. **Response to W2/Q2:** Thank you for the suggestion. Due to the limitation of space in the main paper, we provided the feature visualization comparison with TAP [12] for both MS-COCO and Pascal VOC for cross-domain black-box attacks in Figure 1 of Supplementary Material.\n----\n3. **Response to W3:** Thank you for pointing these out. We have added and discussed them in the revised version (L88 and L105-106, New Ref 1 is [61], NewRef 2 is [60]).\n\n\n", " 2. **Comparison when trained with a single-object dataset:** Based on the Reviewer’s suggestion: \n - We analyze the average performance on the single-object dataset ImageNet for GAMA and compare it with two SOTA generative attacks: CDA [11] - no manual optimal layer search from the surrogate model is required for training the perturbation generator, and TAP [12] - manual optimal layer search required for best attack performance. \n - We train on ImageNet with DenseNet169 as a surrogate model. We then evaluate the attacks for six victim models for datasets ImageNet/Pascal-VOC/MS-COCO and one victim model for datasets CIFAR10/CIFAR100. Lower is better.\n\n| | ImageNet | Pascal-VOC | MS-COCO | CIFAR10 | CIFAR100 |\n| --- | --- | --- | --- | --- | --- |\n| CDA[11] | 33.65 | 36.22 | 26.98 | 85.01 | 54.71\n| TAP[12] | **09.46** | 24.84 | 17.51 | 82.35 | 49.38\n| Ours | 21.19 | **20.89** | **14.50** | **75.49** | **43.60** \n \n Our results are better than CDA on all datasets and better than TAP on all datasets except Imagenet, which is explained below. \n\n- Other than ImageNet, our results on all the datasets are better than TAP (average TAP/Ours- 50.73%/38.62%). Our results are poorer than TAP on ImageNet because TAP manually searches for the optimal layer from the surrogate model to train the generator (see *Limitation* in [12]). Such a search is very time-consuming, impractical, and clearly not scalable. 
Our method doesn’t rely on manually finding such an optimal layer as our mid-layer is decided by looking at the embedding size of CLIP features. Next, directly using TAP’s suggested layer is not possible as the embedding size doesn’t match that of CLIP, and would require us to introduce embedding modifications (e.g. PCA/tSNE) leading to an unreasonable increase in training time. Finally, TAP shows degradation in performance when distribution changes from ImageNet (as shown in our paper) and would still require a manual search for all the different combinations of surrogate model and data distributions we have explored in this work.\n\n- If we do not consider the manual search of an optimal layer from the surrogate model to train the generator, then the proper baseline on ImageNet would be CDA [11]. The average attack performance (over six models) for ImageNet is: CDA/Ours = 33.65%/21.19%, and we convincingly outperform them on all other settings.\n----\n2. **Typos:** Thank you for pointing these out. We have corrected them in the revised version. \n----\n3. **Normalization of embeddings:** We are using the embeddings directly from the image/text encoder. Yes, they are normalized before using them in the proposed loss functions. We have highlighted this in the revised version (L213).\n", " 1. **Algorithmic contribution:** The overwhelming majority of work in generative adversarial attacks [10-13] has been on single-object images. Recently, there have been methods [26-30] for attacking multi-object images which are more natural or better representatives of real-world scenes. However, all these works are image-specific approaches. Our work proposes the first method for attacks on multi-object images in a generative setup, which is the main contribution of the paper. Our approach for crafting perturbations builds on pre-trained DNNs but incorporates open-source vision-language models like CLIP to encode the relationships between multiple objects in the scene *via* language derivatives. The effectiveness of our approach does not arise because we use CLIP. Rather, we use CLIP because we work on far more complex images than existing approaches (multi-object vs single-object images) and we need a tool like CLIP to understand the relationships between the objects.\n More specifically, the use of CLIP in our method is non-trivial. \n\n - In order to understand the “contextual information” relationships between the different objects in the multi-object scenes, we leverage CLIP’s inherent ability to encode text into features for representing this “context information” in the multi-object images through language derivatives. \n - As CLIP is trained on ~400 million “image-text pairs”, it aligns the context encoded in language space with context from image space. Our novelty is in exploiting this aligning property to our advantage and **misaligning** the perturbed image w.r.t. the context captured in text features.\n----\n2. **Comparison when trained with a single-object dataset:** Regarding Table 1, we mean that it can handle a data distribution (Pascal-VOC and MS-COCO) containing both multiple object images and single object images. Based on the Reviewer’s suggestion: \n - We analyze the average performance on the single-object dataset ImageNet for GAMA and compare it with two SOTA generative attacks: CDA [11] - no manual optimal layer search from the surrogate model is required for training the perturbation generator, and TAP [12] - manual optimal layer search required for best attack performance. 
\n - We train on ImageNet with DenseNet169 as a surrogate model. We then evaluate the attacks for six victim models for datasets ImageNet/Pascal-VOC/MS-COCO and one victim model for datasets CIFAR10/CIFAR100. Lower is better.\n\n| | ImageNet | Pascal-VOC | MS-COCO | CIFAR10 | CIFAR100 |\n| --- | --- | --- | --- | --- | --- |\n| CDA[11] | 33.65 | 36.22 | 26.98 | 85.01 | 54.71\n| TAP[12] | **09.46** | 24.84 | 17.51 | 82.35 | 49.38\n| Ours | 21.19 | **20.89** | **14.50** | **75.49** | **43.60** \n \n Our results are better than CDA on all datasets and better than TAP on all datasets except Imagenet, which is explained below. \n\n- Other than ImageNet, our results on all the datasets are better than TAP (average TAP/Ours- 50.73%/38.62%). Our results are poorer than TAP on ImageNet because TAP manually searches for the optimal layer from the surrogate model to train the generator (see *Limitation* in [12]). Such a search is very time-consuming, impractical, and clearly not scalable. Our method doesn’t rely on manually finding such an optimal layer as our mid-layer is decided by looking at the embedding size of CLIP features. Next, directly using TAP’s suggested layer is not possible as the embedding size doesn’t match that of CLIP, and would require us to introduce embedding modifications (e.g. PCA/tSNE) leading to an unreasonable increase in training time. Finally, TAP shows degradation in performance when distribution changes from ImageNet (as shown in our paper) and would still require a manual search for all the different combinations of surrogate model and data distributions we have explored in this work.\n\n- If we do not consider the manual search of an optimal layer from the surrogate model to train the generator, then the proper baseline on ImageNet would be CDA [11]. The average attack performance (over six models) for ImageNet is: CDA/Ours = 33.65%/21.19%, and we convincingly outperform them on other all settings.\n\n----\n\n\n", " We are very thankful to the reviewers for their thorough reviews and encouraging comments. We have uploaded the revised version of the main manuscript and supplementary material with the following changes (highlighted in blue):\n\n1. Typos pointed out by Reviewer *pwfE* are corrected.\n2. L213: Statement added to indicated embeddings are normalized before loss functions are computed as suggested by Reviewer *pwfE*.\n3. NewRef 1 and NewRef 2 suggested by Reviewer *zurs* are added and discussed in the Related Works section.\n4. L53-L57: Added paragraph and Figure 2 in Supplementary Material for response to Reviewer *SbKy*. \n \nWe can further incorporate other changes that the reviewers might suggest based on the post-rebuttal discussion. We now provide detailed responses to each reviewer's queries. We look forward to addressing their subsequent comments.\n\n----\n\n*Post-rebuttal edits to manuscript*:\n1. A discussion on mid-layer selection from surrogate model (in comparison to prior works) has been added to Supplementary Material (L59-76) as suggested by Reviewer *zurs*.\n\n\n", " This paper proposes the GAMA attack, a generative approach to generating adversarial examples. The proposed method incorporates the vision-language model CLIP in the training of the generator. Experiments demonstrate the effectiveness of the proposed method in various attack settings. 
### Strengths \n* The paper is well-written and easy to follow.\n* Experiments demonstrate the effectiveness of the proposed method in various attack settings, including both the white-box and the black-box, even under the setting with different datasets and tasks.\n\n### Weaknesses\n* The major concern is that the algorithmic contribution of the proposed method is limited. The main contribution of the proposed method is incorporating the vision-language model CLIP in the training of the generator, which is also the main concern. Previous methods only train the generator with a pre-trained DNN and the corresponding dataset. Instead of a pre-trained model, the proposed method requires access to the CLIP model. The CLIP extracts knowledge from ~400 million image-text pairs. Thus, the proposed method actually leverages much more information in training the generator, compared with previous generative approaches. Thus, the effectiveness of the proposed method is obvious, since it leverages much more information. From this perspective, the algorithmic contribution of the proposed method seems limited.\n\n\n * Does the GAMA attack perform well on the single-object dataset like ImageNet? In Table 1, it is said that the GAMA attack can analyze both attacking scenarios with input scenes that contain multiple objects or a single object. However, in experiments, the GAMA attack is only trained on PASCAL-VOC and MS-COCO, which are scenes containing multiple objects. So does the GAMA attack perform well when training on the single-object dataset like ImageNet?\n* How to conduct fair comparisons with baseline methods, which aim to handle attacking scenarios with input scenes that contain a single object? As mentioned in Table 1, previous methods only analyze single-object attacking scenarios. However, in experiments, all previous methods are also trained on PASCAL-VOC and MS-COCO, which are scenes containing multiple objects. Is it fair to compare with baseline methods in a different setting from their original design? The authors adequately addressed the limitations and potential negative social impact of their work.", " This paper introduces the first multi-object generative attack, GAMA, which utilizes a pre-trained CLIP model as an attacker tool to enhance the transferability across different data distributions. Extensive experiments in this paper show that GAMA can achieve state-of-the-art transferability on various black-box settings when training on multi-label datasets. Besides, GAMA also shows its superior efficacy against various defense methods compared with baselines. Pros:\n\n1. This paper first proposes the problem of multi-object scene based adversarial attack. \n\n2. Introducing joint vision-and-language pre-trained models, such as CLIP, to adversarial attacks is interesting. Specifically, the authors generate multi-class text prompts and leverage the semantics relationship underlying the text representations to tackle the multi-object attack problem. \n\n3. When training on multi-label datasets, the proposed method outperforms prior works in terms of transferability on various black-box settings.\n\n4. This paper is clearly written and very easy to follow. \n\nCons:\n\n1. This paper only considers the settings of training on multi-object datasets in the experiments, such as Pascal-VOC and MS-COCO, and shows its superiority compared with prior works. Those prior works focus on single-object scenes and the authors adapted them to multi-object scenes. 
However, the proposed method should also be able to handle single-object scenarios, while the experiments of training on single-object scenes, such as ImageNet, and transferring to other single-object datasets, as done in TAP [12], BIA [13], are not included in this paper. I am curious about how GAMA performs compared with those prior works on this standard setting.\n\n2. Typos: L159 and L201: $\\mathcal{L}\\_{\\text{txt}}$ -> $\\mathcal{L}\\_{\\text{img}}$.\n\n 1. See Con 1), How does GAMA perform and transfer compared with prior works on attacking single-object scenes? This result should play an important role in judging the significance and applicability of this paper to the community. If GAMA can still beat those baselines that focus on single-object attacks then in the future one could directly choose to use GAMA to perform attacks without caring about the number of objects in the image.\n\n2. For $\\rho\\_\\text{img}$ and $\\rho\\_\\text{txt}$, are you using the joint image/text embeddings or the embeddings that directly come from the image/text encoder? Do you use any normalization on $z$, $\\hat{z}$, $\\rho\\_\\text{img}$, and $\\rho\\_\\text{txt}$? If not, how do you ensure that $\\hat{z}$, $\\rho\\_\\text{img}$, and $\\rho\\_\\text{txt}$ are within the same feature space so that you can contrast them with each other, considering that the surrogate classifier and CLIP image/text encoder are using different architectures. The authors adequately discussed the limitations and potential societal impact in the last section of the main paper.", " This paper studies the problem of adversarial attacks on multi-object scenes by leveraging a pre-trained vision language model. More specifically, it leverages the joint embedding space learned from cross-modal data to provide self-supervised signals for generating adversarial perturbations on the image domain. Experiments have been conducted on multiple image benchmarks including single-object benchmarks such as PASCAL-VOC and multi-object benchmarks such as MS-COCO. Strengths:\n* [S1] This paper presents a clean solution to generating multi-object attacks. The quantitative results demonstrate the strength of the proposed attacks when compared to state-of-the-art attacks. The reviewer feels that the paper could possibly open up a new research area of using foundation models to craft adversarial attacks and defenses in the future.\n\nWeaknesses:\n* [W1] In Table 3 and Table 5, the proposed GAMA is less effective than TAP [Ref 12] when applied to the VGG19 network. It would be helpful to provide some explanations and high-level insights on this.\n* [W2] Although the quantitative experiments are thorough and solid enough, the qualitative analysis is definitely insufficient (Figure 5). As a scientific study, it would be helpful to understand how GAMA explores the perturbation space compared to other existing attacks (e.g., perturbation/feature visualization).\n* [W3] Missing discussions on mid-level or semantic-level adversarial attacks using generative models ([NewRef1-2]).\n\nReferences \n* [NewRef 1] SemanticAdv: Generating Adversarial Examples via Attribute-conditioned Image Editing. Qiu et al., In ECCV 2020.\n* [NewRef 2] Unrestricted Adversarial Examples via Semantic Manipulation. Bhattad et al., In ICLR 2020. * [Q1] Explain why GAMA is less effective than TAP in some cases as mentioned in [W1]. \n* [Q2] Provide some findings and analysis of the GAMA perturbations (e.g., raw input level and feature level). 
Limitations have been mentioned in the paper.", " This paper proposes the first generative adversarial attack specifically for multi-object scene classification. It makes use of a CLIP model to provide additional losses during training. Good performance is achieved across a variety of tasks and models in black box and white box settings. Strengths:\n- The paper is well written and the explanation of the method is relatively easy to understand given its complexity.\n- The results are convincing and demonstrate the effectiveness of the method on various tasks, datasets and models, and in both black and white box settings.\nWeaknesses:\n- The method does not outperform the baselines as consistently when trained on MSCOCO as it does on Pascal-VOC 1. Can you explain why you think the method works better when trained on Pascal-VOC vs MSCOCO?\n2. Have you evaluated the adversarial images on the clip model? The discussion of limitations and societal impacts is good." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 3 ]
[ "CxD35dDmZJ", "2UlfMjkTtu", "f57evnvBH0K", "M7fbwTR7F7K", "nips_2022_DRckHIGk8qw", "Sqn9nNWBfIZ", "qYPh2wQ3Lw1", "mUBB6Yiyouj", "SykELBQK34F", "JFtHaGgFVKO", "nips_2022_DRckHIGk8qw", "nips_2022_DRckHIGk8qw", "nips_2022_DRckHIGk8qw", "nips_2022_DRckHIGk8qw", "nips_2022_DRckHIGk8qw" ]
nips_2022_NjP18IbKKlX
RecursiveMix: Mixed Learning with History
Mix-based augmentation has been proven fundamental to the generalization of deep vision models. However, current augmentations only mix samples from the current data batch during training, which ignores the possible knowledge accumulated in the learning history. In this paper, we propose a recursive mixed-sample learning paradigm, termed ``RecursiveMix'' (RM), by exploring a novel training strategy that leverages the historical input-prediction-label triplets. More specifically, we iteratively resize the input image batch from the previous iteration and paste it into the current batch while their labels are fused proportionally to the area of the operated patches. Furthermore, a consistency loss is introduced to align the identical image semantics across the iterations, which helps the learning of scale-invariant feature representations. Based on ResNet-50, RM largely improves classification accuracy by $\sim$3.2% on CIFAR-100 and $\sim$2.8% on ImageNet with negligible extra computation/storage costs. In the downstream object detection task, the RM-pretrained model outperforms the baseline by 2.1 AP points and surpasses CutMix by 1.4 AP points under the ATSS detector on COCO. In semantic segmentation, RM also surpasses the baseline and CutMix by 1.9 and 1.1 mIoU points under UperNet on ADE20K, respectively. Codes and pretrained models are available at https://github.com/implus/RecursiveMix.
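The abstract above compresses the whole training step into a single sentence, so a concrete sketch may help readers who land on this record. The function below is an illustrative reconstruction only, not the authors' released code: the argument names, the alpha=0.5 and omega=0.1 defaults, the top-left paste location, and the use of full-image logits for the consistency term are all simplifying assumptions (the paper supervises the pasted region through RoI alignment and an auxiliary head, as the author responses in this thread describe). The U[0, alpha] resize ratio follows the authors' own description in the discussion.

```python
import torch
import torch.nn.functional as F

def recursive_mix_step(model, x, y_soft, history, alpha=0.5, omega=0.1):
    """One illustrative RecursiveMix-style step.

    x: (B, 3, H, W) images; y_soft: (B, C) soft labels for the current batch;
    history: (images, labels, logits) triplet kept from the previous step, or None.
    """
    _, _, H, W = x.shape
    if history is not None:
        x_prev, y_prev, logits_prev = history
        lam = alpha * torch.rand(()).item()            # resize ratio drawn from U[0, alpha]
        h, w = max(1, int(H * lam)), max(1, int(W * lam))
        x = x.clone()
        # Resize the previous (already mixed) batch and paste it into a corner.
        x[:, :, :h, :w] = F.interpolate(x_prev, size=(h, w), mode="bilinear",
                                        align_corners=False)
        area = (h * w) / (H * W)
        y_soft = (1.0 - area) * y_soft + area * y_prev  # labels fused by pasted area
    logits = model(x)
    # Soft-label cross-entropy on the area-weighted labels.
    loss = -(y_soft * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    if history is not None:
        # Consistency term: agree with the network's own detached prediction from
        # the previous step.  KL against a detached soft target equals cross-entropy
        # up to an additive constant, so their gradients coincide.
        target = F.softmax(logits_prev, dim=1).detach()
        loss = loss + omega * F.kl_div(F.log_softmax(logits, dim=1), target,
                                       reduction="batchmean")
    new_history = (x.detach(), y_soft.detach(), logits.detach())
    return loss, new_history
```

The detached KL target in the last term is also why, as argued later in this thread, swapping the KL divergence for cross-entropy on the same soft target would not change the gradients.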
Accept
The manuscript has been reviewed by five reviewers with ratings of 4, 5, 6, 7, 7. The reviewers are generally happy with the contributions, novelty, and experimental validation, and most recommended acceptance. The AC agrees with the majority vote and would like to recommend acceptance. Congratulations!
train
[ "TqJI5jxnrkW", "n8N1RPT6bg", "iYk5iWcDxYc", "2Zkcxg9zL5o", "DvVYBojwvOS", "iN-PRT1Ju_T", "jTfOR9eypO", "VeHVVEm1XJ", "AhPMzMAaSZ", "rA-BpWlq17", "wquMxksVPLm", "1jWiaCzAw9J", "jJZ4y760R2E", "bnP5qpblAct", "9DE_H4KUnG2", "uSJSXFHPgh", "_1p5Mi4FO64" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response, which has addressed my concerns. Thus, I won't change my rating.", " After careful consideration, I decided to increase my rating by one score. \nI hope that our discussion could be carefully resolved in the final revision.\n\nBest", " Thanks for the clarification. I understand the \"recursive mix\" allows the efficient form of self-distillation. But this point is never mentioned in the paper until I raise it. I suggest the authors provide a brief discussion in the paper. \n\nBest Wishes", " **1) Did you try to replace the KL divergence with cross-entropy on the one-hot label?** \\\nIn fact, replacing the KL divergence with cross-entropy on the one-hot label may not be optimal for supervising the RoI Aligned region. Because this area contains multiple classes accumulated from the recursive operation on images, rather than a one-hot label. Therefore, a multi-label target is necessary for optimization. Further, in our case, optimizing the KL divergence is identical to cross-entropy theoretically, since the multi-label target of the historical input is detached from calculating gradients. Specifically, by denoting the prediction at the current iteration as $p_i$, we prove that KL loss is equivalent to CE plus a Const, where the Const can be ignored during optimization.\n\\begin{aligned}\n&\\boldsymbol{C E}=-\\sum_{i} y_{i} \\cdot \\log \\left(p_{i}\\right)\n\\end{aligned}\n\\begin{aligned}\n&\\boldsymbol{K L}=-\\sum_{i} y_{i} \\cdot \\log \\left(\\frac{p_{i}}{y_{i}}\\right)=-\\sum_{i} y_{i} \\cdot \\log \\left(p_{i}\\right)+\\sum_{i} y_{i} \\cdot \\log \\left(y_{i}\\right)=\\boldsymbol{C E}+\\boldsymbol{C O N S T}\n\\end{aligned}\n\n\n**2) Does the new augmentation make the improvement or if it is just the self-distillation works?** \\\nThe proposed recursive augmentation paradigm and the consistency loss (i.e., self-distillation) are not isolated from each other. ***Only in the form of the “recursive mix”, can we achieve self-distillation in a convenient, efficient, and memory-friendly way***. Because in the recursive paradigm, the historical part in the current iteration can be supervised directly by the predictions of the former iteration as they share identical but differentially-scaled contents. It delicately fits the logic of local self-distillation, at a minimal cost of storing only the outputs of the last iteration. On the contrary, to implement local self-distillation without the recursive operation, one needs to additionally forward the inputs through the networks or record the information of the whole data in an entire epoch, which either requires a tremendous computation cost or large memory consumption.\n\nIn fact, they both contribute to the improvement, where the new augmentation accounts for the majority. Table 2 shows the detailed improvements of each component in RM, that the ***recursive augmentation*** and ***KL-divergence*** improve by ***0.78%*** and ***0.16%*** over CutMix, respectively.\n\n", " Thanks for the response. It partially addresses my concerns. \n\nOne more question: Did you try to **replace the KL divergence with cross-entropy on the one-hot label**? Because for now, you are actually doing a variant of **local self-distillation**. I am not sure it is the new augmentation makes the improvement or if it is just the self-distillation works. Thank you again for your hard work.", " Thanks for the response. The response answered my question about the mixed use of augmentations. 
However, I still think the two hyper-parameters weaken this paper. I would think this is borderline work. ", " **Q6:** Does the performance gain comes from the momentum rather than the mix-up process?\\\n**A6:** \nThe proposed recursive augmentation paradigm and the consistency loss (i.e., self-distillation) are not isolated from each other. ***Only in the form of the “recursive mix”, can we achieve self-distillation in a convenient, efficient, and memory-friendly way***. Because in the recursive paradigm, the historical part in the current iteration can be supervised directly by the predictions of the former iteration as they share identical but differentially-scaled contents. It delicately fits the logic of local self-distillation, at a minimal cost of storing only the outputs of the last iteration. On the contrary, to implement local self-distillation without the recursive operation, one needs to additionally forward the inputs through the networks or record the information of the whole data in an entire epoch, which either requires a tremendous computation cost or large memory consumption.\n\nWe’d like to clarify that ***our improvements are gained both from the recursive mix-up process and the semantically aligned optimization (momentum)***, which shows the superiority of RM jointly. Table 2 shows the detailed improvements of each component in RM. ***Firstly***, The resize strategy and historical mechanism have a positive effect of +0.23% and +0.55% over CutMix, respectively. ***A total of 0.78% accuracy is gained from the mix-up process***. ***Then, adding the KL-divergence further improves +0.16% points, denoting the gain from the optimization process/momentum***.\n\n**Q7:** Even if an instance can still be observed at the current stage, it is hard to ensure that the information loss is aligned with the scale loss.\\\n**A7:** ***Firstly***, compared to the CutMix, we replace the “Cut” operation with “Resize”, which can correctly preserve the consistency and alleviate the information loss (Fig.3). ***Secondly***, the label weight is proportional to the area of instances (scale), ensuring that a visually larger instance gets a higher label weight.\n", " Thank you for the comments and suggestions!\n\n**Q1:** Concern about the large batch training of RM.\\\n**A1:** ***First of all***, we would like to clarify the concerns over the large batch training of RM by answering the following questions.\\\n***1)\tAre there differences between a small batch and a large batch for RM?*** \\\n***RM operates the same under different batch sizes***. Above all, all the recursively resized operations are done between each data pair of the same position within a batch. Therefore, the chance one instance is memorized by the model only depends on the random resized ratio, rather than the batch size.\\\n***2)\tWill RM over-memorize the information (images) that appears in the early batches?*** \\\nRM will not over-memorize the historical information. ***Firstly***, the area and assigned label weight of a historical instance will reduce every iteration and finally become invisible on the input, so it will not be over-memorized by the model (See the visualization of the mixed images in the Supplementary Material). ***Then***, As discussed in 1), a larger batch will not increase the memory process of the information in the early batches.\\\n***3)\tWill there be unexpected semantic dependencies caused by certain information co-occurrence?*** \\\nThe possibility is low. 
On one aspect, in visual tasks, the possibility of information co-occurrence is quite low due to the rich variety of classes. From our visualization in the Supplementary Material, we hardly observe this certain situation. On the other aspects, information co-occurrence is a potentially common problem that may occur in existing pixel-based mixed augmentations and is not yet ideally solved.\n\n***Secondly***, we conduct experimental comparisons on large batch training with the hierarchical mix-up under 200 epoch settings based on ResNet-18 on CIFAR-100. Notably, we choose a batch of 1024 and 2048.\n\n| ResNet-18| batch=1024 | batch=2048 |\n| :--- | :--------: | :--------: |\n| Baseline| 0.7721 | 0.7467 |\n| Mixup | 0.7755 | 0.7575 |\n| CutMix| 0.7815 | 0.7652 |\n| Manifold Mixup| 0.5965 | 0.5891 |\n| RM| 0.7973 | 0.7690 |\n\nManifold Mixup is originally trained with ***a batch size of 100 for 2000 epochs*** on CIFAR. However, under the common 200 epochs setting with a large batch, severe performance degradation is observed on Manifold Mixup. Meanwhile, from the above table, RM still maintains the best performance.\n\n**Q2:** Will RM decrease the intra-data diversity within an epoch by information paste?\\\n**A2:** ***Pasting information from the last batch will not decrease the intra-data diversity***. On the contrary, RM generates inputs with more diverse samples and benefits the training. To demonstrate this conclusion, we conduct ablation studies on Mixup and Cutmix. Originally, Mixup and Cutmix mixture two random images within the current batch. Then, we make one modification on Mixup and Cutmix that two images are mixed between the last batch and current batch while maintaining their mixing property. The modified version pastes information from the last batch, just like RM. We follow the official 200 epochs training setting under ResNet-18 on CIFAR100. \n\n| ResNet-18 | Top-1 Acc |\n| :------- | :-------: |\n| Baseline | 0.7830 |\n| Mixup | 0.7901 |\n| Mixup* | 0.7932 |\n| CutMix | 0.8039 |\n| CutMix* | 0.8041 |\n| RM | 0.8136 |\n\n“``*``” denotes the modified mix-methods. The table shows that pasting from the current or last batch will not affect the performance and even improves a little. Therefore, ***pasting historical information does not harm the intra-data diversity***.\n\n**Q3:** Is there any possibility that the performance gain comes from the increased model complexity? \\\n**A3:** Based on our paper, the performance gain does not come from the increased model complexity for two reasons. (1) During inference, the employed model is the same as the baseline, there is no “increased model complexity” (Table 11). (2) In Table 4 (a), we conduct ablation studies that even keep the same model parameters during training by a shared classification head, which shows that RM can still surpass CutMix and Mixup.\n\n**Q4:** Which part of the model is optimized for distribution alignment? \\\n**A4:** The entire model parameters, except the normal classifier head, are optimized for distribution alignment.\n\n**Q5:** Will the KL-term affect/prevent the optimization process? Will it result in updating momentum appearing in contrastive learning methods? \\\n**A5:** The KL-divergence will promote the optimization process. Intuitively, it minimizes the distance between two features that are derived by networks with different parameters from an identical instance. 
Therefore, the model gains better spatial semantical representation ability.\n", " Thank you for the comments and suggestions!\n\n**Q1:** Changing crop-based CutMix to resize-based CutMix and keeping the original label assigning strategy in CutMix seems counter-intuitive.\\\n**A1:** We answer these questions from the following two aspects. \\\n***1)\tWhy do we keep the original label assigning strategy?*** \\\nIdeally, the label assigning strategy of all mixed instances should ***both consider their semantical integrity and the proportion of their composition area***. Although the resized foreground mixture of the image keeps its full semantical content, its area is reduced. Think of an extreme condition where the foreground area is resized to a small ratio like 0.01, which is almost invisible on the input. Then keeping its confidence to 1 will be inappropriate. Therefore, we keep the original label assigning strategy, which considers the actual area of each mixed instance in an image.\\\n***2)\tWhy do we change from crop-based to resize-based?*** \\\nFrom a qualitative perspective, a “Resize” operation will keep the former information, and ensure consistency of the label (Fig.3). From an experimental perspective, the “Resize” strategy surpassed the “Cut” strategy by 0.23% accuracy.\n", " Thank you for the comments and suggestions!\n\n**Q1:** The discussion subsection 3.2 is not informative. Most information has been included in the introduction.\\\n**A1:** Thanks for your suggestion! We will rearrange the content in the revised version.\n\n**Q2:** The paper lacks the experiment in which several basic augmentations, e.g., cropping, scaling, and flipping, have been applied and then further add and compare RM to advanced augmentations.\\\n**A2:** By default, in all experiments, ***we have already applied random cropping, scaling, and flipping***. Also, more augmentations such as label-smoothing, rand-aug,.etc. are employed. In Sec.4.1 – Setup, Sec.4.2 – Setup, and the Supplementary Material, all augmentations are elaborated and precisely attached to corresponding works.\n\n**Q3:** It will be better to give the standard deviation of multiple experiments.\\\n**A3:** We will add the standard deviation in the revised version. Part of the standard deviation is already depicted in the ablation study of Sec.4.1 (Fig.4 & Fig.5).\n\n**Q4:** The performance is sensitive to the hyperparameters considering that the improvements over Mixup and CutMix are usually less than 1%.\\\n**A4:** We would like to discuss the sensitivity of the hyperparameter in turn.\\\n***1)\tThe sensitivity of the resizing ratio α.*** \\\nConsidering that α decides the resize ratio, a small α degenerates RM to the baseline, therefore it is truly normal that the accuracy drops a large margin when α switches from 0.5 to 0. It instead shows RM can significantly improve the baseline. However, when α is 0.5~0.7, the performance is fairly stable (±0.2).\\\n***2)\tThe sensitivity of the consistency loss weight ω.*** \\\nFirstly, the ω is stable on CIFAR-10 (±0.2). Then for CIFAR-100, a less-optimum ω surpasses the CutMix by ~0.5%, and the best ω surpasses by 1%, which are both promising improvements over the CutMix.\n\n**Q5:** Other minor issues.\\\n**A5:** Thanks! 
We will carefully address them in the revised version.\n", " Thank you for the comments and suggestions!\n\n**Q1:** More fundamental insights or rigorous discussion of RM.\\\n**A1:** About the insights and discussion on the classification pipeline of RM, we answer the following questions in turn.\\\n***1) Why is the current design reasonable?*** \\\nNotably, the proposed RM design is not only an improved version of a data mixing pipeline. ***Firstly***, we design a smart recursive paradigm, that utilizes both the historical ***input-prediction-label triplets***, rather than merely mixing upon the image data. ***Secondly***, multiple-mixed data will provide more diverse training samples, which is already demonstrated by dozens of mixed augmentation strategies. ***Thirdly***, RM creates multi-scale instances during training, which is proved workable by various visual tasks like object detection. ***Finally***, RM managed to learn the spatial semantics via the consistency loss because it minimizes the distance within each semantic pair. However, the existing mixed augments only provide image-level annotations, regardless of the spatial distribution of the mixed semantics. \\\n***2) How does it improve the performance?*** \\\nAs shown in Table 2, the performance is improved by three components. ***Firstly***, by replacing the “Cut” operation with “Resize” in CutMix, we improve +0.23% over CutMix. Because a “Resize” operation will keep the former information, and ensure consistency of the label (Fig.3). ***Secondly***, the historical mixed paradigm improves further improves +0.55% accuracy. Because RM provides more variant instances and multi-scale semantics. ***Finally***, adding the consistency loss additionally improves +0.16%, for it enhances the spatial semantical representation ability. In total, RM surpasses the baseline and CutMix by 2.02% and 0.94%, respectively. \nMore discussions can be found in Sec.3.2 and Fig.2.\n\n**Q2:** No ablation is provided on the resizing ratio λ.\\\n**A2:** ***λ is not a hyperparameter***. It is randomly sampled from the uniform distribution U[0,α] at each iteration, and we have conducted an ablation study on α (Fig.4).\n\n**Q3:** Could the author please compare the computation cost overhead on detection and segmentation?\\\n**A3:** The computation cost for detection and segmentation of RM is exactly ***the same as the baseline***. Because the additional auxiliary head is only adopted during the classification training process. Once we get the pretrained models, we drop the auxiliary head and only employ models with the original head for downstream tasks. Therefore, the computation cost is the same as other methods.\n", " Thank you for the comments and suggestions!\n\n**Q1:** Is there an upper bound on the recursive times? Will more recursive times incorporate too many classes of semantics and degenerate the performance? \\\n**A1:** No, there are no limitations on the recursive time, i.e., it performs the same times of iterations as the whole optimization. ***Generally***, this operation won’t be incorporating too many classes, because the former instance will visually disappear from the input after more than about five iterations since its area and activation signal reduce with each iteration. ***Specifically***, as is illustrated in Sec.4.4 Analyses - Effective Class Number: at most, there are only an average of 5 objects in a single image that is semantically supervised (Fig.8). 
Therefore, the upper bound of classes at a time is restricted naturally, and the performance will benefit from the richer and more diverse semantics. \n\n**Q2:** Missing the ablation study to verify the effect of the consistency loss on downstream tasks like object detection or semantic segmentation. \\\n**A2:** We have conducted this ablation study in Table 10 (page 8). With the consistency loss, GFL and ATSS improve ~0.5mAP on COCO while PSPNet and UperNet improve ~0.7 mIoU on ADE20K. The results show that ***the consistency loss enhances the performance of the downstream tasks***.\n", " This paper proposes an interesting idea on data augmentation via mixture of samples. Unlike typical ways of mix-based data augmentation methods, this work performs iterative mixtures, which is able to reuse the augmented mixed samples and thereby generates more diverse samples. The idea is simple yet effective, which is validated by extensive experiments. Strengths:\n1. The idea of recursive mixtures of samples is novel.\n2. Extensive experiments demonstrates the effectiveness of the method.\n3. The paper is written well and easy to follow. 1. Is such recursive mixture operation performed the same times of iterations of the whole optimization? Or is there an upper-bound on the recursive times? Intuitively, more recursive times leads to richer and more diverse semantics in the augmented samples, however, incorporating too many classes of semantics may degenerate the performance. Maybe it is better to conduct experiments to investigate the effect of recursive times of mixture operations.\n\n2. It is claimed that the introduced consistency loss can help the model to learn the spatial-correlative semantic representation, which potentially benefit the downstream tasks like object detection or semantic segmentation. Thus, it is better to conduct such ablation study to verify the effect of the consistency loss on these two tasks. Refer to the questions posed above.", " This paper introduces a recursive data mixing augmentation called RecursiveMix. It pastes a historical image patch onto the current training sample to promote the data diversity for image mixing. Furthermore, a consistency loss is introduced to force the local pasted patch prediction is invariant to the background. Extensive evaluation on image classification, object detection, and semantic segmentation shows that RM largely improves performance. # Strength\n1. The method is well-motivated and finely designed. The two key components are tightly coupled: recursive data mixing promotes image diversity, while local patch consistency calls for representation invariance regardless of background. \n2. The experiments are truly comprehensive. The author not only testifies the performance on a wide range of mainstream visual tasks but also provides an extensive ablation study on each component of the method.\n3. The proposed method shows compelling improvements compared with existing methods.\n4. The paper is clearly written and easy to follow.\n\n# Weakness\n1. The current method seems not to provide striking new insights. The current RM design is mainly a improved version of exiting data mixing pipeline, by considering the historical data information. I do not mean the derived approach is \"not novel\". I just want to see more fundamental insights or rigorous discussion on \"why the current design is reasonable?\" and \"How does it improve the performance?\". Good performance itself just indicates the results, but tell nothing about the reason.\n2. 
No ablation on $\\lambda$. In lines 127-131, the author claims that \"$\\lambda$ denotes the proportion of the historical images, which is not suggested to be quite large\". However, no ablation is provided on resizing ratio $\\lambda$. I strongly suggest providing the results. the proposed method needs an additional task head, which inevitably adds on the training computation. However, the authors only compare the cost of the classification task, with a linear head. In contrast, the task head in detection and segmentation is extremely large compared to the backbone. Could the author please compare the computation cost overhead on detection and segmentation? The limitation is discussed in Line 313-319.", " This paper proposes a novel data augmentation strategy (RecursiveMix). The proposed augmentation recursively resizes the historical input and then fills it into the current batch. Experiments demonstrate that the proposed RecursiveMix consistently outperforms the popular augmentations such as Mixup and CutMix. Strengths:\n1. The proposed augmentation is novel and interesting. \n2. Abundant experiment results are provided. According to the experiment results, RecursiveMix can consistently improve the performance.\n\nWeaknesses:\n1. The discussion subsection 3.2 is not informative. Most information has been included in the introduction.\n\n2. The paper also lacks an experiment in which several basic augmentations, e.g. cropping, scaling and flipping, have been applied and then the authors further add and compare RecursiveMix to advanced augmentations, e.g. mixup and cutmix. In other words, the authors should consider the joint usage of multiple augmentations.\n\n3. It will be better to give the standard deviation of multiple experiments. \n\n4. The proposed method involves two hyper-parameter \\alpha and \\omega. The ablation study shows that the performance is sensitive to the hyper-parameters considering that the improvements over Mixup and CutMix are usually less than 1%. \n\nSmall problems:\n1. The font in figures can be larger for reading easily.\n2. The interval between bars in Figure 1 can be uniform. \n3. Syntax error in Line 129 “Because …”\n Please address the above weaknesses. The authors have discussed the limitations. No potential negative societal impact.", " The authors of this paper propose to increase the model generalizability via historical mix-up. Specifically, the compositional training batch from previous time-step is stored and ultilized in the training process at current time-step. Weaknesses:\n\n1. Is there any experimental comparison with hierarchical mixed-up in large-size batch? Specifically, the comparison with hierarchical mixed-up within single batch, rather than using the last time-step information. Since during the training process of large datasets, the number of batch in each epoch could be large, which may cause two problems: \na) Over-memorizing the information(images) appears in the early batches; \nb) Unexpected semantic dependencies caused by certain information co-occurance.\n\n2) Line38. The proposed recursive-mix improves the inner-data diversity within a batch, while decrease the intra-data diversity within an epoch by information paste.\n\n3) Although the increase of RM's model complexity compared with others is small, is there any possible that the performance gain comes from the increased model complexity? Is there any experiment to verify this assumption?\n\n4) In Eq(5). 
a) Since H and H' are with different weights, which part of the model is actually optimized for distribution alignment? The encoder ot the projector H?\nb) The network is updated after processed each batch, will the KL-term affacts/prevent the optimization process? Another possible is that it will result in updating momentum appears in contrative learning methods, is there any possible that the performance gain actually comes from the momentum rather than the mix-up process?\nc) The information loss is ignored during the mix-up process. Even an instance can still be observed at current stage, it is hard to ensure that the information loss is aligned with the scale loss. For example, without the tiny boarders, a billboard will be same with the wall, thus its key information may disappear within few time-steps. Please refer to Sec.2. Please refer to Sec.2.", " This paper presents RecursiveMix that applies CutMix iteratively. The proposed method is efficient and effective and shows strong generalization ability for different tasks including image classification, object detection, instance and semantic segmentation. ### Strengths \n\n1. The idea is simple and effective, which makes it easy to follow.\n\n2. The performance it achieves is impressive. It can lead to consistent gain for four different tasks and all of them were evaluated on challenging benchmarks. Meanwhile, it only requires marginal computational cost during training and is completely cost-free during inference.\n\n3. The experiments and ablation studies are comprehensive and extensive. The authors evaluate their methods on a wide range of tasks and datasets, which can help to attract more audiences with different backgrounds.\n\n\n### Weaknesses\n\n1. RecursiveMix changes the previous crop-based CutMix to resize-based CutMix and keeps the original label assigning strategy in CutMix, which seems counter-intuitive. In this way, the foreground area of the crop in the mixture will be always kept since it will resize the whole image and keep all content there. But the crop can cover the foreground area of the other image, which means the valid object can sometimes disappear. Intuitively, the label assignment under such circumstances should always keep the label of the cropped image as 1 since the foreground object in the crop always exists, and the label confidence for the other image in the mixture should be lower. See weaknesses See weaknesses" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5, 4 ]
[ "1jWiaCzAw9J", "iYk5iWcDxYc", "2Zkcxg9zL5o", "DvVYBojwvOS", "wquMxksVPLm", "rA-BpWlq17", "VeHVVEm1XJ", "uSJSXFHPgh", "_1p5Mi4FO64", "9DE_H4KUnG2", "bnP5qpblAct", "jJZ4y760R2E", "nips_2022_NjP18IbKKlX", "nips_2022_NjP18IbKKlX", "nips_2022_NjP18IbKKlX", "nips_2022_NjP18IbKKlX", "nips_2022_NjP18IbKKlX" ]
nips_2022_gc87Cs_V9qR
Differentiable Analog Quantum Computing for Optimization and Control
We formulate the first differentiable analog quantum computing framework with specific parameterization design at the analog signal (pulse) level to better exploit near-term quantum devices via variational methods. We further propose a scalable approach to estimate the gradients of quantum dynamics using a forward pass with Monte Carlo sampling, which leads to a quantum stochastic gradient descent algorithm for scalable gradient-based training in our framework. Applying our framework to quantum optimization and control, we observe a significant advantage of differentiable analog quantum computing against SOTAs based on parameterized digital quantum circuits by {\em orders of magnitude}.
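The referee thread attached to this record repeatedly invokes the gate-level parameter-shift rule, so a minimal numerical check of that rule is included here as background for readers outside quantum computing. It only illustrates the standard single-qubit identity the reviewers cite; it is not the paper's Monte-Carlo, pulse-level gradient estimator, and the Ry rotation with a Pauli-Z observable is an arbitrary example choice.

```python
import numpy as np

def expectation_z_after_ry(theta):
    """<psi| Z |psi> for |psi> = Ry(theta)|0>; analytically this equals cos(theta)."""
    ry = np.array([[np.cos(theta / 2.0), -np.sin(theta / 2.0)],
                   [np.sin(theta / 2.0),  np.cos(theta / 2.0)]])
    psi = ry @ np.array([1.0, 0.0])   # start in |0>
    z = np.diag([1.0, -1.0])          # Pauli-Z observable
    return float(psi @ z @ psi)

theta = 0.8
# Parameter-shift rule for a gate generated by an operator with eigenvalues +/- 1/2:
#   dE/dtheta = ( E(theta + pi/2) - E(theta - pi/2) ) / 2
shift_grad = 0.5 * (expectation_z_after_ry(theta + np.pi / 2)
                    - expectation_z_after_ry(theta - np.pi / 2))
exact_grad = -np.sin(theta)           # derivative of cos(theta)
print(shift_grad, exact_grad)         # both are approximately -0.717
```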
Accept
The paper proposes a differentiable programming framework for analog quantum computing, with a specialized forward scheme based on Monte-Carlo sampling to obtain gradient estimates. This idea is an exciting avenue for research to broaden the applicability of quantum computing to practical machine learning and computation. There is a clear consensus among the referees that this submission constitutes interesting and important work. In all examples, the proposed framework is not only efficient but also outperforms previous methods by several orders of magnitude.
train
[ "Nosf0UqGoFu", "2sB6f-VTeQJ", "KexhInyF2_u", "41hA4kUcZaO", "9IP-e8vZrV", "bs40kD7u1A4", "PjeXpIM0D6C", "VNUp8riLOKj", "cLkf-KqL1UY", "42uz8HJ-_Oo", "nGw5cfNhaf3", "0ibFSo1WjnG", "4cvqUvIhcEN", "IqQVqwyw4DW" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the detailed response, which solve my concerns to some extents. I increase my score accordingly. Param shift rule only applies to gates whose unitary matrix has structured eigenvalues such as +1 and -1. It would be good for the authors to explain more on the limitations of the parameter shift on the pulse level.", " Dear reviewer gTD7, \n\nWe wish you had a great weekend! Thanks for reading our papers and asking questions. We have added your references to our paper. Could you please kindly let us know if there is anything we can further do or clarify that might improve your rating? Looking forward to your post-rebuttal discussion.\n\nBest, Authors", " Thanks for your comments. We are also glad that our revision resolves your concerns. Please let us know whenever you have additional questions.\n\nBest, Authors", " I thank the authors for their detailed answers to my comments and questions. I read the modifications which are appropriate. I also read the comments of Reviewer gTD7 and believe that the authors addressed them well in my opinion. Overall I maintain my score: I think this paper could inspire interesting avenues for future work (at least from a differentiable programming perspective where I come from) and the experimental evaluation is sound in my opinion (with even an additional experiment added by the authors). ", " We thank reviewers for your time reading our paper and for giving us many constructive feedbacks. We have revised the paper and the appendix to incorporate the suggestions. The modifications are highlighted in blue. We sincerely invite reviewers to take a look at our revised version. Please feel free to let us know if you have any questions or concerns.", " We sincerely appreciate your time reading and reviewing our paper. \n\nIn this paper, we are 1) proposing to use continuous-time representations for quantum computing; 2) designing an auto differentiation mechanism on quantum computers that can compute gradients for our representations; 3) demonstrating the correctness, effectiveness, and robustness of our framework with both theoretical and empirical analysis. \n\nThe continuous-time representations used in our paper are currently basis functions. But for example, it could possibly be neural networks in future works. We hope this paper could help machine learning and quantum computing (QC) to better benefit each other, where more ML techniques can be introduced to QC and QC can solve more ML problems with its computational resources. Thanks again for your kind support. Please feel free to let us know if you have any questions or concerns during the discussion period.\n\n", " > For parameter shift, the number of circuit runs on real quantum machine is linear to the number of parameters in the circuit so when the size of the circuit is large, the number of gates will growing rapidly. For instance, the pulse length is O(2^(2n)) for quantum optimal control, so the proposed method is not very scalable either.\n\nThe high complexity of evaluating gradients for variational quantum **circuits** is precisely the motivation for our proposal of differentiable analog quantum computing using continuous-time parameterization. With abstract quantum analog machines (AQAMs) as the computational model, we detach the dependency between the number of parameters and the pulse lengths (equivalently, circuit sizes). This enables us to reduce the computational complexity to *polynomial* w.r.t. 
the numbers of controllable terms and sampling batch sizes, achieving far better scalability than variational circuits. \n\nSpecifically in your instance, pulse length O(2^(2n)) does not imply O(2^(2n)) parameters using our parameterization: you can choose an arbitrary number of parameters with basis functions in the parameterization. On the other hand, if the runtime is polynomial in the number of parameters, our method is as scalable as classical differentiable models. The difficulty in quantum computing is that, without an appropriate auto-differentiation technique, the derivative calculation for even one parameter could require exponential numbers of simulations on classical computers, which is not scalable. \n\n> Figure 1 caption 'an' initial\n\nThanks for your suggestion. We fixed this typo in our revision.\n", " > The proposed work is similar to the existing works on pulse-level variational quantum algorithms and quantum optimal control. [1,2,3]. What are the core differences to those papers?\n> [1] Meitei, Oinam Romesh, et al. \"Gate-free state preparation for fast variational quantum eigensolver simulations: ctrl-vqe.\" arXiv preprint arXiv:2008.04302 (2020). \n> [2] Liang, Zhiding, et al. \"Variational quantum pulse learning.\" arXiv preprint arXiv:2203.17267 (2022). \n> [3] de Keijzer, Robert, Oliver Tse, and Servaas Kokkelmans. \"Pulse based Variational Quantum Optimal Control for hybrid quantum computing.\" arXiv preprint arXiv:2202.08908 (2022).\n\n\nThe main contribution of this paper is on connecting differentiable programming and analog quantum computing. None of the mentioned references has tackled the auto-differentiation technique for analog quantum computing. \nThe contributions of our work can be summarized as follows: (a) we develop a new continuous-time parameterization (with basis functions, which is different from e.g., the parameterization in GRAPE) on analog quantum computers; (b) we design the differentiation pipeline in our framework; and (c) the correctness, effectiveness, and robustness of our framework have been demonstrated with both theoretical and empirical analysis. \n\nAs will be elaborated below, none of the mentioned references [1,2,3] has contributed from the perspective of (a)(b)(c). In particular, they didn’t discuss computing gradients analytically for analog quantum computing by using quantum machines, which our paper addressed for the first time. \n\nThe first paper (Meitei et al., 2020) has been cited in our paper as Ref. [40]. This work uses GRAPE to calculate the gradient and generate the pulse. GRAPE uses classical simulation of a piecewise-constant pulse ansatz, and hence it has a different parameterization from ours. It further suffers from *exponential* cost in classical simulation, as we mentioned in Related Works in Section 2. \n\nThe second paper (Liang et al., 2022) is particularly concerned with solving classical machine learning problems with variational quantum pulses (VQP), focusing on *encoding classical training data into quantum states using pulses*. As for the pulse optimization part, they consider *gradient-free* methods such as Bayesian optimization. Hence, there is no perspective from differentiable programming in this work. Compared with our work, we develop *gradient-based* optimization in pulse training. 
In addition, we further consider a wider range of applications that are not limited to classical machine learning, in contrast to (Liang et al., 2022).\n\nThe third paper (de Keijzer et al., 2022) employs piece-wise constant pulses and the gradient is calculated by the existing “simultaneous perturbation stochastic approximation” (SPSA) method, which is drastically different from our *new* Monte Carlo gradient estimation subroutine (see our algorithm 1). The gradients estimated by SPSA are much less accurate than ours, where an added comparison is in the NEW Figure 2. (NOTE: We add more comparisons against SPSA on VQE and QAOA experiments in the revised paper in Fig. 2, as asked by reviewer FN4c.) Again, no perspective from differentiable programming in this work. Furthermore, we briefly discuss the application of our method to superconducting qubits machines (e.g., the IBM machine), while de Keijzer et al. is particularly interested in Rydberg atoms. \n\nWe hope that our differences from [1,2,3] are clear. Our major contributions do not overlap with theirs. We did cite and compare with [1] in the paper and [2,3] are very recent non-peer-reviewed papers that we are happy to incorporate into our revision. However, according to the NeurIPS 2022 FAQ, \"Authors are not expected to compare to work that appeared only a month or two before the deadline.\"\n\n> The idea of using parameter shift to obtain gradient is not new.\n\nChain rule was nothing new either when the neural networks were proposed, but neural networks can still be quite effective for many applications. The parameter shift rule is essentially a tool for evaluating commutators on quantum computers. We leverage it to *evaluate gradients for analog quantum computing*, while previous works focused on gradient calculation for quantum circuits. The underlying computational models are drastically different. Unlike the discrete-time circuit model, the analog model poses unique challenges for gradient evaluation because of its continuous nature. We solve this problem by the MCI technique (see our algorithm 1).\n\nWe hope these explanations address your questions and we appreciate your comments. Please let us know if you have further feedback.\n\nBest, Authors\n", " > While this paper appears to be a strong contribution and an interesting bridge between differentiable programming and quantum computing, it somehow quickly passes over some technical details that could be better explained as asked in the questions section\n\nThanks for your suggestions. We have added more technical details to the revised paper, as indicated in blue print in the revision. Details of the changes are explained in the responses below.\n\n\n> One simple, yet powerful addition to the experiments would be to write down exactly the optimization problem (such as $min_x f(x)$ with some constraints) to help the reader understand how quantum computing is used for classical problems and what are the challenges compared to classical optimization. This would greatly help other communities understand how the framework proposed by the authors differ from traditional algorithms.\n\nWe added more intuitive explanations of the problem setting connecting classical and quantum ML communities in Sec. 3.2 (p.4) of the revision.\n\n> I cannot find any readme file to run the code. I appreciate that the code is commented and a simple readme/tutorial to navigate the code would also be a simple yet great improvement for the paper.\n\nThanks! 
We will add a more detailed README file and release the code on GitHub.\n\n> In the definition of the AQAM: what does evolving under $H_j$ at time t means?\n\n“Evolving under $H_j$ for time $t$” means to apply a unitary transformation $e^{-iH_j t}$, which is the time-evolution operator described by the Schrodinger equation with a constant Hamiltonian $H_j$ for time duration $t$. We improved the statement in Sec 3.2 (p.5) of the revision.\n\n> For non-experts, it could be good to either provide a quick introduction to the parameter shift rule though reference [50] is good.\n\nWe added a short introduction to the parameter shift rule in Sec. 3.4 (p. 6) of the revision.\n\n> I do not understand what the authors mean by \"weak\" in \"multi-qubit interactions are not tunable and weak compared to tunable single-qubit Halmitonians\" or in \"may be imprecise due to weak non-tunable terms in $H_c$\".\n\nFor most realistic machines, the strength of multi-qubit interactions is orders of magnitudes weaker than the single-qubit oscillation (Rabi) frequency and the driving amplitudes. Two-qubit gates are implemented by specifically designed pulses so that the driving signals and single-qubit oscillations “cancel out”, and two-qubit interactions become the major effect. \n\n> A reference for the claim \"Almost on every architecture of quantum devices, the number of control signals m is at most quadratic in the number of qubits n\" would be appreciated.\n\nWe added more references to survey papers on different architectures in Sec. 3.5 (p. 6) of the revision.\n\n> For non-experts, it may be good to explain why equation (7) is presented up to Hermitian conjugate terms.\n\nThanks for noting this. The presentation style in (original) Equation (7), i.e., folding the Hermitian conjugate terms, is common among physicists. We added an explanation when first using it (now in Sec. B.2 of the appendix). \n\n> A formal statement for equation (7) with assumptions and proof would be appreciated. Generally, a pass on the Appendix from lines 624 to 631 with detailed reasoning would greatly help clarify the theoretical contributions of the paper.\n\nThanks for the suggestion. We added Lemma 3.3 in Sec. 3.6 (p. 6) of the revision, with detailed proofs in the appendix.\n\n> Maybe, recall that the dagger symbol is the hermitian operator for non-experts.\n\nWe added the dagger symbol in the quantum preliminaries in Sec 3.1 (p.4) of the revision.\n\nWe sincerely appreciate your comments. Please let us know if you have further feedback.\n\nBest, Authors\n", " > The VQE experiments compare only against finite differences, what about SPSA?\n\nThanks for your suggestions! We have added experiment results of SPSA on VQE and QAOA experiments in Fig. 2 of the revised paper.\n", " This work introduces a general technique to evaluate gradients of time-evolved states, in the analog quantum computing setting. This is a very interesting and conceptually (very) important work, because it allows to compute exact gradients on large analog quantum computers. While some of the technical tools have been heavily borrowed from ref 3, I believe that the extension to the analog case is crucial. Also, the numerical experiments seem to consistently indicate a superior performance to existing approaches (say, CRAB, etc). \n\n The VQE experiments compare only against finite differences, what about SPSA? 
Yes", " The paper proposes a differentiable programming framework for analog quantum computing with a specialized froward scheme based on Monte-Carlo sampling to get estimates of gradients. The rationale of the gradient estimation is justified theoretically. Moreover a sensitivity analysis to modeling errors of the quantum system is provided to handle real applications. Finally, the framework is illustrated on several examples, ranging from quantum optimization and quantum control. In all examples, the proposed framework appears not only efficient but also outperforms previous methods by some orders of magnitudes. Strengths: \n- The subject of the paper itself, namely proposing a differentiable programming framework for quantum computing is an exciting avenue for research to broaden the applicability of quantum computing. \n- The approach taken by the authors, namely, considering analog quantum computing systems appears clearly driven by a good understanding of the underlying system and is original to me, though I'm not an expert in quantum computing. The authors could explain better the previous approaches. However, their experimental evaluation clearly demonstrates empirically the benefits of their approach. \n- I personally come from a differentiable programming viewpoint, for which the framework posed by the authors exhibits exciting challenges, namely, computing gradient information without access to the intermediate states of the computations. The authors present a simple Monte-Carlo estimator that is easily implemented and already efficient. The framework proposed by the authors can serve as a strong baseline to build upon and provide alternative gradient estimators for analog quantum computing.\n- The authors link their approach to well-known frameworks in differentiable programming such as differentiable physics which help understand the differences and challenges in this setting. \n- Last but not least, the experimental evaluations are well selected and the comparisons show clearly the strength of the proposed approach compared to previous work. \n\nWeaknesses: \n- While this paper appears to be a strong contribution and an interesting bridge between differentiable programming and quantum computing, it somehow quickly passes over some technical details that could be better explained as asked int he questions section - One simple, yet powerful addition to the experiments would be to write down exactly the optimization problem (such as min_x f(x) with some constraints) to help the reader understand how quantum computing is used for classical problems and what are the challenges compared to classical optimization. This would greatly help other communities understand how the framework proposed by the authors differ from traditional algorithms. \n- I cannot find any readme file to run the code. I appreciate that the code is commented and a simple readme/tutorial to navigate the code would also be a simple yet great improvement for the paper. \n\n- In the definition of the AQAM: what does evolving under H_j at time t means? \n- For non-experts, it could be good to either provide a quick introduction to the parameter shift rule though reference [50] is good. 
\n- I do not understand what the authors mean by \"weak\" in \"multi-qubit interactions are not tunable and weak compared to tunable single-qubit Halmitonians\" or in \"may be imprecise due to weak non-tunable terms in H_c\".\n- A reference for the claim \"Almost on every architecture of quantum devices, the number of control signals m is at most quadratic in the number of qubits n\" would be appreciated. \n- For non-experts, it may be good to explain why equation (7) is presented up to Hermitian conjugate terms. \n- A formal statement for equation (7) with assumptions and proof would be appreciated. Generally, a pass on the Appendix from lines 624 to 631 with detailed reasoning would greatly help clarify the theoretical contributions of the paper. \n- Maybe, recall that the dagger symbol is the hermitian operator for non-experts. The authors have properly described the limitations of their work, namely the fact that the current experiments are not very large scale although the framework had been designed to be scaled. As the authors say, the current experiments already demonstrate the potential of the framework and the \"limitations\" of the paper can clearly be seen as future work and not dead-ends. ", " The paper studies the problem of variational quantum algorithms and proposes to tune the parameters on the pulse level to achieve better efficiency than optimizations on the gate level. Results on both quantum optimization (QAOA) and quantum control show large advantages. \n Strengths:\n1. Good motivation of optimizations the algorithms on the pulse level to better leverage the current quantum machines. \n2. Good explanations on the parameter shift method to obtain the gradients.\n\nWeekness:\n1. The proposed work is similar to the existing works on pulse-level variational quantum algorithms and quantum optimal control. [1,2,3]\n2. The idea of using parameter shift to obtain gradient is not new. \n3. For parameter shift, the number of circuit runs on real quantum machine is linear to the number of parameters in the circuit so when the size of the circuit is large, the number of gates will growing rapidly. For instance, the pulse length is O(2^(2n)) for quantum optimal control, so the proposed method is not very scalable either.\n\n[1] Meitei, Oinam Romesh, et al. \"Gate-free state preparation for fast variational quantum eigensolver simulations: ctrl-vqe.\" arXiv preprint arXiv:2008.04302 (2020).\n[2] Liang, Zhiding, et al. \"Variational quantum pulse learning.\" arXiv preprint arXiv:2203.17267 (2022).\n[3] de Keijzer, Robert, Oliver Tse, and Servaas Kokkelmans. \"Pulse based Variational Quantum Optimal Control for hybrid quantum computing.\" arXiv preprint arXiv:2202.08908 (2022). 1. What are the core differences to the previous pulse-based variational learning methods besides the parameter shift method to obtaining gradients?\n\n2. Figure 1 caption 'an' initial No significant negative societal impact.", " This papers introduces the first differentiable analog quantum computing framework with a quantum SGD algorithm which optimizes the analog pulse control singla on quantum computers. Strengths:\n- The paper looks serious and does what it claims to achieve.\n- The benchmarks are convincing.\n- I liked the fact that there was a robustness analysis.\n- The paper is well written and structured.\n\nWeaknesses:\n- I am not knowledgeable enough in the field to spot weaknesses. Sorry, I don't know enough in the field to ask relevant questions and initiate fruitful discussions. 
No negative societal impact" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 1 ]
[ "2sB6f-VTeQJ", "VNUp8riLOKj", "41hA4kUcZaO", "cLkf-KqL1UY", "nips_2022_gc87Cs_V9qR", "IqQVqwyw4DW", "4cvqUvIhcEN", "4cvqUvIhcEN", "0ibFSo1WjnG", "nGw5cfNhaf3", "nips_2022_gc87Cs_V9qR", "nips_2022_gc87Cs_V9qR", "nips_2022_gc87Cs_V9qR", "nips_2022_gc87Cs_V9qR" ]
nips_2022_QXLue5WoSBE
NeuPhysics: Editable Neural Geometry and Physics from Monocular Videos
We present a method for learning 3D geometry and physics parameters of a dynamic scene from only a monocular RGB video input. To decouple the learning of underlying scene geometry from dynamic motion, we represent the scene as a time-invariant signed distance function (SDF) which serves as a reference frame, along with a time-conditioned deformation field. We further bridge this neural geometry representation with a differentiable physics simulator by designing a two-way conversion between the neural field and its corresponding hexahedral mesh, enabling us to estimate physics parameters from the source video by minimizing a cycle consistency loss. Our method also allows a user to interactively edit 3D objects from the source video by modifying the recovered hexahedral mesh, and propagating the operation back to the neural field representation. Experiments show that our method achieves superior mesh and video reconstruction of dynamic scenes compared to competing Neural Field approaches, and we provide extensive examples which demonstrate its ability to extract useful 3D representations from videos captured with consumer-grade cameras.
Accept
After rebuttal the new version of the paper reads much better and all reviewers were positive, despite some remaining criticisms. Hence the paper should be accepted.
test
[ "ebzMgFLDRw-", "wQl5Jw4fiE5", "4BX9OnyYjuO", "hykuF0AXK0w", "WIgwaLlI-QY", "bcIqCjZIGIS", "24rXIAozY8D", "lLbKWHX2_xw", "EHobpGn8b4t", "jj-rab_nd7l", "nYnifai3D4L", "FKthJJInRDZ", "Y4jGMoyxjSG", "3toCMXd--Zj", "RO3fZkQ-MO", "bGGBmePHRK", "uESZGMEi6kz" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks very much for your constructive feedback and for kindly increasing the score! We will switch the order of the figures. More discussions and failure cases about our experience of connecting the rendering and simulation modules will also be included.\n\nWe actually found that sequential training instead of joint training can mitigate inheriting limitations (error accumulation) from both sides. In our strategy, the trained NeRF would not be influenced by the suboptimal physics simulation. If we train them together, it is much more challenging to finetune the learning rate if we train physics parameters and rendering parameters *together*. Also, a bad physics module would break down the rendering module as well. We will show those finding qualitatively and quantitatively. Another potential limitation for both sides might be the running speed. It's an important future work in our calendar to improve the efficiency (e.g. faster GPU-based simulation, CUDA kernels for NeuS rendering)", " We would like to sincerely thank all reviewers for your review and discussion! Your suggestions help us better articulate the contributions and technical details. \n\nAll reviewers are asking questions about the connection between the rendering and simulation, which is a key design in our pipeline. We appreciate those great questions and have added several experiments during the rebuttal period to more clearly highlight our contributions. Reviewer 1MJg proposed a new experiment during the discussion period. Until now, we have not observed significant improvement in that training. A much larger optimization space (with significantly higher dimensionality) would lead to much slower convergence if converging at all within a reasonable amount of time. We will add the results to the final version of the paper.\n\nBased on that, we would further add one more experiment to further analyze and verify how the simulation part DiffPD can potentially benefit the NeRF: the rendered image from the physics simulation can actually be compared with the ground truth image. The closed-loop RGB loss in the image space is another way to update NeRF and this experimental result will also be included in the revision.\n\n\nConnecting NeRF and differentiable physics modules is not as easy as it seems to be. There are some subtle details we are willing to validate and share. More experiments will be added in a later version.\n\nOnce again, we sincerely appreciate all reviewers' comments and discussions to help improving the exposition of this work. THANK YOU ALL.\n", " The reviewer thanks the authors for the explanation and clarification of several details. The reviewer has read the modified submission and is willing to increase the rating to borderline accept for the added experimentation. \n\nThe reasons for not increasing to a clear accept or higher are due to concerns on presentation and novelty. \n1. Section 3 is divided into a \"Rendering pipeline\" and a \"Simulation pipeline\", with the first part talking about NeRF and the second about DiffPD. The integration of the two is perhaps demonstrated in Figure 2, which by itself is confusing and complicated. Figure 1 appears after Figure 2 and is also very hard to read. \n2. At the end of the day, this paper boils down to NeRF+DiffPD. The techniques developed in this paper are to integrate these two modules. On one hand, the paper did a good in job presenting the benefits by combining the two for soft dynamic objects. 
But on the other hand, it is unclear and probably true that it inherits limitations from both ends, which is not discussed in the paper.", " Dear reviewer,\n\nThanks for your helpful and practical advice! \n\nWe are starting to add the joint training comparisons as you suggested. Results will be added to the revised paper if we can catch the 8/9 deadline, or at least it will be in a future version.\n\nThank you for bringing the two papers to our attention. Those two are using multiview videos with articulated assumptions. In contrast, our method uses a single video for general objects (e.g. balls, human body, face, hands). We will add the discussions to our paper. \n\nBut for sure, we would tone down our claims, since NeRF is growing very fast and it's very likely there are some concurrent and similar works just like these two CVPR 2022 papers. CVPR 2022 is in this June after the Neurips deadline so it's hard for us to make the comparison.\n\nBest, Authors\n\n", " I appreciate the effort the authors made in the rebuttal. Overall, I like the idea of combining differentiable simulation with NeRF, but the analysis and solution is still not satisfactory.\n\n- Joint optimization. Given the additional information, it is still not clear whether the NeRF and physics parameters are jointly optimized or not. I would guess they are not after reading the response to other reviewers. \"Sequential vs from scratch\" is a different problem than \"joint vs separate\" optimization. It is fine to train sequentially due to efficiency considerations, but lack of joint optimization often gives suboptimal solutions. Coordinate descent-like strategies may allows joint optimization but in a sequential manner. It may be worth to study how to improve the speed of differentiable simulator and the joint optimization.\n\n- Overclaim/Missing references. The point of using sdf representation in dynamic reconstruction is rather incremental. It is also not fair to claim \"there is no paper known that can reconstruct high-quality dynamically changing surface mesh as this work does\". For instance, [A,B] use sdf representation and show good results on dynamic objects from rgb videos.\n\n[A] Noguchi, Atsuhiro, et al. \"Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[B] Yang, Gengshan, et al. \"Banmo: Building animatable 3d neural models from many casual videos.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.", " Dear reviewers,\n\nWe wish you had a great weekend! Thanks to your constructive suggestions, our paper has been revised with additional ablation studies and technical details. \n\nFor both rendering and simulation researchers, our method provides an easy-to-use pipeline reconstructing the geometry and dynamics. We also hope this technique can enable more general users to build and edit dynamic 3D models simply with their phones.\n\nCould you please kindly let us know if there is anything we can do or clarify that might improve your rating? Looking forward to your post-rebuttal discussion.\n\nBest, Authors", " \n> The challenges in combining differentiable rendering and differentiable physics are not clearly presented. As a result, the technical contribution appears weak. l183-l196 sparsely discussed an alternative solution but lacks in-depth analysis. 
How much does the proposed solution out-perform the alternative solution in computing and accuracy?\n\nThanks for your question. The challenge is to make such combination both efficient and differentiable. We have added an experiment in NEW Section 4.5 to show that our way of combining differentiable rendering and differentiable physics is 75% more efficient than the alternative naive solution. NEW Figure 2 is added to demonstrate how to differentiate the cycle-consistency physics loss.\n\nThis comparison is added (in NEW section 4.5 and Figure 8) where we 1) extract the hex mesh sequence from our rendering module; 2) simulate another mesh sequence using the simulation module; 3) compute the pairwise Chamfer distance between the two sequences and update the physics parameters.\n\nThe results indicate that the disconnected workflow has comparable performance to ours in terms of loss convergence. However, it takes 75% more computational time than ours because it reconstructs the mesh for each frame, while our method avoids the expensive reconstruction by querying the motion network to compute the cycle-consistency physics loss.\n\nThis experiment also verifies the correctness of our designed loss function, since it is almost as effective as the Chamfer distance, yet much more computationally efficient.\n\nThere are two more potential advantages of using our loss instead of such a disconnected workflow: (1) A mesh reconstructed purely from the SDF cannot reflect the inside stress and deformation gradients, as the reconstructed mesh elements are uniformly distributed. Our proposed cycle-consistency physics loss takes advantage of the learned motion fields, so it is capable of finding the point-wise correspondence and characterizing the deformation gradients. (2) Our cycle-consistency physics loss queries the motion networks, and can therefore pass gradients to the rendering module. The decoupled workflow, however, leads to a non-differentiable module for rendering.\n \n\n> Joint optimization for geometry and motion recovery, as well as physics parameter recovery. \n\nJointly training the entirely differentiable modeling-simulation-rendering pipeline looks elegant and was actually our initial goal. However, we can only do sequential optimization due to the performance issue. It could be an interesting and challenging future work to study the benefit of joint optimization for geometry and motion recovery, as well as physics parameter recovery. \n\nOur method is indeed trained in a sequential optimization manner for two reasons. (1) The joint optimization is too expensive. We have added an experiment and some profiling numbers to show this difference clearly in NEW Section 4.5 and Figure 7. (2) The physics simulation is only well-defined after the geometry reconstruction converges reasonably well.\n\n> The baseline comparison only shows the proposed method is better but does not provide insight and analysis. What is the difference between the proposed method and the baselines? What does the results suggest?\n\nThanks for asking! To the best of our knowledge, no existing NeRF methods can reconstruct high-quality dynamic geometry sequences from monocular RGB videos as ours does. Previous Dynamic NeRF papers (D-NeRF, NR-NeRF) use density for geometry representation. However, the mesh reconstruction using density values requires an ad-hoc cut-off threshold, making the surface mesh noisy. Our use of SDF + unbiased weighting scheme for volume rendering leads to much smoother geometry results. 
Our method also outperforms the original NeuS, as shown in the images and videos, since we have extended the static SDF representation to make it work well for dynamic scenes. \n\nIn addition, the use of a physics engine also helps us edit the videos in an intuitive and convenient manner. Existing NeRF editing tools cannot perform physically-plausible editing (or otherwise require manual interaction), as they do not have an underlying physics module to create natural and realistic deformation due to interaction. \n\nMoreover, previous learning-based video prediction methods do not adopt a powerful differentiable physics engine as ours does. Their prediction often lacks interpretability, while our video prediction (like adding a horizontal velocity in NEW Figure 7) is totally understandable, explainable, and predictable, because the governing physics laws are explicitly encoded inside the physics engine.\n", " > There is no ablation study. For example, I'd like to understand how important to jointly solve physics parameters together with differentiable rendering. How important are the loss terms?\n\nThank you for your suggestions. We have now included ablation studies to analyze the design and effects of the physics component in NEW Section 4.5. We compare our designed cycle-consistency physics loss with the chamfer distance. Ours is as effective as the widely used chamfer distance but is 75% more efficient than the alternative naive solution. Other vanilla NeRF losses are already studied in some previous papers. We would recommend [57] as a reference.\n\nWe also add an ablation study regarding the training strategy in NEW Section 4.5 on “Sequential Vs. Joint Optimization”. It clearly illustrates that joint training is too time-consuming while the benefit is limited. \n\n> l194: b(li, mi) needs to be explained. Otherwise, it's unclear what's the relation between mi and the reference frame.\n\nThanks for your suggestions. We have modified the description in NEW Equation 4. $b(I_i, m_i)$ is the motion field at the i-th frame that can map the mesh $m_i$ to the canonical frame. \n\n\n> l210-225 appears out of the blue. It will be helpful to start with the goal of the whole paragraph than diving into technical descriptions.\n\nThank you for your advice. We rewrite this paragraph by starting with our intention as below:\n\n“When rendering a point $p'$ for the edited image, we must first check if this point is being moved by our editing operation. An equivalent question is whether this point is located within the hexahedral mesh $m_i'$. The inside-outside-test would be trivial if all hexahedra in $m_i'$ remained in a regular grid”\n\n> Scene editing is not clearly defined in the context of this paper. Why is the physics simulator necessary for scene editing? Can't we edit the scene in an interactive manner?\n\nThanks for asking this question. Scene editing in our paper includes adding, deleting, moving, deforming objects, or even simulating deformation using different physics parameters in an existing video. We show the examples in our supplementary video and NEW Figure 5 and 6.\n\nLet’s take the editing of a bouncing-ball video as an example. Without our learned physics simulator, if an artist wants to edit the material with only interactive editing, they would first find the frames where the ball contacts the ground; then they would attach some handles to the ball and drag the handles to modify the shape. 
Most importantly, the user needs to deform the ball **every frame** and ensure the physical correctness and consistency. This process would take a significant amount of time and manual efforts of a very skilled animator and/or artist – yet the results and visual quality may still appear unrealistic and/or inconsistent.\n\nConversely, in our framework, users can simply modify the physics parameters in the learned simulator and our simulator produces reasonably plausible, realistic animation results automatically. Our proposed editing pipeline in NEW Figure 1 could make video editing and generation much, much easier and faster in VR and/or 3D interactive metaverse applications.\n\n> Why it is necessary to have a rigidity network in addition to a motion network, given the magnitude of motion represents rigidity?\n\nThis is an excellent question. We have added the comparison in NEW Section 4.5 on “Rigidity Map” and in Figure 9. The rigidity network can disambiguate moving foreground vs. static background. An intuitive alternative is to use the magnitude of the motion field as a criterion to separate dynamic foreground, i.e. large motion area corresponds to dynamic objects (and the converse). However, in some frames, dynamic parts might only contain small offsets from the canonical frame. For example, a cyclic bouncing ball could overlap with the canonical position and thus have offset values of zero, even though it should be classified as a dynamic area. \n\nNEW Figure 9 (a, b) filters the scene using the motion magnitude. It is not a good criterion, since a large portion of the background is still left, while the ball is already incomplete in (b). The separation using the rigidity map in Figure 9 (d) is much better. The reason is that the rigidity map collects the motion information from all frames and it is not restricted to a single frame.\n\n*We sincerely appreciate your comments. Please let us know if you have further feedback.*\n\nBest, Authors", " > The motivation of the differentiable physics simulator. In fact, most of the demonstration should be used for the simulator, but the current version went over this part hastily. \n\nThanks for your suggestions. We have added more details about the physics system in NEW Section 3.2 and Equation 3. More ablation studies relating to the physics component are conducted in NEW Section 4.5.\n\nAs for the motivation, our paper aims to contribute to both NeRF and differentiable physics.\n \nOn the NeRF side, there is no paper we know that can reconstruct high-quality dynamic surface mesh as ours does, which is shown in the video and in NEW Section 4.1. Moreover, the physics engine can greatly assist NeRF’s editing and interaction capabilities. A differentiable physics simulator can learn editable digital twins and animate dynamic objects in the scene automatically. Without such a differentiable physics engine, previous NeRF editing techniques would otherwise have to heavily rely on user interaction (manually designing motion trajectories and deformation handles), which is tedious and time-consuming, most probably physically incorrect and/or implausible. \n\nFor differentiable physics, our geometry reconstruction module can provide accurate, dynamically changing 3D models for the simulator. In our limited experience dealing with differentiable physics, previous works are largely constrained by the modeling technique and mainly start with predefined, relatively simple, and fixed meshes. 
NeRF, as both a modeling and rendering module, can introduce new opportunities in applications of differentiable physics, since it is well-suited to handle unconstrained, out-of-domain, and/or real-world data.The input videos in our paper are either captured in the real world or synthesized in Blender. While the governing equations behind the source videos are different from those in DiffPD, we show that differentiable physics can still work well regardless of the model mismatch. \n\n> The physical parameters to learn were unclear. The $L_{physics}$ is only L-2 distance between two meshes with bending operation. \n\nThanks for your question. The cycle-consistency physics loss, $L_{physics}$, is an important bridge between the rendering and simulation pipeline. We have added a diagram in NEW Figure 3 to show what parameters are learned by minimizing this loss. We also rewrite NEW Equation 4 to show how the rigidity network, motion network, and physics parameters are coupled and inter-connected in this loss function.\n\n> The mesh extraction method (Line 177) is unclear. How were the meshes extracted and how were they used as supervision signals for the physics simulator? \n\nThanks for letting us know the confusion. More details are added in New Section 3.2 Geometry Reconstruction: We sample over a 3D grid to find vertices within volume $\\mathcal{A}_i$. If a grid point $p$ satisfies NEW Equation 2, we add this voxel to the hexahedral volume mesh $m_i$. \n\nWe also add a visualization in NEW Figure 9 to show how we extract the mesh using the rigidity map and a region-of-interest bounding box. The extracted mesh is used to initialize the simulation. Most of the supervision signals come from the learned motion field. We show the computation of the cycle-consistency physics loss, $L_{pyhiscs}$, as defined by Equation 4\nin NEW Figure 2.\n", " > It seems that the simulator can be viewed as a post-processing step after D-NeRF optimization. (Line 242, 243) The integration of all the components was not well demonstrated.\n\nEach part of the rendering-simulation combination takes advantage of each other for editing, system identification, and digital twinning. NEW Figure 2 shows how they are coupled.\n\nDuring training, we are indeed performing a sequential optimization for the performance issue. We have added an ablation study in NEW Section 4.5 Sequential Vs. Joint Optimization to show that joint training is too time-consuming while the benefit it provides is limited. \n\nA comparison of actually just using the simulator as post-processing is demonstrated in NEW Section 4.5 ​​Physics Module. In this experiment, we (1) extract the hex mesh sequence from our rendering module; (2) simulate another mesh sequence using the simulation module; (3) compute the pairwise Chamfer distance between the two sequences and update the physics parameters. Results indicate that the disconnected workflow has comparable performance to ours in terms of loss convergence. However, it takes 75% more computational time than ours because it reconstructs mesh for each frame, while our method avoids the expensive reconstruction by querying the motion network to compute the cycle-consistency physics loss.\n\n\n> The abstract (Line 7) stated a design of a \"two-way conversion between the neural field and corresponding mesh representation...\". Where was the two-way explained in the paper?\n\nThanks for pointing out the need for clarification on this.\n\nNeural fields -> Mesh. 
The neural fields define a signed distance function that can be used to extract the mesh by the marching cube algorithm (OLD L171). We added more explanation in New Section 3.2 Geometry Reconstruction and New Figure 9 to visualize the process.\n\nMesh -> Neural fields. On the other hand, the mesh representation is used as a tool to edit the neural fields as described in OLD L197 (also NEW Figure 1). The deformed mesh can induce a piecewise linear bending field by interpolation.\n\nThe two-way conversion enables our method to take advantage of both the implicit and explicit representation, making it easier to edit and animate the neural field.\n\n\n> In Line 144, $t$ was not defined.\n\nThanks! We have added the definition to the revised version. $t$ is the distance from the camera origin center ($o$), to a point along the ray in the direction $v$.\n\n> The connection between the simulator and the previous parts (SDF+motion+rigidity networks) was not clear. Does the simulator also have static vs motion decomposition?\n\nWe have added a diagram in NEW Figure 2 to better show the connection. Yes, the simulator also has decomposition and only simulates the moving parts. The decomposition is from the rigidity networks. Its visualization is also added in NEW Figure 9.\n\n> The removal editing video showed remaining sparse object pieces. Is there an explanation?\n\nGood question! Yes, we have a conjecture about the artifacts. Since the MLP fields are continuous, there are some close-to-zero SDF values (i.e. high probability density) near the deleted object area. Therefore, the rays can terminate in those regions randomly due to the sampling strategy in volume rendering, even if the main object is already gone. It could be an interesting future work on how to improve the editing quality.\n\n*We sincerely appreciate your comments. We hope that our response has addressed your concerns, and turns your assessment to the positive side. If you have any additional questions, please feel free to let us know during the rebuttal window.*\n\nBest, Authors", " > I would appreciate it if you could make your contribution clear. What is the proposed method solves/enables for physics reasoning,\n\nOur paper exploits the model and representation from NeRF as input to the differentiable physics. This step enables us to couple NeRF and differentiable physics together and apply them to more complex real-world applications, by estimating geometry, appearance, and physical properties all from one monocular video.\n\nBased on our prior experience working with differentiable physics, the domain and data are very important. Previous differentiable simulation works usually demonstrate results using simple scenes as input, and they assume that the optimization starts with an existing initial mesh, whereas we make no such assumptions. This paper aims to provide an easy way to create high-quality 3D models directly from out-of-domain videos, such that the powerful differentiable physics engine can be applied and adopted to more diverse real-world scenarios. Although we only study soft body dynamics in this paper, other possible future avenues on fluids, cloth, and rigid bodies can also be very important for creating digital twins. 
\n\nFurthermore, this coupling of differentiable physics and NeRF to video prediction and editing can reproduce the physics environment directly from an existing video, and then generate physically-plausible variations automatically.\n\nWe design a differentiable rendering-modeling-simulation pipeline, as shown in the computational graph (NEW Figure 2). In its current state, our training strategy does not fully utilize the gradients to improve the geometry reconstruction using the cycle-consistency physics loss (NEW Section 4.5 and Figure 7) and is left for future work (OLD Line 244) to explore how such a closed-loop approach can further improve the visual appearance. We hope that the power of physics reasoning will introduce new directions for NeRF-based rendering and modeling going forward.\n\n> The method section (particularly the physical component) lacks many details, making it hard to evaluate the technical contribution there.\n\nThanks for pointing this out. The objects in our demos are soft bodies with co-rotated materials integrated by projective dynamics. We have added more details about the physics module in NEW Section 3.2. \n\n\n> It's hard to evaluate the contribution of the physical component in this case. It seems we can complete the second stage by directly calling DiffPD once reconstruction is done. There needs to be a controlled experiment, i.e., dynamic nerf + surface extraction + DiffPD.\n\nThanks for your proposal. A controlled experiment is added (in NEW section 4.5 and Figure 8) where we (1) extract the hex mesh sequence from our rendering module; (2) simulate another mesh sequence using the simulation module; (3) compute the pairwise Chamfer distance between the two sequences and update the physics parameters.\n\nThe results indicate that the uncoupled workflow has comparable performance to ours in terms of loss convergence. However, it takes 75% more computational time than ours because it reconstructs mesh for each frame, while our method avoids the expensive reconstruction by querying the motion network to compute the cycle-consistency physics loss.\n\nThis experiment also verifies the correctness of our designed loss function, since it is almost as effective as the Chamfer distance and significantly reduces the overall computation time. There are two more potential advantages of using our loss instead of such a decoupled workflow: (1) A mesh reconstructed purely from the SDF cannot reflect the inside stress and deformation gradients because the reconstructed mesh elements are uniformly distributed. Our proposed cycle-consistency loss takes advantage of the learned motion fields, so it is capable of finding the point-wise correspondence and describing the deformation gradients. (2) Our cycle-consistency loss queries the motion networks, and can therefore pass gradients to the rendering module (which is added in NEW Section 4.5 and Figure 7). The decoupled workflow however leads to a totally non-differentiable module for rendering unlike ours.\n", " > The method seems to be a two-stage optimization procedure according to OLD Line 242-243. Following the previous comment, it's not convincing to me if we should call the proposed work the first approach that simultaneously optimizes the shape, appearance, and physical parameters (OLD Line 76). The entire pipeline seems to be a dynamic extension of Neus [61] + hexahedral mesh extraction + diffPD.\n\nWe have added a computational graph in NEW Figure 2 to illustrate the forward and backward flow of our entire pipeline. 
Our method is indeed trained in a sequential optimization manner for two reasons. (1) The joint optimization is too expensive. We have added an experiment and some profiling numbers to show this difference clearly in NEW Section 4.5 and Figure 7. (2) The physics simulation is only well-defined after the geometry reconstruction converges reasonably well. We believe that it would be an interesting but challenging future work to improve the physics training strategy and explore how physics priors can further improve or refine geometry reconstruction (OLD Line 244). \n\nAs for the claim in OLD Line 76, The word 'simultaneously' is used because we treat our pipeline as a whole, where the geometry, appearance, and dynamics parameters are all the results of the sequential optimization. We now realize that this wording might be confusing, and have deleted it in the new version. To the best of our knowledge, our method is the first approach that estimates the shape, appearance, and physical parameters using merely a single monocular RGB video. However, we also tone down the claim to avoid any confusion. Now our claim for contribution is “We present a framework capable of estimating and editing the geometry, appearance, and physics parameters directly from a single monocular video.” \n\n\n\n> Line 171 said the SDF to hexahedral mesh conversion is done through marching cube, which is not a differentiable operator. \n\nMarching cube and sampling-based mesh reconstruction are indeed non-differentiable. \n\nThanks for your questions. We will add a diagram to show the computational graph and how the gradients flow (in NEW Figure 2). We do not utilize the gradients of the mesh reconstruction. The differentiable path connecting the rendering module and the simulation module is the rigidity and motion network and cycle-consistency loss described in OLD L189-L196 (NEW Equation 4). In this paper, we mainly use the differentiable physics engine to perform physics parameter estimation and scene editing. We hope this work will inspire other follow-up work on the use of differentiable physics to improve both NeRF-based reconstruction and rendering. \n\n\n> The physics part is not clear in both the method description and experiments. \nA: Thanks for this suggestion. We have added more details about the physics engine in NEW Section 3.2. \n\nMore training details are included in New Sections 3.3 and 4.3. We choose to use co-rotated material to model the bouncing ball and estimate Young's modulus and vertical acceleration in two separate optimization tasks. The cycle-consistency physics loss is minimized for 100 epochs using Adam optimizer with a learning rate of 0.01. The optimization starts with Young's modulus of $2\\times 10^5 Pa$ and acceleration of $0m/s^2$. The estimated Young's modulus is $2.96\\times 10^6 Pa$ and acceleration is $3.78m/s^2$.\n\n\n> There needs to be a quantitative measurement for the physics component as a part of the major contributions. \n\nWe have added the experiments as suggested and measure the Chamfer Distance as a quantitative metric in NEW Section 4.5. This experiment verifies the correctness of our designed loss function. 
It also shows that our physics component is significantly more efficient than a completely decoupled and disconnected workflow, where a chamfer loss is computed against each pair of sampled meshes and/or simulated meshes.\n", " > the physics parameters to optimize are only one-dimensional in both \"gravity\" and \"material\" case - - does your code support joint optimization of multiple physics parameters? \n\nA: As a proof of concept, the experiments shown in our paper only optimize and edit one parameter at a time sequentially, as in standard sequential optimization that achieves faster convergence in practice. We made the description more accurate in NEW Section 4.3. “We estimate Young's modulus and vertical acceleration respectively in two separate optimization tasks.” \n\nThanks for your suggestions! We are happy to have better documented demo scripts and include multiple parameters to estimate when releasing our repository.\n\n> It seems you are initializing the gravity parameter to be 0.0378, which coincides with the final reported results of 3.78 $m/s^2$.\n\nA: The code snippets with 0.0378 are used to render the results of editing horizontal velocities (video 2:40). We first estimate the gravity and then use it as a known variable for editing. During editing, we set the physics_epoch=1. The 0.0378 -> 3.78 is due to the time scaling. We set $v_{i+1} = v_i + acc$ for each time step. The time step is 0.01 so the acceleration in one second is 0.0378 / 0.01 = 3.78. We also provide the learning curve in NEW Figure 8.\n\nThanks for pointing this out. The submitted code was packaged when we also used it for making the video. We apologize for the poor version control. In our released code, we will avoid reusing the same function name for both editing and training to improve the readability. Detailed tutorial scripts for training and editing will also be provided in the public release.\n\n*We sincerely appreciate your comments. Please let us know if you have further feedback.*\n\nBest, Authors\n", " We sincerely thank all reviewers for their insightful and helpful comments, and are glad that they found our work to be “tackling an interesting and challenging problem” and “having impressive results”. Their constructive suggestions help us identify confusing expressions or inadequate explanations due to our assumptions. We have added a few more qualitative and quantitative experiments to explain how our design choices benefit the physics part, as well as the entire pipeline. More visualization and technical details are also added to improve the presentation of our paper. We cordially invite the reviewers to read our newly uploaded revision to the paper. To avoid confusion, we use “NEW Section xxx” when referring to the rebuttal revision and “OLD Line xxx” for the prior reviewed version.\n\nFirst, we would like to better motivate our work. The goal of this paper is to bridge two recent topics of significant and growing interest: NeRF and differentiable physics. Together they enable us to explore the potential applications of ML in creating digital twins and metaverse applications. This paper aims to demonstrate that the two techniques can take advantage of each other.\n\nOn the NeRF side, there is no paper known that can reconstruct high-quality *dynamically changing* surface mesh as this work does, which is shown in the video and in NEW Section 4.1. Moreover, the physics engine can greatly help NeRF’s editing and interaction capabilities. 
A differentiable physics simulator can learn editable digital twins and animate dynamic objects in the scene automatically. Without such a physics engine, previous NeRF editing techniques heavily rely on user interaction (manually designing motion trajectories and deformation handles).\n\nFor differentiable physics, our geometry reconstruction module can provide accurate, dynamic 3D models for the simulator. In our limited experience dealing with differentiable physics, previous works are largely constrained by the modeling technique and mainly start with predefined, relatively simple, and fixed meshes. NeRF, as both a modeling and rendering module, can introduce new opportunities in applications of differentiable physics, since it is well-suited to handle unconstrained, out-of-domain, and/or real-world data. The input videos in our paper are either captured in the real world or synthesized in Blender. While the governing equations behind the source videos may be different from those in DiffPD, we show that differentiable physics can still work well regardless of the model mismatch. \n\nThis paper explores the concept of integrating a NeRF-based representation with differentiable physics. We hope that this will inspire more follow-up works along this line of investigation – to model, render, and recreate dynamic scenes with highly complex physical behaviors, such as splashing water, flowing willows in the wind, and deformable and articulated robots from the real world. We hope that this direction will open many new possibilities to create dynamic digital twins and Metaverse using ML.\n\nThere are also many great questions. For example, reviewer zGis asked what our physics system is. This work only deals with soft body dynamics. But there could be other future work dealing with cloth, rigid bodies, fluids, granular materials, etc. They are all very interesting and challenging. Reviewer CXx9 asked about the artifacts in the editing. We believe these are caused by the SDF-based volume rendering, which could possibly be another interesting NeRF problem to investigate. Reviewer 1MJg asked about the potential benefit of joint optimization. Jointly training the fully-differentiable modeling-simulation-rendering pipeline looks elegant and was actually our initial goal. However, the runtime performance is a major issue. A faster simulation or smarter training tricks are needed to fully exploit this system. Therefore, we did not claim the physics prior can help the geometry reconstruction, and leave the joint optimization as future work in our paper (OLD Line 243). Therefore, our current system opts for sequential optimization instead, in favor of higher performance.\n\nIn response to the reviews, we have added some extra explanations. Here is a summary of the key changes.\n\n- Reword and clarify the claims about our contributions in NEW Section 1. \n- Add more details about the physics engine in NEW Section 3.2.\n- Add a diagram to illustrate the computational graph in NEW Figure 2.\n- Add more details on training in NEW Sections 3.3 and 4.3.\n- Add three ablation studies to show the effectiveness of the physics module, our sequential training strategy, and the foreground selection process in NEW section 4.5.\n\n*We hope that our response has addressed your concerns. 
If you have any additional questions, please feel free to let us know during the rebuttal window.*\n\n*Thank you again for your time and consideration.*\n\n", " The paper presents a method that jointly optimizes shape, appearance, and foreground segmentation from a monocular video based on neural fields. The author then fits the physical parameters of the scene. The experiments demonstrate that the method works better than dynamic nerf and nonrigid nerf in nonrigid neural rendering. Further, the authors demonstrate interesting video editing experiments based on the learned physics. \n\n Overall the quality of dynamic reconstruction and neural rendering from the author-provided data shows superior performance compared to prior SOTA. That said, I have a few concerns about the physical component as well as the full pipeline. I am currently on the borderline and open to going on either side based on whether the rebuttal addresses my major concerns on \"physics\" well\n\n**Pros**: \n- The paper tackles a very interesting yet challenging problem; \n- The solution is technically sound\n- Experiments show superior results to strong competing algorithms. \n- Code is provided and the video shows impressive editing results. \n\n**Cons**: \n- The method section (particularly the physical component) lacks many details, making it hard to evaluate the technical contribution there. \n- The experiment section lacks a thorough study of the quality of the physical component. \n- The two-stage framework is inconsistent with what is claimed in the intro. \n **Two-stage procedure**: the method seems to be a two-stage optimization procedure according to Line 242-243. The first stage is optimizing the shape and appearance, and the second stage to optimize the physical parameters. This raises a few concerns. \n- Following the previous comment, it's not convincing to me if we should call the proposed work the first approach that simultaneously optimizes the shape, appearance, and physical parameters (Line 76). The entire pipeline seems to be a dynamic extension of Neus [61] + hexahedral mesh extraction + diffPD. \n- It's hard to evaluate the contribution of the physical component in this case. It seems we can complete the second stage by directly calling DiffPD once reconstruction is done. I would appreciate if you could make your contribution more clear. What is the proposed method solves/enables for physics reasoning, \n- Line 171 said the SDF to hexahedral mesh conversion is done through marching cube, which is not a differentiable operator. Hence, it is not clear if the entire pipeline \"can\" be learned end-to-end. \n- Following the previous comment, there needs to be a controlled experiment, i.e., dynamic nerf + surface extraction + DiffPD.\n\n**Physics parameter estimation**: the physics part is not clear in both the method description and experiments. Hence it's hard to evaluate the technical innovation and the performance. \n- The method section for the physics component is not clear. First, it's unclear what physical parameters the method optimizes and what are parameters the author needs to provide. Besides, it is unclear what the physical model is (e.g., rigid body physics or soft-body physics). Thirdly, the author should provide some detail on optimizing physical parameters to make it self-inclusive and reproducible (e.g., initialization and optimizer) \n- There needs to be a quantitative measurement for the physics component as a part of the major contributions. 
The current experiment does not seem to provide such. One could do this either in simulation or partially in the real world (e.g., measuring the acceleration or the mass of the basketball based on a video sequence) \n\n**Minor**: \n- Since the physics component is not very clear in the paper, I briefly checked the implementation provided by the author (thank you for doing this) and have two questions: 1) the physics parameters to optimize are only one-dimensional in both \"gravity\" and \"material\" case - - does your code support joint optimization of multiple physics parameters? 2) It seems you are initializing the gravity parameter to be 0.0378, which coincides with the final reported results of 3.78 m/s^2. See above", " This paper presented a method for learning the geometry and physics parameters of a dynamic scene. The input source is a monocular RGB video. The paper decoupled the learning of dynamic scenes into a static reference neural field and a deformation field. The main structure of the learning part drew similarity to [57], including the offset field, the divergence field with rigidity masks. On top of these, the paper performed mesh reconstruction for the video sequence and send those meshes to a differentiable physics simulator to optimize between the extracted and reconstructed meshes. The method also allows scene editing. **Strength**\n1. The results presented are pretty impressive. Both quantitative and qualitative results were shown with comparison to previous approaches.\n\n**Weakness**\n1. The paper is hard to follow and not well organized, especially the motivation of the differentiable physics simulator. In fact, most of the demonstration should be used for the simulator, but the current version went over this part hastily. Therefore, it left much confusion to me. For example, \n1.1 The physical parameters to learn were unclear. The L_{physics} is only L-2 distance between two meshes with bending operation. \n1.2 The mesh extraction method (Line 177) is unclear. How were the meshes extracted and how were they used as supervision signal for the physics simulator?\n2. It seems that the simulator can be viewed as a post-processing step after D-NeRF optimization. (Line 242, 243) The integration of all the components was not well demonstrated. 1. The abstract (Line 7) stated a design of a \"two-way conversion between the neural field and corresponding mesh representation...\". Where was the two-way explained in the paper?\n\n2. In Line 144, \"t\" was not defined.\n\n3. The connection between the simulator and the previous parts (SDF+motion+rigidity networks) was not clear. Does the simulator only also have static vs motion decomposition? \n\n4. The removal editing video showed remaining sparse object pieces. Is there an explanation? This paper does not have limitations listed.", " The paper proposes a method to recover geometry, motion, and physics parameters from a single video. It recovers geometry and motion with differentiable volume rendering similar to Dynamic NeRF. To recover physics parameters, it uses Diff-PD. It shows results on synthetic and real data with relatively small motion. It also demonstrated applications of user editing. **Originality and Significance**\n- (+) This work is the first to combine prior works of dynamic NeRF and differentiable physics in a joint optimization framework. \n- (-) However, it is not well-motivated why we need this joint optimization. Why not running differentiable rendering and simulation in a step-by-step manner? 
Where does joint optimization help in geometry and motion recovery, as well as physics parameter recovery? To give readers a better idea, experiments and a thorough analysis is needed. \n- (-) The challenges in combining differentiable rendering and differentiable physics is not clearly presented. As a result, the technical contribution appears weak. l183-l196 sparsely discussed an alternative solution but lacks in-depth analysis. How much does the proposed solution out-performs the alternative solution in compute and accuracy?\n\n**Clarity**\n- (-) The paper is difficult to follow in general. Some sentences are not logically connected. See questions. \n\n**Experiments**\n- (+)The quantitative results have nice details, benefiting from NeuS representation. The material parameter estimation experiments is interesting. \n- (-) The baseline comparison only shows the proposed method is better but does not provide insight and analysis. What is the difference between the proposed method and the baselines? What does the results suggest? \n- (-) There is no ablation study. For example, I'd like understand how important to jointly solve physics parameters together with differentiable rendering. How important are the loss terms? - l194: b(li,mi) needs to be explained. Otherwise, it's unclear what's the relation between mi and the reference frame.\n- l210-225 appears out of the blue. It will be helpful to start with the goal of the whole paragraph than diving into technical descriptions.\n- Scene editing is not clearly defined in the context of the this paper. Why is the physics simulator necessary for scene editing? Can't we edit the scene in an interactive manner? \n- Why it is necessary to have a rigidity network in addition to a motion network, given the magnitude of motion represents rigidity? Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "4BX9OnyYjuO", "nips_2022_QXLue5WoSBE", "bGGBmePHRK", "WIgwaLlI-QY", "lLbKWHX2_xw", "3toCMXd--Zj", "uESZGMEi6kz", "uESZGMEi6kz", "bGGBmePHRK", "bGGBmePHRK", "RO3fZkQ-MO", "RO3fZkQ-MO", "RO3fZkQ-MO", "nips_2022_QXLue5WoSBE", "nips_2022_QXLue5WoSBE", "nips_2022_QXLue5WoSBE", "nips_2022_QXLue5WoSBE" ]
nips_2022_17KCLTbRymw
SGAM: Building a Virtual 3D World through Simultaneous Generation and Mapping
We present simultaneous generation and mapping (SGAM), a novel 3D scene generation algorithm. Our goal is to produce a realistic, globally consistent 3D world on a large scale. Achieving this goal is challenging and goes beyond the capacities of existing 3D generation or video generation approaches, which fail to scale up to create large, globally consistent 3D scene structures. Towards tackling the challenges, we take a hybrid approach that integrates generative sensor model- ing with 3D reconstruction. Our proposed approach is an autoregressive generative framework that simultaneously generates sensor data at novel viewpoints and builds a 3D map at each timestamp. Given an arbitrary camera trajectory, our method repeatedly applies this generation-and-mapping process for thousands of steps, allowing us to create a gigantic virtual world. Our model can be trained from RGB-D sequences without having access to the complete 3D scene structure. The generated scenes are readily compatible with various interactive environments and rendering engines. Experiments on CLEVER and GoogleEarth datasets demon- strates ours can generate consistent, realistic, and geometrically-plausible scenes that compare favorably to existing view synthesis methods. Our project page is available at https://yshen47.github.io/sgam.
Accept
The paper explores generation of a volumetric (voxel map via hashing, a la KinectFusion et seq) from a sequence of 2D images. This is achieved by synthesizing sensor images, and feeding them into a mapping module (like KinectFusion). As the reviewers note, this is an interesting goal, and the approach is reasonable, and no novelty of the overall system was questioned. However, the reviewers concur that the synthetic dataset is not sufficient to give confidence that the system is effective in practice. The rather more limited Google Earth (GE) examples, do, however, provide evidence that this strategy is effective. As a reader, the most important figure for me is Fig 3 in the supmat - a qualitative view of the rerendered GE scene. It is quite clear that the rerendered scenes have learned the characteristic 3D structures of the GE datasets, indicating that the scene-specific training data is a strong contributor to the results. The paper would have been stronger if such qualitative results had been shown for ACID scenes, either trained on Google maps (even if the training needed to be e.g. coastline specific), or trained on depth-from-stereo as indicated in the rebuttal.
val
[ "hix-9MqaCIp", "GSa13V8dXXs", "R0aJ5pGnN2e", "VFt-hKF_tLJ", "84d8L3LSx7u", "QPdKLA5aWdM", "y-JwMIVf4YT", "zAC5VWhLkNr", "1SvEP7yVVYz" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank Reviewer QFY4 for the prompt response! Please see the clarifications below.\n\n**Wordings:** We agree with Reviewer QFY4 that the wording is confusing. We will change from \"*accurate depth*\" to \"*complete geometry*\". Thanks for pointing it out!\n\n**Further clarifications on geometry requirement:** In this paper, we focus on the challenging task of *large-scale, long-term, globally consistent 3D scene generation*. In contrast to prior art where most of them targeted 2D image generation and only considered 2D metrics, \nwe are interested in not only generating appealing, realistic appearances (in the form of 2D images), but also reconstructing a coherent 3D structure (in the form of 3D maps). Therefore, *in order to properly evaluate the effectiveness of our model*, both in terms of 2D and 3D metrics, we must rely on datasets that have complete geometry. Unfortunately, both ACID and RealEstate10K lack GT geometry, preventing us from assessing the quality of our 3D generations. We thus build two new large-scale, diverse dataset with GT geometry so that we can evaluate the 3D metrics objectively.\n\n**Extending SGAM to ACID/RealEstate10K:** Thanks for the suggestion! We agree with Reviewer QFY4 that applying SGAM to the two existing benchmarks is feasible. While both datasets do not provide GT geometry and are designed for 2D generative models and view extrapolation, we can still exploit muli-view stereo to compute proxy depth maps and use them to supervise SGAM. As discussed in the original rebuttal, SGAM is robust and is able to learn an effective codebook even when facing noise in geometry. We, however, note that even if we have trained such a model, we will still only be able to assess the perceptual quality of 2D RGB images on these datasets, which is only one of the two primary goals of this paper (the other is realistic, consistent and large-scale 3D scene generation).\nDue to resource constraints, we do have the capacity to start the training right now. We will try out best to include such results in the final version! ", " I appreciate the authors for the detailed response. \n\nI may have been confused with line 216 \"Unfortunately, existing static, large-scale 3D datasets, such as ACID [40] and RealEstate10K [84], do not provide accurate depth.\". Could the authors ellaborate the meaning of this sentence? I thought this was to justify the reason of creating the CLVEVR-Infinite dataset, which other real-world datasets (ACID, RealEstate10k) do not provide. I acknowledge that CLEVR-Infinite seems to be a great way of computing the generative metrics because we can obtain the ground truth geometry, but the authors mentioned \"accurate depth\" instead of \"ground truth geometry\", which confuses me.\n\nAlso, if no exact depth is required, could the authors also explain why the proposed method was not evaluated on either ACID or RealEstate 10k dataset? These seem to be the standardized dataset for evaluating 3D scene generation tasks. If the answer is to focus on \"large-scale, long-term, globally consistent generation\", I do not think that achieving SOTA results on these datasets are crucial. However, for future works, I think adding the results can improve the paper by showing. This may be the weakness of the proposed method, but as long as the mentioned strength exists clearly, I don't think it will hurt the contribution of this work. 
\n\n ", " **Simultaneous generation and mapping is critical:** We strongly believe SGAM is a critical and innovative step towards perpetual 3D scene generation.Through this paper, we also hope to convey the importance of explicitly 3D modeling in large-scale scene generation. While we indeed exploit VQ-GAN and KinectFusion in SGAM (*i.e.*, leverage KinectFusion for volumetric map building, adopt VQ-GAN for generative sensing, etc), *why they are used* and *how they are used* are all carefully designed. The resulting framework is generic, interpretable, and can be applied to various setup. It is not just a simple extension. Also, exploiting existing algorithms to realize a novel idea does not mean there is no technical contribution. We hope the reviewers, in particular **Reviewer GmLE**, can acknowledge this.\n\n\n**Repeatedly applying VQ-GANs will not work:** We want to highlight our task is a full 3D generation instead of perpetual 2D image generation on the image plane. We are **NOT** repeatedly applying VQ-GAN, which will not work for this task. \nWe need at least two critical components to achieve our goal of simultaneously perpetual sensor and 3D shape generation: 1) render novel views at a new pose by reflecting the stereo-parallax. 2) building a 3D representation ensure consistency over the long run. Solving these two challenges is the core intellectual merit of this paper. \n\n**Experiments on real-world data:** We want to highlight that GoogleEarth-Infinite is a challenging real-world dataset that we collect from GoogleEarth. The FID scores shown below demonstrate our performance for long-term video sequence generation. \n\nFID scores on generated GoogleEarth-Infinite dataset (unrolling 60 frames) \n| InfiniteNature | GFVS-implicit | GFVS-explicit |Ours |\n| -------- | -------- | -------- |-------- |\n| 182.6 | 160.4 | 133.1 | **79.26** |\n\n\n**Camera poses beyond near nadir views:** The scenes shown in the paper are indeed generated with near nadir views. To demonstrate that SGAM is able to generate scenes from less structured trajectories and different camera viewpoints, we exploit two new setup: (i) we adopt a spiral trajectory and rotate the cameras along the yaw axis such that they always \"look at\" the center of the scene; (ii) we fixed the origin of the camera and rotate 360 degrees along its roll axis. Due to computational resource limits, we did not re-train our models with various camera angles. Instead, we simply adopt the model trained with fixed angles and perform scene generation. Surprisingly, despite such a domain gap, SGAM is still able to produce coherent, reasonable 3D scenes. We refer the readers to the revision for the qualitative results (see supp **Figure 2** ).\n\n\nIn addition, following the reviewers suggestion, we are also running SGAM on the KITTI-360 dataset, an urban self-driving dataset from a first-person perspective. Due to time and resource limits, the model is currently training. Validation performance shows promising results in the generative sensing model.", " **Limitation on voxel scene representations:** We agree that the resolution of the voxels will impact how much fine-grained details we can capture. It is a hyper-parameter that needs to be determined. The voxel-hashing we used for mapping is a popular strategy for scaling up volumetric representation at high precision. One can potentially adopt other special data structures such as Octotree to reduce memory usage and computational cost, besides voxel hashing. Thanks for the suggestions! 
We will include relevant discussion in the final version.\n\n\n**Results on complex scenes:** To validate whether our method can handle more challenging scenarios and first-person perspectives, we train SGAM on the KITTI-360 dataset. KITTI-360 is a challenging self-driving dataset. It contains rigid objects (*e.g.,* cars), fine-grained geometry (*e.g.,* trees), reflections, and most importantly noisy sensory data. This allows us to evaluate how robust SGAM is. Our model is currently still training. \nBased on the intermediate checkpoint, we can achieve **PSNR 23.80, SSIM 0.838, LPIPS 0.122, FID 37.88** on one-step prediction. Some preliminary qualitative results are shown in the revision. Due to computational resource limits, we will include the results on indoor synthetic scenes in the final version. We thank R2 for the great suggestion. We choose KITTI-360 since it allows us to verify multiple aspects at the same time (*e.g.,* robustness to the depth measurement).\n\n\n**On the convergence of training:** As mentioned in Sec. 3.4 and Sec. 4.3, we adopt a two-stage training strategy. Each stage is crucial. Missing either stage could have a detrimental effect. The color biases in CLEVR arise from the environmental map that we used, rather than the random seed. We will change the environmental map and re-train our model to mitigate the issue.\n\n**Generating scenes from less structured trajectories:** The scenes shown in the paper are indeed generated with fixed camera angles. To demonstrate that SGAM is able to generate scenes from less structured trajectories and different camera viewpoints, we explore two new setups: (i) we adopt a spiral trajectory and rotate the cameras along the yaw axis such that they always \"look at\" the center of the scene; (ii) we fix the origin of the camera and rotate 360 degrees along its pitch axis. Due to computational resource limits, we did not re-train our models with various camera angles. Instead, we simply adopt the model trained with fixed angles and perform scene generation. Surprisingly, despite such a domain gap, SGAM is still able to produce coherent, reasonable 3D scenes. We refer the readers to the revision for the qualitative results (see supp **Figure 2** ).", " **Robustness to depth measurement:** Our approach *does not* require accurate depth, even during training. In fact, the \"GT depth\" of our GoogleEarth-Infinite dataset contains noise. The \"GT depth\" is obtained by rasterizing the coarse meshes (which are built from SfM and MVS point clouds) that we crawled with the Google API. Therefore, by nature it is merely a proxy geometry of the real world and is far from accurate. Fortunately, with the help of the VQ-based generative sensing module, we can learn to de-noise with the quantized codebooks and produce perpetual 3D scenes without drifting. We also note that we use the same \"GT depth\" to train all baselines as well as our method. To further showcase the robustness of our approach, we are conducting experiments on KITTI-360. Specifically, we use the stereo estimation from deep nets to serve as the \"GT\" to train our model. Due to computational constraints, the training is still in progress. We, however, still provide some promising preliminary one-step prediction results in the revision. Please see the response to Reviewer SFD9 for more details on KITTI.\n\n**Results on real-world dataset:** We stress that our goal is to enable large-scale, long-term, globally consistent, perpetual 3D scene generation. 
Our method can generate at a much larger scale without domain drift. To verify our claim, we compute the FID score across all competing methods on GoogleEarth-Infinite by unrolling the perpetual generation for 60 steps with different initial images. All the methods follow the same predefined trajectory. Results show that our method significantly outperforms other methods: \n\nFID scores on generated GoogleEarth-Infinite dataset (unrolling 60 frames) \n| InfiniteNature | GFVS-implicit | GFVS-explicit |Ours |\n| -------- | -------- | -------- |-------- |\n| 182.6 | 160.4 | 133.1 | **79.26** |\n\nWe also would like to highlight the qualitative results shown in Fig. 7. \n\nWe also note that InfiniteNature predicts the next frame by directly warping pixels and refining the results. In the short term, such an inpainting-like scheme looks slightly more faithful (since it aims to fill the residual) compared to the images generated from the discrete bottleneck, resulting in superior one-step prediction performance. However, we find the artifacts will thus accumulate over time and result in severe drifting compared to GFVS and ours. In contrast, ours uses vector quantization in latent space, inducing a strong prior and constraints. This expressiveness vs. prior trade-off makes our approach slightly worse than InfiniteNature in one-step prediction (PSNR: 23 vs. 24, higher is better). Nevertheless, this discrete bottleneck helps prevent SGAM from domain drift, resulting in significantly better long-term image generation quality (FID: 79 vs. 182, lower is better). \n\n\n\n**Flickering in videos:** Thanks for pointing this out! It is simply a visualization bug when we generate the video. The actual generation is continuous and the produced scene changes smoothly and coherently.", " We thank the reviewers for their insightful comments and valuable suggestions. We are very excited that the reviewers appreciated the novelty and soundness of our approach (*i.e.,* integrating deep generative models with prior art on mapping) [**Reviewer QFY4, Reviewer SFD9**], found the paper interesting and well-written [**Reviewer QFY4, Reviewer SFD9**], and acknowledged our extensive evaluation and impressive results on the large-scale synthetic dataset [**Reviewer QFY4, Reviewer SFD9**].\n\n---\n\n**Novelty and technical contributions**\n\nAs demonstrated in the paper (and further below), *simultaneous generation and mapping is the key to producing a large-scale, realistic, globally consistent 3D world*. Specifically, by grounding scene generation with mapping, one can generate a diverse set of scenes that *are coherent with* existing appearance and structure; it also allows one to reproduce mapped regions *with consistency*. Through iteratively updating the map, one can further expand the generation process to an extremely large scale *without drifting*. \n\nWe strongly believe SGAM is a critical and innovative step towards perpetual 3D scene generation. Through this paper, we also hope to convey the importance of explicit 3D modeling in large-scale scene generation. While we indeed exploit VQ-GAN and KinectFusion in SGAM (*i.e.*, leverage KinectFusion for volumetric map building, adopt VQ-GAN for generative sensing, etc), *why they are used* and *how they are used* are all carefully designed. The resulting framework is generic, interpretable, and can be applied to various setups. It is not just a simple extension. 
Also, exploiting existing algorithms to realize a novel idea does not mean there is no technical contribution. We hope the reviewers, in particular **Reviewer GmLE**, can acknowledge this.\n\n---\nWe now address the concerns of each reviewer individually. We have also included new experimental results per reviewers' request in the revision (**highlighted in blue**) and supplementary video. We strongly encourage the reviewers to take a look at **our revised supplementary material**.", " The work tackles the problem of 3D scene generation, learning from sequences of RGB-D images and its poses. The main contribution of the work comes from proposing an algorithm that uses a generative sensing module and a mapping module. The proposed method was evaluated on CLEVR and GoogleEarth dataset and achieves SOTA results on the former. The method is also more efficient compared to previous results.\n Strengths: \n\n- The proposed method SGAM uses generative sensing module and mapping module, where the former is a VQGAN and the latter is mostly KinectFusion using volumetric representations. The formulation is a sound integration of the neural network with previous methods. \n\n- The paper is well written and easy to understand\n\n- The proposed method achieves SOTA results on CLEVR dataset on image based metrics and also on 3D generative metrics (MMD, JSD, 1-NNA…). \n\n- I enjoyed watching the results in video. I appreciate the authors for the hard work. \n\n\nWeaknesses: \n\n- My main concern lies in the experimental section, especially on real-world datasets. The main experiment was mainly on the CLEVR dataset. As the authors mentioned, the essence of scene generation comes from generating diverse but realistic results. Although the CLEVR dataset is a good way to show the distributional similarity between the generated set and the real distribution, the main limitation is that this is a synthetic dataset. The appendix in Table 4 shows that the method shows worse results on real dataset. I would like to hear from the authors regarding the discrepancy on the evaluation metrics on real and synthetic datasets.\n\n- Compared to other methods [1, 2], the method requires accurate depth. What happens if the method is trained on ACID dataset, where the accurate depth information is unavailable? I regard this as an important problem, since many of the real-world applications do not provide the exact depth information. \n\n\nSummary:\nThe proposed method that leverages generative sensing module and a mapping module makes sense. However, my main concern lies in the experimental section, where the method achieves best results only on synthetic dataset and requires accurate depth. I would like to hear the response from authors regarding the experimental section and listen to other reviewers before making the final decision.\n\n===============================================================================\n\nAfter rebuttal, I'm convinced that the proposed method has the strength of generating long-term, globally consistent scenes. Therefore I'm willing to increase the score. However, I strongly encourage the authors to compare the method against previous methods such as ACID and RealEstate10k datasets to show the strength/weakness in real/standardized datasets. I do not think achieving SOTA results on these datasets is crucial since the method has clear strength, but the method should at least be competitive to verify that the method works on real-world datasets. 
- In the video there seems to be flickers in GoogleEarth dataset around 1:54, where the scene changes abruptly? Could the authors explain why this happens? Or is this just another video and not a continuous scene change.\n Yes, the authors addressed the limitations of the work. But I recommend adding the comments regarding the method requires exact depth information.", " This paper presents a large scale scene generation method which promotes global 3D scene consistency. The method works by taking as input a random view of a scene, and then moving through the scene continually using the current understanding/representation to create incomplete observations from new perspectives, using a generative model to in-paint the incomplete regions, and then using the newly generated information to update the understanding/representation of the scene. With this setup they demonstrate impressive scene generations on a dataset made up of simples shapes scattered around a white room, and on google street view images. This paper is interesting and the results are impressive. The method is strong, and integrates lots of different techniques from different areas to produce a novel and successful solution to a hard problem. \n\nWith respect to the method I am slightly skeptical how well it will scale to more complex scenes. The underlying 3D representation is a voxel grid, and while hashing is performed to make it more efficient, I am not sure if this will support complex detail that natural scenes poses. It might be good to speak to this in the limitations section. \n\nConnected to this I comment I would have also liked to have seen applications of this method to complex indoor scenes, as shown in the GFVS paper. I realize you require depth which is not available in general for indoor image datasets, however there exists some simulated indoor datasets for which you could again capture depth in blender. This would really help to understand how well your model scales, as at the moment you only demonstrate performance on practically 2D maps. How difficult is this process to train? Have you observed consistent convergence to attractive scenes over multiple seeds? Does this green color bias effect all seeds similarly? \n\nIf I understand correctly the camera angle is fixed throughout the trajectory? Have you explored at all with random trajectories through the space with the camera orientation changing as you move? I would be interested how the method performed with less structured passes through the search space. Yes ", " The paper proposes a method to generate a large-scale RGBD image in a near NADIR view. It uses a standard image generative model VQGAN while using the already-generated scenes as an input constraint. It qualitatively and quantitatively evaluate on the Clever dataset. For Google earth data, only qualitative evaluations are given. To me, this paper is over-selling. The paper is just a 4 channel image generation by repeatedly applying a standard VQGAN. Simultaneous generation and mapping does not make much sense.\n\nThe core of the method is an application of an existing technique and the technical contribution is weak. I also do not like the way authors present their method to handle a general camera pose, while in reality they just use near nadir views.\n\nAlso, CLEVER scene is a bit too simple and through evaluations should be given on real data (Google Earth).\n\n I want answers to my criticism (weakness analysis) above. The limitation description is OK. I cannot think of a better one." ]
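The author responses above report FID after unrolling the perpetual generation for 60 frames from different initial images along a fixed trajectory. A minimal sketch of that evaluation loop is given below; `model.generate_next`, `initial_frames`, `real_frames`, and `compute_fid` are hypothetical placeholders rather than the released SGAM code, and `compute_fid` stands in for any standard FID implementation (e.g. torchmetrics or pytorch-fid).

```python
import numpy as np

def unrolled_fid(model, initial_frames, real_frames, compute_fid, num_steps=60):
    """Unroll the generator for `num_steps` frames per initial image and score
    the pooled generations against real frames with FID (lower is better)."""
    generated = []
    for frame in initial_frames:       # different starting images
        current = frame
        for _ in range(num_steps):     # 60-step perpetual generation
            # Assumed interface: returns the next frame along the trajectory.
            current = model.generate_next(current)
            generated.append(current)
    return compute_fid(np.stack(real_frames), np.stack(generated))
```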
[ -1, -1, -1, -1, -1, -1, 5, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "GSa13V8dXXs", "84d8L3LSx7u", "1SvEP7yVVYz", "zAC5VWhLkNr", "y-JwMIVf4YT", "nips_2022_17KCLTbRymw", "nips_2022_17KCLTbRymw", "nips_2022_17KCLTbRymw", "nips_2022_17KCLTbRymw" ]
nips_2022_2nWUNTnFijm
Learning Substructure Invariance for Out-of-Distribution Molecular Representations
Molecule representation learning (MRL) has been extensively studied and current methods have shown promising power for various tasks, e.g., molecular property prediction and target identification. However, a common hypothesis of existing methods is that either the model development or experimental evaluation is mostly based on i.i.d. data across training and testing. Such a hypothesis can be violated in real-world applications where testing molecules could come from new environments, bringing about serious performance degradation or unexpected prediction. We propose a new representation learning framework entitled MoleOOD to enhance the robustness of MRL models against such distribution shifts, motivated by an observation that the (bio)chemical properties of molecules are usually invariantly associated with certain privileged molecular substructures across different environments (e.g., scaffolds, sizes, etc.). Specifically, We introduce an environment inference model to identify the latent factors that impact data generation from different distributions in a fully data-driven manner. We also propose a new learning objective to guide the molecule encoder to leverage environment-invariant substructures that more stably relate with the labels across environments. Extensive experiments on ten real-world datasets demonstrate that our model has a stronger generalization ability than existing methods under various out-of-distribution (OOD) settings, despite the absence of manual specifications of environments. Particularly, our method achieves up to 5.9\% and 3.9\% improvement over the strongest baselines on OGB and DrugOOD benchmarks in terms of ROC-AUC, respectively. Our source code is publicly available at \url{https://github.com/yangnianzu0515/MoleOOD}.
Accept
All reviewers agreed that this paper should be accepted because of the strong author response during the rebuttal phase. Specifically the reviewers appreciated the motivation of the paper, its clarity, and added explanation and experiments included during the rebuttal. Authors: please carefully revise the manuscript based on the suggestions by the reviewers: they made many careful suggestions to improve the work and stressed that the paper should only be accepted once these changes are implemented. To these suggestions I urge the authors to add another: I strongly suggest removing section 3.2. This data generating process is not validated and is not at all necessary for your approach. The SCM is never referred to again outside of this section. All that is necessary is that one can view molecules as coming from different environments or contexts and predicting this context is useful to improve generalization. Finally, with the added space I suggest expanding figure 1 to add more examples of “environments” and make this clearer in the figure: right now you only mention briefly in the caption that different scaffolds can be thought of as different environments. If you could include more / better examples that align with your experiments this will make the motivation clearer. Once these changes are made the paper will be a nice addition to the conference!
test
[ "R-Z4y7hmRJ5", "BXRwhl1nzs", "pZpvRrE6qEb", "Gqcgq5F8vuZ", "UkgkyBZZkTt", "tCIlpBhEJz-", "XD5NyOQKoxy", "gHpGy2VpTmf", "2kGjKTP0kg", "rpF918npdDM", "DKAwl6Nbs8", "iY4zfZyraVT", "1zPkExBJjSl", "_4BFN_Q9d_v", "4VPjzFH-936" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nThanks again for your valuable comments and nice suggestions. We are still sincerely looking forward to your feedbacks. It is really a good chance for us to engage in the discussion to help us improve this paper.\n\nSincerely, Authors", " Dear Reviewer MgvC,\n\nThanks again for your time and valuable comments. Since the discussion deadline is approaching, we would be glad to hear from you whether our response has addressed your concerns.\n\nSincerely, Authors", " Dear Reviewers! Thank you so much for your time on this paper so far.\n\nThe authors have written a detailed response to your concerns. How does this change your review? \n\nPlease engage with the authors in the way that you would like reviewers to engage your submitted papers: critically and open to changing your mind.\n\nLooking forward to the discussion!", " Dear Reviewer MgvC,\n\nThanks again for your time and thorough review.\n\nIn our early response, we have included detailed answers to your questions in the initial review. It would be grateful if you can confirm whether our response has addressed your concerns.\n\nIf you have any further questions, please let us know, so that we can provide follow-up response timely.\n\nSincerely, Authors", " We thank the reviewers for their time and valuable comments. Overall, the reviewers found our work well-motivated (WE4q) and novel (pgdS, WE4q), and appreciated the clear and well-organized presentation (WE4q), technically soundness (pgdS, WE4q), extensive and thorough experiments (WE4q), as well as good connection with NeurIPS community (MgvC, WE4q). To facilitate the reviewing process towards a comprehensive evaluation of our work, we first restate our contributions below:\n\n- **Methodology** We leverage the invariance principle as an effective prior and devise a new learning objective to learn robust molecular representations for out-of-distribution generalization purpose. To our knowledge, we are the pioneering work in this direction. Also, our model is free from manual specifications of environments and can incorporate off-the-shelf molecular encoders to improve their robustness against distribution shifts.\n- **Theoretical analysis** We also provide theorectical analysis to back up our proposed method. Theoretical justifications reveal that optimizing the proposed objective forces the learned molecular representation to satisfy the invariance principles, thus guaranteeing a valid solution for OOD problem.\n- **Empirical performace** We conduct extensive and comprehensive on ten publicly available datasets. Results demonstrate that our proposed model shows a superior generalization ability than state-of-the-art models. In particular, our method achieves up to 5.9% and 3.9% improvement over the strongest baselines on OGB and DrugOOD benchmarks in terms of ROC-AUC, respectively.\n\nWe will sincerely appreciate it if you could post some comments so that we can improve this paper accordingly.", " Thank you for the valuable comments. We are glad that you found our work well written, and appreciated our elegant approach, reasonable motivation, theoretical support and solid experimental evaluation. Below we provide detailed responses.\n\n**Q1: Sensitivity of our model to different decomposing strategyies.**\n\nWe supplement new experimental results. 
For all experiments in our original paper, we adopt *breaking retrosynthetically interesting chemical substructures* (BRICS) to segment molecules into substructures, which is widely used in other works related to molecules [1,2,3]. To investigate the sensitivity of our method to different decomposing strategies, we adopt another molecule segmentation method called *retrosynthetic combinatorial analysis procedure* (RECAP)[4], which is also available as an API in the RDKit package. RECAP and BRICS decompose molecules based on two different rules. Due to limited time, we only conduct experiments on the three datasets EC50-assay/scaffold/size, and the comparisons are summarized in the table below. We can see that RECAP and BRICS show competitive performance when used in our model, and both outperform the baselines by large margins.\n\n| | **EC50-assay** | **EC50-scaffold** | **EC50-size** |\n|:-------------- |:--------------------------:|:--------------------------:|:--------------------------:|\n| **ERM** | $69.35\pm7.38$ | $63.92\pm2.09$ | $60.94\pm1.95$ |\n| **IRM** | $69.94\pm1.03$ | $63.74\pm2.15$ | $58.30\pm1.51$ |\n| **DeepCoral** | $69.42\pm3.35$ | $63.66\pm1.87$ | $56.13\pm1.77$ |\n| **DANN** | $66.97\pm7.19$ | $64.33\pm1.82$ | $61.11\pm0.64$ |\n| **MixUp** | $70.62\pm2.12$ | $64.53\pm1.66$ | $62.67\pm1.41$ |\n| **GroupDro** | $70.52\pm3.38$ | $64.13\pm1.81$ | $59.06\pm1.50$ |\n| **Ours-RECAP** | $\underline{72.72\pm3.94}$ | $\underline{66.34\pm0.52}$ | $\mathbf{65.48\pm1.10}$ |\n| **Ours-BRICS** | $\mathbf{73.25\pm1.24}$ | $\mathbf{66.69\pm0.34}$ | $\underline{65.09\pm0.90}$ |\n\n**Q2: “What if the model is trained in a simple multi-task learning setting? In other words, setting environment prediction as an auxiliary task? Would this have equivalent performance to the proposed method? Why it is necessary to design environment prediction in a VAE learning way?”**\n\nThis is an insightful question. But our method is a little bit different from multi-task learning. $\mathcal{L}\_{elbo}$ only influences the parameters of the environment inference model while $\mathcal{L}\_{inv}$ only influences the parameters of the molecule encoder. Thus, we adopt a simple two-stage training strategy here. However, training the model in a multi-task-learning way can be a potential direction, which we leave for future investigation. As mentioned in our paper, we want to maximize the log-likelihood of $p_{\tau}(\mathbf{y}|\mathbf{G})$ and then obtain the posterior $p_{\tau}(\mathbf{e}\vert \mathbf{G},\mathbf{y})$, which are parameterized by $\tau$. Since there is no analytical solution to the true posterior, we adopt variational inference (VI) to approximate it as an initial attempt and have proved the correctness of the objective in Eqn. 6 in Appendix A. 
There might exist alternative methods to realize environment inference, which we believe can be explored by future works.\n\n**Reference (all are new and will be added in the main paper):**\n\n[1] [Improving Molecular Contrastive Learning via Faulty Negative Mitigation and Decomposed Fragment Contrast.](https://pubs.acs.org/doi/pdf/10.1021/acs.jcim.2c00495)\n\n[2] [SafeDrug: Dual Molecular Graph Encoders for Recommending Effective and Safe Drug Combinations.](https://www.ijcai.org/proceedings/2021/0514.pdf)\n\n[3] [An Evolutionary Fragment-based Approach to Molecular Fingerprint Reconstruction.](https://dl.acm.org/doi/pdf/10.1145/3512290.3528824)\n\n[4] [RECAPRetrosynthetic Combinatorial Analysis Procedure:  A Powerful New Technique for Identifying Privileged Molecular Fragments with Useful Applications in Combinatorial Chemistry.](https://pubs.acs.org/doi/pdf/10.1021/ci970429i)", " **Reference (only [4,9,10,11,12] are new and will be added in the referene of the paper):**\n\n[1] [WILDS: A Benchmark of in-the-Wild Distribution Shifts.](http://proceedings.mlr.press/v139/koh21a/koh21a.pdf)\n\n[2] [Open Graph Benchmark: Datasets for Machine Learning on Graphs.](https://arxiv.org/pdf/2005.00687.pdf)\n\n[3] [DrugOOD: Out-of-Distribution (OOD) Dataset Curator and Benchmark for AI-aided Drug Discovery -- A Focus on Affinity Prediction Problems with Noise Annotations.](https://arxiv.org/pdf/2201.09637.pdf)\n\n[4] [ChEMBL: towards direct deposition of bioassay data.](https://pdfs.semanticscholar.org/5584/f0f28d6054fd04ec9d8d066b67966825fa54.pdf)\n\n[5] [Chemical substructures that enrich for biological activity](https://academic.oup.com/bioinformatics/article/24/21/2518/192573)\n\n[6] [Privileged substructures for anti-sickling activity via cheminformatic analysis](https://pubs.rsc.org/en/content/articlehtml/2018/ra/c7ra12079f)\n\n[7] [DGDFS: Dependence Guided Discriminative Feature Selection for Predicting Adverse Drug-Drug Interaction](https://ieeexplore.ieee.org/abstract/document/9023472)\n\n[8] [A substructure-based screening approach to uncover N-nitrosamines in drug substances](https://pubmed.ncbi.nlm.nih.gov/35647726/)\n\n[9] [Discovery of novel non-cytotoxic salicylhydrazide containing HIV-1 integrase inhibitors.](https://www.sciencedirect.com/science/article/abs/pii/S0960894X07011559)\n\n[10] [Salicylhydrazine-Containing Inhibitors of HIV-1 Integrase:  Implication for a Selective Chelation in the Integrase Active Site.](https://pubs.acs.org/doi/abs/10.1021/jm9801760)\n\n[11] [Estimation of ADME Properties with Substructure Pattern Recognition.](https://pubs.acs.org/doi/abs/10.1021/ci100104j)\n\n[12] [A classification model for blood brain barrier penetration.](https://www.sciencedirect.com/science/article/pii/S1093326319303547)\n\n[13] [Invariant Risk Minimization.](https://arxiv.org/pdf/1907.02893.pdf)\n\n[14] [Invariance, causality and robustness.](https://projecteuclid.org/journals/statistical-science/volume-35/issue-3/Invariance-Causality-and-Robustness/10.1214/19-STS721.full)\n\n[15] [Invariant models for causal transfer learning.](https://www.jmlr.org/papers/volume19/16-432/16-432.pdf)\n\n[16] [Out-of-distribution generalization via risk extrapolation (rex).](http://proceedings.mlr.press/v139/krueger21a.html)\n\n[17] [Environment inference for invariant learning.](https://proceedings.mlr.press/v139/creager21a.html)", " **Q2-3: \"If it is the case that the assumption holds, if we have enough data and enough representation power, isn't the predictive model should learn to predict the 
property from the casual structure and ignoring the rest? which means the representation will be naturally pushed to learn such invariant representation. Unless we are really in a data scarcity case, and if we are in the data scarcity case then learning to infer the environmental factor (unsupervised) would be also hard.\"**\n\nFor OOD generalization, the model performance is more related to the richness of environments it has seen in training instead of the quantity of data samples [13,14,15,16,17]. If the training data only contains a few environments, even though the training data is sufficient, the model is quite likely to fail to filter out irrelevant or spurious features, thus not robust to those test data from unseen environments. In contrast, if traing data involves more diverse environments, even if the number of all training data is relatively small, the model could better learn a stable relation bewteen invariant part bewteen label across environments. \n\n**Q3: Could the second term of Eqn. 7 be further simplified?**\n\nYes. Mathematically, the second term in Eqn. 7 in our paper and $\\beta\\frac{1}{|\\mathcal{G}|}\\sum_{(G,y)\\in\\mathcal{G}}-\\log q_\\theta(y|G)$ are both equivalent to $\\beta\\mathbb{E}_{(G,y)}[-\\log q_\\theta(y|G)]$. \n\nFor practical implementation, the two formulas are slightly different. The expectation $\\mathbb{E}_{(G,y)}[-\\log q_\\theta(y|G)]$ is hard to calculate directly, thus Monte Carlo estimation is applied to approximate this value. Our implementation first uses the samples under each specific environemt for approximating the environment-specific risk and then calculate the average across different enviroments. The second term in Eqn. 7 is exactly what we have done in our implementation. Therefore, we kept this form in the paper instead of using the simplified one to stay consistent with our implementation.\n\nIt should be mentioned that there is a absolute value symbol $\\vert\\cdot\\vert$ in the first term of Eqn. 7. Hence, even if the second term of Eqn. 7 is simplified, the two terms are still completely different.\n\n**Q4: \"Intuitively, it seems like what the model does is given the graph and property, learn to infer the environment, and a predictive model that learns to predict the property y for the graph given environment e (objective 6). Then use this inference and predictive model to learn another predictive model that is not restricted to the environment. But my question is if the model learned from objective 6 can infer the environment and predict the property given that environment is the problem solved? so even if we have a distributional shift we can use it to predict the property for the graph from he new environment?\"**\n\nOnly using environment inference model learned from the objective in Eqn. 6 is insufficient for solving the challenging OOD problem. The reasons are as follows. First, during training stage, the environment inference model is to partition the training data into $k$ environments. But in out-of-distribution problem, the environments of testing data are often unseen during training. Therefore, the well-trained environment inference model could not properly map the testing instance to those $k$ training environments. Second, the environment classifier requires the label $y$ as its input to preidict the corresponding environment. But for the testing data, label $y$ is not available and exactly what we need to predict.\n\n\nWe hope this response could help to address your concerns. 
As we believe, our work is one of the early efforts to study an important problem, out-of-distribution molecule representation learning, with novel methodology and promising results. We sincerely hope that you could reconsider your assessment.", " **Q2-1: \"The main idea is that there is spurious structure and casual structure wrt the property. First of all, is this assumption holds for most of the properties (at least the ones that they use here).\"**\n\nIn general, as illustrated in our Sec 1, the hypothesis (i.e., invariance assumption) in our paper is rooted on a widely-observed phenomenon that there exist a few invariant substructures w.r.t. certain property, which is well-recognized by a surge of molecular literature across bioinformatics, pharmacy and data mining [5,6,7,8]. And, in datasets we used in the paper, such a phenomenon indeed exists. Taking **HIV** dataset as an example, *salicylhydrazide* substructure displays potent HIV-1 integrase (IN) inhibitory activity, which has been identified by previous studies [9,10]. Additionally, for **BBBP** dataset, as pointed out by recent studies [11,12], some substructures are closely related to brain-blood barrier penetration. \n\nOn top of this, we formulate our invariance assumption in the context of molecule representation learning (i.e., existence of spurious and causal substructures wrt certain properties) as a cornerstone for theory and problem solving for OOD generalization purpose. Despite the motivation from such an invariance principle and the above evidence for the robust correlation accross environments, in fact our designed model is technically free from specific domain knowledge about substructures-property relations, like that substructure *hydroxy* displays good water solubility across all environments (which is purely used as a motivating example). To say the least, our model will not crash when this assumption does not hold, but instead it can benefit from such relation and can work smoothly as a general framework for molecule representation which can learn stable relations between some substructures and target labels in a fully data-driven manner.\n\n**Q2-2:\"Can we really identify substructures that determine the property and is totally disentangled from the rest of the structure (spurious structure). What if the property is related to the global structure of the molecule?\"**\n\nThanks for your questions which are worth discussion and we will put it in our final version. It is really difficult (from the molecule science perspective) to ensure that there exist totally disentangled substructures that determine the property (on the used datasets), let alone perfectly discovering them, though this paper explores such an invariance learning direction and empirically find its effectiveness for OOD generalization. \n\nTechnically speaking, we have proved in our theory that optimizing the new objective can guide the model to capture stable relations between environment-invariant substructures and the labels across different environments, thus ensuring a valid solution for OOD problem in principle. This result can be further justified by its consistency with other related works [13,14,15,16,17] in broad areas.\n\nIn practice, our goal is to make the molecule encoder (which can be seen as a black-box function) to capture stable relations between environment-invariant substructures and the labels, i.e., we expect the encoder to extract causal features from input molecules to obtain the representations. 
The model is not designed to totally disentangle environment-invariant substructures from spurious ones. Instead, it's more like a kind of 'soft' identification for causal substructures. Since noises or biases might exist in the dataset and it's unpractical for the model to see all environments during training, it would be hard for the model to totally disentangle invariant features from spurious features in reality (though we can consider it as an ideal state to pursue for model design).\n\nWhen the property is related to the global structure of the molecule, our model design is expected to automatically discover the global structure (a special case where all substructures stably relate with the labels across environments).", " Thank you for the time and valuable feedback. In the response below, we provide answers to your questions to resolve some potential misunderstandings and address the lingering points of concerns, which we hope could be helpful for the re-evaluation of our work.\n\n**Q1: How do we measure there is a distribution among molecules and how are the datasets split?**\n \nThe concept of distribution in molecules datasets has reached some concensus in recent literature [1,2,3]. Specifically it is usually measured or determined by certain criteria e.g. a scaffold pattern corresponds to a certain environment whose underlying data distribution can differ from another environment with its own distribution. To be more concrete, we provide some example protocols in peer works as follows:\n\n1. WILDS [1] provides a curated benchmark of 10 datasets reflecting a diverse range of distribution shifts, with a protocol saying: \"each environment corresponds to a distribution $P_{e}$ over data points which are similar in some way, e.g. molecules with the same scaffold\". In other words, for example, molecules with different scaffolds can be regarded as being sampled from different distributions.\n2. OGB [2], a widely-used benchmark in molecule representation learning, also assumes molecules with different scaffolds are from different distributions. It should be mentioned that the official default train/val/test data split in OGB is based on scaffold splitting, which can provide a more realistic estimate of model performance in prospective experimental settings. Thus, for the four datasets BACE, BBBP, SIDER and HIV from OGB, we directly use the default data split in our experiments. \n3. DrugOOD [3], which is a newly realeased benchmark for out-of-distribution molecule representation learning, provides two extra splitting strategies, assay and size. The original paper clearly states that molecules in the same assay or with the same number of atoms can been treated as being from the same environments, i.e., the same distribution (see Sec. 3.4.1 of DrugOOD paper). For the other six datasets we used from DrugOOD , we also adopt the official default data splits for all. \n\nThe setting and used datasets (especially the four datasets from OGB) of our paper just follow the above works, and thus, to save space, we omitted some detailed descriptions for used datasets and the background information for the distribution/environment in our original version. Now we provide detailed information below and supplement them in Appendix E in the uploaded revision. \n - **BBBP** is a dataset of Brain-Blood Barrier Penetration. 
Each molecule has a label indicating whether it can penetrate through brain cell membrane to enter central nervous system.\n - **BACE** is a dataset of binding affinity against human beta-secretas 1. Each molecule has a label indicating whether it binds to human beta-secretase 1.\n - **SIDER** is a dataset of marked drugs and adverse drug reactions (ADRs). Molecules are grouped into 27 system organ classes.\n - **HIV** is a dataset of HIV antiviral activity. Each molecule has an active or inactive label.\n - **IC50/EC50-scaffold/assay/size** are datasets generated by the automated dataset curator provided by DrugOOD from the large-scale bioassay deposition website ChEMBL [4]. The suffix specifies the splitting scheme. These six datasets target on ligand-based affinity prediction (LBAP). Each molecule has an active or inactive label.", " **Q4: Incorporating the idea that bio-chemical properties may be affected by interactions between substructures into the design of the Molecule Encoder.**\n\nTo verify your hypothsis, we supplement new results of our tentative exploration in the table below. To encode interactions between substructures into the final learned molecular representation, we utilize the permutation equivariant Set Attention Block (SAB) proposed in Set Transformer [3]. SAB takes a representation set of any size as input and outputs a representation set of equal size. SAB is able to encode pairwise and higher-order interactions between elements in input sets into outputs. We add such a SAB after the Substructure Encoder. For each molecule, we feed the representions of its substructures to SAB to obtain new substruture representations. In this way, the final molecule representation could model interactions between substructures. Due to limited time, we only conduct experiments on EC50-assay/scaffold/size to examine the performance of adding such a SAB. As demonstrated in the table, we can see that adding such a SAB further improves our model on EC50-scaffold. This design is a naive attempt but brings us some valuable insights. We can put the current results in appendix and leave further exploration for future directions.\n\n| | **EC50-assay** | **EC50-scaffold** | **EC50-size** |\n|:------------- |:--------------------------:|:--------------------------:|:--------------------------:|\n| **ERM** | $69.35\\pm7.38$ | $63.92\\pm2.09$ | $60.94\\pm1.95$ |\n| **IRM** | $69.94\\pm1.03$ | $63.74\\pm2.15$ | $58.30\\pm1.51$ |\n| **DeepCoral** | $69.42\\pm3.35$ | $63.66\\pm1.87$ | $56.13\\pm1.77$ |\n| **DANN** | $66.97\\pm7.19$ | $64.33\\pm1.82$ | $61.11\\pm0.64$ |\n| **MixUp** | $70.62\\pm2.12$ | $64.53\\pm1.66$ | $62.67\\pm1.41$ |\n| **GroupDro** | $70.52\\pm3.38$ | $64.13\\pm1.81$ | $59.06\\pm1.50$ |\n| **Ours** | $\\mathbf{73.25\\pm1.24}$ | $\\underline{66.69\\pm0.34}$ | $\\mathbf{65.09\\pm0.90}$ |\n| **Ours+SAB** | $\\underline{73.15\\pm2.69}$ | $\\mathbf{67.26\\pm1.54}$ | $\\underline{64.83\\pm1.07}$ |\n\n**Reference (only [2,3] are new and [3] will be added in the reference of the main paper):**\n\n[1] [Open Graph Benchmark: Datasets for Machine Learning on Graphs.](https://arxiv.org/pdf/2005.00687.pdf)\n\n[2] [Graph Adversarial Self-Supervised Learning.](https://proceedings.neurips.cc/paper/2021/file/7d3010c11d08cf990b7614d2c2ca9098-Paper.pdf)\n\n[3] [Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks.](http://proceedings.mlr.press/v97/lee19d/lee19d.pdf)", " Thank you for the valuable comments and suggestions. 
We are encouraged that you appreciated our technical contributions including the problem significance, novelty, soundness and solid experiments. Below we respond to your specific comments.\n\n**Q1: Missing exact definition of $\\mathcal{G}^{e}$.**\n\nThanks for pointing this out. $\\mathcal{G}^{e}$ denotes the set of graph instances (each consisting of a molecule $G$ and corresponding label $y$) under environment $e$, i.e., $\\mathcal{G}^{e}= \\lbrace(G,y)|(G,y)\\sim p(\\mathbf{G},\\mathbf{y}\\vert\\mathbf{e}=e)\\rbrace$. We have supplemented this definition in our revised paper.\n\n**Q2: Could the second term of Eqn. 7 be further simplified?**\n \nIdealy, the second term in Eqn. 7 in our paper and $\\beta\\frac{1}{|\\mathcal{G}|}\\sum_{(G,y)\\in\\mathcal{G}}-\\log q_\\theta(y|G)$ are both mathematically equivalent to the simplified form $\\beta\\mathbb{E}_{(G,y)}[-\\log q_\\theta(y|G)]$. \n\nYet for implementation, the two formulas are slightly different. The expectation $\\mathbb{E}_{(G,y)}[-\\log q_\\theta(y|G)]$ is hard for direct computation, thus we use Monte Carlo estimation for approximation. Our implementation first uses the samples under each specific environemt for approximating the environment-specific risk and then calculate the average across different enviroments. The second term in Eqn. 7 is exactly what we have done in our implementation. Hence, we kept this form in the paper instead of using the simplified one to stay consistent with our implementation.\n\n**Q3: Lower baseline performance on BBBP benchmark compared to the mentioned paper.**\n\nOur experiments of baselines (GCN, GIN and GraphSAGE) are conducted under the official default train-valid-test data split given by OGB benchmark, using the implementation provided by OGB (see Appendix C). According to the OGB original paper [1], the dataset is split by scaffold, which already fits the OOD setting. Our final results of baselines on BBBP are consistent with those demonstrated in OGB original paper (see Table 24 in Appendix A of [1]). \n\nFor the compared baselines, to our best knowledge, please note that only the mentioned paper shows a higher performance than ours. For example, the baselines in the NeurIPS'21 paper [2] also shows close performance to ours in terms of ROC-AUC. Thus, we believe we have tried our best in making a fair comparison.\n\nWe noticed that the detailed experimental settings on BBBP seems to be unclearly presented in the mentioned paper, and even with authors' codes publicly released, the original detailed splitting information is missing. We suspect that they adopted a different train/val/test data split rather than the widely-used default split of OGB.", " ## Summary\n\nThis paper introduces techniques to enhance the robustness of molecule representation learning against distribution shifts. The authors made the observation that bio-chemical properties of molecules are usually invariantly associated with certain substructures across different environments such as scaffolds, sizes, etc. There are three important pieces in their modeling process. One is architecture-wise they use graph-level embeddings to attend a list of substructures. Second they device a new learning objective from mutual information and invariant learning to help select causal substructures. Third they use a latent representation for environment to mitigate the current issue of dealing molecular environments (e.g., often man-made, not always available, too many if using scaffolds). Overall, it's a solid paper. 
## Strengths\n- the paper addresses an important topic in molecule representation learning (distribution shift).\n- it introduces interesting innovative techniques to help mitigate issues of current methods\n\n## Weaknesses\n- some key notations are not clearly defined - care is needed for the writing ## Questions\n\n- what's the exact definition of $\\mathcal{G}^e$, can you add text and math formula to explain?\n- it seems that the second term of eqn 7 could be simplified to $\\beta \\frac{1}{|\\mathcal{G}|} \\sum_{(G, y) \\in \\mathcal{G}} -log q_{\\theta}(y | G)$, can you confirm?\n- BBBP benchmark ROC seems much lower than typical methods would produce, is it some mistake? reference: https://arxiv.org/abs/2111.12951 appendix G table s1.\n- Often times bio-chemical properties are affected by interactions between substructures, would be interesting to see if adding self-attention in the molecule encoder would help NA", " This work aims to address the problem of the distributional shift between training data and test data when solving the property prediction task. The idea is to learn a representation that is invariant with respect to the environment as the environment change results in the distributional shift. The main assumption is that, regarding the property that we want to predict, there are specific structures that determine the property and other substructures that are not relevant (so no matter how that structure changes the property does not change ) and to avoid the distributional shift one needs to learn the representation that is invariant with respect to the change of the environment. strength: the problem they try to address is interesting and valuable to the community\nweakness: The assumptions and proposed model are not really convincing. The experiments section is not clearly described. Hard to follow for readers who are not familiar with the dataset, so at least in appendix explains for each data set what is the property they are predicting, and what is the environment they considering here, and how the dataset is split to train test sets to model the distributional shift. \n 1. The definition of distribution shift here is defined as having a different scaffold of the molecules from training to test set. In this case, is each molecule regarded as coming from a different environment as they all have different structures, it is not clear how we measure there is a distribution shift (how different the structure of the molecule should be to be considered as there is a distributional shift? is the size of the molecules? the number of the atom, the type of the atom, to which degree) \n\n2. The main idea is that there is spurious structure and causal structure wrt the property. First of all, this assumption holds for most of the properties (at least the ones that they use here). Can we really identify substructures that determine the property and are totally disentangled from the rest of the structure (spurious structure)? What if the property is related to the global structure of the molecule? \n\nIf it is the case that the assumption holds, if we have enough data and enough representation power, isn't the predictive model should learn to predict the property from the casual structure and ignoring the rest? which means the representation will be naturally pushed to learn such invariant representation. Unless we are really in a data scarcity case, and if we are in the data scarcity case then learning to infer the environmental factor (unsupervised) would be also hard. \n\n3. 
In equation 7, the first term and last term exactly the same except one is the sum over G^e and the other one is over G , G^e represents the graph G and y sampled from environment e, so if you take the expectation at the last term with respect to e (E_e) as you have there isn't this two-term exact the same?\n\n4. Intuitively, it seems like what the model does is given the graph and property, learn to infer the environment, and a predictive model that learns to predict the property y for the graph given environment e (objective 6). Then use this inference and predictive model to learn another predictive model that is not restricted to the environment. But my question is if the model learned from objective 6 can infer the environment and predict the property given that environment is the problem solved? so even if we have a distributional shift we can use it to predict the property for the graph from the new environment? The assumptions and proposed model are not really convincing. \nThe experiment section is a bit confusing and does not provide very structured information. For example, in table 2, what are the properties that are being predicted, and what elements are considered as the environment (still scaffold? ). If scaffold, size, and assay are considered as the environment then what is the property where it is being predicted? Also if the environment label is accessible, does the model still do inference or use it directly? ", " This paper is mainly designing a learning framework to learn causal-invariance molecular representations, improving the generalization of MRL in OOD settings. This paper observes that the molecule usually consists of two parts: one part is causal substructure that determines the behaviors of molecular properties, another part is spurious substructure that constitute the molecular structures but has no impact on molecular behaviors. Based on this observation, this paper builds a structural casual model to learn more robust molecular representation learning. Specifically, new model includes substructure-aware attention-based molecular encoders and VAE-based environment inference model. To enable two-stage optimization, this paper proposes a new learning objective function consisting of the ELBO objective and the ERM objective. Strengths:\n(1) This reviewer thinks that this paper has good novelty. It proposes an important but overlooked observation in molecular representation learning. In short, the problem addressed in this paper is significant, the proposed solution is sufficiently novelty and the motivation is reasonable; \n(2) The presentation of each section is quite clear and well-organized. It is not hard for this reviewer to understand the novel idea and elegant design of new MRL paradigm;\n(3) Sufficient experiments over different public benchmarks under OOD settings and ablation studies demonstrate the effectiveness of each component;\nWeaknesses:\n(1) Decomposing strategies (BRICS) and scaffold introduction as environment actually implicitly introduce additional knowledge to MRL. The decomposing step and scaffold introduction both follow hand-engineered rules by domain experts and may have non-trivial impacts on model performance. (1) Does different decomposing strategy affect the model performance?\n(2) What if the model is trained in a simple multi-task learning setting? In other words, setting environment prediction as an auxiliary task? Would this have equivalent performance to the proposed method? 
Why it is necessary to design environment prediction in a VAE learning way? This reviewer thinks the proposed method in this paper has the major limitations that the scaffold and decomposing strategy may not be robust to OOD setting although adding this additional knowledge could improve robustness of MRL. " ]
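The responses above compare two molecule decomposition strategies, BRICS and RECAP, both shipped with RDKit. The snippet below is a minimal sketch of how the two routines can be called to obtain fragment sets; the example SMILES string is arbitrary, and this illustrates only the library interface, not the authors' pipeline.

```python
from rdkit import Chem
from rdkit.Chem import BRICS, Recap

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, chosen purely as an example input
mol = Chem.MolFromSmiles(smiles)

# BRICS: fragments the molecule along retrosynthetically interesting bonds
# and returns a collection of fragment SMILES.
brics_fragments = sorted(BRICS.BRICSDecompose(mol))

# RECAP: builds a decomposition hierarchy; its leaves are the final fragments.
recap_fragments = sorted(Recap.RecapDecompose(mol).GetLeaves().keys())

print("BRICS:", brics_fragments)
print("RECAP:", recap_fragments)
```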
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 5 ]
[ "nips_2022_2nWUNTnFijm", "XD5NyOQKoxy", "nips_2022_2nWUNTnFijm", "XD5NyOQKoxy", "nips_2022_2nWUNTnFijm", "4VPjzFH-936", "gHpGy2VpTmf", "2kGjKTP0kg", "rpF918npdDM", "_4BFN_Q9d_v", "iY4zfZyraVT", "1zPkExBJjSl", "nips_2022_2nWUNTnFijm", "nips_2022_2nWUNTnFijm", "nips_2022_2nWUNTnFijm" ]
nips_2022_PYnSpt3jAz
Lethal Dose Conjecture on Data Poisoning
Data poisoning considers an adversary that distorts the training set of machine learning algorithms for malicious purposes. In this work, we bring to light one conjecture regarding the fundamentals of data poisoning, which we call the Lethal Dose Conjecture. The conjecture states: If $n$ clean training samples are needed for accurate predictions, then in a size-$N$ training set, only $\Theta(N/n)$ poisoned samples can be tolerated while ensuring accuracy. Theoretically, we verify this conjecture in multiple cases. We also offer a more general perspective of this conjecture through distribution discrimination. Deep Partition Aggregation (DPA) and its extension, Finite Aggregation (FA) are recent approaches for provable defenses against data poisoning, where they predict through the majority vote of many base models trained from different subsets of training set using a given learner. The conjecture implies that both DPA and FA are (asymptotically) optimal---if we have the most data-efficient learner, they can turn it into one of the most robust defenses against data poisoning. This outlines a practical approach to developing stronger defenses against poisoning via finding data-efficient learners. Empirically, as a proof of concept, we show that by simply using different data augmentations for base learners, we can respectively double and triple the certified robustness of DPA on CIFAR-10 and GTSRB without sacrificing accuracy.
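The abstract describes Deep Partition Aggregation (DPA): split the training set into disjoint partitions, train one base model per partition with a given learner, and predict by majority vote. The sketch below is a generic illustration of that idea, not the authors' implementation; `train_base_model` stands in for the given learner, hashing the raw sample is one common way to make partition assignment insensitive to the rest of the training set, and the vote-gap comment is only the usual informal intuition (each poisoned sample lands in at most one partition), not the paper's exact certificate.

```python
import hashlib
from collections import Counter

def partition_of(sample_bytes: bytes, k: int) -> int:
    # Deterministic hash of the sample itself, so adding/removing/modifying
    # one training point changes at most one partition.
    return int(hashlib.sha256(sample_bytes).hexdigest(), 16) % k

def dpa_train(train_set, k, train_base_model):
    """train_set: iterable of (sample_bytes, x, y); returns k base models."""
    parts = [[] for _ in range(k)]
    for sample_bytes, x, y in train_set:
        parts[partition_of(sample_bytes, k)].append((x, y))
    return [train_base_model(p) for p in parts]

def dpa_predict(base_models, x):
    votes = Counter(m(x) for m in base_models)
    top, top_count = votes.most_common(1)[0]
    runner_up = max((c for label, c in votes.items() if label != top), default=0)
    # The larger the vote gap, the more base models (and hence poisoned
    # samples) an attacker must influence before the majority can flip.
    return top, top_count - runner_up
```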
Accept
The reviewers agree that this work proposes an interesting conjecture which is likely to inspire further research. Congrats! During the discussion period the following two points were raised by the reviewers: - The paper should emphasize more strongly that this is just a conjecture (I at minimum have doubts about how well it generalizes). They should make clearer the alternate hypotheses/explanations in the main paper and discuss them. - Nascent researchers look at archived OpenReview discussions and may adopt similar styles as successful authors. I do not think the authors' approach of ending every post with a plea to increase the score is an appropriate or healthy style for peer review. I further advocate that it definitely should not be emulated/copied. If reviewers believe scores should be raised (as I did), they should be trusted to do so without pressure from authors.
train
[ "oDjLZazbYj", "7tBM42zTi65", "tevPFoOfuP", "GYrTIJ6Hzz-", "R3HNHAd-4Vw", "iADEwrvxXUU", "9KutH6bzTQW", "snmEK91-UZc", "plp52xoxJVo", "LZsCF016AwM", "KG4CSzISFQI", "h8LBeC6_W7f", "eQ6JD-5qMW", "RjUsKWmQRV_", "M3eLJ5nIbsd", "EfIevcvvWwp", "-6u3ZObdlBk", "AXwHg8sPbPX", "IpfL2hJEPE7", "Xwm2zsO3QNj" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for this clear, detailed and interesting discussion.\nIn my opinion, the verbal description of the score 6 is most suitable for the paper.\nOverall I think that the paper contains significant scientific contributions to the field, and as such will be a good contribution to NeurIPS.", " Thanks for the running examples --- my questions are all addressed. I'm looking forward to seeing more follow-up work in this field.", " Thank you for letting us know! \n\nWe appreciate all your comments, questions, and services!\n\nMeanwhile, just to let you know, you are welcome for any follow-up questions~", " We will check and see if we can figure out an easier proof from the paper you provided~\n\nRight now I only get to take a quick look at that paper but I feel I may learn a lot from it!\n\n**Thank you for your feedback and the reference!**\n\n**Meanwhile, if your original concerns are addressed or alleviated, please consider updating your rating for our work~**\n\n", " Sorry for this late reply, and thank you so much for the discussion!\n\n**(1)** For your concern regarding the usefulness of Lethal Dose Conjecture:\n\n**Answer:**\nYes, since a constant variation is possible, when one has $N=10000$ samples and one assumes $n=500$ samples are needed by the most data-efficient learners (given one's prior), the conjecture will not be able to tell whether $20$ or $21$ poisoned samples can be tolerated. However, it for sure raises a flag when some unknown data source contributes hundreds of samples to one's training set. **Admittedly, the current conjecture is not perfect, but given its generality and flexibility, we do believe it is a very promising step towards mitigating the threat of data poisoning attacks!**\n\n**(2)** For your question regarding examples that a (test) sample $x_0$ is very 'difficult' such that the corresponding $n$, i.e. the number of samples required to predict accurately on it, is large:\n\n**Answer:**\nIndeed, $n$, the number of samples required by the most data-efficient learner, does **not** depend on $N$, the actual number of samples in a training set.\n\nIt depends on the underlying distribution (as you mentioned), and importantly, **the prior information** one has regarding the distribution. Prior information refers to anything that one knows prior to seeing any data (which we characterize through the set of plausible learners $\\mathcal{F}$ in definition 1, definition 2, and definition 3). For an extreme example, if one knows already the exact data distribution as the prior information, no sample is needed at all by the most data-efficient learner and any number of poisoned samples can be tolerated!\n\n\n\nHere is **an example**: It is a binary classification and the prior information is that data from each class follows an isotropic multivariate Gaussian distribution with an unknown mean. In this case, we know that the optimal decision boundary is simply a linear one (i.e. a hyperplane). In this very simple example, most of the (test) samples $x_0$ can be easy as they are far away from the optimal decision boundary. However, when a sample $x_0$ is close to the optimal hyperplane, a large number of training samples will be required to determine its corresponding maximum likelihood prediction (**This is a direct implication of our theoretical analysis in Section 6, by letting $(d_2 - d_1)\\to 0$ in our Lemma 5, Lemma 6 and Lemma 7**). 
In another word, $n$ scales very fast when the $x_0$ is getting closer to the optimal boundary and the 'difficulty' is increasing.\n\nFor **real-life examples**, one may consider the many tasks in natural language processing, for which dealing with the long tail distribution of data can be particularly challenging. Large corpora are available for high resource languages like English while limited data are collected for low resource languages such as Swahili and Urdu. Even within the same language, there are many corner cases or expressions that are valid but extremely rare. As a result, $n$, the number of samples required by the most data-efficient learner, can be huge for samples of low resource languages or even uncommon cases in common languages. Lethal Dose Conjecture suggests that they are also more vulnerable to data poisoning.\n\n**Please let us know if you want any further explanations! Thanks again for all your feedback~**\n", " I appreciate your detailed and thoughtful response not only to me but to the other reviewers. While I still have some concerns about some of the claims, I think the paper's contribution is durable even if the conjecture does not always hold and insights into the task of improving (certified) robustness. \n\nI have increased my score and believe this paper merits inclusion at NeurIPS. ", " Response to answer for question #2:\n\nOK, this definition makes sense. I think that the notation $X^{\\mathbb{N}}$ usually refers to countably infinite vectors with values from a set $X$, so perhaps consider changing the notation.\n\nResponse to answer for question #3:\n\nNice discussion and example! I think it will really help clarifying the conjecture if this discussion will appear close to it.\n\nResponse to answer to question #6:\n\nI'm sorry, I think I had a mistake while thinking about this proof attempt. However, there might be an easier proof by using the DS dimension which was recently shown to characterize multiclass classification in the following paper:\n\nhttps://arxiv.org/abs/2203.01550\n\nI am really not sure about it, but you may have a look if you are interested.", " Thank you for the responses. They are clear and I still lean towards accept the paper as it is.\n\nI just want to follow up my first question. I understand that the lethal dose is $\\Theta(1/n)$ and the total number of poison examples scales with $N$. I want to understand, however, the complexity of $n$ itself in practice. First of all, $n$ does not necessarily scale with $N$. Rather, it is influenced by the underlying distribution, or the classification task itself. As I said in my question, if the true underlying distribution is linearly separable with a margin $\\epsilon$, then $n$ will be determined by $\\epsilon$ and the input dimension $d$. That is, $n$ is a constant w.r.t. $N$. The lethal dose poisoning fraction $\\Theta(1/n)$ would simply mean the attacker needs to poison a constant fraction of the training set. It's true but not very useful because the $\\Theta$ bound also allows variations up to a constant factor.\n\nSo my question is, is there an example such that for a common type of model, 1) $n$ scales up very fast with the \"difficulty\" of the underlying tasks, and 2) $\\Theta(1/n)$ diminishes very fast as a result? In particular, is there a \"bad\" underlying distribution such that the poisoner can easily achieve its goal with very few examples for common classifiers? 
(For example, a checker board where cells of different colors have different labels?)", " Now the reviewer-author discussion phase will end in a couple of days, \nwe want to thank you again for the valuable reviews and services!\n\nCould you please let us know if our responses address your concerns or do you have follow-up questions?\n\nWe truly appreciate and value any feedback from you!\n\n", " **(4)** Question: ‘In definition 1: What does \"plausible learners\" mean? In what sense are they plausible?’\n\n**Answer:**\nThe set of plausible learners $\\mathcal{F}$ is a task-dependent set and we introduce it to make sure that the learner indeed depends on and learns from training data.\n\nHere we explain in detail the set $\\mathcal{F}$ in definition 1.\n\nFirstly, we quote the entire sentence from our paper: ‘The set of plausible learners $\\mathcal{F}$ contains all learners $f$ such that $Pr [f_{D}(x_0) = y] = Pr[f_{T_{y\\leftrightarrow y'}(D)}(x_0) = y']$ for all $y,y'\\in Y$ and $D \\in \\Omega^\\mathbb{N}$.’\n\n‘Plausible learners’ simply refers to learners $f$ such that $Pr [f_{D}(x_0) = y] = Pr[f_{T_{y\\leftrightarrow y'}(D)}(x_0) = y']$ for all $y,y'\\in Y$ and $D \\in \\Omega^\\mathbb{N}$. Intuitively, it says that if one rearranges the labels in the training set, the output distribution will change accordingly. \n\nFor example, say originally we define class 0 to be cat, and class 1 to be dog, and all dogs in the training set are labeled 0 and cats are labeled 1. In this case, for some cat image $x_0$, a learner $f$ predicts 0 with a probability of 70% and predicts 1 with a probability of 30%.\n\nWhat happens if we instead define class 1 to be cat, and class 0 to be dog? Dogs in the training set will be labeled 1 and cats will be labeled 0 (i.e. label 0 and label 1 in the training set will be swapped). If $f$ is a plausible learner, meaning that it learns the association between inputs and outputs from the dataset, we expect the output distribution to change accordingly, i.e. now $f$ will predict 1 with a probability of 70% and predict 0 with a probability of 30%.\n\nHere is an example of a learner that is not plausible: A learner that always predicts 0 regardless of the training set, regardless of whether we associate 0 with dog or with cat.\n\nDue to page limits, we have added the explanations to the revised draft in Appendix K. We will move them into main body of our paper for the camera-ready version, where an additional page will be allowed.\n\n**(5)** Question: ‘The paragraph that comes after definition 1 is not clear to me. What are the \"classes\" here? In what sense is this setting the easiest?’\n\n\n**Answer:**\nSorry for the confusion. Classes are associated with labels. Each class has a label and each label corresponds to a class.\n\nThis setting is intuitively ‘easy’ because the input space or the feature space given has nice properties that are helpful for classification: Samples are already perfectly clustered in the input space according to labels. Samples with the same label stay close while samples with different labels are away from each other, so that for every class/label, a **single** clean training sample from that class will allow one to identify **all** samples from that class.\n\nFor example, imagine you want to solve a classification task and you are given a feature extractor that puts and only puts samples with the same label close to each other in the feature space. You will be able to solve the classification using only one sample from each class. 
For a test sample, you simply find the closest one in the training set and that will be the correct label. This is an easy setting because the feature extractor given is powerful.\n\n\n\n**(6)** Question: ‘Isn't the proof of Lemma 1 is just by the fact that the VC-dimension of the corresponding hypothesis class is k? Just think of the k labels as all possible binary labelings of $\\log_2(k)$ data points.’\n\n\n**Answer:**\nSorry we haven’t figured out your constructions, but we are all very interested! Do you mind elaborating a little bit more on what the insight is for ‘think of the k labels as all possible binary labelings of $\\log_2(k)$ data points’?\n\n\n**(7)** Question: ‘The paragraph that comes after definition 2 is not clear to me. Why is this setting so much harder compared with the setting of definition 1? Also, definition 2 seems like a generalization of definition 1, and if that is indeed the case, perhaps it is good to mention that.’\n\n**Answer:**\n\nSimilar to our answer for Question (5), this is a ‘difficult’ setting because the input space or the feature given is terrible in a sense that there is no correlation between labels corresponding to different inputs, so that one needs to see *all* samples in order to identify exactly *all* samples from a class.\n\nFor an extreme example, imagine you want to do classification based on only hash values of images. This is truly a poor choice of features as similar (but not identical) hash values may correspond to completely unrelated samples, and it is for sure a hard task, all because the feature extractor (in this case it is the hashing function) is so terrible.\n\n\n**Once again, thank you for your insightful comments! If we help address your concerns, please do consider raising your score for our work!**\n", " Thank you so much for spending time reviewing our work! We value every feedback from you and will try our best to answer your questions.\n\n**(1)** Weakness: ‘The absence of some standard notions in machine learning that seems related to the paper makes the paper and its contribution harder to understand. For example, it seems that the terms \"sample complexity\" and \"realizability\" should have been integrated in the basic definitions. ’\n\n**Answer:**\nThanks for the suggestion! We will be considering how we can incorporate them to improve the presentation of our work.\n\n\n**(2)** Question: ‘In the definition of a learner, and poisoned learning: The domain of T\n and f seems to be defined as infinite vectors where I guess it should be finite vectors?’\n\n\n**Answer:**\nGood question! \nHere $\\mathbb{N}$ denotes the set of all natural numbers and $\\Omega^\\mathbb{N}$, the domain of T and f, is the set of all *finite* datasets. In other words, we consider that the training set can contain an arbitrary but finite number of samples, meaning that the size of the training set can be 10, 10^4, 10^10… **it can be arbitrarily large, but not infinite**.\n\n\n**(3)** Question: ‘The formulation of the formal statement of the conjecture in page 3 is not justified enough, in my opinion. It seems that the conjecture is formulated with respect to a specific given data point $x_0$. I guess that this what a \"specific task\" (as written in the introduction) means? However, a \"specific task\" might be understood as drawing the test point from a specific hidden marginal distribution over instances, as usually done in PAC learning. Also, isn't this suggested formulation might be better? 
For example, think of a point that can only suffer attacks of a very small size, but on the other hand is not likely to be drawn as a test point. Isn't it better to define the lethal dose to be higher, than what reflects in the conjecture, in this case? (because a wrong prediction on this point is not lethal).’\n\n**Answer:**\nGood comment. Indeed, a ‘task’ is more often interpreted as a distributional argument rather than the pointwise one we present. However, the pointwise formulation is in fact **more desirable and more powerful**. \n\nFirstly, a pointwise argument can be easily converted into a distributional one, but the reverse is difficult. Given a distribution of $x_0$ and the (pointwise) ‘lethal dose’ for each $x_0$, one can define the distribution of the ‘lethal dose’ and its statistics as the distributional ‘lethal dose’. However, it is hard to uncover the ‘lethal dose’ for each $x_0$ from distributional arguments.\n\nSecondly, samples are not equally difficult in most if not all applications of machine learning: To achieve the same level of accuracy on different test samples, the number of training samples required can also be very different.\nFor example, on MNIST, which is a task to recognize handwritten digits, samples of digits ‘1’ are usually easier for models to learn and predict accurately, while those of digits ‘6’, ‘8’ and ‘9 are harder as they can look more alike.\nIn consequence, we do not expect them to be equally vulnerable to data poisoning attacks. Compared to a distributional one, the pointwise argument better incorporates such observations.\n\n\nDue to page limits, we have added discussions about this to the revised draft in Appendix J. We will move the discussion to the main paper for the camera-ready version, which allows an additional content page.", " Thanks for your time reviewing our work! We truly appreciate your comments and will do our best to answer your questions.\n\n**(1)** Question: ‘How should we interpret n --- the minimum amount of data required by the most sample efficient learner? Given a distribution and a model class for the base learner, n would become a constant, wouldn't it? For example, for a linear separable data distribution with margin ϵ, n would just be a constant for a given ϵ. Now, notice the bound is asymptotic, a constant n means the data poisoner always need to poison a constant fraction of the data set. Could you give an example that 1) the model class has a variable \nk, e.g. size of neurons in an NN, 2) n scales with k, and 3) the amount of poisoning examples needed has lower order than constant portion given increasing k?’\n\n**Answer:**\nFirstly, Lethal Dose Conjecture suggests that **a certain fraction** will be the ‘Lethal Dose’. In another word, the maximum tolerable number of poisoning samples scales linearly with the size of the entire training set $N$. **But more importantly**, the conjecture offers a characterization of the fraction, i.e. the fraction will be $\\Theta(1/n)$, where $n$ is the minimum number of samples required by the most data-efficient learner.\n\n**Please correct us if we do not understand the second half of the question accurately:** If one is using DPA and use neural networks (e.g. CNN, Transformers…), the training set for each base model (i.e. each partition) will typically have a smaller size and simply increasing model size can often lead to reduced performance. In this sense, when the size of models increase, the base learner can be less data-efficient (i.e. 
n increases when models get larger), and therefore the number of poisoned samples tolerated will decrease to a lower portion.\n\n\n**(2)** Question: ‘Correct me if I'm wrong: the hypothesis suggests that a more complex base learner may be more prone to data poisoning attack. On the other hand, a more complexity model (e.g. deep learning models) has the potential to fit both the poisoning data and the clean data separately, while a simple model (e.g. linear classifier) cannot. How do these two view reconcile with each other? This is out of the scope of the paper, and will not be the ground of my acceptance/rejection. But I'm curious about your opinion. Thanks.’\n\n**Answer:**\nEven assuming that a more complex base learner is more data-efficient, the conjecture does not imply that such a base learner is itself more resilient to data poisoning. \nAn important implication of the conjecture is that DPA is nearly optimal in converting base learners to defenses against data poisoning, with **no robustness requirement** on base learners.\nWe agree that in modern paradigms complex models are usually easier to overfit and may be more vulnerable to data poisoning attacks, but it is still too early to say that such correspondence is inevitable. \n\n**Finally, thanks again for your insights and questions! Could you please consider raising your score if we do help address the concerns?**\n", " **(5)** Question: ‘After line 5, (lets label them 5.1 - 5.5) I don't see how to get from 5.1 to 5.2.’\n\n**Answer:**\nFrom line 5.1 to 5.2 in the appendix, what we do is to divide the probability into two cases and bound them separately. Recall the definition of $E$ in line 6 where $E$ denotes the event that all other $k-1$ labels appear in the training set $D_n$.\nCase 1 is when $E$ happens, where we simply upper bound the probability that $f_{D_n}(x_0)=y_0$ by 1.\nCase 2 is when $E$ does not happen, meaning that there is some $y_1 \\neq y_0$ that does not appear in $D_n$. By Definition 1, we have $Pr [f_{D_n}(x_0) = y_0] = Pr[f_{T_{y_0\\leftrightarrow y_1}(D_n)}(x_0) = y_1] = Pr[f_{D_n}(x_0) = y_1]$ thus $Pr [f_{D_n}(x_0) = y_0]\\leq \\frac{1}{2}$. \n\nWe have added further explanations about this step in the revised draft.\n\n\n**(6)** Question: ‘On line 19, it wasn't clear to me why E[| T(D_N) - D_N |] = 2N / k.’\n\n**Answer:**\nNote that here $T=T_{y_0\\leftrightarrow y_1}$, which is a transform swapping labels $y_0$ and $y_1$ in the training set. Thus $E[|T(D_N) - D_N|]$ is in fact the expected number of samples with a label of $y_0$ or $y_1$, which is $\\frac{2N}{k}$. We have added further explanations about this step in the revised draft.\n\n**(7)** Question: ‘I also marked that I didn't understand steps 46.2 and 46.3.’\n\n**Answer:**\n\nFor 46.2: When $u_i = v_i$ for all $ i $, we have $ f( \\\\{u_i\\\\}) - f( \\\\{v_i\\\\}) = 0$ ; When there exists $u_i\\neq v_i$ for some $i$, we have $f( \\\\{u_i\\\\}) - f( \\\\{v_i\\\\}) \\leq 1$ because the output of $f$ is $\\\\{0,1\\\\}$.\n\nFor 46.3, we use the union bound. The probability that for at least one $i$ we have $u_i\\neq v_i$ is upper bounded by the sum of probability that $u_i\\neq v_i$ for all $i$. \n\nWe have added further explanations about these steps in the revised draft.\n\n\n**(8)** Question: ‘I was unable to follow lemma 6 and 7. They would benefit from more explanation. 
What is the intuition for taking ε -> 0?’\n\n**Answer:**\nIntuitively, what we do is to construct a second, perfectly legit distribution that is not far from the original one (measured with the total variation distance), so that any classifier must either fail on the original one or fail on the one we construct.\n\nIf it fails on the original one, the adversary achieves its goal even without poisoning the training set. If it fails on the one we construct, the adversary can still succeed by poisoning only a limited fraction of the training set because the distribution we construct is close to the original one (measured with total variation distance).\n\nRegarding the intuition for taking $\\epsilon \\to 0$: When $\\epsilon$ is actually 0, the distributions we construct for different classes will be ‘symmetric’ to $x_0$, meaning that there will be a tie in defining the maximum likelihood prediction. For any $\\epsilon >0$, the tie will be broken. By letting $\\epsilon \\to 0$, we find the tightest bound of the number of poisoned samples needed from our construction. \n\nWe have added further explanations about these proofs in the revised draft.\n\n**Again, thank you for checking our proofs. Please consider raising the score if we do help address your concerns!**\n", " Firstly we want to thank you for reviewing and especially checking our proofs. We really appreciate feedback regarding how our proofs are presented! Our responses to your questions are as follows.\n\n**(1)** Weakness: ‘I think the biggest weakness is the choice of datasets and experiments. I think what is presented is sufficient given the quality of the theoretical analysis, but I would have liked to see the claim tested against a large (image-net sized) dataset with a large model, even if the result is: it doesn't follow the theoretically predicted trend, but that could just be suggesting we haven't found efficient enough learners.’\n\n**Answer:**\nThank you for the suggestion. Our code (included in Supplementary Material) is built from the public, official implementation of DPA (https://github.com/alevine0/DPA), which is why we evaluate empirically on datasets that the DPA work uses, i.e. CIFAR-10 and GTSRB. Notably, our theoretical analysis in Section 7.1 shows that the design of DPA implies the same scaling rule as the one in Lethal Dose Conjecture, meaning that it will follow the same rule on ImageNet as long as we are still using DPA. \nNevertheless, we do agree evaluation on ImageNet can be interesting to our community, despite the fact that we may not have enough time to do so right away.\n\n\n**(2)** Question: ‘Line 33: I find the \"by doing nothing\" somewhat unclear.’\n\n**Answer:**\nThanks for pointing this out. It means that the dataset is clean and not poisoned. For example, if the goal of the adversary is to mislead the model into predicting ‘cat’ for a ‘dog’ image, the assumption by Mahloujifar et al.[21] says that when training on a clean dataset, the classification algorithm must predict the wrong label ‘cat’ with a non-negligible probability (i.e. the probability is at least 1/poly(N), where N is the number of samples).\nThis is in fact a very strong assumption, as we can almost always construct classifiers with negligible errors (i.e. 
the probability reduces exponentially with the number of samples N).\nFor instance, considering a binary classification, if a classifier f has an accuracy of 50.1% using 1000 samples, then when you have N samples, you can divide them into groups of 1000 samples each and use the majority votes of f trained on individual groups as final predictions (similar to DPA). With the Chernoff bound, we know this construction gives an algorithm with exponentially decaying (respect to the number of samples N) error rate, which is negligible and does not fit in their assumption.\n\n\n**(3)** Question: For the claims and assumptions regarding ‘Gaussian Classification’.\n\n**Answer:**\nIndeed we assume an isotropic Gaussian for each class for simplicity of the proofs, mostly because there are no simple forms (or none that we know of) for the total variation distance between two arbitrary Gaussian distributions. **However**, our results generalize to unbalanced settings, which we will discuss in our response to the next question~\nMeanwhile, we used ‘Isotropic Gaussian Classification’ as the new name for the ‘Gaussian Classification’ setting in the updated version of our paper.\n\n**(4)** Question: ‘What happens in an unbalanced setting?’\n\n**Answer:**\nFirst let us see how Lethal Dose Conjecture applies to an unbalanced setting. \nNotably, Lethal Dose Conjecture is a **pointwise** statement rather than a distributional one: For a (test) sample $(x_0, y_0)$, we uncover the relationship between the difficulty of learning how to predict accurately **on** $\\mathbf{x_0}$ and the portion of poisoned samples that one can possibly tolerate while ensuring accuracy **on** $\\mathbf{x_0}$. \n\nThis is consistent with our intuitions as empirically we always observe that samples are not equally difficult, and naturally they are not equally vulnerable under poisoning attacks. **When the training distribution is unbalanced**, some $x_0$ may become easier as we may need less clean samples drawn from that distribution to learn how to predict $x_0$, and therefore we may tolerate more poisoned samples while ensuring accuracy on $x_0$; Some $x_0$ may become harder and therefore more vulnerable under data poisoning attacks.\n\n**As for the ‘Gaussian Classification’**, an unbalanced setting will not be **geometrically** as interpretable as the one we present, because now the maximum likelihood prediction $y_0$ does not directly correspond to the closest center of Gaussian distributions. Our proofs, however, generalize to the unbalanced setting because we can still compute how far a class center needs to be shifted for the poisoning attack to succeed and how large the corresponding total variation distances are.", " **(6)** Limitation: ‘I believe the finding about data efficiency affecting DPA/FA's performance is an obvious one and already known.’\n\n**Answer:**\nThis is only partially true.\nIt is indeed obvious that more data-efficient base learners can improve DPA/FA’s performance. **However**, previously we knew only that this is one way towards our destination, now we know that this can be one of the optimal ways!\n\nIn addition, this offers a new motivation for advancing machine learning with small training sets and/or few-shot learning (even in domains where data is abundant), where the techniques differ greatly from learning with large datasets. We hope the conjecture will also facilitate the advancement of relevant fields.\n\n\n**(7)** Limitation: ‘ The relationship of your work to Gao et al. 
[9] to your work merits a longer discussion than the partial sentence at the end of Section 2. As more general feedback, I appreciate when in related work the authors include in the text the author(s)'s names with the citation number.’\n\n**Answer:**\n\nThanks for the suggestion regarding citation formats! We updated it in the revised version of the paper. Due to page limits, we included further discussion with regards to the Gao et al. [9] in Appendix in the updated version and will include it in the main paper for the camera-ready version which allows an additional content page. \n\nIn Gao et al. [9], authors make a very creative step towards understanding how the budget of data poisoning attacks and the size of the training set interact and affect whether we can defend the attacks.\n\nHowever, the main results of Gao et al. [9] can in fact be implied by Lethal Dose Conjecture. In this sense, the conjecture is stronger and more general. \n\nFor example: Let $N$ be the size of the training set and $m$ be the number of poisoned samples. Lethal Dose Conjecture implies that the 'Lethal Dose' (the threshold for when poisoning attacks can be too strong to be defended) is $m/N \\approx \\Omega(1/n)$, where $n$ is the number of samples needed by the most data-efficient learners to achieve accurate predictions. Meanwhile, Theorem 3.2 and 3.3 of Gao et al. [9] only suggest when $m=o(N)$, i.e. $m/N \\to 0$, the poisoning attacks are defendable.\n\n\n**Thank you for reading! Please consider raising the score if we do help addressing your concerns~**\n", " Thank you for the review! We especially appreciate the insights you shared--that is the spirit of openreview! Now we will answer your questions and address your concerns.\n\n**(1)** Concern: ‘I understand that the argument is made that the conjecture applies to the two extremes of learnability -- hardest and easiest -- so in theory it applies elsewhere. I would have hoped had the conjecture been demonstrated on even a simple parametric model class (e.g., a linear model); I also understand that this easier said than achieved. Overall, I am concerned the claims may be overstated.’\n\n**Answer:**\nWe agree that the claims may be overstated **if** the two cases in Section 4.2 were the only theoretical supports provided, but they are **NOT**. While these two are more intuitive, stronger theoretical supports are provided in Section 5 and Section 6.\n\nIn section 5, we provide a ‘distribution discrimination’ view, showing that the very same scaling rule of Lethal Dose Conjecture applies when one wants to discriminate any distributions.\n\nIn section 6, we prove Lethal Dose Conjecture for classification, assuming data from each class follow an isotropic Gaussian distribution.\n\nIt is the union of the analysis in Section 4.2, Section 5 and Section 6 providing evidence that the scaling rule in Lethal Dose Conjecture is no coincidence.\n\n\n**(2)** Question: ‘In Sec. 7.2, the argument is made that since DPA is asymptotically optimal, then improving robustness \"reduces developing stronger defenses to finding more data-efficient learners.\" I understand the origin of this claim, but it seems overbroad. Could it not also be argued that a better/alternative approach is better ways to determine $\\ell_0$\n robustness of the individual models beyond the assumption that a single insertion/deletion can arbitrarily change the prediction?’\n\n**Answer:**\nGood question! 
The rationale behind that argument is that we want to simplify the defense problem through reduction. In particular, to defend against data poisoning, we are trying to design algorithms/models with $\\ell_0$ robustness (with respect to the training set) overall. Here, Lethal Dose Conjecture implies that DPA is a nearly optimal reduction from designing $\\ell_0$ robust models to designing data-efficient models, **simplifying** a problem with robustness requirements to a problem with none. \n\nThis is desirable as now we can focus on a simpler task. Meanwhile, in formulations, making base models $\\ell_0$ robust is not easier than making the whole model $\\ell_0$ robust. \n\nHere is another way of looking at this: When the base models are already robust against data poisoning, it implies that one can also increase robustness by using more base models with less training data each. In fact, in some sense, an example of this is presented in Section 7.3 of our paper, where we show that a base learner for DPA can be derived from nearest neighbor, an approach with intrinsic robustness. DPA using the derived base learner offers similar robustness as the nearest neighbor method.\n\n\n\n\n**(3)** Question: ‘Under your claim the number of models may need to grow to n which affects inference time efficiency. If there was some way -- say even an oracle -- to quantify the intrinsic robustness of each submodel, would that not be similarly as good? If not why?’\n\n**Answer:**\nYes, it is totally possible that a method with intrinsic robustness may be as robust as DPA (using the most data-efficient learners) while offering a faster inference. We believe improving inference time can also be a valuable direction for future research. Our conjecture focuses on the extremes of robustness but not inference time.\n\n\n**(4)** Question: ‘In other words, is partitioning merely a way to dilute the effect of \"overwhelmingly lethal\" instances?’\n\n**Answer:**\nIn some sense, yes. The intuition behind DPA is no mystery and it is fair to say that it is some sort of dilution. What is impressive and non-trivial about the Lethal Dose Conjecture is that it implies that such simple dilution is surprisingly good and, as shown in the paper in several cases, is nearly optimal.\n\n\n**(5)** Question: ‘How do your \"baseline\" experimental results correspond to the published DPA implementation? Does it take out the data augmentation already in the implementation and compare against that?’\n\n**Answer:**\nThe baseline results are consistent with the published DPA implementation. DPA_baseline uses the very **same** augmentations and hyperparameters as the published DPA implementation and the results in our Figure 2 matches the corresponding settings reported in the original paper of DPA. We do **not** take out the augmentation already in the implementation. It is surprising but one can indeed double or triple the reported robustness of one of SOTAs! This is because our community has not put much effort into improving base learners and the potential from more data-efficient base learners remains undiscovered.\n\n\n\n", " This work proposes the *Lethal Dose Conjecture* regarding learnability under training-set attacks -- in particular on the amount of data needed by an attacked (sub)model. The authors demonstrate their conjecture on simple model classes (e.g., instance based-learners). They also discuss approaches to approve the performance of existing certified defenses generally. 
Off the top, there are definitely things I like about this paper; at the same time, I have concerns. My score is currently set to borderline, and the score changed multiple times over the course of thinking about the paper and writing the review. It is my *a priori* expectation this score will change based on discussions with the authors and other reviewers.\n\nThe paper is overall very well-written. The arguments are laid out clearly. It uses the right amount of notation to maximize understanding. Overall, I enjoyed reading it. I recognize the obvious level of care and thought in the writing. \n\nI also recognize that there is implicitly some level of courage in writing a conjecture paper and that reviewers should be understanding that even a wrong conjecture can advance science. Nonetheless, I think the bar for such papers should be quite high, and most conjecture papers may be better suited as a preprint unless especially incisive. I am not convinced this paper meets that latter bar in particular given related work.\n\nI appreciate the demonstration of the conjecture on simple model classes (e.g., bijective learner, instance-based memorized learner). There are special properties to those learners that make me concerned of if/how readily the conjecture generalizes. I understand that the argument is made that the conjecture applies to the two extremes of learnability -- hardest and easiest -- so in theory it applies elsewhere. I would have hoped had the conjecture been demonstrated on even a simple parametric model class (e.g., a linear model); I also understand that this easier said than achieved. Overall, I am concerned the claims may be overstated.\n\nI thought the point made in Section 7.3 \"*If assuming the conjecture is true, when some defense against data poisoning is clearly more robust than DPA (even in restricted cases), there should be a more data-efficient learner.*\" was well-stated and effective. I think the paper would be improved if that point was made more clearly earlier.\n* As a note, I think there may be a typo at the beginning of that sentence.\n\nI appreciate that the authors consider both DPA and FA. I think the exposition would be clearer focusing exclusively on DPA and having a statement in the related work about how the arguments made about DPA generalize to FA. DPA is the older method and the gains from FA are often quite marginal and focused mostly in the tail of the certification bound curve (Wang et al. 2022 Table 1). \n* Granted FA is newer and can lead to improvements at the expense of increased training and inference costs. I do not think repeatedly mentioning FA here achieves much beyond making FA better known.\n In Sec. 7.2, the argument is made that since DPA is asymptotically optimal, then improving robustness \"reduces developing stronger defenses to finding more data-efficient learners.\"\n* I understand the origin of this claim, but it seems overbroad. Could it not also be argued that a better/alternative approach is better ways to determine $\\ell_0$ robustness of the individual models beyond the assumption that a single insertion/deletion can arbitrarily change the prediction?\n* Under your claim the number of models may need to grow to $n$ which affects inference time efficiency. If there was some way -- say even an oracle -- to quantify the intrinsic robustness of each submodel, would that not be similarly as good? If not why? \n\nAssuming the conjecture is true, in what case should an ensemble-based method like DPA be used? 
For simplicity of discussion, consider DPA (not FA) with disjoint partitions.\n* By partitioning the training set, an attacker with perfect knowledge can ignore $\\frac{k}{2}$ of the submodels or in other words about $\\frac{1}{2}$ of the training data. If only $n$ instances are needed to learn the distribution, does this not reduce the best case bound by (about) half specifically if we can certify the robustness of an individual submodel above 1 (as possible with KNN)?\n * In other words, is partitioning merely a way to dilute the effect of \"overwhelmingly lethal\" instances?\n * I think this distinction and discussion is more and to some degree glossed over in Section 7.\n* Granted the claim in your paper is that they are only optimal up to a constant factor and what I describe is a (constant) factor of 2.\n* I think clarity on this point is particularly important for me and would improve the paper.\n\nHow do your \"baseline\" experimental results correspond to the published [DPA implementation](https://github.com/alevine0/DPA)? Does it take out the data augmentation already in the implementation and compare against that? I believe the finding about data efficiency affecting DPA/FA's performance is an obvious one and already known. Described very briefly, an ensemble's performance sits on top of the performance of its submodels. If the submodels are made better, the ensemble will improve. \n* For example, without the base augmentation already in [DPA's implementation](https://github.com/alevine0/DPA/blob/ce25c36721f7529350700b071b9a5e1281c31009/FeatureLearningRotNet/dataloader.py#L97), DPA performs much worse.\n* I am open to be convinced if others think this finding is less obvious.\n\nGao et al. [9] study and provides bounds under instance targeted poisoning attack. The relationship of your work to Gao et al. [9] to your work merits a longer discussion than the partial sentence at the end of Section 2.\n* As more general feedback, I appreciate when in related work the authors include in the text the author(s)'s names with the citation number. I missed that you even noted Gao et al. the first time I read it because it only had the citation number.", " This paper introduces the Lethal Dose Conjecture (LDC) which speculates on the\nminimum amount of bad or poisoned data samples that a learning algorithm can\ntolerate (or an attacker can manipulate) before its ability to predict breaks\ndown. They conjecture that this number is bounded by Θ(N / n) (using \"Big\nTheta\" asymptotic notation) for datasets of size N and problems where a minimum\nof n correctly labeled examples are needed. They seek to prove this in three\nrestricted settings and provide evidence suggesting it may be true in general.\n\nThey relate this conjecture to two previously published ensembling techniques\n(DFA and FA) that provide robustness when learning on a \"poisoned\" dataset.\nThey show that if LDC is true, then DFA and FA are the best mitigations against\npoisoned data in the limit as the dataset size grows.\n\nA recurring theme is that data efficiency (the ability to fit a pattern with\nfewer examples) is linked to how well a learning algorithm can withstand\npoisoning. Experiments use data augmentation to increase the data efficiency of\nnetwork-in-network learners on CIFAR-10 and GTSRB.\n Please provide a thorough assessment of the strengths and weaknesses of the\npaper, touching on each of the following dimensions: originality, quality,\nclarity and significance. 
You can incorporate Markdown and Latex into your\nreview. See /faq.\n\nThe paper is generally well written, well motivated, and provides a good amount\nof evidence in support of the claim.\n\nI'm not deeply involved in the related literature on data poisoning, so I can't\ncomment on how original the work is, but it seems to be a logical (but\nnon-trivial) next direction given the previous works cited in section 1 and 2.\n\nApart from having a well-chosen name, the actual conjecture provides a useful\nconceptual framework for practitioners to think about data poisoning (or as a\nlimiting case relating to learning on dirty datasets in general). The N/n\nfigure makes intuitive sense but nailing it down to exactly that case is\ntheoretically import.\n\nThe code to reproduce the experiments is provided.\n\nI think the biggest weakness is the choice of datasets and experiments. I\nthink what is presented is sufficient given the quality of the theoretical\nanalysis, but I would have liked to see the claim tested against a large\n(image-net sized) dataset with a large model, even if the result is: it doesn't\nfollow the theoretically predicted trend, but that could just be suggesting we\nhaven't found efficient enough learners.\n Please list up and carefully describe any questions and suggestions for the\nauthors. Think of the things where a response from the author can change your\nopinion, clarify a confusion or address a limitation. This can be very\nimportant for a productive rebuttal and discussion phase with the authors.\n\nThere are several nitpicks and typos.\n\nLine 33: I find the \"by doing nothing\" somewhat unclear.\n\nThe use of the term \"certified robustness\" is used several times, but isn't\ndefined. Adding that definition in the introduction would be helpful. It looks\nlike line 79 and 80 are getting at the definition, but I think it would be best\nto make the term crystal clear.\n\nI'm fairly sure the term \"asymptotically\" refers to the growth of the dataset,\nbut that should be stated.\n\nIn Figure 1, you forgot a `\\` in front of `mu_1`.\n\nIn Section 6, (and in the abstract) the upfront claim is that you are verifying\nthe conjecture in \"Gaussian Classification\", but upon further reading it is \nmore restrictive than that. The Gaussian's have equal co-variance and it seems\nthat the learning is happening in a balanced setting where each class has\nroughly the same number of examples. It isn't clear if the result holds beyond\nthis case, but the proof seems to specifically address this case. I think this\n\"balanced equal co-variance\" or some other wording to that effect is needed when \nintroducing this claim (at least in the list of contributions and in the title\nof section 6 or the first sentence).\n\nThis also raises the question: What happens in an unbalanced setting? How does\nthat relate to `n` the minimum number of clean examples needed? When discussing\nthe number `n` there seems to be a presumption that the minimum number of\nexamples. I'm not sure if this is worth stating in the paper or not.\n\nIn the appendix, I did my best to verify the proofs. Nothing stood out as\nobviously wrong, but there were places that I think could benefit from further\nexplanation (at least for me, perhaps its clear to others).\n\nAfter line 5, (lets label them 5.1 - 5.5) I don't see how to get from 5.1 to\n5.2.\n\nOn line 8, it would be more formally correct to say ∈ Θ(k) as Θ represents a\nset of functions of which log(2 - 2τ)/log(1 - 2/k) is a member. 
This comment\napplies elsewhere. This is minor as the intention should be generally\nunderstood, but I wanted to point that out.\n\nOn line 19, it wasn't clear to me why `E[| T(D_N) - D_N |] = 2N / k`.\n\nI also marked that I didn't understand steps 46.2 and 46.3.\n\nIt may be of some value to put some of these proofs or parts of these proofs\ninto a proof validator and provide that encoding to provide readers like myself\nwith marginal theoretical ability to feel more confident about their\ncorrectness. You could codify difficult to encode arguments as assumptions,\nwhich would mean only the truth of the important concepts would need to be\nverified.\n\nI was unable to follow lemma 6 and 7. They would benefit from more explanation.\nWhat is the intuition for taking ε -> 0?\n\n The authors do not discuss the potential negative impact of this work, but I\nthink this potential is fringe. In some sense the conjecture could give a\nmalicious actor a hint on the amount of work they need to do, but it also lets\nothers know what they need to guard against. I don't think there is any\nimmediate or zero-day exploit that this enables, so in general I think making\npractitioners aware of this (potential) Θ(N / n) relationship is a net social\ngood.", " This paper suggests a hypothesis for the necessary and sufficient amount of malicious samples needed asymptotically for successful data poisoning. Most notably, the amount is inversely proportional to the minimum amount of data needed to learn the concept for the chose model class. The hypothesis is proven on some learning scenarios. Empirically, it is shown that learning pipelines using more data efficient base learner can achieve higher certified robustness against poisoning. This paper proposes a wholistic idea with ample theoretical and empirical evidence. The message is clear and thought-provoking. I learned something new. Thank you.\n\nIn terms of clarity, this paper is of high quality. The main concepts are mostly clear stated, and the theoretical insights is coupled with intuitive explanation, making it easy to understand by audience with different levels of theoretical background. \n\nFor significance, although the paper is only a hypothesis, the hypothesis statement is thought provoking. The implication to real-world scenario is also impactful: data efficient base learner can achieve higher robustness against poisoning. The empirical evidence further increases the significance.\n\nFor originality, the paper stems from previous work of DPA and FA, the angle is new. Whether the hypothesis is eventually proven true or false, the idea is worth presenting to the machine learning community.\n\nOverall, this paper is of high quality. However, I do have a few lingering problem about the strength of the statement, and hope the authors can clarify. I have two main questions.\n\n1. How should we interpret $n$ --- the minimum amount of data required by the most sample efficient learner? Given a distribution and a model class for the base learner, $n$ would become a constant, wouldn't it? For example, for a linear separable data distribution with margin $\\epsilon$, $n$ would just be a constant for a given $\\epsilon$. Now, notice the bound is asymptotic, a constant $n$ means the data poisoner always need to poison a constant fraction of the data set. Could you give an example that 1) the model class has a variable $k$, e.g. 
size of neurons in an NN, 2) $n$ scales with $k$, and 3) the amount of poisoning examples needed has lower order than constant portion given increasing $k$? \n\n2. Correct me if I'm wrong: the hypothesis suggests that a more complex base learner may be more prone to data poisoning attack. On the other hand, a more complexity model (e.g. deep learning models) has the potential to fit both the poisoning data and the clean data separately, while a simple model (e.g. linear classifier) cannot. How do these two view reconcile with each other? This is out of the scope of the paper, and will not be the ground of my acceptance/rejection. But I'm curious about your opinion. Thanks. The limitations have been adequately addressed.", " The paper suggests a conjecture that quantifies what is the maximal fraction of training data that might be poisoned, while achieving a given required accuracy rate $\\epsilon$. Specifically, the conjecture is that roughly $1/n$ of the training data is the maximal poisoned data fraction, where $n$ is the sample complexity of the task, that is, the sample size required to achieve accuracy $\\epsilon$ in a \"clean\" environment (without presence of an adversary). The conjecture is proved for some special cases such as instance memorization. Another theoretical aspect of the paper is a general view of the poisoning problem through distribution discrimination. Practically, the paper summarizes results of some experiments supporting the conjecture: On one hand, robust learners can be derived from data-efficient learners, and on the other hand, data-efficient learners can be derived from robust learners. Strengths:\n\n1. The conjecture sounds reasonable. The informal arguments as well as the experiments supporting it are quite convincing.\n2. The \"distribution discrimination\" view of the problem is intuitive and insightful.\n3. The practical technique of using data augmentation to increase accuracy is interesting and insightful.\n4. The idea of deriving data efficient-learners from robust learners is interesting as well.\n\nWeaknesses:\n1. The absence of some standard notions in machine learning that seems related to the paper makes the paper and its contribution harder to understand. For example, it seems that the terms \"sample complexity\" and \"realizability\" should have been integrated in the basic definitions. Another example is Lemma 1 which seemingly can be simply proved using the well known VC-dimension, which is not mentioned anywhere in the paper (elaboration in question 5).\n2. The theoretical contribution of the paper is not clear enough to me: The formal statement of the conjecture is not justified enough in my opinion (elaboration in question 2). It is also not explained why the special cases described in Section 4 are interesting, and in what sense they are \"easy\" or \"hard\", as written in the paper (see elaboration in questions 4,6). \n3. There are some unclear parts in the text, which also makes it hard to evaluate the paper's contribution. Examples can be found in questions 1,3. 1. In the definition of a learner, and poisoned learning: The domain of $T$ and $f$ seems to be defined as infinite vectors where I guess it should be finite vectors?\n2. The formulation of the formal statement of the conjecture in page 3 is not justified enough, in my opinion. It seems that the conjecture is formulated with respect to a specific given data point $x_0$. I guess that this what a \"specific task\" (as written in the introduction) means? 
However, a \"specific task\" might be understood as drawing the test point from a *specific* hidden marginal distribution over instances, as usually done in PAC learning. Also, isn't this suggested formulation might be better? For example, think of a point that can only suffer attacks of a very small size, but on the other hand is not likely to be drawn as a test point. Isn't it better to define the lethal dose to be higher, than what reflects in the conjecture, in this case? (because a wrong prediction on this point is not lethal).\n3. In definition 1: What does \"plausible learners\" mean? In what sense are they plausible?\n4. The paragraph that comes after definition 1 is not clear to me. What are the \"classes\" here? In what sense is this setting the easiest?\n5. Isn't the proof of Lemma 1 is just by the fact that the VC-dimension of the corresponding hypothesis class is $k$? Just think of the $k$ labels as all possible binary labelings of $\\log_2(k)$ data points.\n6. The paragraph that comes after definition 2 is not clear to me. Why is this setting so much harder compared with the setting of definition 1? Also, definition 2 seems like a generalization of definition 1, and if that is indeed the case, perhaps it is good to mention that. Yes, except for what is written in question 2 about the formulation of the conjecture." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "GYrTIJ6Hzz-", "R3HNHAd-4Vw", "iADEwrvxXUU", "9KutH6bzTQW", "snmEK91-UZc", "M3eLJ5nIbsd", "LZsCF016AwM", "h8LBeC6_W7f", "nips_2022_PYnSpt3jAz", "KG4CSzISFQI", "Xwm2zsO3QNj", "IpfL2hJEPE7", "RjUsKWmQRV_", "AXwHg8sPbPX", "EfIevcvvWwp", "-6u3ZObdlBk", "nips_2022_PYnSpt3jAz", "nips_2022_PYnSpt3jAz", "nips_2022_PYnSpt3jAz", "nips_2022_PYnSpt3jAz" ]
nips_2022_VvOcK2DGM7G
Unsupervised Causal Generative Understanding of Images
We present a novel framework for unsupervised object-centric 3D scene understanding that generalizes robustly to out-of-distribution images. To achieve this, we design a causal generative model reflecting the physical process by which an image is produced, when a camera captures a scene containing multiple objects. This model is trained to reconstruct multi-view images via a latent representation describing the shapes, colours and positions of the 3D objects they show. It explicitly represents object instances as separate neural radiance fields, placed into a 3D scene. We then propose an inference algorithm that can infer this latent representation given a single out-of-distribution image as input -- even when it shows an unseen combination of components, unseen spatial compositions or a radically new viewpoint. We conduct extensive experiments applying our approach to test datasets that have zero probability under the training distribution. These show that it accurately reconstructs a scene's geometry, segments objects and infers their positions, despite not receiving any supervision. Our approach significantly out-performs baselines that do not capture the true causal image generation process.
Accept
This paper proposes a NERF-based object-centric VAE generative model, which it argues is a more "causal" generative model than prior attempts. While the approach is somewhat elaborate, it is described well, and involves a novel and quite well-motivated combination of several previously proposed components. The MCMC sampler that improves the VAE encoder for out-of-distribution settings is an interesting additional contribution beyond the generative model, even though it is naturally computationally inefficient. While the experimental settings are somewhat toy (all synthetic data), the proposed method is shown to enjoy substantial gains over some baselines representative of the closest related work, and the new baselines and ablation studies added during the response period are helpful. After quite an engaging response period, the four thoughtful reviewers all agreed that the merits outweigh the shortcomings. I concur and recommend acceptance.
train
[ "IBeOwj3IHvz", "Z9l6BVvnT_M", "9su8fnNTGN", "06TBGz62iLh", "K2qjTWFxTd", "eq4XX5t5H1a", "_wZiZc6J6cl", "y3Px5v_VW2e", "inuFxkDPL6e", "leYYxpqzrrS", "zxRZn7uIxsL", "tDW8W6O3RSf", "2ie0EkiE1zMy", "QgRhAR4IN0x", "kl5fJ5Sq2kt", "26uApj6gTJQ", "mR-egRdUESP", "fyK-4_LBOVS", "ULhH2dVbFdo", "9f1Ql_0wqvx", "8gD810Sw-ZS", "n8zQG3DKluh", "ig5EYClfIUv", "KoUY902ctYs", "VJCcC86eb02" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you once again for your review, we are glad to hear we have resolved your concerns.\n\n> by \"analysis\" I meant more trying to understand how it does that - for example - a simple latent traversal may shed some light on what structures this latent holds.\n\nThanks for the suggestion, we agree this would be informative and will add latent space traversals to the camera-ready version of the manuscript.\n\nWe have now run the following additional experiments that you requested:\n\n1. IODINE as an additional 2D baseline with iterative amortized inference\n2. ablation experiments on our model with...\n - encoder (amortized inference) replacing MCMC inference in our model on OOD data\n - removing the high-level prior $p_{\\theta}(\\mathbf{z}^s\\mid\\mathbf{z}^g)$\n\nIn addition, following the request of Reviewer uNLg, we now also report the performance of Slot-Attention using our MCMC inference scheme instead of amortized inference.\n\nIn the top-level comment (named \"Requested Experiments\"), we give a summary of these results; for full details, please see the updated manuscript.\n\n*We thank you again for engaging in the discussion, and for your valuable suggestions!*", " \nWe have now run the following additional experiments you requested, to examine the benefit of different aspects of our approach:\n\n1. ablation experiments on our model with...\n - encoder (amortized inference) replacing MCMC inference\n - unstructured full-scene latent representation replacing per-object structure in the generative model\n - removing the high-level prior $p_{\\theta}(\\mathbf{z}^s\\mid\\mathbf{z}^g)$\n2. Slot Attention baseline with MCMC inference instead of amortized inference\n\nIn addition, following the request of Reviewer tfzG, we have added IODINE as a further 2D baseline.\n\nIn the top-level comment (named \"Requested Experiments\"), we give a summary of these results; for full details, please see the updated manuscript.\nWe hope this now addresses your concern regarding lack of ablation study and rigorous analysis of how MCMC helps the model; we kindly ask that you consider raising your score as promised in your review.\n", " We have now run the following additional experiments you requested:\n\n1. Slot Attention with MCMC inference\n2. beta-VAE baseline\n3. ablation experiments on our model with...\n - encoder (amortized inference) replacing MCMC inference\n - unstructured full-scene latent representation replacing per-object structure in the generative model\n - removing the high-level prior $p_{\\theta}(\\mathbf{z}^s\\mid\\mathbf{z}^g)$\n\nIn addition, following the request of Reviewer tfzG, we have added IODINE as a further 2D baseline.\n\nIn the top-level comment (named \"Requested Experiments\"), we give a summary of these results; for full details, please see the updated manuscript.\nWe hope this now addresses all your concerns regarding evaluation; we kindly ask that you consider raising your score if so.\n\n*We thank you again for engaging in the discussion, and for your valuable suggestions!*", " # Additional Baselines\n\n\nHere we provide results from the additional baselines requested by reviewers. As with those already included in the paper, these baselines show a significant drop in performance on out-of-distribution images. 
Here we show only a subset of results for brevity; please see the updated paper and supplementary material for full results.\n\n\n### Slot-Attention with MCMC (requested by Reviewer uNLg)\n\nHere we adapt and apply our proposed MCMC inference to the learnt decoder of Slot-Attention (recall that we already provided results from Slot-Attention in the paper, but using their encoder). In this comment, we only show results on object segmentation, measuring mean segmentation covering (mSC; higher is better) -- please see the paper for more results. Note that Slot-Attention cannot perform 3D tasks, such as depth estimation.\n\n| GQN Mean Segmentation Covering |Test (I.I.D.) |O.O.D. |\n|----------------|-------------------------------|-----------------------------|\n|Ours |$\\mathbf{0.88}$ |$\\mathbf{0.89}$ | \n|Slot-Attention \t\t |0.67 |0.56 |\n|Slot-Attention with MCMC \t\t |0.56 |0.54 |\n\nComparing inference methods on Slot Attention, we see that MCMC under-performs relative to amortized inference on the IID test set (which is expected, since the encoder was trained on such data). \nSlot-Attention with MCMC inference successfully reconstructs the input image during its test-time optimization, but this comes at the cost of lower segmentation performance. In contrast, our method performs well on both reconstruction and segmentation.\nHowever, using MCMC results in a much smaller gap between IID and OOD settings, demonstrating the benefit of our proposal of using MCMC. \nStill, both variants of Slot Attention perform significantly worse in both settings than our own method.\n\n\n### $\\beta$-VAE (requested by Reviewer uNLg)\n\nHere we give results from $\\beta$-VAE. This cannot perform any vision tasks apart from reconstruction, so we only measure reconstruction quality, using PSNR (higher is better). \n\n| GQN PSNR |Test (I.I.D.) |O.O.D. |\n|----------------|-------------------------------|-----------------------------|\n| Ours\t\t |$\\mathbf{24.1}$ |$\\mathbf{21.8}$ |\n| beta-VAE\t\t |20.6 |15.6 |\n\nWe see significantly lower performance with $\\beta$-VAE than our method, particularly in the OOD setting, even though $\\beta$-VAE is intended to learn a disentangled latent space (which has been hypothesized to give greater robustness to distribution shifts).\n\n\n### IODINE (requested by Reviewer tfzG)\n\nFinally, we also provide results from another 2D baseline, IODINE, which uses iterative amortized inference. We tuned slot count, number of IAI iterations, pixel noise std, learning rate, and gradient clipping. For brevity we only show results for object segmentation, measuring mean segmentation covering (higher is better); please see the paper for more results. Note that IODINE cannot perform 3D tasks, such as depth estimation or NVS.\n\n| GQN Mean Segmentation Covering |Test (I.I.D.) |O.O.D. |\n|----------------|-------------------------------|-----------------------------|\n|Ours |$\\mathbf{0.88}$ |$\\mathbf{0.89}$\n|IODINE \t\t |0.54 |0.53 | \n\nIODINE with iterative amortized inference successfully reconstructs the input image during its test-time optimization, but this comes at the cost of lower segmentation performance. In contrast, our method performs well on both reconstruction and segmentation. 
We see that the iterative amortized inference scheme results in only a small drop in segmentation performance on OOD data compared with IID; however overall performance is substantially lower than with our model and its MCMC inference scheme (note that the original IODINE paper does not show successful results on textured data like our GQN dataset).\n\n", " \n# Ablation Study\n\n### Ablating MCMC inference (amortised inference vs MCMC)\n\nHere we provide an ablation study on the effects of our novel MCMC scheme. We compare it with the standard approach of amortised inference (i.e. an encoder network predicts the posterior parameters). To keep this comment brief, we only give segmentation results on GQN dataset; see the updated paper for more tasks and datasets. The following table shows mean segmentation covering (mSC; higher is better).\n\n| GQN Mean Segmentation Covering |Test (I.I.D.) |O.O.D. |\n|----------------|-------------------------------|-----------------------------\n|MCMC (ours) \t\t |0.88 |$\\mathbf{0.89}$\n|Encoder |$\\mathbf{0.91}$ |0.55 \n\nThese results confirm that amortised inference is a critical bottleneck for out-of-distribution generalization: though amortised inference performs well when the test distribution is identical to the training distribution (first column), its performance drops significantly on out-of-distribution images (second column), while MCMC holds up.\n\n\n### Ablating generative model structure (MCMC on our model vs MCMC on unstructured generative model)\n\nNext we analyse the effects of using our compositional model compared to a non-compositional model, which has one latent variable rather than one per object (similar to NeRF-VAE), but otherwise with the same architecture as ours). Note the rest of the method is intact (e.g. we perform MCMC inference on both approaches). To keep this comment brief, we only show depth relative error (lower is better) on GQN; please see the paper for full results. \n\n| GQN Depth Relative Error |Test (I.I.D.) |O.O.D. |\n|----------------|-------------------------------|-----------------------------|\n|MCMC on structured generative model (ours) \t\t |0.031 |$\\mathbf{0.034}$ |\n|MCMC on unstructured generative model |0.031 |0.221 |\n\nThese results demonstrate that our proposed compositional generative model significantly improves out-of-distribution generalization: though both models perform similarly on IID test data (first column), the unstructured model performs significantly worse on out-of-distribution images (second column).\n\n\n### Ablating high-level prior over scene variables **$p_{\\theta}(\\mathbf{z}^s|\\mathbf{z}^g)$** \n\nFinally analyse the effects of our high-level prior over scene variables $p_{\\theta}(\\mathbf{z}^s\\mid\\mathbf{z}^g)$. Here, we evaluate samples generated by the model, comparing FID (lower is better) with ablated model, which samples $\\mathbf{z}^s$ from the prior. 
\n\n|FID |GQN |Arrow |\n|----------------|-------------------------------|-----------------------------|\n|Ours \t\t |$\\mathbf{80.3}$ |$\\mathbf{141.4}$ |\n|Ours without **$p_{\\theta}(\\mathbf{z}^s\\mid\\mathbf{z}^g)$** |200.4 |275.7 |\n\nThese results demonstrate that our hierarchical model with its high-level prior $p_{\\theta}(\\mathbf{z}^s\\mid\\mathbf{z}^g)$ is necessary to correctly model the density of scenes: our model samples plausible scenes as it can model relationships between objects, while the ablated model performs much worse.\n", " In the following comment thread, we provide the requested ablation studies and additional baselines. Both demonstrate the points we argue in the paper. We summarize the results here; please see the updated manuscript for full results.", " We are glad to hear we have found the source of confusion and clarified that our model can learn complex relationships among object positions and appearances! We will add a further explanation in the paper.\n\n>However, I hope it would not be too troublesome for the authors to explain further how this difference is achieved and what happens when there are more objects than the maximum number in the training data.\n\nThis is achieved by the neural network (parameterised by $\\theta$) that takes as input the high-level variable $\\mathbf{z}^g$, and outputs parameters for all latent variables $\\mathbf{z}^s$ (representing object appearances, positions, etc.). This network (whose outputs are denoted $\\zeta_\\theta(\\mathbf{z}^g)$ and $\\xi_\\theta(\\mathbf{z}^g)$ in sec. 3.1 of the paper) is fully-connected (see sec. 6 of the supplementary for the architecture). This allows outputting non-identical distributions for each object’s latents, and capturing relationships between them. Note that this model does not define the probability of latents in the OOD setting with more objects present than during training. Indeed, this is impossible in general – e.g. if during training we always see four objects arranged in a square, then it is not clear what the allowable positions might be for a fifth object.\n\n>To clarify, I didn't suggest training a generative model via extending the existing methods to be completely probabilistic. I hope there is an ablation that runs MCMC inference on the learned generator/decoder from either Slot Attention or uORF\n\nThank you for the clarification! We will run MCMC inference on the pretrained Slot-Attention model as you suggest. We’ll post results here as soon as this is complete. To further answer your original question of *“[whether] the generative model design or the MCMC inference contributes more to OOD performance”*, we will also provide results from our model but with amortised inference instead of MCMC, to explicitly assess the benefit of the latter. Would this (in addition to beta-VAE and IODINE baselines) upgrade your review from “limited evaluation” to “no major concerns with respect to evaluation”? \n", " All in all I would say your responses have answered most of my concerns.\n\nI realize the global latent is the one responsible for modeling global scene structure - by \"analysis\" I meant more trying to understand *how* it does that - for example - a simple latent traversal may shed some light on what structures this latent holds. 
\n\nNevertheless - assuming the IODINE baseline would be included in the paper and comparison, as well as the respective discussion I would say I am leaning more towards accepting the paper (which strengths I mentioned in my original review).", " Thanks for your response. I have increased the rating since some of my concerns have been addressed. Here are some follow-ups:\n\n> each $p_\\theta(\\mathbf{z}^{shape}_{i} , \\mathbf{z}^{col}_{i} , \\mathbf{z}^{pos}_{i} | \\mathbf{z}^g)$$ is a different distribution for each object $i$\n\nThis is interesting since it really touches upon my concerns about some (conditional) i.i.d. assumptions in the model. However, I hope it would not be too troublesome for the authors to explain further how this difference is achieved and what happens when there are more objects than the maximum number in the training data. \n\n> MCMC on existing methods\n\nTo clarify, I didn't suggest training a generative model via extending the existing methods to be completely probabilistic. I hope there is an ablation that runs MCMC inference on the **learned** generator/decoder from either Slot Attention or uORF. From what I read, the proposed MCMC inference is somewhat general and should be directly applicable to any slot-based generators. Please correct me if I misunderstood, thank you! ", " > Both Slot Attention and uORF learn a mixture decoder… This will help answer if the generative model design or the MCMC inference contributes more to OOD performance.\n\nWhile it is theoretically possible to extend these two existing discriminative baselines to be generative, and to apply our novel MCMC inference scheme on the resulting models, this is a substantial research project in itself, and does not justify criticism of our model. We kindly ask you to reconsider whether it is fair to penalise our work for not comparing with non-trivial and hypothetical extensions of previous approaches. *Please let us know* if you still request extension of baselines, we will try to provide these results before the deadline. Meanwhile, we are running ablation experiments for our own model, to demonstrate the benefit of each aspect (e.g. MCMC vs. amortised inference); we’ll post the results here in the next few days.\n\nThe fact that other approaches could be developed and compared with ours (e.g. a hypothetical extension of uORF that is generative and incorporates a scene-level prior, and uses our proposed MCMC inference scheme) does not diminish our key contributions – that is, a novel MCMC scheme, a novel causal model of images allowing interventions, counterfactuals and mathematically-principled OOD inference, and empirical results that significantly out-perform existing methods.\n\n> In OOD generalization, how does the model know which latent variable experience a distribution shift? In the reported experiments, do you manually pick latent variables to change priors to uninformative ones based on your knowledge of the different types of OOD?\n\nYes, we assume it is known which variable is subject to a distribution shift, but not what the updated distribution is. As mentioned, in future work this could be achieved via the generative model itself – a principled way to detect distribution shift is to measure whether the probability of an observation is below some predetermined threshold under the non-intervened distribution. However, this is beyond the scope of the present work (and far beyond existing works on unsupervised segmentation). 
As other reviewers noted, ours is the first work to take a step in this exciting direction.\n", " Thanks for the quick response and clarifications!\n\n>I was just curious if the relation in ARROW is too particular for the designed independence assumption. What if the relation is the augmented with \"the attribute of color and shape are correlated instead of independent\"? \n\nWe emphasise that our model does **not** assume independence among the different scene variables constituting $\\mathbf{z}^s$; it only assumes *conditional* independence given $\\mathbf{z}^g$. Relationships between these variables are modelled by $p_{\\theta}(\\mathbf{z}^s|\\mathbf{z}^g)$, which can in principle model any relationship among object locations, shapes and colors. Naturally this includes the relation in the ARROW dataset, and would also include the color/shape correlation you mention. It also includes more complex correlations in the GQN dataset described in sec. 8.1 of the supplementary (e.g. certain shapes and colors of object always appear near certain walls).\n\nOne possible source of confusion is the reader may assume that in Equation 2, $p_\\theta(\\mathbf{z}^{shape}\\_{i} , \\mathbf{z}^{col}\\_{i} , \\mathbf{z}^{pos}\\_{i} | \\mathbf{z}^g)$ models the same distribution for **all** objects $i$. This is not the case – in our model each $p_\\theta(\\mathbf{z}^{shape}\\_{i} , \\mathbf{z}^{col}\\_{i} , \\mathbf{z}^{pos}\\_{i} | \\mathbf{z}^g)$ is a different distribution for each object $i$. These distributions are controlled by a neural network (parameterised by $\\theta$), which takes as input the high-level variable $\\mathbf{z}^g$, and maps it to separate means for each object’s latent variables (just like a VAE decoder takes the latent variable and maps it to all pixel means). Similar notation is consistently used for such models in the generative modelling literature. However, given that it is important for readers to understand how our model can learn relationships between objects, we will clarify this explicitly in the text. We hope this addresses your concern – assuming independence would indeed be a strong limitation, but this is not the case in our work!\n\n>Would the shape and the color variable still be able to be disentangled?\n\nOur NeRF architecture is designed to ensure object colors and shapes are disentangled. We achieve this by allowing only the shape variable to influence the density (i.e. opacity) of 3D points. Specifically, an object's NeRF is made out of two neural networks: the first takes as input a 3D point and the object’s shape variable and outputs a 1-dimensional opacity (implicitly defining the 3D shape) and an embedding $\\mathbf{h}$; a second network takes the embedding $\\mathbf{h}$ and the object’s appearance variable and outputs an RGB color. Hence, opacity only depends on the latent shape variable, not color; this also allows changing object color with latent appearance variable without changing object’s 3D shape. This architecture is discussed in section 6 of the supplementary, but we'll also add a brief note in the main text. We emphasise that the fact that these variables are disentangled does *not* preclude them from being correlated via the high-level prior. By disentanglement we mean that modifying either shape or color affects only the corresponding aspect of the generated image; this can still be true even when they have a dependency through a common ancestor variable ($\\mathbf{z}^g$ in our case).\n", " Thanks authors for the detailed reply. 
However, I find some major questions in my review unanswered. To help better understand my concerns, here are some clarifications:\n\n> object relationships\n\nI didn't mean the model knows explicitly what the relation is. I was just curious if the relation in ARROW is too particular for the designed independence assumption. What if the relation is the augmented with \"the attribute of color and shape are correlated instead of independent\"? Would the shape and the color variable still be able to be disentangled? \n\n> mcmc on other existing models for ablation study\n\nBoth Slot Attention and uORF learn a mixture decoder, which can easily parametrize a pixel-wise Gaussian generator with a small sigma. As for the prior, authors may consider the proposed idea of using uninformative prior, which can make the ablation more specific (the same MCMC inference with the same OOD handling on different generative models). This will help answer if the generative model design or the MCMC inference contributes more to OOD performance. \n\n> knowledge of distribution shift\n\nIn OOD generalization, how does the model know which latent variable experience a distribution shift? In the reported experiments, do you manually pick latent variables to change priors to uninformative ones based on your knowledge of the different types of OOD? ", " >How is the conditional model p(z_s|z_g) structured? I looked at the supplementary material and it wasn't clear - how is the mapping for a single latent to multiple slots done? is it just an MLP which scales with the number of slots?\n\nYes, it is an MLP mapping the Gaussian variable $\\mathbf{z}^g$ to a list of object variables. We'll clarify this in the paper. It would be interesting future work to consider a permutation-invariant set decoder.\n\n> Follow-up to the above: can you change the number of objects without affecting the number parameters in the model? Consequently - can you instantiate a model with a different number of slots than used in training?\n\nYes, we can instantiate a model with a different number of slots than used in training though it will not allow us to trivially use the learnt global prior. This is orthogonal to our work and could be addressed by using a different prior (e.g. autoregressive) or as in GENESIS-v2 (Engelcke, 2021). \n\n>What prevents the object location to be learned in a non-canonical position and compensate with the appropriate shift through the inferred position? There's nothing constraining the object to be learned around the origin.\n\nObjects can be learned with an offset from the origin within their canonical reference frame, yes. However, this will result in a corresponding offset in the peak of the Gumbel-Softmax distribution over locations, still allowing consistent rendering of the scene, and learning of contextual relations among objects. Furthermore, note that working in a canonical space still has several advantages over prior mixture models. First, rendering an image with canonical object representation is more efficient as it requires less memory (it only requires rendering NeRF in fixed canonical space) while a non-canonical representation requires sampling points in the full scene volume, which quickly becomes intractable for a large scene. Second, from a representation learning perspective, the representation of an object is ideally invariant to its position, whereas learning objects in non-canonical space wastes model capacity by requiring different representations for each position. 
Third, from a causality perspective, answering probabilistic and counterfactual queries about object position requires a causal model incorporating position explicitly, along with appropriate conditional distributions.\n\n\n>Have you tried the actual learned encoder on OOD data instead of using MCMC inference? I know this is claimed not to work in the paper but I didn't see an experiment showing this.\n\nWe agree this is important for rigorous evaluation so we'll include these results in the ablation study. We will post the results of these new experiments here as a comment once they are ready, and will also update the manuscript.\n\n>Follow-up to the above: have you tried amortized iterative inference for evaluation of OOD? (IODINE works on unseen combinations of color and shape, for example)\n\nIODINE only showed qualitative results in their figure 7 without quantitative evaluation of OOD colors/shapes. Nevertheless, as mentioned, we'll add IODINE as a baseline, to give an example of how a method with iterative amortized inference performs. However, there is a theoretical argument that discriminative models and non-structured generative models cannot be trivially adjusted/intervened to model distribution shift, hence performing inference on OOD data that should have a zero probability is not mathematically sound.", " Thank you for your valuable suggestions! We are glad that you valued our work’s originality, thoroughness and our proposed encoding of object positions. We note that your major concern is the lack of a 2D baseline and ablation study. First, we'll address the claimed lack of 2D baseline by providing the requested 2D baseline (IODINE); also note that we did already include a strong 2D baseline (Slot-Attention). Second, we’ll add an ablation study on the global latent variable and on MCMC inference. Finally, based on your questions, we'll edit the text, adding details on architecture and explicitly enumerating the benefits of our canonical-space object representation over prior works using spatial mixture models (uORF, ObSuRF).\n\n> I am not convinced by the \"Causality\" claim of the model - I am not a causality person, but I feel this is a stretch - this model is not more \"causal\" than IODINE, for example - it does, maybe, reflect the underlying generative process better (through cameras for example) but I feel this is a bit of stretch \n\n\nPer the Stanford Encyclopedia of Philosophy, “A causal model entails the truth value, or the probability, of counterfactual claims about the system; it predicts the effects of interventions; and it entails the probabilistic dependence or independence of variables included in the model”. Our model satisfies these criteria. For example, we can intervene on the distribution of object layouts or positions, without affecting that of object appearances (unlike IODINE). Similarly, our method allows calculating counterfactuals, such as editing the position of an object in a given image, by encoding the image into our latent representation (i.e. calculating the conditional distribution on object positions and appearances given pixels), then intervening on its position only. Notably, our structured causal model achieves this even without requiring interventional data for training. This is a significant step from current “statistical” unsupervised segmentation models which do not define counterfactual or interventional distributions and cannot perform inference in the OOD setting. 
This is because prior works cannot easily be intervened to account for distribution shift as they are neither generative nor define a probability density over scenes/objects (e.g. Slot-Attention and uORF), or they model the scene with a single latent variable (e.g. NeRF-VAE).\n\n>the choice of comparing only to NeRF based baselines is a weakness - much of the rendering process, especially in simple scenes like these, can be learned well by 2D models (up to NVS) and these should have been compared to\n\nWe'll add another 2D baseline – IODINE; we’ll post results from this here as soon as they’re ready. However, note that we already compare to Slot-Attention, which uses a 2D spatial broadcast decoder rather than a NeRF. We chose this precisely to test whether a well-known 2D discriminative compositional baseline can generalize to different out-of-distribution axes. Our experiments show (see Table 3 in the supplementary) that it does not generalize well to novel camera viewpoints, hence demonstrating what we argue in the paper – that a 2D baseline is not able to perform such generalization. \n\n>I would want to see analysis of the global latent, and how it affects things\n\nAs requested, we'll include an ablation study on the global latent and explain why it is needed in section 3. Specifically, it is needed to correctly model the probability of entire scenes (i.e. arrangements of objects) - without it, the generative model would incorrectly assume independence of objects (e.g. it would not be able to model object relationships, non-intersection of objects and the fact that objects are not floating in space but are on common floor). In contrast, our model samples plausible scenes, correctly learning non-trivial relationships between objects (e.g. generated scenes for Arrow dataset contain one arrow pointing towards the one odd object).\n", " \n>There are some major things that are not modelled in this work but would be important in the long run — these include orientations (or a more general space of transformations beyond point-to-point translation) and semantics (there is not a way that the concept of “chair” can be learned by training on many images).\n\nWe agree that there are other important latent factors that could be incorporated explicitly in our approach, such as orientations and object classes. Hopefully future works will consider these extensions, including in the OOD setting where they are subject to distribution shifts.\n\n>Another weakness of the approach (mostly having to do with the fact that it requires MCMC) is the running time, which the authors acknowledge to be a limitation. I would recommend mentioning more specifics about time requirements.\n\n\nThank you for your suggestion, we'll provide full details in section 7. We run MCMC until convergence or until 15K iterations are reached. However, on the same note, we want to highlight that our structured causal model allows MH proposals that affect only one object while keeping other variables fixed. This increases efficiency in contrast to MCMC on non-structured models as it allows caching computation and only re-rendering parts of the scene that need to be considered for a proposed change (e.g. just a single object). In contrast, MCMC on non-structured models must render the scene from scratch. Moreover, each MH step need not revert any progress made on other variables: e.g. 
if the background is perfectly inferred but objects are not, then the MH steps can modify objects but leave the background intact.\n", " Thank you for the encouraging review! We are glad you found our work to be an ambitious step towards an instance-level generative model of scenes – which we agree with you is a “holy grail” of computer vision. As you request, we’ll clarify the factorisation of the probabilistic model and detail how this is implemented. We'll also add an explanation of how the NeRF architecture for appearance representation facilitates disentanglement of shape and color. Lastly we'll include more details on the MCMC inference scheme and its computational efficiency.\n\n>I cannot figure out how are shape and color separated in this model — it’s possible that I am missing something, but shape and color variables seem to be treated symmetrically throughout, so what forces one variable to map to shape and the other to map to color?\n\n\nWe achieve disentanglement (one variable to map to shape and one to color) by allowing only the shape variable to influence the density (i.e. opacity) of 3D points. Specifically, an object's NeRF is made out of two neural networks: the first takes as input a 3D point and the object’s shape variable and outputs a 1-dimensional opacity (implicitly defining the 3D shape) and an embedding h; a second network takes the embedding h and the object’s appearance variable and outputs an RGB color. Hence, opacity only depends on the latent shape variable, not color. This was mentioned in section 6 of the supplementary, but we'll also add a brief note in the main text.\n\n>Related to this question, it would be useful to provide further factorization in Equation 2 to explain the joint distribution of shape, color and position of a single given object.\n\n\nThank you for pointing this out. As requested, we'll give the factorization of shape, color and position (which are in fact conditionally independent given $\\mathbf{z}^g$), in Equation 2. \n\n>I wonder if there is a principled way to decide when it is okay to replace a learned prior with a uniform distribution. In other words, when is it okay to extrapolate from the dataset? Clearly the answer cannot be “all the time” — could the authors shed some light on how to make this decision appropriately?\n\n\nThere are two possible interpretations to this question.\n\nThe first is: how to determine whether or not a given variable (in some OOD test setting) is in fact drawn from a distribution than that seen during training? One approach is to check whether the probability of the observed variable is below some predetermined threshold under the non-intervened (training) distribution. Some work on novelty detection has explored similar ideas in the past, e.g. the classic “Novelty detection and neural network validation” (C. Bishop, 1994), and more recent works in the generative model literature on OOD detection (e.g. “Likelihood Ratios for Out-of-Distribution Detection”, J. Rien et al., NeurIPS 2019). It would be interesting future work to investigate how such techniques may be combined with our approach.\n\nThe second is: how to determine which variables are suitable to have their priors replaced by a uniform distribution in particular? This is an interesting question, and in general depends which mechanism (i.e. conditional) is intervened on. 
In the case of our scene-level prior $p_{\\theta}(\\mathbf{z}^s|\\mathbf{z}^g)$, it’s appropriate to replace it with a uniform distribution when we know that similar types of objects are going to be seen as during training, and we believe that there is no constraint on where (and in what combinations / compositions) those objects will appear.\n\n", " >Did you apply a new MCMC procedure to your baselines (slot-attn, uORF, NERF-VAE) for OOD setup? If not -- why?\n\nWe agree this is important for rigorous evaluation; so we'll provide an ablation with MCMC on an non-structured generative model. Though we note that it is not possible to apply MCMC to other baselines (Slot-Attention, uORF) as they are non-probabilistic and don’t define a probability density over scenes/objects.\n\n\n>Can you ablate your MCMC procedure for OOD setup and show what scores the model obtains when one uses an encoder to obtain latents?\n\nYes, we’ll provide these results in Table 2.\n\n>I did not understand figure 1, can you maybe elaborate on this picture a little bit more.\n\nWe'll update the caption to be clearer. The aim of figure 1 is to visualise that in contrast to statistical models which represent one distribution, a causal model represents (and generalises to) many different distributions, because it can be intervened on to model a different density over scenes. We show that our model accurately infers scene representation for various OOD images as shown in the figure. In contrast, prior statistical models only support inference on the I.I.D. data but cannot be intervened to generalize to OOD images that have zero probability in the training set.\n\n>I suggest to the author to include a graphical model in the main text to increase readability.\n\nThanks for the suggestion – we’ll add this in section 3.\n", " Thank you for your review! We are glad that you found our work to be a significant step from recent work (uORF, NeRF-VAE) and appreciated ​​the necessity of a novel MCMC inference scheme for OOD image understanding. We note that your main concern is the lack of ablation experiments on model design – to mitigate this, we'll provide ablation experiments on important aspects of the method (compositionality, inference scheme and high-level prior over objects). We will also make the suggested small edits to the manuscript, including drawing the graphical model and clarifying figures.\n\n\n>there is no ablation study on model design -- this is my main concern. I am willing to increase my score if a proper ablation study will be performed.\n\nTo fulfil this request, we will include an ablation study on several important aspects of our method. (1) We will ablate our MCMC scheme, replacing it by amortised inference with an encoder. (2) We will ablate compositionality of our method, instead modelling the scene with a single latent vector. (3) We will remove the high-level prior over objects, and evaluate samples generated by the resulting model. These experiments are currently running, and we’ll post results here once they’re completed.\n\nRegarding the second ablation, we also want to highlight that our structured causal model allows MCMC proposals that affect only one object (while still considering the probability of the entire scene and image). This increases efficiency versus MCMC on non-structured models. First, it allows caching computation and only re-rendering parts of the scene that need to be considered for a proposed change (e.g. just background). 
In contrast, MCMC on non-structured models renders the scene from scratch. Second, each MH step does not revert any progress made on other variables: e.g. if the background is perfectly inferred but objects are not, then an MH proposal may change only an object, leaving the background intact.\n\n>the architectural choices that make the model more causal are not explicitly stated in one place in the main text, so it’s hard to follow what exactly these choices are. And there is no ablation study on these choices either. Can you explicitly state all architectural choices you made to enable independent mechanisms and perform an ablation study on these choices?\n\nThanks for the suggestion – we'll summarise them in the introduction, and provide an ablation study to evaluate their importance (see above). In short, we require that the generative model reflects the physical process by which images arise (3D objects are placed into a scene, and light rays reflected by them arrive at a camera with some particular viewpoint). The important aspects are:\n1. a non-learnt rendering mechanism – this is guaranteed to generalise correctly to OOD data; it contrasts with prior works that learn the rendering process (e.g. using a CNN that inputs rendered features and outputs the image; e.g. GIRAFFE, Niemeyer, CVPR 2021)\n2. an explicit disentangled representation of the causal variables (e.g. object shapes and positions) – this allows us to perform interventions and counterfactual causal inference (e.g. conditioning on an image to infer the corresponding latents, then manipulating object positions); this contrasts with works that model the scene without disentangling the proper causal variables (object positions, layouts, appearances, etc. – e.g. spatial mixture models).\n3. separation of the mechanisms (conditionals) for per-object appearances and for scene compositions (layouts), so the latter can be intervened on without affecting the former; this contrasts with methods that have a single global decoder (e.g. NeRF-VAE).\n\n>Do you need a two-step generation? The general latent layout of the scene is generated firstly and then latents of objects are generated based on this layout. Can you ablate it?\n\nYes, we do need two-step generation to correctly model the probability distribution over entire scenes (i.e. compositions of objects) – without it, the generative model would incorrectly assume independence of components z^s (e.g. it would not be able to model object relationships, non-intersection of objects and the fact that objects are not floating in space but are on common floor). In contrast, our model samples plausible scenes, correctly learning non-trivial relationships between objects (e.g. generated scenes contain one arrow pointing to one odd object on the Arrow dataset). \n\nTo demonstrate this, we’ll fulfil the request to provide an ablation experiment on the high-level prior, and we'll include the results in Table 2. \n\n", " >the proposed model seems to perform well in this dataset by incorporating this knowledge into the model\n\nWe believe you slightly misunderstood and underestimated our method, by assuming that it requires ground-truth relationships between objects (hence is not fully unsupervised). However, our work is truly unsupervised – it does not require any ground-truth relationships or any labels, and we explicitly state this throughout the manuscript (L14, L34, L142). Our model learns relationships (e.g. 
the fact that the arrow always points at the odd object) automatically without supervision with the hierarchical prior described in Equation 2, and it successfully samples plausible scenes (Table 2 and Figure 3).\n\nAdditionally, we’ll perform an ablation experiment where the prior in Equation 2 is removed.\n\n>Maybe the authors would like to apply their MCMC samplers to the trained decoder of Slot Attention, uORF, and NeRF-VAE?\n\nFixes:\n* We’ll provide the requested experiment with MCMC on an unstructured generative model. Though note MCMC cannot be applied to the other baselines (Slot Attention, uORF) as they are non-probabilistic and don’t define a probability density over scenes/objects.\n* We’ll provide extra experiments for our model by ablating MCMC with amortised inference.\n\n>Authors claimed to use uniform distribution to replace the learned prior when generalizing to OOD images, how is that implemented and what is the rationale here? More specifically, what is the range of the support space, and why so?\n\nWe use uninformative prior distributions: uniform categorical for location and improper uniform (over the reals, with infinite support) for other variables (shape and appearance embeddings). Rationale: in the OOD setting, we only know that the mechanism describing scene-level relations among objects was intervened on but not what the intervention is; hence, we replace the previously learnt scene prior with a distribution that does not impose any restrictions on possible layouts / compositions of objects.\n\n\n>authors may also want to talk about how statistical assumptions in the proposal distribution in MH sampling may affect the OOD inference\n\nOur proposal distribution covers all components latents (i.e. possible object/background appearance embeddings) that were seen in any scene during training. It expresses the assumption that the OOD scene contains known objects but in novel compositions / positions.\n\nFixes:\n* We’ll include this explanation in section 3.\n\n\n>how can the assumption of knowing there is indeed a distribution shift can be fulfilled in reality\n\n\nA principled way to detect the distribution shift (and therefore to choose to intervene on the variable) is to measure whether the probability of an observation is below some predetermined threshold under the non-intervened distribution. Some work on novelty detection has explored similar ideas in the past, e.g. the classic “Novelty detection and neural network validation” (C. Bishop, 1994), and more recent works in the generative model literature on OOD detection (e.g. “Likelihood Ratios for Out-of-Distribution Detection”, J. Rien et al., 2019). It would be interesting future work to investigate how such techniques may be combined with our approach.\n\n> model shows superior performance in GQN dataset where objects are iid\n\nJust as in Arrow, GQN objects are non-IID - they have relationships as described in Supplementary (e.g. next to an ‘odd’ wall with color different to the others). Hence, both datasets require a high-level scene prior to model correctly.\n", " Thanks for your review! We are glad that you found our work to be well-written, complimented our novel MCMC scheme (as a possible solution for amortised inference being a critical bottleneck for OOD generalization), and valued the structured graphical model that facilitates MCMC. First, we address your concern about comparison to baselines, including adding some new experiments. Second, we’ll provide the requested ablation studies (and more). 
Finally, we clarify that our model learns object relationships without any supervision, since we believe you misunderstood that our approach cannot do so.\n\n>is it fair to compare with other existing models?\n\n\nYes, we believe so, as no related work can in theory account for the distribution shift. In detail, Slot-Attention and uORF are discriminative approaches that do not trivially support accounting for distribution shifts, while NeRF-VAE models the scene with one latent variable which entangles true causal variables (such as object positions, relationships between components, etc.) and cannot be intervened on to model a different distribution of scenes. In contrast to prior work, our structured explicit generative model can be intervened on, can compute counterfactuals (e.g. calculating conditional distributions on object positions and appearances given an image, then modifying some of these variables), and can perform out-of-distribution inference.\n\nHowever, given the question arose to the reviewer, we’ll also do the following:\n* To strengthen the empirical validation, we’ll add an ablation study on aspects of our model design.\n* We’ll emphasise in the experiments and related work section that prior related work cannot account for distribution shifts.\n* We’ll clarify in the introduction how our model facilitates intervention and counterfactuals.\n\n\n>but why the proposal of each object can be \"considered\" independently?\n\nWe meant that our structured causal model allows Metropolis-Hastings proposals that affect only one object while keeping other variables fixed. It still is a sound MCMC scheme which considers the joint probability of the entire image and latent scene. This approach increases efficiency in contrast to MCMC on non-structured models for two reasons. First, it allows caching computation and only re-rendering parts of the scene that need to be considered for a proposed change (e.g. just background). In contrast, MCMC on non-structured models renders the scene from scratch. Second, each MH step does not revert any progress made on other variables: e.g. if the background is perfectly inferred but objects are not, then an MH proposal may change only an object, leaving the background intact.\n\nFixes:\n* We’ll clarify what me meant by “considered independently”\n* We’ll explicitly describe the efficiency of our MCMC scheme on our structured generative model in section 3.\n\n\n>Slot Attention model does not use either the viewpoint information or multiple views of the same scene. Maybe authors should try a variant of it imitating how GQN extends VAE? I am also curious why the trained Slot Attention model does not generalize to objects with OOD number of objects\n\nuORF (which we already compare to as a baseline) is one possible 3D extension of Slot-Attention; hence, we do not feel it is necessary to add another one. We chose Slot-Attention precisely to test whether a standard 2D discriminative compositional baseline can generalize to different OOD settings: Table 3 in the supplementary shows that it cannot generalize to OOD viewpoints, but it is among the best on generalization to OOD number of objects (consistent with the Slot-Attention paper).\n\nFixes:\n* We’ll clarify this in section 4. \n* We’ll add more analysis based on results in Table 3 in the Supplement.\n\n\n>maybe the authors should try some disentangled VAEs such as beta-VAE?\n\nFixes:\n* We'll provide results using beta-VAE.\n", " We thank the reviewers for their detailed feedback! 
We are glad that all reviewers found our work to be high-quality, well-written and of high significance. Reviewers complimented it as the first explicit step towards unsupervised out-of-distribution scene understanding, and appreciated the original technical contributions, including an efficient MCMC inference scheme and one-hot representation for 3D object positions. The only major concern raised by reviewers [uNLg, KD5e, tfzG] is the lack of an ablation study. To satisfy this reviewers’ request, we’ll provide the following:\n\n* [uNLg, KD5e, tfzG]: we’ll include ablation studies on main aspects of the model design: 1) inference scheme (MCMC vs amortised inference); 2) compositionality (MCMC on structured vs unstructured generative model); 3) high-level prior $p_{\\theta}(\\mathbf{z}^s|\\mathbf{z}^g)$.\n* [uNLg, KD5e, tfzG]: we’ll add comparisons to two additional requested baselines (IODINE, beta-VAE).\n\nWe will post the results of these new experiments here as a comment once they are ready, and will also update the manuscript.\n\nBased on the reviews, we observed that two important benefits of our method over prior works were unnoticed. Hence, we'll now explicitly emphasise them here and in the manuscript:\n\n* We'll explicitly describe the advantages of our canonical object representation over recent works (spatial mixture models). This includes computational efficiency when rendering unbounded scenes, and not wasting the model's representation capacity to model each object at every possible position.\n* We'll emphasise computational efficiency of the MCMC scheme on our structured graphical model as (1) it allows caching computation and only re-rendering parts of the scene that need to be considered for a proposed change (e.g. just background); (2) each MH step need not revert any progress made on other variables (e.g. if the background is perfectly inferred but objects are not yet, then an MH proposal improving an object will not change the background).\n\nBased on the reviewers’ questions, we will make the following clarifications to the manuscript:\n\n\n* [uNLg, tfzG, KD5e] We’ll add analysis of new ablation studies, clearly stating the benefits of each method’s design choice.\n* [uNLg, tfzG, KD5e] We'll add analysis of results in the experiments section, focusing on different axes of out-of-distribution generalization and the performance of each baseline.\n* [uNLg] To avoid possible misunderstanding, we'll now explicitly state in the text that our work does not use any labelled supervision and does not have any prior knowledge about object relationships; it instead learns these without supervision via the high-level prior $p_{\\theta}(\\mathbf{z}^s|\\mathbf{z}^g)$.\n* [gUkP, tfzG] We'll edit Equation 2 to clarify how our model is factored, including disentanglement of shape and appearance.\n", " This paper proposes a generative model for unsupervised object-centric 3D scene understanding. The proposed model is claimed as \"causal\" as it is baked in with a multi-object Neural Radiance Field with explicit latent variables for color, shape, and object positions. This model is trained with ELBO in Variational Bayes. To demonstrate that the introduction of the \"causal\" structure can help generalize the inference of latent variables to OOD scenes, i.e. scenes not covered in the support set of the training data, the authors further propose an MCMC sampling method for test time only. 
The proposed model shows superior performance in GQN dataset where objects are iid and ARROW datasets where there is a specific correlation between objects. \\+ This paper is very well written. I liked the narrative in which the authors first describe the representation and the generative models with probabilistic language, then introduce the inductive bias of NeRF, and then move on to introduce training and inference. This makes the method section very easy to follow. \n\n\\+ The introduction of the scene variable is fairly interesting. Existing methods for object disentanglement mostly assume iid prior over objects. The hierarchical design of the generative model echoes the \"conditional independence\" in causality, making the assumption in models more generic. \n\n\\+ Though couples of existing works use NeRF as a scene generator, the explicit modeling of the latent variable and the MCMC sampling in OOD scenes are novel contributions. In particular, the observation that the variational encoder from Variational Bayes training is the critical bottleneck for OOD generalization is very sharp. \n\n\\+ As far as I know, generalizing to novel viewpoints is something not covered by the existing literature on multi-object representation learning. \n\n\n\n\\- Though I liked the idea of using MCMC sampling for inference in OOD images, I have some concerns about its concrete realization. First, to account for the distribution shift in the latent causal factors, the proposed method needs to manually replace the learned prior of latent variables with uninformative distributions. Then is it fair to compare with other existing models since here the model is actually informed of which causal factor is intervened in the distribution shift? Second, the combination of Langevin dynamics and Metropolis-Hastings is interesting, but why the proposal of each object can be \"considered\" independently? \n\n\\- The comparison with the baselines is not completely fair. First, Slot Attention model does not use either the viewpoint information or multiple views of the same scene. Maybe authors should try a variant of it imitating how GQN extends VAE? I am also curious why the trained Slot Attention model does not generalize to objects with OOD number of objects since it should be successful according to the original paper. Second, NeRF-VAE is not encouraged to have structured latent representation, so we kind of know a priori it will fail the composition test, maybe the authors should try some disentangled VAEs such as beta-VAE? Third, the correlation in the ARROW dataset that the arrow object is always pointing to a special object is known a priori and the proposed model seems to perform well in this dataset by incorporating this knowledge into the model. In reality, we don't really know what kind of correlation there are among objects, how would the proposed model still be generic? Apart from the questions I raised in the weakness part above, I would appreciate it if the authors would like to answer the following:\n\n1. Since the way of inference in OOD images is different in the proposed model and existing models (MCMC vs Variational Encoder), it is unclear which component between the explicit generative model and the MCMC inference plays a more critical role in the success of OOD generalization. Maybe the authors would like to apply their MCMC samplers to the trained decoder of Slot Attention, uORF, and NeRF-VAE? I believe this ablation can be beneficial for readers who are interested in OOD generalization. \n2. 
If time allows, authors may also want to compare these models in datasets where there are more authentic textures, for example, the CLEVR-TEX dataset? \n3. Authors claimed to use uniform distribution to replace the learned prior when generalizing to OOD images, how is that implemented and what is the rationale here? More specifically, what is the range of the support space, and why so? \n\nI will be happy to raise my rating if authors would like to resolve some of my concerns. \n\n--------------------\n\nPost discussion: Thanks authors for providing results on experiments I requested, I have raised my rating. Apart from the limitation authors listed in Sec 4.5, I believe the authors may also want to talk about how statistical assumptions in the proposal distribution in MH sampling may affect the OOD inference, And how can the assumption of knowing there is indeed a distribution shift can be fulfilled in reality. ", " The authors proposed a VAE-based NERF-like object-centric framework to model 3D-scenes. The model represents objects as separate NERFs. Moreover, authors make architectural choices that reflect causal independent mechanisms, so this enables OOD generalization. Plus, a novel MCMC inference scheme is proposed to infer out-of-distribution scenes. The model outperforms other models in OOD setting and performs competitively in In-distribution set-up. Strengths\n\n- the idea and necessity of a novel MCMC inference scheme are intuitive and well-explained\n- the model outperforms baselines in OOD setting and performs competitively in In-distribution set-up.\n- the model is OC and generative (that is novel as far as I understand) compare to its OC counterpart (uORF), so I believe it’s easier to edit a scene object-wise.\n- authors make architectural choices to reflect the causal independent mechanisms.\n\nWeaknesses\n\n- there is no ablation study on model design -- this is my main concern\n- the architectural choices that make the model more causal are not explicitly stated in one place in the main text, so it’s hard to follow what exactly these choices are. And there is no ablation study on these choices either. \n\nOverall, in my opinion, the authors implemented the right priors to make OOD generalization possible and the paper seems novel to me. \n\nI am willing to increase my score if a proper ablation study will be performed.\n Ablation study questions \n\nMain questions\n1) Can you explicitly state all architectural choices you made to enable independent mechanisms and perform an ablation study on these choices? \n2) Do you need a two-step generation? The general latent layout of the scene is generated firstly and then latents of objects are generated based on this layout. Can you ablate it? \n3) Did you apply a new MCMC procedure to your baselines (slot-attn, uORF, NERF-VAE) for OOD setup? If not -- why? \n4) Can you ablate your MCMC procedure for OOD setup and show what scores the model obtains when one uses an encoder to obtain latents?\n\nSmall issues\n1) I did not understand figure 1, can you maybe elaborate on this picture a little bit more.\n2) I suggest to the author to include a graphical model in the main text to increase readability. \n -", " This paper proposes a generative model of images that factors appearance of the image into background and multiple objects — each object is associated with its own NERF based appearance model drawn from a global prior and together can be used to forward render an associated image in a differentiable way. 
Latent object appearance models are inferred using a VAE-like framework with neural-net encoders.\n\nFinally by replacing certain learned priors by uniform priors, the authors show that their models are able to work in OOD settings quite accurately. Results in this paper are all on synthetic baselines.\n\n Strengths:\n\nA full instance-level generative model of image appearance is arguably one of the “holy grails” of computer vision — and the fact that this paper takes a step in this very ambitious direction and achieves reasonably good results (on synthetic data) is a strength. \n\nThere are two possible families of comparison — NERF based models which generally do not have a probabilistic object based decomposition of the scene, and 2d models of image composition which do capture object based decompositions, but are not able to learn from multi-view data in the same way. Compared to other works along the same lines (e.g. slot attention), the authors also show strong performance in their model’s ability to generalize to out-of-distribution data. \n\nAnother strong point of the paper is the attention paid to reliable optimization. For example, the authors find that modeling positions using a categorical variable helps (compared, e.g, to using naive spatial transformer models which would model pose using a continuous variable). The idea is similar to the way that Faster R-CNN works for object detection --- use a discrete grid to predict at an \"anchor grid\" of positions, then use ROIAlign (which is basically an axis aligned STN) to crop features.\n\n\nWeaknesses:\n\nThere are some major things that are not modeled in this work but would be important in the long run — these include orientations (or a more general space of transformations beyond point-to-point translation) and semantics (there is not a way that the concept of “chair” can be learned by training on many images). Also, even though images can be “out-of-distribution” in the sense of having different numbers of objects, being placed in different locations or having novel compositions, I don’t believe that generalizing to novel types of objects is handled well in this approach. \n\nAnother weakness of the approach (mostly having to do with the fact that it requires MCMC) is the running time, which the authors acknowledge to be a limitation. I would recommend mentioning more specifics about time requirements.\n\n * I cannot figure out how are shape and color separated in this model — it’s possible that I am missing something, but shape and color variables seem to be treated symmetrically throughout, so what forces one variable to map to shape and the other to map to color?\n* Related to this question, it would be useful to provide further factorization in Equation 2 to explain the joint distribution of shape, color and position of a single given object.\n* I wonder if there is a principled way to decide when it is okay to replace a learned prior with a uniform distribution. In other words, when is it okay to extrapolate from the dataset? Clearly the answer cannot be “all the time” — could the authors shed some light on how to make this decision appropriately?\n yes.", " This paper presents a slotted generative model with a NeRF rendering decoder. Each slot parameterizes a scene function and the decoder mixes the scene functions while rendering (summing densities and averaging colors). 
The model is trained as a 2-stage VAE - first training the slotted model VAE and then a \"global\" VAE which models the marginal posterior distribution of the first model. The slot latents are factorized into \"appearance\" and \"position\" and the latter is one-hot encoded (relaxed with Gumbel softmax) and used as a convolution kernel to \"shift\" the scene function in scene space while rendering.\nThe model receives one or more input views of a scene and is trained as a VAE, much like NeRF-VAE but with slots.\nAn inference procedure is also proposed to work in the case of OOD inputs where the amortized encoder is not expected to work.\nThe method is demonstrated to work nicely on two synthetic datasets - GQN and ARROW.\nThere are some experiments that show that the model is able to learn factorized representation and these can be manipulated. Originality:\n While mostly an amalgamation of existing methods this is an interesting model and the fact that it works, even on synthetic data, is non-trivial. The one-hot encoding of position is a nice touch and I suspect makes the model work better. \n I am, however, not convinced by the \"Causality\" claim of the model - I am not a causality person, but I feel this is a stretch - this model is not more \"causal\" than IODINE, for example - it does, maybe, reflect the underlying generative process better (through cameras for example) but I feel this is a bit of stretch (see more below)\n\nQuality:\n The paper is quite thorough, and all in all quite good. I feel the choice of comparing only to NeRF based baselines is a weakness - much of the rendering process, especially in simple scenes like these, can be learned well by 2D models (up to NVS) and these should have been compared to (for example, GQN and IODINE). I would want to see analysis of the global latent, and how it affects things (the authors do mention this under \"limitations\")\n\nClarity:\n The paper is written nicely and quite clearly, but I feel several elements are not described well - especially the role and structure of the global latent, see below. \n\nSignificance:\n All in all I feel this is an important contribution to the community with some non-trivial advances. * How is the conditional model p(z_s|z_g) structured? I looked at the supplementary material and it wasn't clear - how is the mapping for a single latent to multiple slots done? is it just an MLP which scales with the number of slots?\n\n* Follow-up to the above: can you change the number of objects without affecting the number parameters in the model? Consequently - can you instantiate a model with a different number of slots than used in training?\n\n* What prevents the object location to be learned in a non-canonical position and compensate with the appropriate shift through the inferred position? There's nothing constraining the object to be learned around the origin.\n\n* Have you tried the actual learned encoder on OOD data instead of using MCMC inference? I know this is claimed not to work in the paper but I didn't see an experiment showing this.\n* Follow-up to the above: have you tried amortized iterative inference for evaluation of OOD? (IODINE works on unseen combinations of color and shape, for example) I think the authors addressed the model limitations well in the paper - the above question directly refer to some of the missing points I feel." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 4 ]
[ "y3Px5v_VW2e", "ig5EYClfIUv", "inuFxkDPL6e", "K2qjTWFxTd", "eq4XX5t5H1a", "nips_2022_VvOcK2DGM7G", "inuFxkDPL6e", "2ie0EkiE1zMy", "leYYxpqzrrS", "zxRZn7uIxsL", "tDW8W6O3RSf", "ULhH2dVbFdo", "QgRhAR4IN0x", "VJCcC86eb02", "26uApj6gTJQ", "KoUY902ctYs", "fyK-4_LBOVS", "ig5EYClfIUv", "9f1Ql_0wqvx", "n8zQG3DKluh", "nips_2022_VvOcK2DGM7G", "nips_2022_VvOcK2DGM7G", "nips_2022_VvOcK2DGM7G", "nips_2022_VvOcK2DGM7G", "nips_2022_VvOcK2DGM7G" ]
nips_2022_PCQyUvAmKs
Don't Pour Cereal into Coffee: Differentiable Temporal Logic for Temporal Action Segmentation
We propose Differentiable Temporal Logic (DTL), a model-agnostic framework that introduces temporal constraints to deep networks. DTL treats the outputs of a network as a truth assignment of a temporal logic formula, and computes a temporal logic loss reflecting the consistency between the output and the constraints. We propose a comprehensive set of constraints, which are implicit in data annotations, and incorporate them with deep networks via DTL. We evaluate the effectiveness of DTL on the temporal action segmentation task and observe improved performance and reduced logical errors in the output of different task models. Furthermore, we provide an extensive analysis to visualize the desirable effects of DTL.
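To make the "temporal logic loss" in the abstract concrete, one common way such a loss can be written is with soft (max/min) relaxations of temporal operators applied to per-frame class probabilities. The sketch below is a generic illustration of that idea under assumed shapes; it is not claimed to be the paper's exact relaxation or operator set.

```python
# Generic sketch of a differentiable temporal-logic loss over per-frame class
# probabilities p of shape (T, N). The max/min relaxations are one common choice
# and are not claimed to be the paper's exact operators.
import torch

def eventually(p, c):            # soft truth of "F c": class c occurs at some frame
    return p[:, c].max()

def soft_not(v):
    return 1.0 - v

def soft_and(a, b):
    return torch.minimum(a, b)

def soft_implies(a, b):          # a -> b  is  (not a) or b
    return torch.maximum(soft_not(a), b)

def exclusivity_loss(p, i, j):   # penalize "i and j both occur in the same video"
    return 1.0 - soft_not(soft_and(eventually(p, i), eventually(p, j)))

def implication_loss(p, i, j):   # penalize "i occurs but j never does"
    return 1.0 - soft_implies(eventually(p, i), eventually(p, j))

logits = torch.randn(100, 19, requires_grad=True)  # 100 frames, 19 action classes
p = torch.softmax(logits, dim=-1)
loss = exclusivity_loss(p, 3, 7) + implication_loss(p, 3, 5)
loss.backward()                                    # gradients flow back to the logits
```

The key point is that each constraint evaluates to a differentiable truth value in [0, 1], so violating a constraint produces a gradient on the backbone's frame-wise predictions regardless of the backbone's architecture.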
Accept
This paper introduces an approach for incorporating declarative temporal constraints in the training of temporal action segmentation models, in a model-agnostic fashion. Reviewers generally appreciated the proposed approach, but questioned the scalability and generalizability of the constraint curation process and asked for experimental results on more complex and challenging datasets beyond 50Salads and Breakfast. The author responses addressed many of the reviewer concerns, and reviewers responded positively overall. However, not all concerns could be addressed within the rebuttal time; for example, the authors promised to add results on more complex datasets such as Epic-Kitchen and Charades where reviewers were concerned it may be more challenging to generate constraints. After reading the paper and all author and reviewer responses, I believe the contributions of the paper are sufficient for acceptance. However, authors are expected to add the experimental results on additional datasets as promised for the final paper. Baseline numbers from original papers for some of the models that needed to be re-implemented should also be included for comparability.
train
[ "u_PJi06cWh", "sFBqxrGwVDy", "34YbgfqKI8Q", "4o8uq9FFN6l", "kFCqS9-RH5L", "wmr7N_o4EC", "Bp2StD1PttI", "BSoC_N_0FA", "MjW28xif1G", "B3UK9bTAEYw", "ctdFheZXBVE", "zT2i59fW39", "ULmopOvNkej" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the follow-up comment and for recognizing our work. Following the comment, we will update the results under the correct setting in future revisions. ", " Thank you very much for the comprehensive response.\n\nI think curating constraints from only the training annotations is the correct way here, since curating from all the annotations would leak information in the testing splits. Given that the performance gains are not affected evidently, I would not consider this as a significant issue. However, please update all the experiment results to the correct setting in your next version. \n\nAs for other concerns, they are well addressed. I do like this paper since I found how to model action relations important when I was working on this task, and this paper presents an interesting solution.", " > It still looks difficult to build the constraints for epic-kitchen, where the correlation between actions is low and the actions are more complex. This issue could be better discussed, explaining for which datasets the proposed method could be more effective.\n\nThank you for the thoughtful comment. This is a limitation of how currently knowledge is utilized in the paper. The constraint curation method in the paper depends on a subset of knowledge of actions (i.e., correlation expressible in the form of frequency matrices). We agree that when such correlation is sparse, the knowledge provides less information, and DTL will be less effective. \n\nPossible solutions to this include:\n- improving backbone architecture to better model the non-correlation knowledge (e.g., visual information) from data.\n- improving the curation method to better represent non-correlation knowledge (e.g., with human involvement).\n\nStill, we would like to highlight that:\n- DTL is useful for non-correlation temporal knowledge as long as it can be written as temporal logic formulae.\n- Correlation knowledge plays an important role when the video is about an activity made of a series of logically related actions.\n\nEpic-Kitchen differs from any of the datasets we have discussed here because its actions are verb-object pairs, which require two sets of constraints: (1) the affordance constraints on verb-object composition and (2) the temporal constraints similar to what is described in this paper. A comprehensive study of this dataset will be a very exciting assessment of the expressiveness of logic for this task. We will add discussion on this aspect.\n\n> The authors could run their model on Charades, where actions co-occurred at the same time step. It will be interesting to better describe which temporal constraints the proposed method can add.\n\nWhile we believe the provided experiments sufficiently support the claims of our paper, we agree that it is an important and interesting extension to assess DTL on Charades and Epic-Kitchen, where multiple actions co-occur simultaneously. Therefore, following the Reviewer’s comment, we will examine Charades and provide studies in the final version of this paper, and we believe it will add value to this research.\n\nWe can provide some early statistics about the constraints we curated on Charades: we have curated 9668 constraints, of which 2,848 are Backward Dependency, 2,848 are Forward Cancellation, 3,972 are Exclusivity, and none are Implication constraints. \n\nAs a side note, constraints in the paper are effective whether or not simultaneous co-occurrence is allowed. 
This is because they only regulate the order (by BD constraints) and presence of actions (FC, Ex, and Ip constraints).", " The authors have convincingly answered most of the questions and have provided important details about the proposed method. However, I still have two remaining concerns :\n\n- Building the temporal constraints: From the authors' explanation of how the curation is performed (in an automated manner), it still looks difficult to build the constraints for epic-kitchen, where the correlation between actions is low and the actions are more complex. This issue could be better discussed, explaining for which datasets the proposed method could be more effective.\n\n- Comparison with other temporal reasoning methods:\nYes, this is true that CTRN and GTRM are somewhat orthogonal to the proposed method, but they serve the same purpose. Even though there is no code for a direct comparison, the Charades dataset (for instance, used in CTRN) is public. So maybe, the authors could run their method on this dataset. It would also serve as an example for cases, where actions co-occurred at the same time step. In the case of Charades, this co-occurrence happens in many samples.\nI agree that DTL can work as a complement to these existing temporal models. So, it will be interesting to better describe which temporal constraints the proposed method can add.\n", " > When the base network can better model the temporal relations, the improvement brought by the proposed approach is less important as it is the case of MSTCN and Asformer compared to GRU.\n\nWe appreciate the Reviewer for this observation, but it is not a weakness of DTL. In fact, a more advanced backbone model is naturally better at learning temporal constraints. DTL can be completely useless on an ideal model that can learn all the constraints from data. Unfortunately, such a model is nonexistent at present. \n\n> The model shows improvement especially in the case of small amounts of training data… When the dataset is bigger (more instances), the improvement is not as important, as we can see on 50Salades and BreakFast.\n\nIn our paper, the claim is that DTL is effective on datasets with different natures (shorter or longer, single-themed or multi-themed). Breakfast and 50Salads differ in many ways. While Breakfast has more instances and more activity categories, each of its video are shorter (2097 frames/video compared with 11511 frames/video in 50Salads) and contain fewer actions (6.8 actions/video compared with 19.9 actions/video in 50Salads) than 50Salads. Both datasets used in assessment are challenging in different ways and the constraints can work differently on these datasets. \n\n### References\n[1] Huang, Y., Sugano, Y., & Sato, Y. (2020). Improving action segmentation via graph-based temporal reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 14024-14034). \n\n[2] R. Dai, S. Das and F. Bremond. CTRN: Class Temporal Relational Network For Action Detection. In Proceedings of the 32nd British Machine Vision Conference, BMVC 2021, United Kingdom, Virtual, November 22-25, 2021.\n\n[3] Ahn H, Lee D. Refining action segmentation with hierarchical video representations. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 16302-16310.\n\n[4] https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility", " > The authors should compare their approach to other methods [1, 2] that use this kind of reasoning on events (i.e. 
temporal relations) to improve the predictions. \n\nThank you for the pointers to these works. Both GTRM [1] (cited as [24] in the paper) and CTRN [2] are performing data-driven graph-based reasoning, which is orthogonal to the proposed method. DTL is a framework that incorporates constraints into the data-driven training process of temporal analysis models. It does not add new parameters to the backbone like GTRM [1] or propose a whole-new backbone architecture like CTRN [2] to support reasoning. Therefore, while both methods are interesting and worth more discussion in the paper, a direct comparison would be inapplicable. Both methods, however, can serve as the backbone of DTL like MS-TCN and ASFormer. Unfortunately, their source code is unavailable. \n\n> Also it could be interesting to combine both approaches to see if a priori-knowledge and learnt logical constraints can bring improvement to work that learns events relations directly from the data. How complementary are both approaches?\n\nThank you for this interesting question. DTL is complementary with other data-driven reasoning methods because it is agnostic of the backbone’s architecture. We can treat a data-driven reasoning module as a part of the backbone and train it with DTL. \nTo demonstrate this, we apply DTL on HASR [3], a reasoning module similar to GTRM [1] and whose source code is released (we need source code to rerun experiments on the same hardware [4] so that performance gain is fairly computed).\n\n**Table R8: Performance gain on 50Salads**\n| Model | Edit | F1@10 | F1@25 | F1@50 | Acc |\n|----------------|:----:|:-----:|:-----:|:-----:|:----:|\n| GRU+HASR | 13.8 | 13.0 | 14.1 | 13.0 | -0.4 |\n| GRU+HASR+DTL | 14.3 | 13.5 | 14.4 | 13.3 | 0.3 |\n| MSTCN+HASR | 5.1 | 5.5 | 5.8 | 5.2 | 0.8 |\n| MSTCN+HASR+DTL | 6.2 | 6.4 | 5.9 | 5.8 | 2.1 |\n\nWe would like to clarify that DTL does not aim to downplay or negate the importance of architecture design in temporal reasoning (e.g. advance from GRU to MSTCN, then to the ASFormer). Our point is that DTL can work as a complement to existing temporal models despite their different architectures (i.e. GRU/recurrent, MSTCN/convolutional, and ASFormer/transformer-like). \n\n> It is not clear whether the model can work in case of datasets where different actions can happen at the same time (co-occurrent events) as in Epic-Kitchen?\n\nIt is possible to describe constraints about event co-occurrence in temporal logic given its expressiveness and extendability. For example, let us define operator $G$ as $Gx=\\neg F(\\neg x)$, which is true when event x is true in all time steps. A constraint that enforces co-occurrence of event $a_i$ and $a_j$ can be written as: $G((a_i \\land a_j) \\lor (\\neg a_i \\land \\neg a_j))$, which reads “in all time steps, $a_i$ and $a_j$ must both occur, or both not occur”. We agree that incorporating those constraints for more complicated cases would be a very interesting part of future research works.\n\n> The logical constraints may be new to computer vision but it has been used for a long time on deep learning and since the contribution they provide is not on computer vision (add a logical loss that was used in other deep learning studies) makes the contribution limited.\n\nWe respectfully disagree with the statement that DTL is to “add a logical loss that was used in other deep learning studies”. The temporal logic loss used in DTL is not a duplicate of existing literature. 
As discussed in Section 2.3, we agree that logic loss for data-driven models has been studied for a long time. However, most of the efforts have been on propositional logic for non-sequential data, and the temporal aspect is not well studied. While existing literature motivated this work, it is non-trivial to design a logic loss that accounts for the temporal constraints for the action analysis problems. \n\n> The proposed method is easy to use in datasets such as breakfast and 50 salades (datasets with fixed scripts). However in the case of Epic-Kitchen, there are more events and more complex actions, where it is more challenging to define event constraints and so results on this dataset can better prove the authors claims.\n\nThank you for the suggestion. The main claim of this paper is that explicitly enforcing constraints to data-driven models through DTL improves their performance. We believe this claim has been validated by the experiment. Having said that, we agree that the benefit of DTL can be even better shown by experiments on larger datasets with richer types of events and more challenging cross-event relations. It would be a very nice study as we explore more efficient ways to express large-scale temporal constraints with logic formulae. We will add relevant study in the final version of this paper.", " > I would recommend add ablation studies on the hyper-parameter $\\lambda$ and $\\gamma$ to see their sensitivities.\n\nThank you for the suggestion. We performed ablation studies on MS-TCN and GRU on 50Salads using performance gain on F1@25 as the metric. DTL is more sensitive to $\\lambda$ than $\\gamma$. A too large weight on temporal constraints can negatively affect the training because it essentially requires the model to focus more on temporal constraints than on the training data.\n\n**Table R5: F1@25 performance gain of DTL on GRU or MSTCN with different $\\lambda$**\n| $\\lambda$ | 0.01 | 0.05 | 0.1 | 0.2 | 0.5 |\n|-----------|:----:|:----:|:---:|:---:|:----:|\n| GRU | 2.4 | 4.7 | 6.0 | 2.5 | -3.2 |\n| MSTCN | 1.0 | 2.0 | 3.5 | 2.2 | 0.5 |\n\n**Table R6: F1@25 performance gain of DTL on GRU or MSTCN with different $\\gamma$**\n| $\\gamma$ | 0.5 | 1.0 | 10.0 |\n|----------|:---:|:---:|:----:|\n| GRU | 6.7 | 6.0 | 5.6 |\n| MSTCN | 3.1 | 3.5 | 1.2 |\n\n> How the examples are selected for Fig.5 and Fig.6. Are they representative?\n\nThey are manually selected to show the efficacy of DTL and are representative. To show this (i.e. DTL can improve backbones to make its output consistent with constraints), we compare the percentage of satisfied constraints of backbone-only models and backbone+DTL on the test set. The results below show that backbone+DTL satisfy more constraints:\n\n**Table R7: Average percent of constraints that the model output satisfies on the test set**\n| Model | 50Salads | Breakfast |\n|--------------|:--------:|:---------:|\n| GRU | 70.4% | 77.2% |\n| GRU+DTL | 93.2% | 100.0% |\n| MSTCN | 69.4% | 61.3% |\n| MSTCN+DTL | 89.2% | 99.4% |\n| ASFormer | 68.9% | 40.7% |\n| ASFormer+DTL | 84.8% | 66.3% |\n\n> What is the model g for Fig.6? Please discuss more on Fig. 6.\n\nWe used MSTCN as the backbone $g$ for Figure 6. Figure 6 is not showing the strength of different types of constraints. Instead, it shows the effect of different constraints on the logits for different actions. The greenish color for Ip constraints means that Ip constraints tend to increase the logits of actions (“promotive” as in the paper because it encourages actions to occur). 
On the other hand, the blue-ish color for Ex, FC, and BD constraints means that these three types of constraints tend to decrease the logits of actions (“suppressive” as in the paper). \n\n> It would be interesting if the proposed method could be extended to more constraints.\n\nWe agree that there are many constraints that can benefit DTL in more complicated domains. The two suggested cases can be incorporated in DTL framework as follows: \n\n- Co-occurrence of $a_i$ and $a_j$: $G((a_i \\land a_j) \\lor (\\neg a_i \\land \\neg a_j))$, which reads “in all time steps, $a_i$ and $a_j$ must both occur, or both not occur”. Here, $G x=\\neg F(\\neg x)$ is true when event x is true in all time steps.\n- Combinations of more actions: $F(a_i \\land a_j \\land a_k) \\rightarrow ((\\neg a_i W a_j) \\land (\\neg a_i W a_k))$, which is an extended backward dependency constraint that says $a_i$ is dependent on $a_j$ and $a_k$.\n\n> How many times are the experiments repeated?\n\nThe k-fold cross-validation experiments were performed once with the seed set to 0 for all the experiments for a fair comparison, where $k=5$ for 50Salads and $k=4$ for Breakfast following the protocol in[14][53].\nThe error bars reported in Table 1 and Table 2 are from cross-validation.\n\n### References\n[1] https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility", " > The curation process needs to be described more.\n\nSince Reviewer Zn5F raised a similar issue, please check Message to All Reviewers for our response. We commit ourselves to include the discussion in revision.\n\n> Are they curated from the training split or the whole dataset?\n\nThe constraints are curated from all the annotations (so “training annotations” in line 173 should be corrected to “annotations”). In general, the knowledge expressed as constraints in DTL should cover as much as possible the domain knowledge, which includes the knowledge in different parts of the annotation.\n\nWe compared the difference between the constraints collected from the whole datasets and from each training split, and found that: (a) the number of different constraints is small, and (b) such difference does not affect the conclusion of experiments. The tables below provide statistics about the difference, and the outcome of experiments run with split-specific constraints.\n\nDenote as $C_0$ the constraints curated from all the annotations and $C_k$ the constraints curated only from the kth split’s training annotations. In the two tables below, the center column is $d_k = |C_k \\setminus C_0| + |C_0 \\setminus C_k|$, the number of different constraints between $C_k$ and $C_0$. The rightmost column is $d_k / |C_0|$, the percentage of difference. \n\n**Table R1: for 50Salads: $k=1,2,3,4,5$ and $|C_0| = 313$**\n| Split Number | $d_k$ | $d_k / \\|C_0\\|$ (%) |\n|--------------|:-----:|:---------------:|\n| 1 | 71 | 22.6 |\n| 2 | 0 | 0 |\n| 3 | 8 | 2.5 |\n| 4 | 4 | 1.3 |\n| 5 | 0 | 0 |\n\n**Table R2: for Breakfast: $k=1,2,3,4$ and $|C_0| = 2145$**\n| Split Number | $d_k$ | $d_k / \\|C_0\\|$ (%) |\n|--------------|:-----:|:---------------:|\n| 1 | 37 | 1.7 |\n| 2 | 38 | 1.8 |\n| 3 | 28 | 1.3 |\n| 4 | 34 | 1.6 |\n\nWe also rerun the experiments in Table 1 and Table 2 with split-only constraints. The two tables below show that such difference has limited impact on the performance gain, and does not affect the conclusion of experiments. In the two tables below, settings with “all” use all the annotations as in the paper. 
Settings with “split” use constraints from each split’s training annotations. \n\n**Table R3: 50Salads: comparison between constraint from all annotations and split annotations**\n| Gain | Edit | F1@10 | F1@25 | F1@50 | Acc |\n|----------------------|:----:|:-----:|:-----:|:-----:|:---:|\n| GRU+DTL (all) | 7.0 | 5.4 | 6.0 | 5.9 | 1.5 |\n| GRU+DTL (split) | 7.1 | 5.6 | 6.0 | 5.5 | 1.3 |\n| MSTCN+DTL (all) | 2.1 | 3.6 | 3.5 | 2.3 | 0.8 |\n| MSTCN+DTL (split) | 1.0 | 2.6 | 3.5 | 3.0 | 1.5 |\n| ASFormer+DTL (all) | 3.8 | 3.9 | 4.3 | 5.0 | 2.9 |\n| ASFormer+DTL (split) | 3.6 | 3.5 | 4.2 | 4.6 | 2.7 |\n\n**Table R4: Breakfast: comparison between constraint from all annotations and split annotations**\n| Gain | Edit | F1@10 | F1@25 | F1@50 | Acc |\n|-------------------|:----:|:-----:|:-----:|:-----:|:---:|\n| GRU+DTL (all) | 1.8 | 3.6 | 3.1 | 2.4 | 0.3 |\n| GRU+DTL (split) | 1.6 | 3.2 | 3.0 | 2.3 | 0.4 |\n| MSTCN+DTL (all) | 0.4 | 1.3 | 2.1 | 2.4 | 0.9 |\n| MSTCN+DTL (split) | 0.5 | 1.2 | 2.0 | 2.1 | 1.1 |\n\nWe expect similar small changes for ASFormer on Breakfast. Given the time constraint, however, we are unable to provide the results here. \n\n> Will the curation codes be released?\n\nYes, we will release all the source code, including the code that generates the temporal logic formulae, as promised in the paper.\n\n> Why the performance of ASFormer reported is different from the original ASFormer paper?\n\nTo avoid overestimating/underestimating the performance gain, we need to retrain all the baseline models to compare with baseline+DTL using the same hardware. This is because exact reproducibility is not guaranteed for CUDA across GPU hardware [1]. For example, for MS-TCN on Breakfast, we obtained a higher baseline performance than the original paper. In this case, using performance from the original paper will cause an overestimation of performance gain. We will add the performance from the original papers for completeness.\n\n\n", " Thank you very much for the comments. In summary, below are our responses regarding the improvement of performance and the curation of constraints.\n\n> The collected temporal constraints already provided external knowledge (similar to labeling or supervision) during the training. It may be natural to expect there will be improvement in model's performance.\n\nThe constraints are extracted from the existing annotations, so strictly speaking we did not introduce extra knowledge but explicitly presented existing knowledge to the model. We agree that improvement is natural when the knowledge in the training dataset (in the form of annotation) can be better learned by the model. This is in fact the motivation of the proposed method – we provide a way that \n\n1. explicitly enforces those constraints in case the backbone model fails to capture them from data, and \n\n2. is compatible with most of the end-to-end trainable models (in the form of logic loss), so our method is complementary to the existing merits of those backbones.\n\n> The generation of the temporal constraints may not be free and is not scalable. It would be better to reveal more details of the generation process.\n\nThe constraints are extracted from the existing annotation in the dataset and does not require manual effort. Please check Message to All Reviewers for the details of this process. 
We commit ourselves to including the discussion in revision.\n\n> Is there an automatic way to generate such constraints, or we may need to manually select different concepts and explicitly place several constraints as explained in section 3.3 and section E in appendix.\n\nCollecting knowledge, converting it to constraints, and enforcing it via DTL is automatic as long as the types of knowledge are defined. It would be exciting yet challenging to design more general methods to curate free-form knowledge/constraints in future study. We are looking forward to combining those methods with the DTL framework. \n", " We would like to thank all the reviewers for their time and efforts in providing feedback on our paper. The responses to each reviewer’s comments are posted under the corresponding thread. \n\nWe would like to detail the curation of constraints in response to the common issue raised by Reviewers Zn5F and 7jYQ. The curation process is automatic and does not require manual involvement. \n\nFor an input sample of $T$ frames, its annotation can be written as a sequence $[y_1, y_2, …, y_T]$, where $y_i \\in {0, 1, …, N-1}$ is the index for one of the $N$ action categories. We assume we have $M$ such samples, and only one action can occur given any time step.\n\nThe curation process starts with the collection of the following statistics: \n- B: `B[a_i, a_j]` is the frequency of action $a_i$ occurring before $a_j$\n- P: `P[a_i, a_j]` is the frequency of action $a_i$ occurring after $a_j$\n- J: `V[a_i, a_j]` is the number of videos where $a_i$ and $a_j$ occur (but not simultaneously)\n- C: `C[a_i]` is the number of videos where $a_i$ occurs\n\nThe collection procedure is described as the pseudo-code below:\n```\nB, P, J <- zero matrices of size NxN\nC <- zero vectors of size N\nfor sample m in M:\n occur_flags <- zero vectors of size N\n co_occur_flags <- zero matrix of size NxN\n y_1, y_2, ..., y_T <- annotation of sample m \n for t in 1, 2, ..., T:\n if occur_flags[y_t] == 0:\n C[y_t] <- C[y_t] + 1\n occur_flags[y_t] <- 1\n for u in 1, 2, ..., t-1:\n B[y_u, y_t] <- B[y_t’, y_t] + 1\n if co_occur_flags[y_t, y_u] == 0:\n J[y_u, y_t] <- J[y_u, y_t] + 1\n co_occur_flags[y_u, y_t] = 1\n for u in t+1, t+2, ..., T:\n P[y_u, y_t] <- P[y_u, y_t] + 1\n if co_occur_flags[y_t, y_u] == 0:\n J[y_u, y_t] <- J[y_u, y_t] + 1\n co_occur_flags[y_u, y_t] = 1\n```\nThen we generate the constraints as follows: \n```\nfor i in 0, 1, 2, ..., N-1:\n for j in 0, 1, 2, ..., N-1:\n if i != j and J[i,j] > 0 and B[i,j] == 0:\n append_BD(i, j) # action i is “backward dependent” on j\n if J[i,j] > 0 and P[i,j] == 0:\n append_FC(j, i) # action j “forward cancels” j\n if i != j and J[i,j] / C[j] == 1:\n append_Ip(j, i) # action j implies i\n if i != j and J[i,j] == 0:\n append_Ex(i, j) # action i and j is exclusive\n```", " This submission proposes to place temporal logic for temporal action segmentation model training. Compared to previous methods, the authors leverage a Linear Temporal Logic (LTL) formula to evaluate model's prediction and place a new loss item during the training. Experiments show that the proposed methods achieves improvement in performance on two datasets (50Salads and Breakfast datasets). + The motivation and technical details are clear to understand. The authors provide theoretical analysis for the proposed LTL evaluation system during training.\n\n+ The authors provided ablation studies on the two datasets (50Salads and breakfast) in terms of different existing models, which is appreciated. 
\n\n+ The authors provided the collected temporal constraints on these two datasets, which can further assist future research. There two problems for the temporal logic loss:\n\n1. The collected temporal constraints already provided external knowledge (similar to labeling or supervision) during the training. It may be natural to expect there will be improvement in model's performance.\n\n2. The novelty of the submission lies in that the LTL loss can assist most existing models in temporal action segmentation task. However, the generation of the temporal constraints may not be free and is not scalable. For example, in 220-223, the authors mentioned there are 313 and 2145 constraints on the 50salads and breakfast datasets. It would be better to reveal more details of the generation process (e.g., is there an automatical way to generate such constraints, or we may need to manually select different concepts and explicitly place several constraints as explained in section 3.3 and section E in appendix). Please check the \"questions\" section above. ", " This paper proposed differential temporal logic to model temporal dependencies, such as co-occurrence and ordering. Based on logic operators, four constraints were defined, including backward dependency, forward cancellation, implication and exclusivity. The proposed method is evaluated on two datasets of the temporal action segmentation task. Strengths:\n\n1. The paper is well-motivated and well-written. Modeling temporal dependencies are an important contribution for temporal action segmentation. \n\n2. The proposed differential temporal logic is very interesting. This method is principled and extendable to include more constraints. \n\n3. Effectiveness is shown on two datasets.\n\nWeaknesses: \n\n1. [Critical] How the constraints are curated from the datasets is not clear enough. \n- The curation process need to be described more. \n- Are they curated from the training split or the whole dataset? If they are curated from the training split, there should have been k sets of constraints instead of a single set as in the supplementary.\n- Will the curation codes be released?\n\n2. Why the performance of ASFormer reported in this paper is different from the original ASFormer paper?\n\n3. I would recommend add ablation studies on the hyper-parameter \\lamda and \\gamma to see their sensitivities.\n\n4. How the examples are selected for Fig.5 and Fig.6. Are they representative? What is the model g for Fig.6? From Fig.6, we could see that the Ip constraints are much stronger than others, while in Table 3 the gain from Ip constraints is not as stronger than others. Please discuss this.\n\nRecommandations:\n\n1. It would be interesting if the proposed method could be extended to more constraints, such as:\n- the co-occurrence at the same time step\n- combination of actions in the constraints (e.g., action a happens only if action b and action c have both happened before)\n\n2. How many times are the experiments repeated?\n Please see \"weaknesses\". The limitations were briefly discussed in the paper, while the potential negative societal impact was not. ", " The paper introduces a Differentiable Temporal Logic (DTL), a framework providing a model-agnostic manner of introducing temporal logic constraints to deep networks which can describe relations between actions.\n Strengths :\n Enable to define a priori-knowledge (i.e. temporal logic constraints) to improve the reasoning on events (i.e. 
temporal relations).\n\nWeaknesses:\n\n- The contribution is limited:\n1) First of all the authors should compare their approach to other methods [1, 2] that use this kind of reasoning on events (i.e. temporal relations) to improve the predictions. For example, unlike the proposed approach, [1] does not use a priori-knowledge but they learn the relations between the events. Hence, it could be a good practice to compare the two approaches, as learnt constraints are often more convenient than manually defined constraints. \n2) Also it could be interesting to combine both approaches to see if a priori-knowledge and learnt logical constraints can bring improvement to work that learns events relations directly from the data. How complementary are both approaches?\n3) Moreover, it is not clear whether the model can work in case of datasets where different actions can happen at the same time (co-occurrent events) as in Epic-Kitchen?\n4) The logical constraints may be new to computer vision but it has been used for a long time on deep learning and since the contribution they provide is not on computer vision (add a logical loss that was used in other deep learning studies) makes the contribution limited. \n\n- The method is very data-specific: experiment on more datasets is needed.\n 5) The proposed Declarative Constraints Temporal/Logic Formula is easy to use in datasets such as breakfast and 50 salades (datasets with fixed scripts). However in the case of Epic-Kitchen, there are more events and more complex actions, where it is more challenging to define event constraints and so results on this dataset can better prove the authors claims.\n6) The proposed model shows improvement especially in the case of small amounts of training data and annotations as their results show a bigger improvement on 50Salades compared to breakfast. \n7) The results show 2 important observations: Firstly, when the base network can better model the temporal relations, the improvement brought by the proposed approach is less important as it is the case of MSTCN and Asformer compared to GRU. Secondly, when the dataset is bigger (more instances), the improvement is not as important, as we can see from comparing improvement on 50Salades and BreakFast. \nSo, the impact is limited.\n\n[1] Huang, Y., Sugano, Y., & Sato, Y. (2020). Improving action segmentation via graph-based temporal reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 14024-14034).\n[2] R. Dai, S. Das and F. Bremond. CTRN: Class Temporal Relational Network For Action Detection. In Proceedings of the 32nd British Machine Vision Conference, BMVC 2021, United Kingdom, Virtual, November 22-25, 2021. \n To conclude the paper needs more discussions/experiments to prove its claims. \n\n- First, it needs a more elaborated comparison with other methods following the same reasoning, such as [1, 2]. \n\n- Secondly, it needs experiments on more complex datasets such as Epic-Kitchen or Charades, to better understand the impact of the proposed approach.\n\n\nAfter rebuttal, the authors answered all my questions convincingly and provided essential details about their proposed method. yes" ]
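For completeness, the constraint-curation procedure sketched in the authors' message to all reviewers above can be condensed into a short runnable form. The version below is a rough NumPy reformulation: it uses per-video occurrence tests instead of the frame-pair frequency counts, which preserves the zero/non-zero and ratio tests the authors describe, and the `annotations` input (a list of per-frame integer label sequences) is a hypothetical stand-in for a real dataset loader.

```python
# Rough NumPy reformulation of the curation pseudo-code above (single-label,
# per-frame annotations). `annotations` is a hypothetical list of integer
# label sequences, not a real dataset loader.
import numpy as np

def curate_constraints(annotations, num_classes):
    B = np.zeros((num_classes, num_classes))  # B[i, j] > 0: i occurs before j somewhere
    P = np.zeros((num_classes, num_classes))  # P[i, j] > 0: i occurs after j somewhere
    J = np.zeros((num_classes, num_classes))  # J[i, j]: number of videos containing both
    C = np.zeros(num_classes)                 # C[i]: number of videos containing i
    for y in annotations:
        labels = sorted(set(y))
        first = {c: min(t for t, v in enumerate(y) if v == c) for c in labels}
        last = {c: max(t for t, v in enumerate(y) if v == c) for c in labels}
        for c in labels:
            C[c] += 1
        for a in labels:
            for b in labels:
                if a == b:
                    continue
                J[a, b] += 1
                if first[a] < last[b]:
                    B[a, b] += 1
                if last[a] > first[b]:
                    P[a, b] += 1
    constraints = []
    for i in range(num_classes):
        for j in range(num_classes):
            if i != j and J[i, j] > 0 and B[i, j] == 0:
                constraints.append(("BD", i, j))  # i is backward-dependent on j
            if J[i, j] > 0 and P[i, j] == 0:
                constraints.append(("FC", j, i))  # j forward-cancels i
            if i != j and C[j] > 0 and J[i, j] == C[j]:
                constraints.append(("Ip", j, i))  # j implies i
            if i != j and J[i, j] == 0:
                constraints.append(("Ex", i, j))  # i and j are exclusive
    return constraints

# Toy example: two label sequences over 4 action classes.
print(curate_constraints([[0, 0, 1, 2, 2], [0, 1, 1, 3]], num_classes=4))
```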
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "sFBqxrGwVDy", "BSoC_N_0FA", "4o8uq9FFN6l", "wmr7N_o4EC", "wmr7N_o4EC", "ULmopOvNkej", "BSoC_N_0FA", "zT2i59fW39", "ctdFheZXBVE", "nips_2022_PCQyUvAmKs", "nips_2022_PCQyUvAmKs", "nips_2022_PCQyUvAmKs", "nips_2022_PCQyUvAmKs" ]
nips_2022_tIqzLFf3kk
Rank Diminishing in Deep Neural Networks
The rank of neural networks measures information flowing across layers. It is an instance of a key structural condition that applies across broad domains of machine learning. In particular, the assumption of low-rank feature representations led to algorithmic developments in many architectures. For neural networks, however, the intrinsic mechanism that yields low-rank structures remains vague and unclear. To fill this gap, we perform a rigorous study on the behavior of network rank, focusing particularly on the notion of rank deficiency. We theoretically establish a universal monotone decreasing property of network ranks from the basic rules of differential and algebraic composition, and uncover rank deficiency of network blocks and deep function coupling. By virtue of our numerical tools, we provide the first empirical analysis of the per-layer behavior of network ranks in realistic settings, i.e., ResNets, deep MLPs, and Transformers on ImageNet. These empirical results are in direct accord with our theory. Furthermore, we reveal a novel phenomenon of independence deficit caused by the rank deficiency of deep networks, where classification confidence of a given category can be linearly decided by the confidence of a handful of other categories. The theoretical results of this work, together with the empirical findings, may advance understanding of the inherent principles of deep neural networks. Code to detect the rank behavior of networks can be found in https://github.com/RuiLiFeng/Rank-Diminishing-in-Deep-Neural-Networks.
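As a rough illustration of the per-layer measurement the abstract refers to, the numerical rank of a sub-network's Jacobian can be estimated from its singular-value spectrum. The toy MLP and the relative truncation threshold below are illustrative assumptions, not the authors' released code.

```python
# Rough sketch: estimate the numerical rank of per-depth Jacobians of a toy MLP
# from their singular values. The threshold rule and the model are assumptions,
# not the authors' released implementation.
import torch

def numerical_rank(jacobian_2d, eps=1e-3):
    s = torch.linalg.svdvals(jacobian_2d)      # singular values, descending
    return int((s > eps * s[0]).sum())         # count values above a relative cutoff

torch.manual_seed(0)
layers = [torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()) for _ in range(8)]
x = torch.randn(64)
for depth in range(1, len(layers) + 1):
    sub_network = torch.nn.Sequential(*layers[:depth])
    J = torch.autograd.functional.jacobian(sub_network, x)  # shape (64, 64)
    print(f"depth {depth}: numerical rank {numerical_rank(J)}")
```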
Accept
This paper studied the "rank" of neural networks and showed that deeper networks in general have lower rank. The paper provides a detailed empirical study of network rank, as well as theoretical insights on why rank is likely to decrease as the network becomes deeper and how the rank decrease changes with or without normalization layers. The paper also demonstrated an "independence deficit" phenomenon, which happens when the rank of the output layer is too low. Overall, the reviewers feel that the paper gives interesting observations and nice intuitive explanations.
train
[ "Ktf6ANaq35", "O0hByKwLsYv", "NbtfIuV3ttP", "Vr00movCt16", "m-3F5lnwJIR", "p3EkXL3cyMG", "dHDJMvNsYJj", "A_u3ofv665Z", "l3mv-7mVHH2", "1oh6w5xKLfs", "WNbTK4UgSzi", "O18hIUDvph8", "yMd27bY6XN", "3s37zZZpxdO", "6o35yk8l01T", "9GpuphXn_ZQ", "FGa8IibP0Y1", "UJMt8Qm8I0R", "OKKaDv7xCU6", "zgEK6m68Fkz" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their clear and detailed response. This has resolved some of the potential misgivings I had regarding the relationship to other work, and I am happy to increase my score. ", " I thank the authors for clarifying my concerns and address my curiosities. The additional results on Resnets at initialization vs after training could indeed inspire new directions toward controlling the evolution of the rank during training. Also, more work could be done in future works around the role of normalization layers.\n\nI agree with Reviewer Gfbz regarding the original ambiguity in the definition for the rank. However, I believe the authors improved their manuscript by updating their definition and discuss its implications.\n\nOverall, I am still convinced of my score (7), hence advocating for acceptance. \n\n", " Dear Reviewer 4Q14,\n\nThe author-reviewer discussion period is about to end. We want to know whether our previous response can relieve or successfully address any of your initial concerns. If not and you have any further questions, please feel free to share with us so that we can use our last chance to address them. We want to thank you again for your valuable time and efforts. We will be more than happy to receive your response and resolve any further problems.", " Dear Reviewer Gfbz,\n\nThe author-reviewer discussion period is about to end. Do our responses resolve your initial concerns, and are there any further questions regarding this paper and our new response? We sincerely hope that you can freely share your opinions and suggestions with us and engage with us in the discussion, which is very important for us. We want to thank you again for your time and efforts in improving this work!\n\nWe are looking forward to hearing from you.", " The authors are very grateful for the reviewer's support! Thank you again for your valuable efforts in improving this work!\n\nFor the role of the bias, yes, it will restore some ranks in many deep networks. We can understand its role from the below two different perspectives.\n\n**1. The role of bias in blocks like attention networks**\nFor the networks involving quadric terms like $\\sigma_1(W_1x+b_1)^T\\sigma_2(W_2x+b_2)$, it is clear that the bias will influence the rank of networks. In the simplest form, $f(x)=[(x+b_1)^T (x+b_2), (x+b_3)^T (x+b_4) ]^T$, then $J_f=[2x+b_1+b_2,2x+b_3+b_4]^T$. Then the rank of $J_f$ is decided by the bias vectors. An example is the attention network, where $s_{ij}=q^T_iv_j, q_i=\\alpha_i(A_ix+b_i), v_j=\\beta_j(C_jx+d_j)$.\n\nIn this case, the bias vector influences the rank as they are in fact also the weight vectors; there will be multiplied by some hidden feature representations in the whole network.\n\n\n**2. The role of bias in nonlinearities**\nFor purely linear networks, theoretically, bias will not influence the rank of the network. In fact, the bias term will not influence the Jacobi matrix at all. For example, in $f(x)=W_1(W_2x+b_2)+b_1$, we have $J_f=W_1W_2$ that is independent with the bias vectors. However, in non-linear networks, the nonlinear activations will make the bias responsible for ranks. Consider $f(x)=ReLU(x+b)$, then $J_f$=diag{$\\delta_{x_1+b_1>0},\\cdots,\\delta_{x_n+b_n>0}$}, which is a diagonal square matrix that the $i$-th row and $i$-th column of it is 1 if $x_i+b_i>0$ and is 0 otherwise. Then clearly, the bias vector $b$ will influence the value of the Jacobian matrix and also influence its rank. If $b_i<-sup_{x\\in\\mathcal{X}} x_i$, then $J_f=0$ and $rank(f)=0$. 
If $b_i>-\\inf_{x\\in\\mathcal{X}}x_i$, then $J_f$ will be always of full rank. The bias vector decides how much information of the input manifold $\\mathcal{X}$ can be preserved in this layer, thus influencing the rank of the network.\n", " Thank you to the authors for their thorough response. I also appreciate the review by Reviewer Gfbz and the authors response there. I am satisfied to increase my score to a 6 (weak accept).\n\nI am curious on the effect of bias parameters on the rank of the hidden layer feature spaces. Intuitively the bias parameters learn to model the average activation of the hidden layer. I appreciate that directions of the ambient space which do not contribute to the rank of the feature space will have little variance, but could the bias (by modelling the shift off of $0$) serve to restore some rank to the layer-wise Jacobian as it will also have to account for the shift from the bias?", " ### **Q: The bottom row of Fig. 1 is not a logarithmic scale.**\n\n**A:** Thank you for raising this point! We have changed it to a log scale in the updates.\n\n### **Q: it appears that the rank decay in the case of resnets may be influenced more by the pooling layers or changes in width than any other operation**\n\n**A:** We have measured the rank decay in ResNets without pooling layers and with the same width. We can find that the new results still follow the same trend of decreasing. The pooling layer does have a considerable effect near the terminal layer, but the overall trend of rank diminishing seems to be independent of it. These results can be found in the paragraph **Influences of the Pooling Layers and Width** of Appendix section B.\n\n### **Q: the behavior of the rank is not connected in a quantitative way with the structure of the network.**\n\n**A:** The quantitative connection between ranks and network structures is indeed an interesting problem. However, in this paper, we intentionally omit the discussion about specific architectures to better serve the main purpose of this work. The main purpose of this paper is to reveal how the two fundamental ingredients, the chain rule of differential and matrix multiplication, can induce rank diminishing in deep networks. From this viewpoint, we can provide more general arguments for the low-rank preference for deep networks. So we intentionally abstract away from specific structures, as\n1) they may weaken the key argument of how the chain rule of differential and matrix multiplication influence the ranks;\n2) the structures for deep networks are numerous; discussions about them will be tedious for 9-page limits;\n3) if only discussing the influences of a few structures, then the results cannot be general enough.\n\nWhile in theorems 4 and 5, we still manage to give two quantitative descriptions of the rank behavior of deep networks. We show that the numerical ranks converge to some fixed constants at an exponential speed. This may address the concern about quantitative descriptions of ranks.\n\n### **Q: how difficult do the authors think it would be to relate some of the predictions in a quantitative way to various aspects of the network architecture?**\n\n**A:** This is a very good question. The most important thing to note about the rank is that it is not continuous with respect to the elements of the matrix. This makes any analytic analysis of it extremely difficult compared with the covariance matrix. 
Recent advances in multiplicative ergodic theorem and Ginibre ensembles in random matrix theory provide us with chances to investigate it from a probability perspective. \n\nThe math community already knows the joint density distribution of singular values of products of (finite) Gaussian matrices, which is a determinantal point process with a correlation kernel that\nadmits a representation in terms of Meijer G-functions. The Jacobian matrix of networks is a smooth function of the weight matrices. Using this function, ideally, we can do integrals to compute the density function of the singular values of the Jacobian matrix. However, for complex structures, we usually cannot get an analytic density. The quantitative predictions for ranks are easy for cases where we can have an analytic density of singular values (an example is a network of the form $f^k(x)=\\gamma W_k x+b$), while it could be difficult for the remains.\n", " Thank you for the detailed feedback and questions. Below we address the concerns separately.\n\n### **Q: It is unclear that the rank must decrease.**\n\n**A:** There could be a misunderstanding. The purpose of this paper is not to prove that rank must decrease in all cases, which is apparently wrong, as pointed out by the reviewer. This paper's purpose is to explain why we usually observe a low-rank network in practice and to provide arguments to support the low-rank network assumption in various domains of deep learning. Numerous methods are derived from an assumption that low-rank structures are to be preferred (like those mentioned in the first paragraph of sec. 1). Yet currently, few works discuss why the low-rank structures are preferred. So we fully agree that rank will not decrease in many carefully manual designed cases. This fact is orthogonal with the topic of this paper.\n\n\n### **Q: The covariance converges to certain fixed points should essentially be equivalent to the rank of the representation collapsing.**\n\n**A:** Thank you for raising this related direction of research. If the covariance matrix is precisely the fixed point defined in Eq. 2.6 of [2], i.e. $\\Sigma_{i,i'}^{\\*}=q^*(\\delta_{i,i'}+(1-\\delta_{i,i'})c^*)$, then indeed the network should be low-rank. However, convergence is another thing. The covariance converging to a fixed point does not necessarily mean that rank of the networks will collapse or not. In fact, we can have a simple counter-example. Let the input be a standard Gaussian distribution, and the network layer $f^n(x)=\\frac{1}{n}x$. Then the covariance matrix of the n-th layer will be $\\Sigma=\\Pi_{i=1}^n \\frac{1}{i^2}I$ by the property of Gaussian under linear transformation, where $I$ is the identity matrix and the network Jacobi of the n-th layer will also be $J_{F_n}=\\Pi_{i=1}^n \\frac{1}{i}I$. Then we can find $\\Vert\\Sigma-0\\Vert_2\\rightarrow0$, meaning the covariance converges to a fixed point $0$, while both the rank and numerical rank of the Jacobian matrix will stay full rank. Generally, the covariance matrix is continuous with respect to its elements, but the rank of Jacobi is not continuous with respect to the elements of the Jacobi. So their convergence cannot be equivalent. 
\n\nFrom this viewpoint, the main results of this paper are in fact parallel to the two mentioned works.\n\n\n### **Q: In the simplest case of a network with orthogonal weights and no non-linearities, it is clear for example that there is no decrease in rank, so there are clearly ways that it can be avoided.**\n\n**A:** It may be surprising to find that even the carefully designed network [2] can have rank diminishing at initialization. We think the reason could be that\n1) **rank is not continuous with respect to the elements of the matrix;**\n2) there are unavoidable numerical errors in the network;\n3) accumulated small errors can be large after massive matrix multiplications and cause the numerical rank to lose.\n\nIn fact, from a probability perspective, the rows of the standard Gaussian Jacobi matrix are almost orthogonal to each other as they have zero correlations. However, the numerical rank of this case will still diminish to one when the layers get infinitely deep.\n\nIn section C of Appendix, we measure two metrics for the delta-orthogonal initialized CNN network in [2], \n1) the numerical rank of the Jacobi;\n2) the ratio between the sum of non-largest singular values and the largest singular values $\\frac{\\sum_{i=11}^{3072}\\sigma_i}{\\sum_{i=1}^{10}\\sigma_i}$, where $\\sigma_1\\geq\\sigma_2\\cdots\\geq\\sigma_{3072}$ are singular values of the Jacobian matrix.\n\nWe report the results in the first 4-1004 layers (we omit the first 3 layers as they have downsampling operations). We find that both these two metrics diminish as the layer gets deeper. This indicates that the network still has an intention to lose ranks when the layer depth is very large.\n\n", " ### **Q: Is there a way to quantify the structural impetus?**\n\n**A:** Generally, we think it is very hard to give an exact measure, as it is difficult to sample data along directions in the manifold. However, rough measurements are easy to have. For example, we can do PCA dimension reduction to the input manifold (the data or the input feature manifolds). Then for a given input point, we can add perturbations to its significant PCA components and measure the PCA dimension of the resulting output space. The expectation of decreasing of PCA dimension can be viewed as a rough estimation of the structural impetus. Following this idea, other methods (like [1][2]) that learn the structure of manifolds can also be used to deduce a rough estimation of the structural impetus.\n\n[1] A Global Geometric Framework for Nonlinear Dimensionality Reduction, Science\n\n[2] The Isomap Algorithm and Topological Stability. Science\n\n\n### **Q: Can (and if so, how) their results can be used to design better architectures, initializations, and training procedures that better preserve the rank information?**\n\n**A:** There are some potential directions to improve the rank information based on our theory. Below we discuss a few, perhaps most straightforward ones, separately.\n\n**1) Normalizing the singular value distribution of weight matrices during training.** Previous work on spectral normalization [3] has revealed that normalizing the largest singular value of the weight matrices can help the training of generative models. While according to the theory of numerical rank in this paper, if we can further normalize the top-k largest singular values to be more uniformly distributed, we can preserve the rank of the weight matrices large than $k$. 
This can hopefully help stabilize the rank of the whole network.\n\n**2) Regularizing the residual terms during training.** As is insightfully pointed out by the reviewer, if the magnitude of the residual term becomes too large, it may lower the rank of the network. So a nature thought will be regularizing the residual terms during training. We can regularize the top-k singular value distribution of the residual term or the weights of its CNN blocks as in the above point to make the residual term has higher ranks. Or we can regularize the magnitude of the residual term directly to make the identity term dominate the network rank.\n\n**3) Use small width feature layers sparingly and take the necessary dimension for the outputs into account when choosing layer width.** Like pointed out in Theorem 2, in a smooth network, once the rank is lost, it can never get back. So if the width of some intermediate feature layer is very small, it will lose rank immediately at this layer. Increasing width in the subsequent layers will not bring the lost rank back. Specifically, the transformer network uses a 192-dimensional feature layer near the output layer. Thus the intrinsic dimension of the output manifold will never exceed 192 theoretically. This could be unwise for the classification of massive and diverse categories where a rank of 192 is obviously too small.\n\n[3] Spectral Normalization for Generative Adversarial Networks, ICLR, 2018\n", " The authors are grateful for the reviewer's valuable feedback and insightful questions. We are encouraged by your support for this work! Below we address the concerns separately.\n\n### **Q: Inconsistency of Residual Network.**\n\n**A:** We are grateful for the reviewer pointing out this inconsistency and offering an insightful opinion. \n\nWe empirically study this issue in the ResNet50 networks. We find that\n\n1) The rank of each layer of ResNet50 at initialization is much higher than after training (taking the 16-th layer of ResNet50 as an example, its numerical ranking before and after training is 530 and 119, respectively.);\n2) The relative magnitude of the Residual term, $\\frac{\\Vert Res(x^i)\\Vert}{\\Vert x^i \\Vert}$, at initialization is much larger than after training (take the 16-th layer of ResNet50 as an example, the ratio raises from 0.5127 to 0.9557 after training).\n\nThis means that the residual connections take effect when they are initialized and have a small magnitude, as is analyzed in Appendix B. While after training, the residual terms become significant and not as effective in preventing rank diminishing. So the reviewer's comment that \"this could be due to the fact that during training the magnitude of $Res(x^i)$ becomes large, hence lowering the rank\" seems to explain this phenomenon very well. \n\nOn the other hand, the skip connection and residual architecture are still effective for deep networks from the following aspects,\n\n1) it stabilizes the training as in the early period of the training, the rank diminishing can still be eased;\n2) it will still be better to have them than not, as the residual term is not large enough to totally eliminate the effect of the identity term.\n\nWe have revised the content in Appendix B according to the above discussion. 
Thank you again for raising this point!\n\n\n### **Q: Why normalization layer prevents rank diminishing?**\n\n**A:** From the perspective of Theorem 4 and 5, the main reason is that normalization techniques can re-normalize the singular value distribution of the feature representations or the networks. For example, the Batch Normalization is motivated to pull the feature representation back to a normal distribution with identical covariance. The option $\\frac{x-\\mu}{\\sigma}$ will make the singular value distribution of the feature covariance matrix more uniform. This can stabilize the rank of the covariance matrix, which is also the dimension of the feature manifolds under regularization assumptions. We have added this discussion in sec. 5 to enhance the connection of the theoretical results with the practice. Thank you for your suggestions!\n\n### **Q: Intuitive explanation for \"Moving along directions in Theorem 3\".**\n\n**A:** Thank you for this suggestion. We have added two new examples in sec. 4 below this theorem to illustrate our meaning here. Roughly speaking, this means adding some small perturbations to the input can yield exact zero change in the output.\n", " ### **Q: Rephrase the work more in line with Theorem 4 and 5 & other updates as suggested.**\n\n**A:** We are grateful for the reviewer's valuable suggestions on writing and paper origination. We have substantially revised the manuscript as suggested. We summarize the main updates as follows.\n\n1) We shorten sections 3.2 and 3.3, moving the too technical parts to the appendix.\n\n2) We split section 4 into three parts. \n\n i) Section 4.0 and 4.1 are now merged into one section (sec. 4). This section is designed to give a simple principle of rank diminishing for general almost everywhere smooth deep networks. The merged section is more compact and saves more space for the subsequent contents.\n\n ii) Section 4.2 is now becoming an independent section (sec. 5), which aims to study the limiting behavior of ranks in infinitely deep networks. We substantially enhance this section. We add a detailed discussion about why we take the assumptions in Theorem 4 and 5 and how this setting is connected with other aspects of deep learning research. We also discuss how the results of these two theorems will influence the feature representations and how techniques like Batch Norm can help stable training.\n\n iii) Section 4.3 and 5 are now merged into one section (sec. 6), which aims to validate the results of previous sections. The validation is split into two parts: validating the diminishing rule in each layer and validating the low-rank structural in the terminal layer. The latter is further split into a numerical validation (the PCA experiment on the last layer) and a semantic validation (the independence deficit).\n\nThe revised version has now put much more attention on the limiting behavior in Theorem 4 and 5, making it the most significant part of this paper, and appears more clear about the roles of each section in this paper. We thank the reviewer again for the helpful advice.\n", " ### **Q: Relations between structures and network ranks.**\n**Q:** This is really an interesting aspect. But a full discussion of it may be a bit tedious as the structures are quite diverse and numerous in deep networks. 
So we add a discussion on this topic in Appendix section B, where we discuss the influences of some frequently used architectures on the rank of networks, like skip connections, batch norms, residual networks, and pooling layers. \n\n\n### **Q: What are the connections between section 4, the final layer PCA results, and the independence deficit results?**\n\n**A:** The results of the final layer PCA and independence deficit are to show that **the final layer representations are indeed low-rank** from a numerical perspective and a semantic perspective, correspondingly. This is to support that the diminishing of the ranks is significant for the deep networks, as suggested by Theorem 4 and 5. Without these two results, we may suspect that the diminishing of ranks is modest and can be neglected. The purpose of these two experiments is different from Fig. 1, where we want to directly confirm the **\"existence\"** of rank diminishing. Here we want to confirm the final **\"effect\"** of rank diminishing. So they have to be taken place in the terminal feature layers, where the outputs of the network are given. We have reorganized the paper to emphasize this point.\n\nFor the per-layer feature dimensions, we provide an experiment in sec. F of the appendix. We measure the loss of PCA dimensions in a local neighborhood to support the diminishing of intrinsic feature dimensions. The results reported in appendix section G and Fig. A8 show a trend of drastic decrease.\n\n### **Q: Does the bottom row of Fig.1 report an exponential trend?**\n\n**A:** We have added log scale axes to the bottom row of Fig.1. We can find that they are nearly linear under the log scale axes, which suggests nearly exponential trends of them. We have also rephrased the caption of Fig.1 to turn down the original tone.\n\n### **Q: Figure 1 could also use different colours, especially for the bottom row where distinguishing between Jacobian and Feature rank/dimension is not easy.**\n\n**A:** Thank you for raising this point! We have changed the colors used in the bottom row of Fig.1 to make it easier to distinguish between the Jacobi rank and feature dimension.\n", " The authors are grateful for the detailed and in-depth feedback from Reviewer DH79. We have substantially revised the manuscript as suggested by the reviewer. Below we address the mentioned concerns separately.\n\n### **Q: Does ReLU fit the requirement of almost everywhere smooth?**\n\n**A:** Yes, ReLU fits this requirement. We use \"almost everywhere smooth\" to describe functions that have gradients and arbitrary high-order gradients (which means smooth) except for a zero-measure set (which means almost everywhere). Specifically, the ReLU function is non-differentiable only at the zero point, which is a zero-measure set (collections of finitely many points are all zero-measure sets). Apart from the zero point, ReLU is smooth at each point. Thus ReLU function fit the requirements of \"almost everywhere smooth.\" In fact, most network components are either smooth or smooth except for some isolated points, including ReLu, Pooling, LeakyRelu, Sigmoid, Tanh, SoftMax, Attentions, CNNs, and Dense layers. We have added the exact definition of smooth almost everywhere to the revised version.\n\n### **Q: The conditions of Theorem 5 appear unrealistic.**\n**A:** We make these assumptions mainly for theoretical convenience and interests. 
The answer to this question can be divided into two parts, why we assume the fixed size of hidden layers and why we assume Gaussian distributions.\n\n**1) Fixed size of hidden layers:** We make this assumption for the convenience of theoretical analysis of the limiting behavior of infinitely deep networks. Otherwise, some variables of interest may not have limits. Here we want to predict the limit of series $\\frac{\\sigma_i(J_{F_L})}{\\sigma_1(J_{F_L})}$ when the depth L goes to infinity. If we do not use a fixed size for hidden layers, then $\\sigma_i(J_{F_L})$ is not well defined, as it may suddenly appear or disappear in the series. In this case, we cannot define the limit $\\lim_{L\\rightarrow\\infty}\\frac{\\sigma_i(J_{F_L})}{\\sigma_1(J_{F_L})}$. In fact, for the same reason, the assumption of fixed hidden layer size is a common practice when analyzing the limiting behavior of infinitely deep networks, as in [1][2][3][4]. Apart from that, some frequently used network architectures, like transformers and RNNs, also satisfy the fixed size assumption.\n\n[1] Deep Equilibrium Models, NeurIPS, 2019\n\n[2] Doubly infinite residual neural networks: a diffusion process approach, JMLR, 2021\n\n[3] Variational Inference for Infinitely Deep Neural Networks, ICML, 2022\n\n[4] Transport analysis of infinitely deep neural network, JMLR, 2019\n\n**2) Gaussian assumptions:** We are interested in this setting as Gaussian is the simplest distribution. Also, according to the central limit theorem, infinitely wide networks tend to have a Gaussian Jacobi matrix after normalization ($\\tilde{J_{ij}} =\\frac{J_{ij}-\\mu_{ij}}{\\sigma_{ij}}$). Thus, studying this setting can lead to much stronger results on rank diminishing due to the simplicity of Gaussian. Yet those results are also intuitive for understanding deep networks.\n\nWe have added the discussion above in the revised version (in sec.5 above Theorem 4 and Theorem 5).\n\n\n### **Q: The aspect of noise increasing network rank.**\n\n**A:** The role of noise in the deep network is indeed an interesting problem. However, when considering their role in the rank of networks and features, it follows that a noisy feature manifold almost always has full dimension, and thus a noisy network almost always has full rank. For small and unwished noise, this will make the discussion about ranks trivial. So we need to remove it in the measuring of numerical ranks. For large and intended noise injected into the network, this property of noise makes it a common practice to increase network robustness and performance like in dropout and other noise injection techniques. In our formulation of numerical ranks, small noise produced by the inaccuracy of the computations can be safely ignored, but large noise that is intended to add will still function as a tool to slow down the decreasing of network ranks. This is contained in the condition of Theorem 1 that only a small noise perturbation (smaller than $\\delta_{max}(\\epsilon)$) can be removed from the numerical ranks. The role of large noise can also be described by Theorem 4 and 5. If we consider the noise perturbed Jacobi distribution as a new distribution, then they can also satisfy the conditions of those two theorems.\n\n\n", " ### **Q: Is there any theoretical explanation about why independence deficit happens for the final feature manifold?**\n\n**A:** Yes, we can have a much deeper theoretical understanding of this based on the property of LASSO regression. 
For short, it happens when the axis for a category is close to the orthogonal complementary subspace of the low (numerical) rank covariance matrix of the final output space.\n\n**Review the problem and its meaning:** Let's review Eq. 11, where we solve the dependence coefficients. For simplicity, we rewrite it as \n\n$\\lambda^*=\\min_{\\lambda_i=-1}E_{x\\sim P_{data}}[\\Vert\\lambda^Tf(x)\\Vert_2^2]+\\eta\\Vert\\lambda\\Vert_1=E_{x\\sim P_{data}}[\\lambda^Tf(x)f(x)^T\\lambda]+\\eta\\Vert\\lambda\\Vert_1$\n\n$=\\lambda^T E_{x\\sim P_{data}}[f(x)f(x)^T]\\lambda+\\eta\\Vert\\lambda\\Vert_1=\\lambda^T\\mu\\lambda+\\eta\\Vert\\lambda\\Vert_1,$\n\nwhere $f(x)=WF(x)$ is the slice from the input to the final logits layer of the network, $\\mu=E_{x\\sim P_{data}}[f(x)f(x)^T]$. If assume $E[f(x)]=0$, then $\\mu$ is the covariance matrix of the logits output. \n\nFor this problem, we have the following observations:\n1) The first term of this objective $E_{x\\sim P_{data}}[\\Vert\\lambda^Tf(x)\\Vert_2^2]=E_{x\\sim P_{data}}[(\\sum_{j\\neq i}\\lambda_j f_j(x) -f_i(x))^2]$ measures the error of using the linear composition $\\sum_{j\\neq i}\\lambda_j f_j(x)$ to predict the logits for the i-th term $f_i(x)$.\n2) By property of $\\ell_1$ penalty, the second term $\\eta\\Vert\\lambda\\Vert_1$ enforces sparsity to the coffecitents $\\lambda_j, j\\neq i$ thus most of $\\lambda_j$ will be zero.\n\n So the independence deficit (a small cluster of other categories can predict the output for the i category) will happen if and only if 1) $E_{x\\sim P_{data}}[\\Vert\\lambda^{\\*T}f(x)\\Vert_2^2]=\\lambda^{\\* T}\\mu\\lambda^{\\*}$ is approaching zero ($f_i(x)\\approx \\sum_{j\\neq i}\\lambda_j^{\\*} f_j(x),\\forall x$), when 2) $\\eta$ is relatively large so that most $\\lambda_j^{\\*}$ is zero. \n\n**Handling the constraint and $\\ell_1$ penalty:** Note that the value of $\\lambda^*$ is constraint by two factors in the LASSO problem\n1) The i-th element of it $\\lambda_i^{\\*}$ has to be -1.\n2) The norm of $\\lambda^*$ should be small as it is constrained by the regularization term $\\Vert\\lambda^*\\Vert_1$.\n\nCombining this two observations, $\\lambda^*$ should be a vector lying in the hyper-plane $P_i$={$\\lambda\\in\\mathbb{R}^{1000}:\\lambda_i=-1$} and close to the point $(0,...,-1,...,0)$ (the origin point of $P_i$). So, the direction of $\\lambda^*$, $\\frac{\\lambda^*}{\\Vert\\lambda^* \\Vert_2}$ has to be close to the i-th coordinate axis, and $\\Vert\\lambda^* \\Vert_2\\geq 1$ as $\\lambda_i^{\\*}=-1$.\n\n**When can we reach the first requirement ($\\lambda^{\\* T}\\mu\\lambda^{\\*}\\approx0$):** For a vector with a constraint on norm ($\\Vert\\lambda^* \\Vert_2\\geq 1$), it is well known that the value of $E_{x\\sim P_{data}}[\\Vert\\lambda^{\\*T}f(x)\\Vert_2^2]=\\lambda^{\\* T}\\mu\\lambda^{\\*}$ will be small if and only if $\\lambda^*$ is close to the linear subspace spanned by the singular vectors of $\\sqrt{\\mu}$ (where $\\sqrt{\\mu}^T\\sqrt{\\mu}=\\mu$) that correspond to tiny singular values. 
In this case, $\\lambda^{\\* T}\\mu\\lambda^{\\*}$ is tiny, and the value of it approaches $\\sum_{\\sigma_i<<1}\\sigma_i^2 (q^T_i\\lambda^*)^2\\sim C\\sum_{\\sigma_i<<1}\\sigma_i^2$, where $q_i$ is the singular vector of $\\sigma_i$ (counting repetitions).\n\n**Conclusions:** So we can finally conclude that if the i-th coordinate axis is close to the linear subspace spanned by the eigenvectors of $\\mu$ that correspond to tiny eigenvalues, then there is a $\\lambda^*$ that solves the original problem with a small prediction error $\\lambda^{\\*T}\\mu\\lambda^*$ and a small $\\ell_1$ norm, which will then induce the independence deficit as we explained in **Review the problem and its meaning**. It then follows that only a low-rank (numerical rank) covariance matrix can have many tiny eigenvalues, and their eigenvectors span a large linear subspace. So the independence deficit will happen only when the covariance of the final outputs is low-rank, and it will happen to the i-th category if the i-th coordinate axis is close to some eigenvectors with small eigenvalues.\n\n**Further conclusions and connections with low-rank networks:** The above discussion reveals why the independence deficit will happen to a specific category I. If we do not care about which category it happens to, then the low-rank structure of the outputs is the main reason. The lower the numerical rank of the covariance matrix, the larger the space spanned by eigenvectors of tiny eigenvalues, and the higher probability that some of the eigenvectors will be close to some coordinate axis and induce the independence deficit. \n", " ### **Q: How do we define the rank of function, and why do we assume constant rank in Lemma 1**\n\n**A:** We apologize for the unclearness here. The constant rank assumption is made for the convenience of expressing the diminishing of feature manifold dimensions and pointing out the connection between network ranks and feature manifolds. For the remained part of the theory analysis, the function rank can be either constant or non-constant (i.e. point-wise), and the conclusions will always stand.\n\n**a) Why do we omit the discussion of different ranks:** By our original definition, the rank of function is a pointwise function of the input point $x$. But we can omit the region of non-highest ranks and consider the rank as a constant. The key to omitting the tedious discussion of different ranks in the input domain is [Sard's Theorem](https://en.wikipedia.org/wiki/Sard%27s_theorem) (see the 4-th paragraph of section 'Variants' in this wiki page) we mentioned in Line 67-69. Sard's Theorem tells that **for smooth functions between manifolds, the critical points will be mapped to a zero measure set and thus can be ignored**, where \"critical points\" means points that have lower ranks than the region of the highest rank. So in a deep neural network, we do not need to consider the region of non-highest ranks, as those regions \"contribute zero\" to the feature manifolds; they will be mapped to a low-dimensional sub-manifold in the feature manifolds and thus can be ignored when analyzing intrinsic dimensions of the feature manifolds. Thus we only consider the region of the highest rank in Lemma 1, which admits a constant rank $r$ and is also the highest rank that a function can reach in its input domain. \n\n**b) The main results of the subsequent theorems hold for the pointwise definition of function ranks:** While we only consider the region of the highest ranks, the results in Theorem 2,4,5 (except Eq. 
8) are also applicable for any (differentiable) point in the input domain by changing the notation $rank(F)$ into $rank(F(x))$ and $J_F$ into $J_{F(x)}$. This is because those results are deduced from the chain rule of differential, which holds point-wisely at each differential point $x$ of the input domain. In the experiments, we find that the variance of ranks measured in different points is not significant (as is shown in the error bars of Fig. 1), and the diminishing rule maintains when considering the highest ranks, mean ranks, and the lowerest ranks of the networks. \n\n**We have revised the context in sec. 2 to mention the above discussion in the new version of the manuscript. Thank you for raising this point!**\n", " Thank you for the thoughtful feedback and valuable questions! We are encouraged by your comments on the finding of independence deficit! Below we address the questions and concerns separately.\n\n\n### **Q: The PCA Dimension is not accurate for the feature dimension, and it cannot explain the independence deficit**\n\n**A:** PCA is indeed not an accurate measure for the intrinsic dimension. But here, we use the PCA dimension based on the following considerations.\n\n**a) PCA dimension offers an upper bound for the intrinsic dimension:** Indeed, a low dimensional feature manifold can have a high PCA dimension (number of significant PCA components). In this case, the PCA dimension gives an *upper bound* for the feature manifold dimension. If this upper bound is low, then the true feature manifold dimension should be even lower. In some rare cases, a high dimensional feature manifold can also have a low PCA dimension, but a small enough tolerance $\\epsilon$ for the significant PCA components and large enough sample numbers N can rule out these cases for manifolds having interior points. So when $\\epsilon$ is small enough, a low PCA dimension will indicate an equal or even lower intrinsic feature dimension; this will also suggest the independence deficit, which we will explain together with the theory analysis for the source of independence deficit later.\n\n\n **b) In this paper, the task PCA takes on is qualitatively and not sensitive to accuracy:** In this paper, we only use PCA to qualitatively detect whether the intrinsic dimensions of the terminal feature manifolds are very low. The goal of the ClsDim metric is to support that the rank does diminish a lot so that only a very low dimensional final feature manifold remains. This task is not very sensitive to the accuracy of PCA dimensions. The task of supporting rank diminishing accurately is mainly accomplished by the Partial Rank metric in Fig. 1.\n\n**c) It is hard to find a method that is the same widely recognized but better than PCA:** Overall, we have to admit that measuring the intrinsic dimension of manifolds is a very difficult task, and PCA is one of the most commonly used methods for this task. Under good regularity conditions, the PCA dimension and the intrinsic dimension are connected to each other. If assuming that the feature distribution follows the Gaussian distribution $N(\\mu, \\Sigma)$, then the PCA dimension, which estimates the dimension of covariance matrix $\\Sigma$, can also estimate the feature dimension (which equals the dimension of $\\Sigma$). 
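As a concrete and deliberately simplified illustration of this tolerance-based estimate, the PCA dimension can be computed by counting the principal components whose explained-variance ratio exceeds a threshold; the threshold, sample size, and synthetic features below are placeholders rather than the exact setting used in the paper.

```python
import numpy as np

def pca_dimension(features, eps=1e-3):
    # Count principal components whose explained-variance ratio
    # exceeds eps; `features` is an (N, d) matrix of final-layer
    # representations, and the count upper-bounds the intrinsic
    # dimension under the regularity assumptions discussed above.
    centered = features - features.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)
    ratio = s ** 2 / np.sum(s ** 2)
    return int((ratio > eps).sum())

# Toy check: 3 strong directions embedded in a 128-d ambient space.
rng = np.random.default_rng(0)
low_rank = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 128))
noisy = low_rank + 1e-4 * rng.normal(size=low_rank.shape)
print(pca_dimension(noisy))  # prints a value close to 3
```

Here `eps` is only a stand-in for the tolerance mentioned above; the exact value used for the ClsDim metric in the paper may differ.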
Thus PCA can be viewed as a rough estimation of the intrinsic dimension of feature manifolds.\n", " This paper demonstrates a rank diminishing behavior of deep neural networks, considering the mapping from the input space to the feature space of an increasingly deeper intermediate layer. Theoretically, it proves that the rank does not increase as the layer depth increases. Experimentally, it demonstrates a general decreasing trend of rank on various NN architectures. This work also empirically demonstrates that the number of major PCA components at the final feature layer is much less than its ambient dimension, which leads to fake correlation between very different categories. Strengths:\n1. This work systematically studies the evolution of function rank throughout the layer computation and provides theoretical justification for the empirically observed rank diminishing behavior.\n2. The finding about the independence deficit of final feature manifolds is very interesting and provides insight into the lack of robustness of DNNs.\n\nWeaknesses:\n1. Classification dimension estimated by the number of major PCA components in this work is not a good indicator of the feature dimension. In fact, a very low dimensional manifold can have high classification dimension. Therefore, the main results about rank diminishing cannot explain the interesting finding about the low classification dimension of final feature manifolds. The statement in the abstract that \"independence deficit caused by the rank deficiency of deep networks\" is misleading.\n2. It seems that the definition of the rank of function and Lemma 1 implicitly assume that the Jacobian of neural network functions has a constant rank over the entire input space of R^n. This is a strong assumption that does not hold in general. When this assumption holds for neural networks should be carefully discussed. 1. [related to weakness 2] What exactly is the \"rank of its Jacobi matrix Jf over its input domain X\" in the definition of the rank of function? How is the rank of function related to the rank of its Jacobian matrix at a specific point x\\in R^n?\n2. [related to weakness 1] Is there any theoretical explanation about why independence deficit happens for the final feature manifold? The authors adequately addressed the limitations.", " This work aims to study the rank of hidden layer representations of neural networks in relation to how deep the layer is in the network. In particular, they note that the rank of the hidden layers diminishes monotonically as we observe deeper layers. Numerical measures of rank are proposed and motivated. The primary theoretical concerns are the rank of the Jacobian from the input to the i-th layer of the network (essentially a linear approximation of the network mapping to that hidden layer) and the dimension of the feature space for a hidden layer. The paper further investigates the tolerance of the final hidden layer to dimensionality reduction by applying PCA to the feature space and projecting onto a decreasing number of eigenvectors. The number of eigenvectors remaining when a significant drop in performance is observed from the dimensionality reduction provides an approximation for the intrinsic dimensionality of the hidden layer. Finally, the paper explores the idea that it is possible to use the logits of different categories to classify another category in a dataset. 
One example is that by merely using -0.923 as a weight on the logit for the \"triumphal arch\" category it is possible to predict the \"junco\" category without loss of accuracy. # Strengths\n## Originality\nThe paper is fairly original with the primary novelty being the rank metrics used and their justification. Additionally the paper touches on some possible connections between symmetry and rank which to my knowledge have not been explored, however, these connections are mainly pointed out but not discussed or treated theoretically.\n\n## Quality\nThe need for the numerical tools to measure rank is well motivated and the numerical tools themselves makes sense and are justified. The claims that are made appear correct and in-line with the evidence presented.\n\n## Clarity\nThere is some variance in the clarity of the paper for various sections. The writing is clear and understandable and the mathematical notation is consistent and intuitive which helps the clarity in the earlier sections greatly. Sections 3.2 and 3.3 are examples where the notation made potentially tricky sections more manageable. Figure 4 stands out as a very helpful figure. The effort on that is definitely worth it.\n\n## Significance\nThe paper touches on some significant points, like the point linking symmetries to lower ranks. The PCA experiment and the experiment on the using categories as predictors for others may be of general interest to the ML community.\n\n# Weaknesses\n## Originality\nA primary concern of this work is the fact that $Rank(AB) \\leq min{Rank(A), Rank(B)}$. This is even mentioned in the paper below equation 8 and is one of the primary tools for the work. This, however, is a well established principal and quite intuitive. Thus, the finding that the rank of the network decreases with layer depth is not surprising. Two possible interesting points: noise increasing network rank and structure avoiding the rank staying the same across layers are mentioned but do not form part of the analysis. The noise aspect is ignored in the theory and removed through the noise tolerant rank measures. The point on monotone decrease over equality of rank due to structure is discussed briefly.\n\n## Quality\nThe various sections of the paper feel quite loosely connected. Up to Section 4 the work considers whether the rank of the network decreases monotonically. The Section 5 considers PCA just on the final feature space and is the used to point out that low dimensional feature spaces do not hold semantically meaningful features for each category in Section 6. These sections are all related to rank, however, the connections do not seem to go deeper than that. Finally, there are some points where unjustified claims are made (or the phrasing makes these claims appear unjustified). Two examples are \"Theorem 5 that investigates the behaviour of all singular values of deep neural networks\" when theorem 5 requires hidden layers of the same size and assumes the Jacobians have Gaussian elements (which appears to be unrealistic in its own right) and \"The principle of rank diminishing describes the behavior of general neural networks with almost everywhere smooth components\" where it is not clear that ReLU networks would even fit this requirement.\n\n## Clarity\nTheorem 4 and Theorem 5, which are the most technical aspects of this paper are not given enough space. The clarity of the paper could benefit greatly from a more in-depth treatment of this section. 
In addition, how the theory of these sections relate to Figure 1 could also be explained more. For example I acknowledge that the shape of the bottom row of Figure 1 is non-linear but to call it exponential (which Theorem 4 and 5 predict) might also be a stretch. Understanding Theorem 4 and 5 would help with interpreting Figure 1. Figure 1 could also use different colours, especially for the bottom row where distinguishing between Jacobian and Feature rank/dimension is not easy. Finally, the notation of Section 5 is not easy to follow, particularly in the meaning of the $i_j$ double subscript where it is not immediately clear what $i$ and $j$ each refer to. Figure 4 does help clarify this a lot and with space constraints fully explaining the new notation may not be feasible.\n\n## Significance\nThis work appears generally significant, however, its significance is hindered by the same issues noted under the originality section. I feel that this work might spend too much time on the potentially quite obvious points of rank diminishing and on introducing the PartialRank and not enough time on the potentially significant points such as Theorem 4 and 5. My primary recommendation would be to rephrase the work more in line with those theorems. No questions as yet. I suggest that the authors be clearer on the conditions required for their theory to help. For example saying \"The principle of rank diminishing describes the behavior of general neural networks with almost everywhere smooth components,\" which does not seem to include ReLU networks but is described as general is unclear.", " The paper studies the dynamics of the rank evolution of the feature maps of a neural network as a function of its depth. By leveraging the abstract definition of rank of a function as the rank of the corresponding Jacobian matrix, the authors can study the rank dynamics in full generality (i.e. without assuming any specific architecture). This results in Theorem 1 (Principle of Rank Diminishing), that finds that the rank of neural network should never increase with depth due to its compositional nature (a neural network can be see as a composition of $L$ functions, where $L$ is the depth). Then, the authors analyze conditions under which the rank strictly diminishes (Theorem 3) and convergence of the rank to specific constants (Theorem 4-5). \n\nFinally, the authors apply their low rank findings to the study of the dependence and correlations between different output classes. They find that the output of some classes of ImageNet (e.g. hamster) can be predicted with a linear combination of the output for irrelevant classes (e.g. broccoli and mouse trap). The authors attribute this problem to the low rank representations of very deep network, as showed by their developed theory. **Strengths**\n1. **Generality and Importance of the Results**: the theoretical results are very general and remarkable, abstracting away from the specific architecture. The only assumption is the compositional nature of the layers, which includes most of architectures but excludes residual networks (as the author mention in the supplementary material). \n\n2. **Paper Organization**: the paper is very clear in explaining the abstract concepts of the first part. Until Theorem 2 (page 4), the theory is easy to digest. At first read, Theorem 1 seems trivial if one thinks about linear networks (i.e. 
simple product of matrices) and the famous property $\\text{rank}(AB) \\leq \\min(\\text{rank}(A), \\text{rank}(B))$, but the author do a great job to generalize it to any composition of functions through ideas from topology theory. The other two theorems delve deep into the rank diminishing properties of function compositions, showing an exponential decay of the rank with depth. \n\n3. **Independence Deficit of Feature Manifolds**: Section 5 provides a nice application of the theory, and would probably cause follow up works in trying to understand how one can reduce this undesirable effect of strong dependences between semantically different classes. \n\n**Weaknesses**\n1. **Inconsistency of Residual Network**: Skip connections are proposed as a tool to (partially) prevent the rank deficiency problem, and they give a brief theoretical argument in the supplementary material. However, this seems to be in contradiction with Figure 1, where an exponential decay of the rank is observed for ResNets, MLP-Mixers and Transformers, all architectures that adopt skip connections. This could be due to the fact that during training the magnitude of $\\text{Res}(x^i)$ becomes large, hence lowering the rank. At initialization, the magnitude of $\\text{Res}(x^i)$ can be controlled, e.g. with an appropriate factor inversely proportional to the depth (see for instance [1] for this scaling and [2] for its consequences on the rank). In any case, I found it confusing that skip connections are adopted in almost all the architectures used to exemplify the theory (skip connections that according to the authors should have an opposite effect). \n\n2. (minor) **Presentation Style of Structural and Implicit Impetus**: After brilliantly explaining the principle of rank diminishing, in my view the concepts of \"Structural Impetus\" (due to the specific architectural modules) and \"Implicit Impetus\" (due to the very compositions of infinite modules) of rank diminishing could be better explained. In particular, I would invest some extra lines to better explain why normalization layer prevent rank diminishing, and maybe better introduce some concepts ( or instance \"moving along directions\" of Theorem 3 is not properly introduced and in general the current version of the Theorem fails to convey a simple and intuitive explanation).\n\n[1] Hanin, Boris, and David Rolnick. \"How to start training: The effect of initialization and architecture.\" Advances in Neural Information Processing Systems 31 (2018).\n\n[2] Noci, Lorenzo, et al. \"Signal Propagation in Transformers: Theoretical Perspectives and the Role of Rank Collapse.\" arXiv preprint arXiv:2206.03126 (2022). \n 1. I would like to ask the authors if in their view there is a way to quantify what they call the \"Structural Impetus\" for different architectures and normalization layers.\n\n2. Can (and if so, how) their results can be used to design better architectures, initializations, training procedures that better preserve the rank information? I do not see a negative societal impact of this theoretical work.", " This work presents some theoretical results that imply that the rank of the Jacobian between the inputs and features of deep networks is non-increasing with depth. They predict that in some settings it should in fact decrease exponentially with depth to some fixed value. 
\n\nThey also develop efficient methods to estimate the Jacobian rank of real networks and show empirically that it indeed decreases with depth across a number of different architectures. The effects of depth on the learned representations in deep networks and their geometric structure is an important area of study. While this work contains an interesting combination of theoretical and empirical results, I believe the connection between the two would have to be made more concrete. \n\nThe result about non-decreasing rank follows from the basic compositional structure of the network as the authors suggest, yet it is unclear that the rank must decrease. In fact, there is a vast literature on signal propagation in deep networks that approaches this question from a different angle (by studying covariance between hidden features as a function of depth, in which case convergence to certain fixed points should essentially be equivalent to the rank of the representation collapsing [1, 2]). This literature also highlights ways to avoid this phenomenon with a careful choice of initialization, and relies on modeling the dynamics of the correlations as a function of initialization hyperparameters. This allows one for example to train convnets of depth 10000 [2]. In the simplest case of a network with orthogonal weights and no non-linearities, it is clear for example that there is no decrease in rank, so there are clearly ways that it can be avoided. \n\nAnother related issue is that the results are vague in the sense that the behavior of the rank is not connected in a quantitative way with the structure of the network (i.e. the choice of nonlinearity, initialization, etc). I think the submission would be much more compelling if the results could take these into account and make predictions about their effects on the rank. For example, how is the rank one converges to or the speed of the rank decay related to properties of the network? \n\nAn additional, related concern is the connection between the experiments and the theory. The experiments that attempt to show exponential decay of the rank are not plotted on a logarithmic scale, which makes it hard to understand whether the decay there is indeed exponential or follows some other law. In addition, it appears that the rank decay in the case of resnets may be influenced more by the pooling layers or changes in width than any other operation, yet no mention of this is made in the text. \n\n\n[1] Poole, Ben, et al. \"Exponential expressivity in deep neural networks through transient chaos.\" Advances in neural information processing systems 29 (2016).\n\n[2] Xiao, Lechao, et al. \"Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks.\" International Conference on Machine Learning. PMLR, 2018.\n Following my remarks in the previous section, how difficult do the authors think it would be to relate some of the predictions in a quantitative way to various aspects of the network architecture? Limitations have been addressed " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "NbtfIuV3ttP", "l3mv-7mVHH2", "dHDJMvNsYJj", "3s37zZZpxdO", "p3EkXL3cyMG", "WNbTK4UgSzi", "A_u3ofv665Z", "zgEK6m68Fkz", "1oh6w5xKLfs", "OKKaDv7xCU6", "O18hIUDvph8", "yMd27bY6XN", "UJMt8Qm8I0R", "6o35yk8l01T", "9GpuphXn_ZQ", "FGa8IibP0Y1", "nips_2022_tIqzLFf3kk", "nips_2022_tIqzLFf3kk", "nips_2022_tIqzLFf3kk", "nips_2022_tIqzLFf3kk" ]
nips_2022_k7xZKpYebXL
A Lower Bound of Hash Codes' Performance
As a crucial approach for compact representation learning, hashing has achieved great success in effectiveness and efficiency. Numerous heuristic Hamming space metric learning objectives are designed to obtain high-quality hash codes. Nevertheless, a theoretical analysis of criteria for learning good hash codes remains largely unexploited. In this paper, we prove that inter-class distinctiveness and intra-class compactness among hash codes determine the lower bound of hash codes' performance. Promoting these two characteristics could lift the bound and improve hash learning. We then propose a surrogate model to fully exploit the above objective by estimating the posterior of hash codes and controlling it, which results in a low-bias optimization. Extensive experiments reveal the effectiveness of the proposed method. By testing on a series of hash-models, we obtain performance improvements among all of them, with an up to $26.5\%$ increase in mean Average Precision and an up to $20.5\%$ increase in accuracy. Our code is publicly available at https://github.com/VL-Group/LBHash.
Accept
The paper proposes an interesting lower bound in the learning-to-hash scenario and builds on it to present a good algorithm that outperforms several learning-to-hash methods. There were concerns about the size and scale of the experiments, which were sufficiently addressed in the rebuttal. The reviewers were not in consensus primarily because of the writing. We think that the writing concerns are fixable and that the authors will improve the draft using the reviewers' comments for the final version.
test
[ "BgmspayyLAn", "3TxPI3xwWY", "KNtTLynuhZ", "itw6qpHXtL1", "mMGVCl4RIEV", "PoqQ9lOKHZ", "Wp6w2p96eKX", "vGD36N2CT52", "DeDbnTHEMmr", "maxJY0eNaMZ", "ZQ7IqAWSO7r", "jtFHjfFrKz7", "IdTpMPK4lA_", "jZQ7bKv66K9", "8PwhIJl2sn" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are happy to make this paper better. Actually, we have already polished several sections and improved readability in the rebuttal revision (please refer to `General Response`). Considering on your suggestion, we will go on a further proofreading to check if there needs improvements of the paper.\n\n\\\n\\\nBest,\n\nPaper 44 Authors\n", " Thanks for the response. My concern on experiment has been addressed. The writing need to be further polished. ", " We have conducted all experiments based on above settings. Here, we provide the full table:\n\n| Methods | `16 bits` | `32 bits` | `64 bits` |\n|-------------|------------------------|-----------|-----------|\n| HashNet | $36.9$ | $41.2$ | $44.7$ |\n| HashNet_D | $55.0_{\\uparrow 18.1}$ | $56.1_{\\uparrow 14.9}$ | $61.8_{\\uparrow 17.1}$ |\n| DBDH | $37.4$ | $41.1$ | $49.1$ |\n| DBDH_D | $57.2_{\\uparrow 19.8}$ | $57.9_{\\uparrow 16.8}$ | $60.4_{\\uparrow 11.3}$ |\n| DSDH | $39.3$ | $48.4$ | $54.2$ |\n| DSDH_D | $56.0_{\\uparrow 16.7}$ | $59.1_{\\uparrow 10.7}$ | $62.0_{\\uparrow 7.8}$ |\n| DCH | $60.2$ | $62.3$ | $64.1$ |\n| DCH_D | $61.9_{\\uparrow 1.7}$ | $62.9_{\\uparrow 0.6}$ | $64.4_{\\uparrow 0.3}$ |\n| GreedHash | $62.7$ | $63.1$ | $64.9$ |\n| GreedHash_D | $63.1_{\\uparrow 0.4}$ | $64.0_{\\uparrow 0.9}$ | $65.9_{\\uparrow 1.0}$ |\n| CSQ | $64.6$ | $65.0$ | $65.7$ |\n| CSQ_D | $65.7_{\\uparrow 1.1}$ | $66.2_{\\uparrow 1.2}$ | $66.4_{\\uparrow 0.7}$ |\n\nYou could also check it in the updated `Supp. Sec. F`.\n\n\\\n\\\nBest,\n\nPaper 44 Authors", " Dear Reviewer **jH1U**,\n\nWe sincerely appreciate the reviewer's effort and constructive comments. We have provided technical details and analysis to clarify your questions. If you have any further concerns, please feel free to let us know and we are more than happy to answer them.\n\n\n\\\n\\\nBest,\n\nPaper 44 Authors", " Dear Reviewer **N6zv**,\n\nWe sincerely appreciate the reviewer's effort and constructive comments. We have polished the paper to clarify your questions. If you have any further concerns, please feel free to let us know and we are more than happy to answer them.\n\n\n\\\n\\\nBest,\n\nPaper 44 Authors", " Dear Reviewer **HCrQ**,\n\nWe sincerely appreciate the reviewer's effort and constructive comments. We have polished the paper and provided further experiments to clarify your questions. If you have any further concerns, please feel free to let us know and we are more than happy to answer them.\n\n\n\\\n\\\nBest,\n\nPaper 44 Authors", " Now, we update results with `HashNet` as baseline under above setting:\n\n\n| Methods |`16 bits`|`32 bits`|`64 bits`|\n|:-------:|:--------|:--------|:--------|\n| HashNet | $36.9$ | $41.2$ | $44.7$ |\n| HashNet_D | $\\mathbf{55.0}_{\\uparrow 18.1}$ | $\\mathbf{56.1}_{\\uparrow 14.9}$ | $\\mathbf{61.8}_{\\uparrow 17.1}$ |\n\nwhich also confirms effectiveness of our method.\n\nWe are evaluating remaining methods. If further results are avaliable, we will update here as soon as possible.\n\n\\\n\\\nBest,\n\nPaper 44 Authors", " \n[**Part 1 of 2**]\n\n---\n\nThanks for your review and valuable suggestions for us to improve our work. Next we would address your concerns with detailed explanations.\n\n### ***Q1: Is the introduced lower bound tight?***\n**A:** It is a very interesting questions, and we hope following analysis could explain it based on characteristics of our proposed lower bound.\n\nTo determine whether the lower bound is tight is a little bit difficult. We first introduce some concepts and assumptions to make it easier. 
Let us start at the example placed in beginning of `Supp. Sec. A`.\n\n* **Asm. 1**. Just as mentioned in Supp., we assume that any positive samples do not have the same distances to query. This ensures `Eq. (1)` in `Supp. Sec. A`.\n* **Asm. 2**. Noticed that we are working in the Hamming space where Hamming distances between any two codes are discrete and range from $0$ to $h$, `Eq. (1)` becomes:\n$$\n0 \\leq d\\left(\\mathbf{q}, \\mathbf{tp}^1\\right) < d\\left(\\mathbf{q}, \\mathbf{fp}^2\\right) < \\cdots < d\\left(\\mathbf{q}, \\mathbf{fp}^8\\right) \\leq h.\n$$\n* **Asm. 3**. Let above array be strictly without gaps *i.e.* differences between any side-by-side $d\\left(\\mathbf{q}, \\cdot\\right)$ are $1$.\n\nThen, we would derive the closed form lowest AP by $\\max{d\\left(\\mathbf{q}, \\mathbf{tp}\\right)}$ and $\\min{d\\left(\\mathbf{q}, \\mathbf{fp}\\right)}$ (detailed in `Q2`), which is proportional to $\\frac{\\min{d\\left(\\mathbf{q}, \\mathbf{fp}\\right)}}{\\max{d\\left(\\mathbf{q}, \\mathbf{tp}\\right)}}$.\n\nSince the lowest AP is derived by $\\frac{\\min{d\\left(\\mathbf{q}, \\mathbf{fp}\\right)}}{\\max{d\\left(\\mathbf{q}, \\mathbf{tp}\\right)}}$ and under the same order of magnitude, we say the proposed lower bound is tight.\n\nWe could further expand $\\frac{\\min{d\\left(\\mathbf{q}, \\mathbf{fp}\\right)}}{\\max{d\\left(\\mathbf{q}, \\mathbf{tp}\\right)}}$ to $\\frac{\\min{\\mathcal{D}_\\mathit{inter}}}{\\max{\\mathcal{D}_\\mathit{intra}}}$ for tight lower bound. From the proposition in main paper, we find that $\\frac{\\min{d\\left(\\mathbf{q}, \\mathbf{fp}\\right)}}{\\max{d\\left(\\mathbf{q}, \\mathbf{tp}\\right)}} \\geq \\frac{\\min{\\mathcal{D}_\\mathit{inter}}}{\\max{\\mathcal{D}_\\mathit{intra}}}$. The equality is achieved when query's code $\\mathbf{q}$ is exactly the same as its class-center's code $\\mathbf{c}$. In this circumstance, the tight lower bound is derived by $\\frac{\\min{\\mathcal{D}_\\mathit{inter}}}{\\max{\\mathcal{D}_\\mathit{intra}}}$.\n\nNow, could it be applied to general cases? We give our humble discussion for a simple study. There may have untouched complicated cases which are leaved for future study.\n\n* **Case 1**: Duplicated positives.\\\nIf some samples are hashed to the same binary code, then distances from query to them are all equal. They will appear at the same position of rank list. If they are all true positives or false positives, then we could directly treat them as a single (duplicated) sample and follow above rules. For example:\n$$\n\\mathbf{query} \\Rightarrow \\mathbf{tp}^{1,0} ; \\mathbf{fp}^2 ; \\mathbf{fp}^3 ; \\left[\\mathbf{tp}^{4,2} ; \\mathbf{tp}^{5,2}\\right] ; \\mathbf{fp}^6 ; \\mathbf{tp}^{7,3} ; \\mathbf{fp}^8\n$$\nThe above rank list has two duplicated true positives ($d\\left(\\mathbf{q}, \\mathbf{tp}^{4,2}\\right) = d\\left(\\mathbf{q}, \\mathbf{tp}^{5,2}\\right)$), then, if a swap happens between them and $\\mathbf{fp}^6$, it will become:\n$$\n\\mathbf{query} \\Rightarrow \\mathbf{tp}^{1,0} ; \\mathbf{fp}^2 ; \\mathbf{fp}^3 \\mathbf{fp}^4 ; \\left[\\mathbf{tp}^{5,3} ; \\mathbf{tp}^{6,3}\\right] ; \\mathbf{tp}^{7,3} ; \\mathbf{fp}^8\n$$\nwhere $i, m$ of two duplicated true positives are both increased by $1$. Obviously, the lower bound is still tight.\n* **Case 2**: Mixed positives.\\\nIt is tricky when true positives and false positives have the same distance with query (we call them mixed positives). The sorting algorithm to produce rank list also has impact to determine ranks of these mixed positives. 
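To make the effect of such ties concrete, the following small, self-contained simulation (with made-up distances and labels, purely for illustration) computes AP from a list of Hamming distances and shows that when a true positive and a false positive share the same distance, the resulting AP depends on how the sort breaks the tie.

```python
import numpy as np

def average_precision(distances, labels):
    # AP of the ranking induced by ascending Hamming distance.
    # `labels[i]` is 1 for a true positive, 0 for a false positive;
    # ties are resolved by the (stable) order of the input arrays.
    order = np.argsort(distances, kind="stable")
    ranked = np.asarray(labels)[order]
    hits = np.cumsum(ranked)
    precision_at_rank = hits / (np.arange(len(ranked)) + 1)
    return float(precision_at_rank[ranked == 1].mean())

# Hypothetical distances: one true positive and one false positive tie at 3.
d = [0, 1, 2, 3, 3, 4]
tp_first = [1, 0, 1, 1, 0, 1]   # tie broken in favour of the true positive
fp_first = [1, 0, 1, 0, 1, 1]   # same distances, tie broken the other way
print(average_precision(d, tp_first), average_precision(d, fp_first))
```

In this toy example, both orderings share the same `min d(q, fp)` and `max d(q, tp)`, so the quantities entering the bound are unaffected.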
To determine whether the lower bound is tight in this case is hard, but our lower bound is still valid since `line 38 ~ 43` in `Supp. Sec. B` still make sense.\n* **Case 3**: Rank list with gaps.\\\nIf there are gaps in between two distances, *e.g.*, $d\\left(\\mathbf{q}, \\mathbf{fp}^i\\right)$ is very small but $d\\left(\\mathbf{q}, \\mathbf{tp}^{i+1}\\right)$ is very large (this usually happens on outliers), then AP will be far from the lower bound. Please refer to `Q3, edge case 2` for details.\n\nIn conclusion, under assumptions of `1, 2, 3`, our lower bound is tight. Meanwhile, our lower bound also covers common cases in above discussion and makes a strong connection to AP.\n\n---\n\n\\\nPlease refer to the second part for `Q2 ~ Q5`.\n\n", " \n[**Part 2 of 2**]\n\n---\n\n### ***Q2: On which situation the lower bound is equal to the AP?***\nUnder above assumptions in `Q1`, the lowest AP has a closed form derived by our proposed lower bound $\\frac{\\min{\\left(\\mathbf{q}, \\mathbf{fp}\\right)}}{\\max{\\left(\\mathbf{q}, \\mathbf{tp}\\right)}}$ under the same order of magnitude.\n\nAlso from the example:\n$$\n\\mathbf{query} \\Rightarrow \\mathbf{tp}^{1,0} ; \\mathbf{fp}^2 ; \\mathbf{fp}^3 ; \\mathbf{tp}^{4,2} ; \\mathbf{tp}^{5,2} ; \\mathbf{fp}^6 ; \\mathbf{tp}^{7,3} ; \\mathbf{fp}^8\n$$\nwhere $\\mathbf{tp}^{7,3}$ and $\\mathbf{fp}^2$ determine $\\max{\\left(\\mathbf{q}, \\mathbf{tp}\\right)}, \\min{\\left(\\mathbf{q}, \\mathbf{fp}\\right)}$. If we keep two values unchanged (in other words, ranks of two samples unchanged), then, the highest AP will appear when:\n$$\n\\mathbf{query} \\Rightarrow \\mathbf{tp}^{1,0} ; \\mathbf{fp}^2 ; \\mathbf{tp}^{3,1} ; \\mathbf{tp}^{4,1} ; \\mathbf{fp}^5 ; \\mathbf{fp}^6 ; \\mathbf{tp}^{7,3} ; \\mathbf{fp}^8.\n$$\nAnd the lowest AP will appear when:\n$$\n\\mathbf{query} \\Rightarrow \\mathbf{tp}^{1,0} ; \\mathbf{fp}^2 ; \\mathbf{fp}^3 ; \\mathbf{fp}^4 ; \\mathbf{tp}^{5,3} \\mathbf{tp}^{6,3} ; \\mathbf{tp}^{7,3} ; \\mathbf{fp}^8.\n$$\n\nBased on this, we could easily derive that the lowest AP equals to\n$$\n\\min{d\\left(\\mathbf{q}, \\mathbf{fp}\\right)} - 1 + \\sum_{i=1}^{\\lvert \\mathbf{TP} \\rvert}{\\frac{i}{\\max{d\\left(\\mathbf{q}, \\mathbf{tp}\\right)} - \\min{d\\left(\\mathbf{q}, \\mathbf{fp}\\right)} + i}}\n$$\nwhich is proportional to $\\frac{\\min{\\left(\\mathbf{q}, \\mathbf{fp}\\right)}}{\\max{\\left(\\mathbf{q}, \\mathbf{tp}\\right)}}$.\n\n---\n\n### ***Q3: Is there any intuition on the edge cases that the lower bound is far way from the AP?***\nAccording to answer of `Q1`, some edge cases are further revealed.\n\n* **Edge case 1**: A huge amount of samples are hashed to the same binary code.\\\nBased on `case 1 and 2 in Q1`, here, we will observe many duplicated or mixed samples. Their ranks will be increased / decreased simultaneously, and AP will be significantly changed along with them. We think now the rank list is \"unstable\". 
Intuitively, this unstability is caused by the poor hashing model, since it could not distinguish differences between samples and simply hashes them to the same code.\n* **Edge case 2**: Gaps in rank list.\\\nFor example, if:\n$$\n\\mathbf{query} \\Rightarrow \\mathbf{tp}^{1,0} ; \\mathbf{fp}^2 ; \\mathbf{fp}^3 ; \\mathbf{tp}^{4,2} \\mathbf{tp}^{5,2} ; \\mathbf{fp}^6 ; \\mathbf{tp}^{7,3} ; \\mathbf{fp}^8,\n$$\nwhere $d\\left(\\mathbf{q}, \\mathbf{fp}^6\\right) \\ll d\\left(\\mathbf{q}, \\mathbf{tp}^{7, 3}\\right) = \\max{d\\left(\\mathbf{q}, \\mathbf{TP}\\right)}$, then, to influence AP, $\\max{d\\left(\\mathbf{q}, \\mathbf{TP}\\right)}$ needs to be significantly decreased until a swap happens between $\\mathbf{fp}^6$ and $\\mathbf{tp}^{7, 3}$. Therefore, AP may be still high but $\\frac{\\min{\\left(\\mathbf{q}, \\mathbf{fp}\\right)}}{\\max{\\left(\\mathbf{q}, \\mathbf{tp}\\right)}}$ is low. We think outliers cause this edge case.\n\n---\n\n\n### ***Q4: Potential use cases of lower bound.***\n**A:** Exploring use cases of proposed lower bound would reveal significance and value of this work. Just as you mentioned in review, such lower bound could be adopted as a criterion for *e.g.* parameter search of hash code length. Since `Figure 1, 5` in paper and above analysis tell us the value of $\\frac{\\min{\\mathcal{D}_\\mathit{inter}}}{\\max{\\mathcal{D}_\\mathit{intra}}}$ partially reflects hash model's performance, we would quickly evaluate model's performance by such metric other than calculating AP or accuracy which is time consuming. Therefore, adopting it as a performance indicator benefits for model tuning or selection, including but not limited to parameter search.\n\n---\n\n\n### ***Q5: How does this lower bound connect to other metrics?***\n**A:** From `Q1`'s answer, we already know the proposed lower bound has a strong connection between AP. As for other metrics including *precision*, *recall*, *F-score*, *accuracy*, *etc.*, we have discussed how AP is related to them in `Supp. Sec. C.`. Specifically,\n\n* Precision at rank $i$ equals to $\\frac{i - m}{i}$. The corollary in `Supp. Sec. B` exactly applies to it.\n* Recall at rank $i$ equals to $\\frac{i - m}{\\lvert \\mathbf{T} \\rvert}$ where $\\mathbf{T}$ is set of all groundtruth samples. Therefore, $\\lvert \\mathbf{T} \\rvert$ is a constant and recall increases $\\mathit{iff}$ &nbsp; $m$ decreases.\n* F-score equals to $\\frac{2}{\\mathit{recall}^{-1} + \\mathit{precision}^{-1}} = \\frac{2}{i / \\left(i - m\\right) + \\lvert \\mathbf{T} \\rvert / \\left(i - m\\right)} = \\frac{2\\left(i - m\\right)}{i + \\lvert \\mathbf{T} \\rvert}$.\n\nWe could see all above metrics are reversely proportional to $m$. Then, analysis in `Supp. Sec. B` is also valid to them.\n\nAs for accuracy, please refer to `Supp. Sec. C.` for detailed explanation.\n\n---\n\n\\\nThanks again for your kind review and valuable suggestions. All above analysis is in the rebuttal revision of supplementary materials.\n\n\\\n\\\nBest,\n\nPaper 44 Authors\n", " \nThanks for your critical review. Considering your suggestions in review, we hope following reponse could help you for understanding our work and demystifying your concerns.\n\nActually, reviewer ***jH1U*** thinks our paper **already has a nice presentation and organization**. Both the other two reviewers mark our work with ***good*** soundness, presentation and contribution.\n\n### ***Q1: Paper structure needs to be improved.***\n**A:** According to your suggestions, we have polished our paper. 
Please check the uploaded rebuttal revision for changes. Specifically, we add the **Related Works** section to include and describe current advances in learning-to-hash with proper references. We also update descriptions in **Preliminaries** section and successive sections to inrease readability and make the problem we focus on easier to be understood. We have also revised Introduction, Conclusion sections and fixed typos and grammatical errors in paper.\n\n---\n\n### ***Q2: Mathematical writing needs to be improved.***\n**A:** We provide a thorough derivation on the proposition of lower bound. All definitions, corollary and propositions are required to formulate the lower-bound in general. We think we have clearly stated the argumentation flow. Specifically, we first start with an arbitrary rank list and study how AP is influenced by true positives and false positives by introducing *mis-rank* (`line 101 ~ 110`). We then give lower bound of AP by extending mis-rank to $\\min{d\\left(\\mathbf{q}, \\mathbf{tp}\\right)}$ and $\\max{d\\left(\\mathbf{q}, \\mathbf{fp}\\right)}$ (`line 111 ~ 114`). Finally, we generalize above two distances to inter-class distinctiveness and intra-class compactness and propose the final lower bound (`line 115 ~ 125`). Detailed proof is placed in `Supp. Secs. A and B`. If you have any confusion with above derivation, please point out for us to improve it.\n\n---\n\n### ***Q3: Minor issues.***\n**A:** Benefits in terms of \"carbon neutrality\" are based on the consensus of high efficiency and low cost in downstream hash-based tasks such as fast retrieval `[1]`. To obtain nearest neighbor of a query code as retrieved result, similarities between hash codes are obtained by performing `XOR` operation, which is highly optimized with extremely low energy `[2]`, and therefore reduces power consumption and benefits for carbon neutrality. Other issues such as figure descriptions are revised, please check them in the uploaded revision.\n\n---\n\n\n### **References**\n[1] X. Luo, H. Wang, D. Wu, C. Chen, M. Deng, J. Huang, and X.-S. Hua. **A survey on deep hashing methods**. *ACM Trans. Knowl. Discov. Data*, 2022.\\\n[2] H. Naseri and S. Timarchi, **Low-power and fast full adder by exploring new XOR and XNOR gates**. *IEEE Trans. Very Large Scale Integr. Syst.* 26(8): 1481-1493, 2018.\n\n\\\n\\\nBest,\n\nPaper 44 Authors\n", " \nThanks for your kind review and recognize the effectiveness of our work. We would like to provide following response to your major concerns.\n\n### ***Q1: Is the performance improvement still significant when we increase the size of training set?***\n**A:** We follow public benchmark to evaluate performance of hash models (`Sec. 6.1.2` in main paper). Dataset settings are the same with all methods evaluated in paper. According to your valuable suggestion, we conduct a new experiments on the whole ImageNet dataset (`ILSVRC 2012`) with $1,000$ classes to evaluate performance with large training set ($1.2M$ images, $\\sim 10\\times$ larger than training sets adopted in main paper). For retrieval, we adopt ImageNet val set which has $50,000$ images and $50$ for each class. We randomly pick $5$ images of each class as queries ($5,000$ in total) and remaining is adopted to formulate base split ($45,000$ in total). We train networks for $20$ epoch and only update the last hash layers since backbone is already pre-trained on ImageNet. Other settings are same with main paper.\n\nDue to time limitation, we report `CSQ` and `CSQ_D` results at present. 
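For clarity, the query/base split described above can be constructed roughly as follows; the function and variable names are placeholders of our own choosing, and this is an illustration of the protocol rather than the exact script.

```python
import random
from collections import defaultdict

def split_queries(samples, per_class=5, seed=0):
    # Split (path, class_id) pairs of the ImageNet val set into a
    # query set (`per_class` images per class) and a base set with
    # the remaining images; 5 per class over 1,000 classes yields
    # 5,000 queries and 45,000 base images.
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, cls in samples:
        by_class[cls].append(path)
    queries, base = [], []
    for cls, paths in sorted(by_class.items()):
        rng.shuffle(paths)
        queries += [(p, cls) for p in paths[:per_class]]
        base += [(p, cls) for p in paths[per_class:]]
    return queries, base

# Tiny synthetic usage: 4 classes with 10 images each.
samples = [(f"img_{c}_{k}.jpg", c) for c in range(4) for k in range(10)]
q, b = split_queries(samples, per_class=5)
print(len(q), len(b))  # 20 queries, 20 base images
```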
Other results will be added in the camera-ready version.\n\n| Methods |`16 bits`|`32 bits`|`64 bits`|\n|:-------:|:--------|:--------|:--------|\n| CSQ | $64.6$ | $65.0$ | $65.7$ |\n| CSQ_D | $\\mathbf{65.7}_{\\uparrow 1.1}$ | $\\mathbf{66.2}_{\\uparrow 1.2}$ | $\\mathbf{66.4}_{\\uparrow 0.7}$ |\n\nWe can see that for `CSQ`, our performance improvement is still valid under the large training set setting.\n\n---\n\nOther minor issues such as typos and grammatical errors are revised. Please refer to the rebuttal revision of the paper. Thanks for your kind reminder.\n\n\\\n\\\nBest,\n\nPaper 44 Authors\n", " We are grateful to reviewers **HCrQ**, **N6zv** and **jH1U** for your kind reviews and for recognizing that our work benefits learning-to-hash methods.\n\nAccording to the reviews, **highlights of this paper include**:\n* The proposed lower bound and its theoretical analysis are interesting and important. (**HCrQ**, **jH1U**)\n* The proposed method is reasonable and effective, and the experiments are extensive. (**HCrQ**, **jH1U**)\n* The paper has a nice presentation and organization. (**jH1U**)\n\nBoth reviewers **HCrQ** and **jH1U** rate our work as having **good** soundness, presentation and contribution. They suggest acceptance.\n\n**The major weakness of this paper is writing issues**, including:\n* Related works and preliminaries need to be extended to fully understand the problem.\n* The readability of the paper needs to be improved.\n\nBased on the above summary, the rebuttal version of the paper has been uploaded. Specifically,\n\n* We add the ***Related Works*** section to describe current advances in learning-to-hash.\n* We polish the ***Preliminaries*** section to make the problem easier to understand.\n* Section 4 is simplified to increase readability.\n\nWe have also added the analysis from the response to Reviewer **jH1U** and the experiments from the response to Reviewer **HCrQ** to the supplementary materials.\n\n\\\nWe hope the above changes make the paper better.\n\n\\\n\\\nBest,\n\nPaper 44 Authors\n", " This paper first proves that inter-class distinctiveness and intra-class compactness among hash codes determine the lower bound of hash codes' performance. It also shows that promoting these two characteristics could lift the bound and improve hash learning. It then proposes a surrogate model to fully exploit this objective by estimating the posterior of hash codes. Extensive experiments reveal the effectiveness of the proposed method. Strengths:\n1. The studied problem is interesting and important because a theoretical analysis of criteria for learning good hash codes remains largely unexplored.\n2. The proposed method seems to be reasonable and effective.\n3. The experiments seem to be extensive. \n\nWeaknesses:\n1. There exist some typos and grammatical errors in the paper. \n2. The training sets on all datasets are relatively small.\n Is the performance improvement still significant when we increase the size of training set? The authors have adequately addressed the limitations and potential negative societal impact of their work.", " The paper on hand addresses the problem of hashing, indicating that inter-class distinctiveness and intra-class compactness determine the lower bound of hash code performance. Based on these assumptions, a model is proposed, showing beneficial properties for different applications. 
Strengths:\n\nIn general, using hashing could be beneficial for many applications.\n\nThe presented results indicate that the approach is beneficial for at least three different applications.\n\n\nWeaknesses:\n\nThere is no clear description of related work. Even though there is a large number of references, these are quite general and not directly related to the problem on hand.\n\nTo fully understand the problem, the preliminary section needs to be extended. Neither the overall problem nor the required technical details are given at an adequate level. \n\nIn general, the reading flow is hampered by inadequate structure and mathematical writing. The (wrong) overuse of definitions, propositions, and corollaries prevents fluent reading of the paper. \n\nIn addition, the mathematical writing needs to be improved. This includes the correct embedding of equations into the text, missing or insufficiently defined mathematical terms, and a proper argumentation flow.\n\nIn general, the paper would benefit from careful proofreading. There are countless typographical and grammatical errors, hampering the reading flow.\n\nThe structure of the paper needs to be improved. In particular, the introduction, the conclusion, and the discussion need to be structured differently. The argumentation is not straightforward and is somewhat redundant.\n\nThe caption of Fig. 4 needs to be extended. Even though described in the text, from the figure it is not clear what is shown.\n\nThe meaning of Fig. 5 is not fully clear. In this case a more thorough discussion in the text is required. I would argue for benefits in terms of speed but not in terms of \"carbon neutrality\".\n\nAs seen above from the weaknesses, there are several points that would need to be addressed in a revised version to increase the clarity and the reading flow. In particular, the technical contribution needs to be discussed in a clearer way.\n\nOverall, there are too many flaws for accepting the paper for NeurIPS. The paper on hand is a theoretical contribution, thus there are no limitations in this context.\n\n", " This paper studies the learn-to-hash problem, where we need to transform images into hash codes for fast retrieval. The major research highlight of this paper is the lower bound for hash codes' performance. This paper uses inter-class distinctiveness and intra-class compactness to present a lower bound for the average precision, an evaluation metric for learn-to-hash. Moreover, this paper uses this lower bound as an objective for learn-to-hash. As a result, this paper presents a significant increase in average precision and accuracy. Strengths:\n\n1. The proposed lower bound is useful. It connects the evaluation metric with the current state of the learn-to-hash model. As a result, we can use it as a guide in the model training.\n\n2. The experimental evaluation is extensive. The authors present a comparison with solid baselines. Moreover, the authors study the training efficiency of the proposed objective with Cauchy and BCE.\n\n3. The paper provides a theoretical analysis of the proposed lower bound and its gradient when we use it as a loss. In the supplementary material, this paper also includes necessary implementation details. 1. This paper introduces a lower bound for AP using inter-class distinctiveness and intra-class compactness. Is this lower bound tight? On which situation the lower bound is equal to the AP? Is there any intuition on the edge cases that the lower bound is far away from the AP?\n\n2. 
This paper introduces the strength of the proposed lower bound as an objective function. Are there any other potential use cases for this lower bound beyond serving as the objective? For instance, can we use this lower bound in the parameter search for the hash code length?\n\n3. This paper presents improvements in both mAP and accuracy using the proposed lower bound as an objective. Since the lower bound is designed for AP, is there a connection between it and accuracy that explains this improvement? In fact, this leads to a more general question: how does this lower bound connect to other metrics such as recall and precision? \n In general, this paper has a nice presentation and organization. Both theoretical analysis and experimental evaluation are presented for better illustration. However, it would be better to provide a deeper analysis of the functionality of the lower bound. I would like to see the answers to the raised questions in the previous section." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "3TxPI3xwWY", "KNtTLynuhZ", "Wp6w2p96eKX", "8PwhIJl2sn", "jZQ7bKv66K9", "IdTpMPK4lA_", "ZQ7IqAWSO7r", "8PwhIJl2sn", "8PwhIJl2sn", "jZQ7bKv66K9", "IdTpMPK4lA_", "nips_2022_k7xZKpYebXL", "nips_2022_k7xZKpYebXL", "nips_2022_k7xZKpYebXL", "nips_2022_k7xZKpYebXL" ]
nips_2022_yam42JWePu
Fine-Grained Semantically Aligned Vision-Language Pre-Training
Large-scale vision-language pre-training has shown impressive advances in a wide range of downstream tasks. Existing methods mainly model the cross-modal alignment by the similarity of the global representations of images and text, or advanced cross-modal attention upon image and text features. However, they fail to explicitly learn the fine-grained semantic alignment between visual regions and textual phrases, as only global image-text alignment information is available. In this paper, we introduce LOUPE, a fine-grained semantically aLigned visiOn-langUage PrE-training framework, which learns fine-grained semantic alignment from the novel perspective of game-theoretic interactions. To efficiently estimate the game-theoretic interactions, we further propose an uncertainty-aware neural Shapley interaction learning module. Experiments show that LOUPE achieves state-of-the-art performance on a variety of vision-language tasks. Without any object-level human annotations and fine-tuning, LOUPE achieves competitive performance on object detection and visual grounding. More importantly, LOUPE opens a new promising direction of learning fine-grained semantics from large-scale raw image-text pairs.
Accept
This paper addresses fine-grained vision-language alignment from the perspective of game-theoretic interactions. It received diverse scores, with three weak accepts and one weak reject. The technical novelty is acknowledged by all reviewers. The initial reviews raised concerns about unclear explanations of the Shapley formulation, insufficient experiments, and comparisons. During the discussion, the authors addressed most of the concerns by adding tables to compare with existing works, adding zero-shot classification experiments on 11 datasets, and providing sufficient discussion of previous works. Reviewer H9ch mentions a possibly unfair comparison. The authors have re-implemented CLIP with a comparable encoder, and the performance is consistently improved. The meta-reviewers thus suggest accepting this paper. However, as highlighted by most reviewers, the method is complex and hard for readers to follow, especially the Shapley part. The authors are strongly encouraged to revise the camera-ready paper accordingly. In addition, the proposed 240M dataset should preferably be released to facilitate future research and comparison, and the filtering strategy and possible ethical issues should be explained, as raised by two reviewers and promised by the authors.
train
[ "pM-U2gFln_c", "lrsIGPAEu-m", "_15ypRX5enS", "9eop7sDY_k", "LjDZBc55Odz", "6V1SheG4rmT", "wELlQD5mr2", "hiNg_nM4POg", "xRYba-MmdQu", "LvLqx2UnYDG", "EAfUOpmPmNFH", "AmPztFDZFvN", "QiQmCgL6x66", "znyhaAhqVmI", "GVS7ZjYozdy", "uv2TveQx4m2", "9BD34rZhuVo", "H3J6wa9dBkc", "BdG819mkmTc", "nwondwNlCYw", "2FbKEd39FJ", "1zoaMybpSWE", "l5tgVE-tTLW", "fZO8iJAAi3Y", "IJKVhnV6w-a", "xzaspfuARWx", "VZqC7EKJ4wI", "ygKFprg6Iy5" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer, since the discussion stage is about to end, do you have any major concerns or suggestions? We are happy to discuss with you.", "Dear reviewer, do you have any further concerns or suggestions? We would be delighted to discuss with you.", "We really appreciate your precious time and valuable suggestions. As you nicely point out, we will carefully improve the organization and expression of the methodology section.", "Thank the authors for their response. \n\nThe rebuttal answers/clarifies most of my questions as well as my concerns.\nI still strongly suggest that the methodology section could be simplified or re-organized to improve the paper's readability (as well as its impact).\n\nOverall, I recommend the paper for acceptance.", "Thanks for your positive and insightful feedback. We really appreciate your constructive review and your precious time. We quite agree with your suggestion of a further comparison against other models on acceptable-scale public datasets (YFCC15M, LAION). We will include it as an important future work.", "Thank the authors for those experiments and explanations. Most of my concerns are solved. As for the pre-training dataset, I still hope to see a fairer comparison against other models on acceptable-scale public datasets (YFCC15M, LAION). I will raise my score.", "Dear reviewer, we have tried to address your concerns in our earlier responses. If you have any additional questions or suggestions, we are very happy to discuss with you.", "Dear reviewer, we have tried to address your concerns in our earlier response. If you have any further questions or suggestions, we are very happy to discuss with you.", "We appreciate the reviewer for the positive and insightful feedback. We quite agree with your suggestion and will explore it as an important future work. Also, inspired by your nice suggestion, we believe it is interesting and potentially beneficial to enhance our LOUPE model with additional supervision from pre-trained detectors or human annotations, which might work in a mutually enhancing way. Thanks for your insightful suggestion!", "Thanks for your prompt reply. As you said, CLIP uses a logistic regression classifier, which is a typical linear classifier. We follow the setting claimed in Appendix A.3 (Evaluation) of the CLIP paper. According to Appendix A.3 (Evaluation) of the CLIP paper, we take the representations from the frozen image encoder as input features and train a linear classifier, *i.e.*, logistic regression. As for the further hyperparameter configuration of linear probing, it is not provided in either the CLIP paper or its official code implementation. Thus, we set these hyperparameters as reported in the table. We hope we have addressed your concerns. We await your kind reply at any time. Thank you!", "I didn't find the description of the linear probing configuration in the CLIP paper, could you tell me where this part is? If I didn't miss something, CLIP uses logistic regression.\n", "I would like to thank the authors for their timely responses. Most of my concerns are adequately addressed. The only remaining suggestion is the comparison with more related baselines, which I believe would better demonstrate the effectiveness of the work.\n\nIn general, I believe this paper is an interesting touch towards learning fine-grained multi-modal relations, and the result seems to be promising. 
From my perpespective, the quality of this paper is above the acceptance threshold for NeurIPS2022.", " Many thanks for your prompt reply!\n\n**1) Did A1(2) use the full 240M data? How many GPUs did the CLIP\\* experiment use and how long did it take?**\n\n**A1:** Thanks for your important concern. The CLIP* is also pre-trained on the full 240M dataset using 128 V100 GPUs, keeping almost the same pre-training details (*e.g.*, optimizer, learning schedule) as our LOUPE. The CLIP* takes about 22 days to train on 128 cards.\n\n**2) Can you provide results on the 11 datasets in A2 respectively? And how is the corresponding hyperparameter configuration?**\n\n**A2:** Thanks for your suggestion. We provide the detailed results in the following tables.\n\ni. Results (top-1 accuracy) of zero-shot image classification over 11 datasets.\n| | CIFAR10 | Food101 | StanfordCars | SUN397 | Flowers102 | Country211 |\n| :-------- | :------: | :------: | :----------: | :------: | :--------: | :--------: |\n| CLIP | **96.2** | 92.9 | 77.3 | 67.7 | 78.7 | 34.9 |\n| **LOUPE** | 95.9 | **94.3** | **79.9** | **69.8** | **87.4** | **37.8** |\n\n| | FER2013 | Aircrafts | OxfordPets | Caltech101 | ImageNet |\n| :-------- | :------: | :-------: | :--------: | :--------: | :------: |\n| CLIP | **57.7** | 36.1 | 93.5 | 92.6 | 75.3 |\n| **LOUPE** | 53.3 | **54.9** | **94.1** | **93.9** | **76.1** |\n\n \n\nii. Linear probing performance (top-1 accuracy) over 11 datasets.\n| | CIFAR10 | Food101 | StanfordCars | SUN397 | Flowers102 | Country211 |\n| :-------- | :------: | :------: | :----------: | :------: | :--------: | :--------: |\n| CLIP | **98.0** | 95.2 | 90.9 | 81.8 | 99.2 | 46.4 |\n| **LOUPE** | 97.6 | **96.0** | **92.1** | **82.6** | **99.5** | **49.3** |\n\n| | FER2013 | Aircrafts | OxfordPets | Caltech101 | ImageNet |\n| :-------- | :------: | :-------: | :--------: | :--------: | :------: |\n| CLIP | **72.9** | 69.4 | 95.1 | 96.5 | 83.9 |\n| **LOUPE** | 70.7 | **80.2** | **95.5** | **97.5** | **85.7** |\n\n \n \n\nFor linear probing evaluation, we follow the same setting as CLIP. Specifically, we freeze the whole backbone model and use the final representation of the [CLS] token as the global image representation. Then, we train a linear classifier on the global image representation and report the top-1 accuracy for each dataset. The following table shows the hyperparameter configuration on these 11 datasets.\n\n| Image Size | Training Epochs | Batch Size | Optimizer | Learning Rate | Weight Decay |\n| :--------: | :-------------: | :--------: | :-------: | :-----------: | :----------: |\n| 224*224 | 100 | 512 | AdamW | $ 3e^{-5}$ | 0.1 |\n\nWe hope we have addressed all of your concerns. Waiting for your kind reply at any time. Thank you!\n\n", " 1)Did A1(2) use the full 240M data? How many GPUs did the CLIP* experiment use and how long did it take?\n\n2)Can you provide results on the 11 datasets in A2 respectively? And how is the corresponding hyperparameter configuration?", " We thank all the reviewers for their insightful and valuable comments! Overall, we are encouraged that they find that:\n\n1. The idea of learning fine-grained region-phrase alignment from the perspective of game-theoretic interactions is **quite interesting and novel** **(all reviewers)**.\n2. Viewing weakly-supervised region-phrase alignment in terms of game theory is a **promising direction to explore** (Reviewer yCwd).\n3. 
It is **quite impressive** that our method can be used as a zero-shot object detector after pre-training on raw image-text pairs without bounding-box annotations (Reviewer gkzN).\n\nWe have revised the manuscript according to the reviewers' comments. The main changes we made include:\n\n1. In Appendix I, we add experiments of zero-shot image classification over 11 datasets.\n2. In Appendix J, we add experiments of linear probing over 11 datasets.\n3. In Appendix K, we add a training cost-performance comparison table and discuss the training efficiency of our method.\n4. In Appendix L, we add a comparison table to highlight key differences of our LOUPE with various methods. Also, we add a detailed discussion with some related works (*i.e.*, FILIP, RegionCLIP, X-VLM).\n5. In Appendix E, we add more details about pre-training dataset construction.\n\nNext, we address each reviewer's detailed concerns point by point. We hope we have addressed all of your concerns. Discussions are always open. Thank you!", " **Q6: I am confused about Eq.(12). The authors claim the loss function is derived from the regression loss function. Could the authors provide a detailed derivation process? From my perspective the second term $\\beta_2\\sigma$ should be $\\beta_2log(\\sigma)$.**\n\n**A6:** Thanks for your question. We respectfully clarify our expression in Line 236. We do not mean that Eq. 12 is derived from a specific regression loss. Instead, we mean the form of Eq. 12 is a regression loss function, where we optimize the mean squared error between $\\hat{\\mathfrak{I}}$ and $\\mathfrak{I}^*$. As you correctly point out, the second term is $log(\\sigma)$ in original noisy label learning papers. Here, we replace $log(\\sigma)$ with $\\beta_2\\sigma$ because we empirically find it is more numerically stable and can also achieve good performance. We will clarify this in revision.\n\n**Q7: I am also concerned about the training time shown in table 3. It seems like the additional cost is still huge even with the approximation module. I wonder if the authors can provide a training cost-performance trade-off compared with other works? Some qualitative analysis would also do the trick.**\n\n**A7:** Thanks for your constructive suggestion. Please refer to our reply to the Q2 of you. As you nicely suggested, we will include this discussion in the next version.\n\n**Q8: If possible, I would like to see a comparison between the proposed model and some missing related works, like RegionCLIP and X-VLM that were mentioned above. These approaches share similar insight and can also perform zero-shot transfer to several downstream tasks.**\n\n**A8:** Thanks for your suggestion. Although both RegionCLIP and X-VLM focus on fine-grained semantics, they rely on pre-trained object detectors or manual bounding-box annotations. Therefore, they are limited to a closed set of object categories, which is pre-defined by the detectors or annotation labels. In contrast, our LOUPE can detect open-vocabulary categories of objects, without resorting to any human annotations or pre-trained detectors. Furthermore, our LOUPE can handle more diverse scenarios, such as zero-shot visual grounding. Based on the above analysis, we suppose that our LOUPLE might perform better on open-vocabulary object detection. We appreciate that the reviewer points out a meaningful and interesting direction for future research. 
In the future, we will attempt to compare our LOUPE with these methods in open-vocabulary object detection.", " **Q2: Training efficiency. I understand that the authors use a sub-network to predict the real Shapley value to save excessive computation costs. Nevertheless, the training time in table 3 (comparing line 1 and line 4) shows that the proposed approach still involves 60~70 training time. This can be a great burden, especially for large-scale pre-training.**\n\n**A2:** Thanks for raising this concern. Although our proposed Shapley interaction modeling increases the training time of per iteration, it enables our model to converge with fewer total iterations by encouraging our model to learn fine-grained region-phrase alignment beyond coarse image-text alignment. As you nicely suggested in the **Questions** section (Q7), we compare the training cost and performance of our LOUPE with other works in the following table. As shown in the following table, our LOUPE achieves the best performance while using relatively small GPU days (128 GPUs $\\times$ 20 days).\n\n| Method | &nbsp; &nbsp; &nbsp; GPUs | Training Time | Flickr30K I2T | Flickr30K T2I | MSCOCO I2T | MSCOCO T2I |\n| :-------- | :--------: | :-----------: | :-----------: | :-----------: | :--------: | :--------: |\n| CLIP | 256 V100 | 12 days | 88.0 | 68.7 | 58.4 | 37.8 |\n| ALIGN | 1024 TPUv3 | - | 88.6 | 75.7 | 58.6 | 45.6 |\n| FILIP | 192 V100 | 24 days | 89.8 | 75.0 | 61.3 | 45.9 |\n| **LOUPE** | 128 V100 | 20 days | **90.5** | **76.3** | **62.3** | **50.1** |\n\nFurthermore, our proposed Shapley interaction modeling:\n\n1. enables our model to perform object detection in a zero-shot manner, avoiding expensive and time-consuming human annotations;\n\n2. avoids using off-the-shelf object detectors (*e.g.*, Faster R-CNN) to extract visual features. Recent studies [A, B] have noticed that extracting visual features using object detectors greatly slows down the training (about 20 FPS per GPU) and requires more GPU memory.\n\n[A] ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision. Kim et al. ICML 2021.\n\n[B] Filip: Fine-Grained Interactive Language-Image Pre-training. Yao et al. ICLR 2022.\n\n**Q3: How is the score function in the game defined? Does it need to be carefully designed?**\n\n**A3:** Thanks for your question. As you correctly understand, the score function needs to be carefully designed with the goal of reflecting the contribution of semantic regions and phrases. For token-level Shapley interaction modeling, the score function is defined as the global similarity between images and texts. To compute $v_1(S)$, we keep tokens in $S$ and mask tokens in $X \\setminus S$ to zeros. Therefore, only tokens in $S$ contribute to the score function. For semantics-level Shapley interaction modeling, the score function is formulated as a carefully-designed fine-grained similarity score, which computes similarity based on region-phrase alignment scores. For more details, we respectfully refer the reviewer to Line 197-207 of the original paper. We will further clarify this in revision.\n\n**Q4: The alignment between image regions and text phrases implicitly involves the concept of object. This may lead to similar limitations to the previous works that relied on pre-trained object detectors, and hard to generalize to object-free inputs. Also, according to line185, all regions are constrained within a scale of K patches. 
I wonder how does the model deal with objects (concepts) of different scale, e.g., apple and sky?**\n\n**A4:** (1) Thanks for raising an important point. As mentioned in the differences with RegionCLIP, our LOUPE focuses on semantically rich phrases, which might contain diverse context. For example, we will extract the phrase \"a man drinking wine alone\" from the sentence \"A woman looks at a man drinking wine alone\". This phrase involves not only the object \"man\" and \"wine\", but also the action \"drinking\". Therefore, our LOUPE can learn a boarder set of visual concepts (*e.g.*, objects, actions, relations) from the large-scale image-text data.\n\n(2) Thanks for pointing out a confusing notation. K is not a fixed hyper-parameter. Different regions might have different numbers of patches, which is determined by the scale of the predicted bounding boxes. Now, we revise the notation as $\\mathcal{R}_i = \\\\{ \\mathbf{x}\\_{i, k}^I \\\\}\\_{k=1}^{K_i}$, where $K_i$ is the number of patches for $\\mathcal{R}_i$. We will include this revision and clarify this in the next version.\n\n**Q5: Line 230. How long do we need to train the UNSIL module?**\n\n**A5:** Thanks for your question. It takes about 20 hours to warm-start the UNSIL module.", " We sincerely appreciate the reviewer for the constructive and insightful feedback. We are encouraged that the reviewer finds our idea is quite interesting and novel. We will explain your concerns point by point.\n\n**Q1: Some missing related works. Similar to this paper, there are also works that attempt to address the problem of missing fine-grained information. For example, RegionCLIP[1] also finds regions in the image and aligns them with text phrases. Off-the-shelf language parsers are similarly considered. Also, RegionCLIP is able to perform zero-shot object detection tasks. Another work, X-VLM[2], shares a similar insight. I believe these works are more related to the paper, and a detailed discussion on the connections and differences with these works is suggested.**\n\n**A1:** Thanks for the nice suggestion. Our LOUPE is different from RegionCLIP in the following aspects:\n\n1. **RegionCLIP** uses pre-trained Region Proposal Network (RPN) to detect regions in images. However, RPN is usually pre-trained on pre-defined object categories (*e.g.*, 80 classes for MSCOCO), which can not cover extensive categories of objects in the large-scale pre-training dataset. Furthermore, since the RPN casts excessive demand on memory and computation, existing methods (*i.e.*, RegionCLIP) usually fix the parameters of RPN and regard region detection as pre-processing step, disconnected with vision-language pre-training. Thus, the performance of RegionCLIP is also restricted by the quality of the RPN. In contrast, our **LOUPE** learns to identify semantic regions of images by token-level Shapley interaction modeling, which is more scalable and enables our LOUPE to learn a broader set of visual concepts from large-scale pre-training datasets. For example, as shown in the right case of Figure 4, LOUPE successfully recognizes the leash region and aligns it with the “a leash” phrase. Note that the “leash” category has never appeared in any existing object detection datasets.\n2. **RegionCLIP** constructs a pool of object concepts from the image-text corpus and aligns visual regions with these concepts. These concepts are usually individual nouns (*e.g.*, boy, kite, bus). 
In contrast, our **LOUPE** focuses on phrases that involve rich context (*e.g.*, \"a boy running on the grass\"). By aligning visual regions with phrases that contain rich semantic context, our LOUPE can learn a boarder set of visual concepts (*e.g.*, objects, actions, relations) from the large-scale pre-training dataset.\n\nAs for X-VLM, the main differences lie in three-fold:\n\n1. **X-VLM** is trained on well-annotated datasets, where regions with bounding-box annotations are provided and each of them is associated with a description text. Such a manner is time-consuming and hard to scale to larger raw image-text data from the web. Our **LOUPE** differs as we are trained on noisy image-text pairs from the Internet.\n2. **X-VLM** takes ground-truth regions as input and is trained to predict bounding boxes supervised by the regression loss on the ground-truth coordinates. In contrast, our **LOUPE** learns to identify semantic regions of images without such strong supervision signals from human annotations.\n3. **X-VLM** has ground-truth alignment information between regions and their corresponding description texts, which provide strong supervision signals for region-text matching. By comparison, our **LOUPE** learns the fine-grained region-phrase alignment from game-theoretic interactions.\n\nWe will include these discussions in the next version according to your nice suggestion.", " **Q3: Why are the loss weights of $L_{CMC}, L_{TSA}$ and $L_{FSA}$ set to 1:1:1?**\n\n**A3:** Thanks for your question. We empirically find that it already performs well when we assign these losses the same weight without carefully tuning the weight hyper-parameter. Thanks for your suggestion, and we will explore different weight hyper-parameters in the future.\n\n**Q4: Are public datasets, such as CC12M, included in the 240M dataset? In addition to the description part in Supplementary Material E, it is hoped that a more detailed introduction to the 240M dataset can be provided.**\n\n**A4:** Thanks for your concern. Our pre-training dataset does not include any well-annotated public datasets, such as CC12M, COCO, and Visual Genome. As suggested, we elaborate more details in the following:\n\n1. **Raw image-text pair collection.** We first harvest large-scale noisy image-text pairs from the web and design multiple filtering rules to improve the quality of the web data.\n2. **Image-based filtering.** Following ALIGN [D], we remove pornographic images and keep only images where both dimensions are larger than 200 pixels. Also, we remove the images whose aspect ratio is larger than 10. To prevent from leaking testing data, we remove the images that appear in all downstream evaluation datasets (*e.g.*, MSCOCO, Flickr30K). \n3. **Text-based filtering.** We remove the repeated captions and keep only English texts. The texts that are shorter than 3 words or longer than 100 words are discarded. As ALIGN [D], we also remove the texts that contain any rare token (outside of 100 million most frequent unigrams and bigrams from the raw dataset).\n4. **Joint image-text filtering.** Although the above filtering rules have filtered out many noisy data, it is hard to detect the mismatched image-text pairs, where the texts do not accurately describe the visual content of the images, resulting in undesirable noisy signals to vision-language pre-training. Inspired by BLIP [E], we train a discriminator as a filtering model to predict whether the text is matched to the image. 
Specifically, the filtering model consists of an image encoder and an image-grounded text encoder, which takes the cross-attention to fuse image features and text features. The filtering model is trained on CC12M dataset using image-text contrastive loss and image-text matching loss.\n\nWe will add the details in the next version according to the reviewer's suggestion.\n\n\n\n[D] Scaling up visual and vision-language representation learning with noisy text supervision. Jia et al. PMLR 2021.\n\n[E] BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation. Li et al. ICML 2022.", " We appreciate the reviewer for the valuable comments. Our response to the reviewer’s questions is as follows.\n\n**Q1: The experiments are not sufficient. The model structures(both image encoder and text encoder) of different methods in Tabel 1 and Table 2 are inconsistent and the comparison in unfair.**\n\n**A1:** (1) Thanks for raising this concern. Our text encoder is implemented by BERT-Small, which is consistent with most methods (*e.g.*, ALIGN uses BERT-Large, ALBEF uses BERT-Base, UNITER uses BERT). For the image encoder, we observe that it varies with different methods (*e.g.*, FILIP uses ViT-L, ALIGN uses EfficientNet, UNITER uses Faster R-CNN, X-VLM uses Swin). As discussed in Appendix G, in our work, we adopt the Swin-L as our image encoder due to the following considerations:\n\n1. The shifted windowing scheme of Swin Transformer achieves linear computational complexity with respect to image size, which is more efficient than ViT. This merit is particularly beneficial to the vision-language pre-training as we need to process large-scale images (240M).\n2. The hierarchical architecture of Swin Transformer is more flexible to model semantic regions at various scales.\n\n(2) To further verify the real performance gain from our proposed fine-grained semantically aligned vision-language pre-training framework, we implement a variant version of CLIP that adopts Swin-L as the image encoder, using the same training dataset as our LOUPE. As shown in the following table, comparing CLIP* with CLIP, the Swin-L image encoder does bring some improvements over CLIP. However, there is still a clear performance gap between CLIP* and our LOUPE. With the same architecture, our LOUPE has 2.68 points higher average R@1 than the CLIP* over two datasets. This further verifies that the main performance gain comes from our proposed fine-grained semantically aligned vision-language pre-training framework. Notably, we observe that the text-to-image retrieval of our implementation is obviously higher than CLIP. This phenomenon has also been confirmed by [B, D] (see Row 1 and Row 2 in the table). We suppose that it might be caused by some training details or the dataset collection of CLIP. We refer the reviewer to Appendix G for more details.\n\n| | Image Encoder | Flickr30K I2T | Flickr30K T2I | MSCOCO I2T | MSCOCO T2I |\n| :-------- | :-----------: | :-----------: | :-----------: | :--------: | :--------: |\n| ALIGN | EfficientNet | 88.6 | 75.7 | 58.6 | 45.6 |\n| FIILIP | ViT-L | 89.8 | 75.0 | 61.3 | 45.9 |\n| CLIP | ViT-L | 88.0 | 68.7 | 58.4 | 37.8 |\n| CLIP* | Swin-L | 88.7 | 74.3 | 59.3 | 46.2 |\n| **LOUPE** | Swin-L | **90.5** | **76.3** | **62.3** | **50.1** |\n\n[B] Filip: Fine-Grained Interactive Language-Image Pre-training. Yao et al. ICLR 2022.\n\n[D] Scaling up visual and vision-language representation learning with noisy text supervision. Jia et al. 
PMLR 2021.\n\n**Q2: Can plain Dual-Encoder be used for zero-shot classification? How is the result on ImageNet dataset?**\n\n**A2:** Thanks for the constructive suggestion. Our answer is yes. As you nicely suggested, we add the zero-shot image classification and linear probing experiments over 11 datasets. For zero-shot image classification, our LOUPE outperforms CLIP with average improvement of 3.1%. Specifically, our LOUPE achieves 76.1% top-1 accuracy on ImageNet, surpassing CLIP by 0.8%. For linear probing evaluation, LOUPE achieves average improvement of 1.6% over CLIP. Specifically, LOUPE achieves 85.7% top-1 accuracy on ImageNet, surpassing CLIP by 1.8%. Please refer to our reply to the Q1 of Reviewer gkzN for more detailed results.", " **Q2: The training time needed is increased by 65% (1.17->1.93).**\n\n**A2:** Thanks for raising a concern about training efficiency. \n\n(1) Although our proposed Shapley interaction modeling increases the training time per iteration, it enables our model to converge with fewer total iterations by encouraging our model to learn fine-grained region-phrase alignment beyond coarse image-text alignment. As shown in the following table, our LOUPE achieves the best performance while using relatively small GPU days (128 GPUs $\\times$ 20 days).\n\n| Method | &nbsp; &nbsp; &nbsp; GPUs | Training Time | Flickr30K I2T | Flickr30K T2I | MSCOCO I2T | MSCOCO T2I |\n| :-------- | :--------: | :-----------: | :-----------: | :-----------: | :--------: | :--------: |\n| CLIP | 256 V100 | 12 days | 88.0 | 68.7 | 58.4 | 37.8 |\n| ALIGN | 1024 TPUv3 | - | 88.6 | 75.7 | 58.6 | 45.6 |\n| FILIP | 192 V100 | 24 days | 89.8 | 75.0 | 61.3 | 45.9 |\n| **LOUPE** | 128 V100 | 20 days | **90.5** | **76.3** | **62.3** | **50.1** |\n\n(2) Indeed, the proposed Shapley interaction modeling increases the training time per iteration, but it enables our model to learn fine-grained region-phrase alignment from raw image-text pairs without any object-level human annotations. As you nicely recognize, our LOUPE can be used as a zero-shot object detector without any fine-tuning. Compared with the expensive cost of human annotations, the increased training time might be acceptable. Meanwhile, manual annotations for extremely diverse object categories in the real world are unscalable and even impossible while our model demonstrates a promising alternative, that is, learning fine-grained semantics from raw texts about images, which are easily available and contain a broader set of visual concepts. For example, as shown in the right case of Figure 4, LOUPE successfully recognizes the leash region and aligns it with the “a leash” phrase. Note that the “leash” category has never appeared in any existing object detection datasets.\n\n(3) On the other hand, our method is much more efficient than methods that rely on off-the-shelf object detectors (*e.g.*, Faster R-CNN) to extract visual features. Recent studies [A, B] have noticed that extracting visual features using object detectors greatly slows down the training (about 20 FPS per GPU) and requires more GPU memory. Thus, our model avoids such a heavy burden while being able to identify semantic-rich visual regions without any pre-training detectors or human annotations.\n\nWe respectfully hope the reviewer could reconsider the superiority and scalability brought from our LOUPE.\n\n\n\n[A] ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision. Kim et al. 
ICML 2021.\n\n[B] Filip: Fine-Grained Interactive Language-Image Pre-training. Yao et al. ICLR 2022.\n\n**Q3: The dataset used seems to be a private dataset and it's hard to convince people that the comparison is fair enough. Although the scale of the proposed dataset is smaller, the quality is not compared and it matters a lot in CLIP-like pretraining. Also, it's not indicated whether the dataset will be released or not. If not, it's hard for people to reproduce this work.**\n\n**A3:** Thanks for the suggestion. As sufficient data is a prerequisite for vision-language pre-training, recent CLIP, ALIGN, and FILIP construct datasets with 400M, 1800M, and 340M image-text pairs, respectively. Since they are not publicly available, we also collect 240M noisy image-text pairs from the Internet. Note that we do not include any well-annotated datasets, such as CC12M, COCO, and Visual Genome. To facilitate future research and fair comparison, we will carefully review our collected dataset and consider releasing it in the future. Moreover, as we understand the cost of large-scale pre-training might be unaffordable for colleges and individual researchers, we plan to release the code and pre-trained model to promote more future research on downstream tasks and applications. In addition, we notice a recently released dataset LAION-400M [C], which makes it possible to fairly benchmark the performance of large-scale vision-language pre-training models. We would evaluate our model on this dataset as a future work.\n\n\n\n[C] LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs. Schuhmann et al. Arxiv: 2111.02114.", " We sincerely thank you for your comprehensive comments and constructive advice. We will explain your concerns point by point.\n\n**Q1: The evaluations on zero-shot image classification and linear probing are missing. Those are important to prove whether the learned visual backbone is strong and robust. In CLIP, they reported the above two evaluations in 20+ datasets to prove the transferability of their visual backbone.**\n\n**A1:** Thanks for the constructive suggestion. Following the same setting as CLIP, we add the **zero-shot image classification** and **linear probing** experiments over 11 datasets and report the top-1 accuracy in the following tables. Moreover, we evaluate the transferability of our LOUPE on **vision-language generation task**, *i.e.*, image captioning.\n\n(1) For zero-shot image classification, as shown in the following table, our LOUPE outperforms CLIP with average improvement of 3.1%. Notably, on ImageNet, the largest dataset among 11 datasets, our LOUPE surpasses CLIP by 0.8%. Also, we observe that LOUPE achieves substantial performance gains on several fine-grained image classification datasets (*i.e.*, Flowers102 and Aircrafts). 
It demonstrates the superiority of our LOUPE on fine-grained semantics understanding.\n| | CIFAR10 | Food101 | StanfordCars | SUN397 | Flowers102 | Country211 |\n| :-------- | :------: | :------: | :----------: | :------: | :--------: | :--------: |\n| CLIP | **96.2** | 92.9 | 77.3 | 67.7 | 78.7 | 34.9 |\n| **LOUPE** | 95.9 | **94.3** | **79.9** | **69.8** | **87.4** | **37.8** |\n\n| | FER2013 | Aircrafts | OxfordPets | Caltech101 | ImageNet |\n| :-------- | :------: | :-------: | :--------: | :--------: | :------: |\n| CLIP | **57.7** | 36.1 | 93.5 | 92.6 | 75.3 |\n| **LOUPE** | 53.3 | **54.9** | **94.1** | **93.9** | **76.1** |\n\n(2) For linear probing evaluation, as shown in the following table, our LOUPE outperforms CLIP with average improvement of 1.6%. Notably, on ImageNet, the largest dataset among 11 datasets, our LOUPE surpasses CLIP by 1.8%.\n\n| | CIFAR10 | Food101 | StanfordCars | SUN397 | Flowers102 | Country211 |\n| :-------- | :------: | :------: | :----------: | :------: | :--------: | :--------: |\n| CLIP | **98.0** | 95.2 | 90.9 | 81.8 | 99.2 | 46.4 |\n| **LOUPE** | 97.6 | **96.0** | **92.1** | **82.6** | **99.5** | **49.3** |\n\n| | FER2013 | Aircrafts | OxfordPets | Caltech101 | ImageNet |\n| :-------- | :------: | :-------: | :--------: | :--------: | :------: |\n| CLIP | **72.9** | 69.4 | 95.1 | 96.5 | 83.9 |\n| **LOUPE** | 70.7 | **80.2** | **95.5** | **97.5** | **85.7** |\n\n\n(3) Furthermore, we evaluate our LOUPE on vision-language generation task, *i.e.*, image captioning, to demonstrate the generalization ability of the learned cross-modal representations by our LOUPE. As shown in the following table, our LOUPE achieves competitive performance on all metrics, which verifies the strong generalization ability of our model on downstream vision-language generation tasks. We refer the reviewer to Appendix F for more details.\n\n| Method | BLEU@4 | METEOR | CIDEr | SPICE |\n| ----------- | :------: | :------: | :-------: | :------: |\n| VLP | 36.5 | 28.4 | 117.7 | 21.3 |\n| OSCAR-Large | 37.4 | 30.7 | 127.8 | 23.5 |\n| VinVL-Large | 38.5 | 30.4 | 130.8 | 23.4 |\n| BLIP-ViT-L | 40.4 | - | 136.7 | - |\n| LEMON-Large | 40.6 | 30.4 | 135.7 | 23.5 |\n| **LOUPE** | **40.9** | **31.5** | **137.8** | **24.3** |", " **Q3: The paper lacks convincing motivation on why using Shapley value instead of token-wise alignment (FILIP [42]) is beneficial. Although in lines 97-99, the paper mentions that FILIP has quadratic complexity, the proposed method also suffers from combinatorial complexity, which could be worse.**\n\n**A3:** Thanks for raising an important point. The superiorities of using Shapley Interaction modeling are mainly three-fold:\n\n1. We suppose that directly computing token-wise alignment between every patch token and word token is not efficient and meaningful because an individual word token or patch token might not contain complete semantics. A semantic-rich phrase (*e.g.,* “a girl in a blue coat”) usually consists of multiple words, and its corresponding visual region is composed of multiple patches. Also, some words (*e.g.*, \"is\", \"the\") and patches (*e.g.*, background pixels) are not meaningful. Based on this insight, our LOUPE differs as we first propose token-level Shapley interaction modeling to aggregate patches into semantic-meaningful regions, and then introduce semantics-level Shapley interaction modeling to explicitly model the fine-grained semantic alignment between semantic-meaningful regions and phrases.\n2. 
Although FILIP computes token-wise similarity to simulate the fine-grained alignment, it can only learn implicit alignment from the indirect supervision of image-text contrastive loss, lacking training signals to explicitly encourage semantic alignment between visual regions and textual phrases. In contrast, our Shapley interaction modeling provides explicit supervision signals (*e.g.*, the alignment matrices visualized in Figure 4) to learn the fine-grained alignment. The consistently superior performance of our LOUPE than FILIP over all metrics also demonstrates the benefit of explicit fine-grained alignment learning. \n3. FILIP can not be directly applied to object detection and visual grounding through implicit token-wise alignment learning while our LOUPE can immediately transfer to these tasks without any fine-tuning. It is because the proposed Shapley interaction modeling enables our model to identify semantic regions and align these regions with language. As shown in Table 2, without any bounding-box annotations and fine-tuning, our LOUPE achieves competitive performance across four object detection and visual grounding benchmarks.\n\nFurthermore, FILIP has quadratic complexity with respect to the number of patch tokens and word tokens while we only compute the alignment between every region and phrase, and the number of regions and phrases is usually much less than the number of word and patch tokens. We will clarify this in the next version.\n\n**Q4: Although the idea of directly learning to approximate the Shapley value is interesting, it is unclear from the paper how the module can learn to approximate Shapley interaction and estimate its uncertainty. Specifically, what is the input of this module? Is it the set of all visual+textual tokens? Is the uncertainty estimation reliable as neural networks are notorious for being very confident in their predictions?**\n\n**A4:** Thanks for your question. We have implemented three versions of models to approximate Shapley interaction and they take all visual and textual tokens and the index corresponding to the target region or region-phrase pair as input. We respectfully refer the reviewer to Appendix D, where we provide the implementation details of these three versions.\n\nTo investigate the reliability of the uncertainty estimation, we measure the correlation coefficient between the relative error and the uncertainty based on the results reported in Figure 2 (b) and (c). The correlation coefficient is a statistical measure to quantify the correlation degree between two variables. The values range between -1 and 1. A correlation of 1 indicates a strong positive correlation. Specifically, in Figure 2 (b) and (c), we test the estimation model on 1000 samples and report their mean uncertainty and relative error. The estimation model predicts a target Shapley interaction value and corresponding uncertainty $\\sigma$ for each testing sample. We compute the relative error for each prediction according to the results computed by the sampling-based method. Then, we measure the correlation coefficient between the relative error and the uncertainty. For token-level Shapley interaction, the average correlation coefficient is 0.74. For semantics-level Shapley interaction, the average correlation is 0.86. The results indicate that our Shapley interaction learning module tends to estimate higher uncertainty for testing samples with larger relative errors. 
Therefore, the uncertainty estimation is a reliable indicator, which helps us to determine whether to use the neural Shapley interaction learning module or the sampling-based method.\n", " We sincerely thank you for the valuable comments. We are encouraged to see that our work is recognized as novel and interesting. We will explain your concerns point by point.\n\n**Q1: The technical section could be challenging to follow, especially for computer vision audiences without prior knowledge of Shapley value. The reviewer believes it would be very helpful if the paper could include a table to highlight key differences of the proposed work with various methods used for vision-language pretraining. If the reviewer understands correctly, the main distinction compared to other works would be the proposal of using Shapley interaction as soft pseudo labels for fine-grained image regions instead of using contrastive learning between image-caption pairs. However, this is not clear and emphasized enough in the paper.**\n\n**A1:** Thanks for your nice suggestion. We highlight key differences in the following table. As you correctly understand, our LOUPE differs as it explicitly learns fine-grained region-phrase alignment from the novel perspective of game-theoretic interactions, without resorting to any object-level human annotations or pre-trained Region Proposal Network (RPN). Notably, the human bounding-box annotations are usually limited to the pre-defined object categories, and the RPN can only detect regions belonging to the pre-defined categories of pre-training object detection datasets. Thus, the methods that use human bounding-box annotations or pre-trained RPN usually suffer from detecting novel objects beyond the pre-defined categories. In contrast, our LOUPE learns from large-scale raw image-text pairs, which are more scalable and contain a broader set of visual concepts. For example, as shown in the right case of Figure 4, LOUPE successfully recognizes the leash region and aligns it with the “a leash” phrase. Note that the “leash” category has never appeared in any existing object detection datasets. We will include these analyses in the next version according to your valuable suggestion.\n\n| Methods | Coarse-grained image-text alignment | Fine-grained region-phrase alignment | Ways to learn fine-grained region-phrase alignment |\n| :------------------------------------ | :---------------------------------: | :----------------------------------: | :--------------------------------------: |\n| CLIP, ALIGN, DeCLIP | $ \\checkmark$ | - | - |\n| ImageBERT, UNITER, FILIP, ViLT, ALBEF | $ \\checkmark$ | $ \\checkmark$ | Implicit supversion signals from end-to-end training (*e.g.,* Image-Text Contrastive loss) |\n| GLIP, X-VLM, RegionCLIP | $ \\checkmark$ | $ \\checkmark$ | Human bounding-box annotations or supervised pre-trained Region Proposal Network |\n| **LOUPE** | $ \\checkmark$ | $ \\checkmark$ | Explicit alignment information quantified by game-theoretic interactions |\n\n**Q2: The reviewer is unclear how Shapley value could be used as soft pseudo labels. For example, in line 180, the paper mentions that the game score v1, which is used to compute the Shapley interaction, is the similarity between image and text. However, it appears that the paper does not mention any training phase for learning image-textual similarity. Then how would this similarity be computed correctly? 
Are there any pretraining phases or pretrained models employed for similarity computation?**\n\n**A2:** We appreciate the reviewer's concern about the reliability of Shapley value. As you nicely point out, the reliability of Shapley value depends on the performance of computing image-text similarity. In practice, we first pre-train the image encoder and text encoder only based on the image-text contrastive loss in the first epoch and add Shapley interaction modeling in the remaining epochs. Also, the zero-shot transfer performances on object detection and visual grounding verify the reliability of Shapley value. We will clarify this in the next version.", " The paper addresses the problem of vision-language pretraining, whose goal is to learn a strong backbone network transferable into downstream tasks such as object detection or visual grounding.\nThus, the paper proposes a framework for aligning between visual and textual tokens during training motivated by Shapley value from game theory.\nSpecifically, the model computes the Shapley values to determine the interaction between related image patches, which form objects or related image regions and noun phrases corresponding to object types in images.\nThe Shapley values for patches in image regions and pairs of image regions and noun phrases are used as soft pseudo-labels to train the alignment between textual and visual modalities.\nThe paper also introduces an approximation mechanism to reduce the complexity of Shapley value computation via sampling.\nThe paper evaluates the performances of Image-Text Retrieval, Object Detection, and Visual Grounding on MSCOCO and Flickr30k datasets.\n\n------- After rebuttal -----\n\nThe rebuttal sufficiently addresses my concern as well as confirming my understanding.\nThus, I keep my rating and recommend the paper for acceptance. #### Strength:\n+ The formulation of semantic generation as well as semantic alignment under Shapley interaction is interesting and novel. Moreover, viewing weakly supervised visual-textual alignment in terms of game theory is a promising direction to explore.\n+ Learning to predict Shapley more, as well as its uncertainty, seems to significantly reduce the training complexity.\n+ The paper compares with appropriate baselines to demonstrate its effectiveness.\n\n#### Weakness:\n+ The technical section could be challenging to follow, especially for computer vision audiences without prior knowledge of Shapley value. The reviewer believes it would be very helpful if the paper could include a table to highlight key differences of the proposed work with various methods used for vision-language pretraining. If the reviewer understands correctly, the main distinction compared to other works would be the proposal of using Shapley interaction as soft pseudo labels for fine-grained image regions instead of using contrastive learning between image-caption pairs. However, this is not clear and emphasized enough in the paper. \n+ The reviewer is unclear how Shapley value could be used as soft pseudo labels. For example, in line 180, the paper mentions that the game score $v_1$, which is used to compute the Shapley interaction, is the similarity between image and text. However, it appears that the paper does not mention any training phase for learning image-textual similarity. Then how would this similarity be computed correctly? Are there any pretraining phases or pretrained models employed for similarity computation? 
\n+ The paper lacks convincing motivation on why using Shapley value instead of token-wise alignment (FILIP [42]) is beneficial. Although in lines 97-99, the paper mentions that FILIP has quadratic complexity, the proposed method also suffers from combinatorial complexity, which could be worse. \n+ Although the idea of directly learning to approximate the Shapley value is interesting, it is unclear from the paper how the module can learn to approximate Shapley interaction and estimate its uncertainty. Specifically, what is the input of this module? Is it the set of all visual+textual tokens? Is the uncertainty estimation reliable as neural networks are notorious for being very confident in their predictions? + Please refer to the weakness section.\n + I believe the work doesn't seem to have any potential negative societal impact.\n+ It would be interesting to investigate the bias from web data in future directions. Would the model propagate these biases from captioned images toward downstream tasks?", " This paper proposed a fine-grained semantically aligned vision-language pre-training framework. Both visual token-level semantics alignment and phrase-region level semantics alignment are studied. To be more specific, the paper measures the shapley interaction of visual regions to text as a supervision signal for region generation module and visual backbone. Then the alignment between regions and phrases is also supervised by Shapley interaction between them. Further, an uncertainty-aware learning module is introduced to predict Shapley interaction to save the time cost of sampling-based estimation. Strengths:\n1. The methodology is interesting and novel. It's essential to exploit the local correlation between image and text in contrastive pre-training. The idea of introducing Shapley interaction as supervision seems to fit the problem well.\n2. LOUPE can outperform recent works trained with a similar scale of data in retrieval tasks. And the ablation is comprehensive.\n3. The region generation model can also be used as a zero-shot object detector, which is quite impressive.\n\nWeaknesses:\n1. The evaluations on zero-shot image classification and linear probing are missing. Those are important to prove whether the learned visual backbone is strong and robust. In CLIP, they reported the above two evaluations in 20+ datasets to prove the transferability of their visual backbone. \n2. The training time needed is increased by 65% (1.17->1.93). \n3. The dataset used seems to be a private dataset and it's hard to convince people that the comparison is fair enough. Although the scale of the proposed dataset is smaller, the quality is not compared and it matters a lot in CLIP-like pretraining. Also, it's not indicated whether the dataset will be released or not. If not, it's hard for people to reproduce this work. \n\n================================\nAfter rebuttal, weakness 1 and 2 have been alleviated. So I change the score. Please refer to weaknesses. It's not guaranteed that the dataset they use doesn't contain unsuitable images, text, or personal information. ", " This paper propose a fine-grained semantically aligned vision-language pre-training framework(LOUPE) from game-theoretic interactions. Experiments on image-text retrieval, object detection and visual grounding tasks demonstrate the effectiveness of LOUPE. \n 1. The motivation is clear. LOUPE propose Phrase-Region Semantic Alignment to achieve fine-grained vision-language pre-training.\n\n2. 
The hybrid Shapley interaction learning strategy is interesting to me. 1. The experiments are not sufficient. The model structures (both image encoder and text encoder) of different methods in Table 1 and Table 2 are inconsistent, and the comparison is unfair.\n\n2. Can a plain Dual-Encoder be used for zero-shot classification? How is the result on the ImageNet dataset?\n\n3. Why are the loss weights of $L_{CMC}$, $L_{TSA}$ and $L_{FSA}$ set to 1:1:1?\n\n4. Are public datasets, such as CC12M, included in the 240M dataset? In addition to the description part in Supplementary Material E, it is hoped that a more detailed introduction to the 240M dataset can be provided. The authors have adequately addressed the limitations and potential negative societal impact of their work", " This paper focuses on learning fine-grained vision-language alignment under the widely used contrastive-based pre-training paradigm. The authors first point out that previous pre-training frameworks, *e.g.,* CLIP, only model global image-text alignment while neglecting fine-grained semantic features. Although alignments can be achieved by interacting between modalities with the help of certain mechanisms like cross-attention, such approaches lack explicit and strong supervision to encourage meaningful patch-phrase correspondence, which has proved to be quite useful for downstream reasoning tasks. The authors address the limitation by explicitly matching phrases in the text with regions of patches in the image. Different from previous approaches, the authors design lightweight modules to achieve patch-to-patch and region-to-phrase alignment by introducing supervision derived from the Shapley value in game theory. Practically, Shapley interaction values are used as the criteria to estimate whether two patches belong to the same region and whether an image region corresponds to a certain phrase. In addition, to address the training efficiency problem when computing Shapley values, an additional estimator is adopted to predict the true values. Extensive experiments show the zero-shot ability of the proposed model on retrieval tasks, detection tasks, and visual grounding tasks. Ablative studies on the new losses and the Shapley value approximation module are conducted. Limitations and social impacts are also discussed.\n\nOverall, this paper shares an interesting idea by incorporating game theory mechanisms into the process of text/image alignment. This kind of supervision is free of pre-trained detection/segmentation models and does not require sophisticated text/image labels. The proposed model shows significant improvements in zero-shot transfer ability and outperforms baseline approaches by a large margin. + Strengths\n + This paper is well organized and easy to follow. The authors first give a brief introduction to the concept of Shapley values and Shapley interaction values and elaborate on how they work under the framework of multi-modal pre-training. The derivation seems to be sound and easy to understand. The main results and ablative studies help to better understand the proposed model.\n + The idea in this paper is quite interesting and seems novel to me. The idea of aligning image regions with text phrases has been explored in previous works, while most of them rely on additional label information and fail to provide strong supervision. Introducing the Shapley value in the alignment process sounds reasonable and practically shows promising results.\n \n+ Weaknesses\n + Some missing related works. 
Similar to this paper, there are also works that attempt to address the problem of missing fine-grained information. For example, RegionCLIP[1] also finds regions in the image and aligns them with text phrases. Off-the-shelf language parsers are similarly considered. Also, RegionCLIP is able to perform zero-shot object detection tasks. Another work, X-VLM[2], shares a similar insight. I believe these works are more related to the paper, and a detailed discussion on the connections and differences with these works is suggested.\n + Training efficiency. I understand that the authors use a sub-network to predict the real Shapley value to save excessive computation costs. Nevertheless, the training time in table 3 (comparing line 1 and line 4) shows that the proposed approach still involves 60~70 training time. This can be a great burden, especially for large-scale pre-training.\n + Please see the **Questions** section for some detailed concerns.\n\n[1] Zhong, Yiwu, et al. \"Regionclip: Region-based language-image pretraining.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n[2] Zeng, Yan, Xinsong Zhang, and Hang Li. \"Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts.\" arXiv preprint arXiv:2111.08276 (2021). I have stated my major concerns in the **Weaknesses** section. The following are some detailed questions and suggestions.\n\n+ How is the score function in the game defined? Does it need to be carefully designed?\n+ The alignment between image regions and text phrases implicitly involves the concept of object. This may lead to similar limitations to the previous works that relied on pre-trained object detectors, and hard to generalize to object-free inputs. Also, according to line185, all regions are constrained within a scale of K patches. I wonder how does the model deal with objects (concepts) of different scale, *e.g.,* apple and sky?\n+ Line 230. How long do we need to train the UNSIL module?\n+ I am confused about Eq.(12). The authors claim the loss function is derived from the regression loss function. Could the authors provide a detailed derivation process? From my perspective the second term $\\beta_2 \\ \\sigma$ should be $\\beta_2 \\ {\\rm log}(\\sigma)$.\n+ I am also concerned about the training time shown in table 3. It seems like the additional cost is still huge even with the approximation module. I wonder if the authors can provide a training cost-performance trade-off compared with other works? Some qualitative analysis would also do the trick.\n+ If possible, I would like to see a comparison between the proposed model and some missing related works, like RegionCLIP and X-VLM that were mentioned above. These approaches share similar insight and can also perform zero-shot transfer to several downstream tasks. The authors have adequately addressed the limitations and social impacts in the paper." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 3 ]
[ "VZqC7EKJ4wI", "wELlQD5mr2", "9eop7sDY_k", "l5tgVE-tTLW", "6V1SheG4rmT", "hiNg_nM4POg", "VZqC7EKJ4wI", "xzaspfuARWx", "AmPztFDZFvN", "EAfUOpmPmNFH", "QiQmCgL6x66", "uv2TveQx4m2", "znyhaAhqVmI", "nwondwNlCYw", "nips_2022_yam42JWePu", "9BD34rZhuVo", "H3J6wa9dBkc", "ygKFprg6Iy5", "nwondwNlCYw", "VZqC7EKJ4wI", "1zoaMybpSWE", "xzaspfuARWx", "fZO8iJAAi3Y", "IJKVhnV6w-a", "nips_2022_yam42JWePu", "nips_2022_yam42JWePu", "nips_2022_yam42JWePu", "nips_2022_yam42JWePu" ]
nips_2022_FurHLDnmC5v
Sample Complexity of Learning Heuristic Functions for Greedy-Best-First and A* Search
Greedy best-first search (GBFS) and A* search (A*) are popular algorithms for path-finding on large graphs. Both use so-called heuristic functions, which estimate how close a vertex is to the goal. While heuristic functions have been handcrafted using domain knowledge, recent studies demonstrate that learning heuristic functions from data is effective in many applications. Motivated by this emerging approach, we study the sample complexity of learning heuristic functions for GBFS and A*. We build on a recent framework called \textit{data-driven algorithm design} and evaluate the \textit{pseudo-dimension} of a class of utility functions that measure the performance of parameterized algorithms. Assuming that a vertex set of size $n$ is fixed, we present $\mathrm{O}(n\lg n)$ and $\mathrm{O}(n^2\lg n)$ upper bounds on the pseudo-dimensions for GBFS and A*, respectively, parameterized by heuristic function values. The upper bound for A* can be improved to $\mathrm{O}(n^2\lg d)$ if every vertex has a degree of at most $d$ and to $\mathrm{O}(n \lg n)$ if edge weights are integers bounded by $\mathrm{poly}(n)$. We also give $\Omega(n)$ lower bounds for GBFS and A*, which imply that our bounds for GBFS and A* under the integer-weight condition are tight up to a $\lg n$ factor. Finally, we discuss a case where the performance of A* is measured by the suboptimality and show that we can sometimes obtain a better guarantee by combining a parameter-dependent worst-case bound with a sample complexity bound.
Accept
Strong paper studying the sample complexity of learning heuristic functions for GBFS and A*. The reviewers were especially impressed with the theoretical results and found the paper a worthwhile contribution to this conference.
train
[ "HLCXecSB-eH", "RB04sS2eTPc", "ouofRAj0lR3", "87Oj0VKH2dc", "16a411ceZGh", "xF9sijer7F-", "yqNw-wVhD7", "pRkxp-LdAdb" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your answer. I encourage you to include a version of this in the final version, if accepted.", " Thanks for your answer, and \nI look forward to reading \"An example of improving the upper bound with special heuristics,\" which may appear in the final version. Initially, I thought this result was pessimistic, but there seems to be some room for practical impact.", " We sincerely thank the reviewer for providing valuable comments. We answer the following question. \n\n> What are the practical implications of the proposed bounds? Many interesting domains for GBFS/A\\* search have exponentially large implicit search spaces (e.g., planning, probabilistic inference, games, etc.) which means that 'n' in this case could be extremely large and therefore one would require an exponentially large training dataset.\n\nAs the reviewer mentioned, our result is sometimes pessimistic since $n$ can be extremely large (although it is tight for GBFS and A* for integer edge-weight graphs due to the lower bound). If we encounter such huge graphs in practice, what we should consider next is how to design simple heuristic functions to reduce the sample complexity. Roughly speaking, our bounds depend on $n = |V|$ mainly because $n$ heuristic function values $\\rho_v$ for all $v \\in V$ are independently learnable. Therefore, using simple heuristic functions with fewer learnable parameters than $n$ would be an effective workaround when $n$ is huge. For an example of such simple heuristic functions, please see the [response to Reviewer zGFs](https://openreview.net/forum?id=FurHLDnmC5v&noteId=87Oj0VKH2dc). \n\nFurther discussion on how to design such heuristic functions would go beyond the scope of our work since it requires more instance-specific analysis; therefore, we have left it for future work. Our result on the fundamental case where all $\\rho_v$ ($v \\in V$) are learnable will be an important first step to understanding the theoretical aspect of learning-based heuristic search, such as [9, 11, 25, 29, 33]. ", " We appreciate the reviewer's careful reading and insightful comments. We respond to the following comment.\n\n> I don't see any major weaknesses. I would suggest that another interesting direction here is looking at A* for planning. The graph is obviously exponentially large, so the bounds here are useless, but it has a compact representation (e.g. the STRIPS model). Could some heuristics be learned efficiently in that setting?\n\nThank you for the constructive suggestion. As the reviewer mentioned, some path-finding instances have extremely many vertices but have compact representations. Such representations are sometimes helpful in reducing the sample complexity. Below is an illustrative example. \n\n&nbsp;\n### An example of improving the upper bound with special heuristics\nSuppose that a graph with $n$ vertices is given as a STRIPS model, where each vertex $v \\in V$ corresponds to a state represented by a binary vector $\\boldsymbol{q}\\_v \\in \\\\{0, 1\\\\}^\\ell$. Then, we have $n = 2^\\ell$ vertices, i.e., there are exponentially many vertices in $\\ell$. As an example of simple heuristic functions, we assume that a heuristic function value $\\rho\\_v$ for each $v \\in V$ is given as $\\rho\\_v = \\boldsymbol{q}\\_v^\\top \\boldsymbol{\\theta}$, where $\\boldsymbol{\\theta} \\in \\mathbb{R}^\\ell$ is a vector of $\\ell$ learnable parameters. 
If we apply GBFS to the graph, a similar discussion to the proof of Theorem 1 implies that the behavior of GBFS is determined by a total order on $n$ values $\\\\{ \\boldsymbol{q}\\_v^\\top \\boldsymbol{\\theta} \\\\}\\_{v \\in V}$. Such a total order is unique for all $\\boldsymbol{\\theta} \\in \\mathbb{R}^\\ell$ in an identical region whose boundaries are given by up to $\\binom{n}{2}$ hyperplanes of form $(\\boldsymbol{q}\\_v - \\boldsymbol{q}_{v'})^\\top \\boldsymbol{\\theta} = 0$ for $v, v' \\in V$. Thus, Sauer's lemma implies that given any $N$ instances, there are at most $\\left( \\mathrm{e}\\binom{n}{2} N \\right)^\\ell$ such regions. To shatter the $N$ instances, we need $\\left( \\mathrm{e}\\binom{n}{2} N \\right)^\\ell = \\Omega(2^N)$, implying an $\\mathrm{O}(\\ell \\lg n) \\simeq \\mathrm{O}(\\ell^2)$ upper bound on the pseudo-dimension. That is, even though there are $n = 2^\\ell$ vertices, the upper bound depends only polynomially on $\\ell$. A similar result is true for A* if edge weights are integer, but A* for general weights does not enjoy such a $\\mathrm{poly}(\\ell)$ bound as it incurs an additional $\\mathrm{O}(n)$ factor. \n\n&nbsp;\n\nThe above example implies that we can sometimes reduce the sample complexity by using compact representations of graphs to design appropriate heuristic functions with fewer parameters. Studying how to design such appropriate heuristic functions would be an interesting and important future direction. We will elaborate more on this point in the final version.", " We are grateful to the reviewer for providing thoughtful comments. We answer the following question.\n\n> I don't know very well about PAC analysis, but following the steps in the paper, the claims all make sense. Although the assumption is simplified to a very limited case, where the graph has the same terminal node and the sampling procedure ends up visiting roughly all the nodes in the graph (n and N in the equations), the upper and lower bounds may characterize the theoretical limits for sample efficiency for learning heuristics. \n> \n> How to use these bounds when we learn heuristics for solving graph search problems? Can you provide a concrete example with the theoretical bounds?\n\nWe can use our PAC bounds similarly to those in statistical learning theory. In words, PAC bounds guarantee that the expected performance on future instances becomes closer to the empirical performance observed on $N$ sampled instances as $N$ grows.\n\nFor example, suppose that path-finding instances on graphs with $n$ vertices are drawn from an unknown distribution. We consider accelerating GBFS/A* applied to those instances by learning good heuristic functions. As is often the case in practice, GBFS/A* with empirically good heuristics runs very fast. However, this empirical observation alone provides no theoretical guarantee on how fast it can be on future instances drawn from the unknown distribution (particularly because learning of heuristic functions may result in overfitting to sampled instances). \n\nIn such situations, we can use our PAC bounds to guarantee the expected running time on future instances. Roughly speaking, with high probability, the expected running time on future instances can be bounded by the observed empirical running time plus about $\\mathrm{O}\\left(\\sqrt{\\frac{\\mathrm{poly}(n)}{N}} \\right)$, where $\\mathrm{poly}(n) = n$ for GBFS and $n^2$ for A*. 
Thus, if we are given about $N \\gtrsim \\frac{\\mathrm{poly}(n)}{\\epsilon^2}$ instances, the expected deviation from the empirical running time is at most $\\epsilon$. In summary, we can use PAC bounds to translate the empirical performance observed on sampled instances into theoretical bounds on the expected performance on future instances.", " This paper studies the sample complexity for learning heuristic functions for GBFS/A* search on a graph with a fixed number of nodes n. The analysis uses PAC learning framework, and the main results show the upper and lower bound of pseudo dimensions of a class of utility functions in which each utility function associates a search task to a scalar value between 0 and H. The paper also continues to provide upper bounds on the expectation of gaps between the optimal costs and the suboptimal costs, where the expectation was taken over the search task sampled from some distribution D, and the bounds are given in terms of the number of samples and the number of nodes.\n Strengths are mathematical analysis of the sample complexity for learning heuristic functions for graph search tasks using GBFS/A*.\n\nWeaknesses are that this analysis emphasizes theoretical aspects and missing practical implications of the upper bounds.\n I don't know very well about PAC analysis, but following the steps in the paper, the claims all make sense.\nAlthough the assumption is simplified to a very limited case, where the graph has the same terminal node\nand the sampling procedure ends up visiting roughly all the nodes in the graph (n and N in the equations),\nthe upper and lower bounds may characterize the theoretical limits for sample efficiency for learning heuristics.\n\nHow to use these bounds when we learn heuristics for solving graph search problems?\nCan you provide a concrete example with the theoretical bounds?\n I think this work is not relevant to this section.", " The paper presents bounds on the sample complexity required for learning heuristic functions to guide greedy best-first search and A* search for solving the shortest path problem in a given graph. The classical approach to best-first search (and heuristic search in general) is to provide it with a handcrafted heuristic (which is typically obtained by solving a relaxed version of the original problem) in order to guide it more effectively towards the optimal solution. However, more recent work aims to learn the guiding heuristic directly from some training data which could be more appealing in some cases. Therefore, deriving bounds on how much data is required to learn a heuristic function with certain guarantees is called for.\n The paper is fairly well written and organised. The quality of the presentation is overall very good and therefore the paper is relatively easy to follow. Most of the concepts and technical details are introduced and discussed if a fairly clear manner. \n\nI think the paper needs a more detailed running example. Otherwise it's not very easy to follow the details especially for a reader who's not very familiar with this research area.\n\nMinor comments:\n\n- Definition 1: there is a typo, h(y_i) \\geq t_i instead of h(y_i) \\geq z_i\n 1. What are the practical implications of the proposed bounds? Many interesting domains for GBFS/A* search have exponentially large implicit search spaces (e.g., planning, probabilistic inference, games, etc.) 
which means that 'n' in this case could be extremely large and therefore one would require an exponentially large training dataset.\n\n[Post Rebuttal] Thanks for your answers. They have clarified my concerns. see above", " This theoretical paper presents sample complexity bounds for learning\nheuristics for A* and best-first search. It shows an O(nlogn) upper\nbound on the pseudo dimension of BFS and O(n^2logn) for A*, with\nOmega(n) lower bounds for both. It shows that the upper bounds are\nnearly tight, but can be improved for A* when bounding edge weights\nand variable degrees. Moreover, when learning a potentially suboptimal\nheuristic function, the paper gives an upper bound on the\nsuboptimality.\n The paper is relatively straightforward, in the sense that it gives\nclear questions and clear answers. It is well written, and explains\nthe weaknesses of the results, namely the relatively big gap between\nthe bounds on the pseudodimension of A*, as well as give some\nexplanation why it is hard to bridge them.\n\nI don't see any major weaknesses. I would suggest that another\ninteresting direction here is looking at A* for planning. The graph is\nobviously exponentially large, so the bounds here are useless, but it\nhas a compact representation (e.g. the STRIPS model). Could some\nheuristics be learned efficiently in that setting?\n\n----------\n\nTypos, etc:\n\nDefn: you use t_1, ..., t_N for the values in the text and z_1...z_N\nin the formula\n\n107: disrtibution\n\n154: gaurantees\n None No direct societal impact." ]
[ -1, -1, -1, -1, -1, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, 1, 3, 2 ]
[ "87Oj0VKH2dc", "16a411ceZGh", "yqNw-wVhD7", "pRkxp-LdAdb", "xF9sijer7F-", "nips_2022_FurHLDnmC5v", "nips_2022_FurHLDnmC5v", "nips_2022_FurHLDnmC5v" ]